image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"replication"
] | [
{
"code": "mongodb://<username>:<pwd>@blph1024.bhdc.att.com:25001,blph1025.bhdc.att.com:25001,blph1026.bhdc.att.com:25001/CR?authMechanism=SCRAM-SHA-1&replicaSet=tdataRS&connectTimeoutMS=60000&minPoolSize=0&maxPoolSize=10&maxIdleTimeMS=900000{\"logType\":\"DEBUG\",\"logLevel\":\"INFO\",\"logTimestamp\":\"2023-07-05T15:48:35.539Z\",\"logger\":\"org.mongodb.driver.cluster\",\"label\":\"Exception in monitor thread while connecting to server blph1025.bhdc.att.com:25001\",\"runtime\":{\"hostName\":\"N/A\",\"ip\":\"N/A\",\"instance\":\"N/A\",\"clusterName\":\"N/A\",\"namespace\":\"unknown\",\"image\":\"unknown\",\"environment\":\"PROD\",\"version\":\"1.0.6\",\"routeOffer\":\"AE2S01-SB\"},\"application\":{\"deploymentUnitName\":\"unknown\",\"motsApplicationAcronym\":\"unknown\"},\"exception\":{\"exceptionDetails\":\"Exception receiving message\",\"stackTrace\":\"com.mongodb.MongoSocketReadException: Exception receiving message\\n\\tat com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:569)\\n\\tat com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:448)\\n\\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:299)\\n\\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259)\\n\\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\\n\\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\\n\\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)\\n\\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)\\n\\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)\\n\\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\\n\\tat java.lang.Thread.run(Thread.java:748)\\nCaused by: java.net.SocketException: Connection reset\\n\\tat java.net.SocketInputStream.read(SocketInputStream.java:210)\\n\\tat java.net.SocketInputStream.read(SocketInputStream.java:141)\\n\\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109)\\n\\tat com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:580)\\n\\tat com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:445)\\n\\t... 9 more\\n\"}}\n",
"text": "I am using below Mongo URI in my application. The application is being deployed on the Azure environment. The operation team has whitelisted ports for each server. Below are the bastion ports configured which are overridden with the actual port (25001). After deploying the application on Kubernetes we are getting MongoSocketOpenException\nBastion Port configured:\nblph1024.bhdc.att.com → 20366 → Secondary\nblph1025.bhdc.att.com → 20368 → Primary\nblph1026.bhdc.att.com → 20369 → Secondary\nURI → mongodb://<username>:<pwd>@blph1024.bhdc.att.com:25001,blph1025.bhdc.att.com:25001,blph1026.bhdc.att.com:25001/CR?authMechanism=SCRAM-SHA-1&replicaSet=tdataRS&connectTimeoutMS=60000&minPoolSize=0&maxPoolSize=10&maxIdleTimeMS=900000Logs",
"username": "RAKESH_SURYAVANSHI"
},
{
"code": "",
"text": "Hi @RAKESH_SURYAVANSHI and welcome to MongoDB community forums!!The error message below{“logType”:“DEBUG”,“logLevel”:“INFO”,“logTimestamp”:“2023-07-05T15:48:35.539Z”,“logger”:“org.mongodb.driver.cluster”,“label”:“Exception in monitor thread while connecting to server blph1025.bhdc.att.com:25001”,“runtime”:{“hostName”:“N/A”,“ip”:“N/A”,“instance”:“N/A”,“clusterName”:“N/A”,“namespace”:“unknown”,“image”:“unknown”,“environment”:“PROD”,“version”:“1.0.6”,“routeOffer”:“AE2S01-SB”},“application”:{“deploymentUnitName”:“unknown”,“motsApplicationAcronym”:“unknown”}shows that the hosts and other details have been missing from the URI you are trying to connect your application to.\nCan you confirm if you are able to connect to the MongoDB server outside the application?After deploying the application on Kubernetes we are getting MongoSocketOpenExceptionCan you share the required yaml file configuration you have set up to make the connection. These files would help me to reproduce the issue in my local environment.\nFinally please help us with the bastion port and the MongoDB version you are on.Regards\nAasawari",
"username": "Aasawari"
}
] | Getting MongoSocketOpenException while using bastion port | 2023-07-05T17:00:08.302Z | Getting MongoSocketOpenException while using bastion port | 627 |
null | [] | [
{
"code": "│ Error: error creating MongoDB Cluster: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/64b0c9XXXXXXXc45372d56/clusters: 409 (request \"OUT_OF_CAPACITY\") The requested region is currently out of capacity for the requested instance size.\n│ \n│ with module.vpc.mongodbatlas_cluster.mongo_db_cluster,\n│ on ../modules/aws_network/main.tf line 267, in resource \"mongodbatlas_cluster\" \"mongo_db_cluster\":\n│ 267: resource \"mongodbatlas_cluster\" \"mongo_db_cluster\" {\n",
"text": "I am attempting to provision an M10 60gb cluster in the ap-southeast-2 region and end up with this error in my terraform.Is this a lack of resources in MongoDB Cloud or an issue with my account and resource capacity?",
"username": "Tony_Edward"
},
{
"code": "Error: error creating MongoDB Cluster: (request \"OUT_OF_CAPACITY\") The requested region is currently out of capacity for the requested instance size.",
"text": "Hi @Tony_Edward,Welcome to the MongoDB Community!Error: error creating MongoDB Cluster: (request \"OUT_OF_CAPACITY\") The requested region is currently out of capacity for the requested instance size.This error message indicates that the chosen cloud provider (AWS, Azure, GCP) currently does not have the capacity to deploy the instance size in the selected region.The workaround is to wait a few minutes and try to deploy the cluster again. Cloud providers are aware of the capacity issue and are actively working on it.Another option is to select a different instance size (a higher-tiered cluster) or a different region for deploying the cluster until the capacity issue is resolved. You can later modify the cluster to scale back down to your preferred instance size and region.I hope it helps!Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | The requested region is currently out of capacity for the requested instance size | 2023-07-14T04:55:25.783Z | The requested region is currently out of capacity for the requested instance size | 326 |
null | [
"aggregation",
"queries",
"atlas-search",
"text-search"
] | [
{
"code": "facetsstringnumberdatesrc_nsrc_oidObjectIdresults_2 = list(\n my_collection.aggregate(\n [\n {\n \"$searchMeta\": {\n \"index\": \"TextIndex\",\n \"facet\": {\n \"facets\": {\n \"src_n\": {\n \"type\": \"string\",\n \"path\": \"src_n\",\n \"numBuckets\": 1000,\n },\n \"src_oid\": {\n \"type\": ObjectId,\n \"path\": \"src_oid\",\n \"numBuckets\": 1000,\n },\n },\n },\n },\n },\n ]\n )\n)\n",
"text": "As you know, we have 3 types of facets: string, number, and date facets.Well, I want my facest for 2 fields in my document, src_n which is a string, and src_oid which is ObjectId.How can I apply a facet since it is an object, not a number or string? Any idea to deal with it, please?",
"username": "ahmad_al_sharbaji"
},
{
"code": "src_oidstringstringFacet",
"text": "Hi @ahmad_al_sharbaji , we currently do not support faceting on ObjectIds. You can vote on this feature request here.As a workaround, you can transform the src_oid into a string type and include it in your search index as a stringFacet.",
"username": "amyjian"
},
{
"code": "",
"text": "Thank you for your response!\nConsidering the size of our database, which is approximately 55 million documents, implementing that suggestion may not be ideal. However, I appreciate your link and have voted for it.If you have any other suggestions, please feel free to let me know.Best regards.",
"username": "ahmad_al_sharbaji"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to perform MongoDB Atlas facets for ObjectId | 2023-07-12T22:07:19.779Z | How to perform MongoDB Atlas facets for ObjectId | 501 |
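A minimal pymongo sketch of the workaround suggested in the thread above, not taken from the thread itself: it materializes a string copy of src_oid so that field can then be mapped as a stringFacet in the Atlas Search index definition. The new field name src_oid_str, the connection string, and the namespace are placeholders of my own.

```python
from pymongo import MongoClient

# Assumed connection string and namespace -- replace with your own.
client = MongoClient("mongodb+srv://user:pwd@cluster0.example.mongodb.net")
coll = client["mydb"]["my_collection"]

# Add a string copy of src_oid using an update pipeline (MongoDB 4.2+);
# $toString converts the ObjectId so it can be indexed as a stringFacet.
coll.update_many(
    {"src_oid": {"$exists": True}},
    [{"$set": {"src_oid_str": {"$toString": "$src_oid"}}}],
)
```

On a ~55-million-document collection this is a heavy one-time write, so running it in batches during a quiet period (and setting the field on every new insert) may be preferable.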
null | [
"queries",
"node-js",
"transactions"
] | [
{
"code": "",
"text": "Hi I am wondering whether mongodb transactions implement optimistic concurrency by default? For example let’s say I start a transaction then do some updates in the session and right before I commit an update was made to a document in that transaction seperately. Does that transaction still go through?I’m thinking if it does not I should read from the database at the documents getting updated before commit but I feel there’s a flaw if somehow by some chance the document gets updated before the commit which would compromise data integrity.",
"username": "Allan_Vu"
},
{
"code": "var session = db.getMongo().startSession()\nvar sessionColRef = session.getDatabase('TranTest').Test\nsession.startTransaction()\nsessionColRef.updateOne(\n{_id:1},\n{\n $set:{\n house:false\n }\n}\n)\nsessionColRef.find()\ndb.getCollection(\"Test\").updateOne({_id:1}, {$set:{house:true}})\n",
"text": "I just tested this on a local instance and it seemed to work as expected with defaults:So we’ve done an update to document 1 (yes, I know…it’s not an ObjectID).If outside of that transaction I attempt to update the same document:The update hangs, it cannot complete as there is a transaction locking that document.If I update another field that the transaction had not changed then the update will go throughA .find in another session will not show this new property until the transaction is committed.Upon the first transaction completing the second update can go through and writes house:true to the record.If I wrap both updates in transactions so:Session1 : Update house to 9\nSession2 : Update house to ‘A’If I try and commit session 2 I get a transaction aborted as the data has already been updated by another session.\nI can commit session 1 which saves 9 to the house property.Obviously you can also change the read and write isolation levels as per the documentation, but have a play and test out.If you wrap everything in transactions it seems to work the way you want, but if the second update is run outside a transaction, it waits but then goes through overwriting the transaction update. If both were wrapped then the second update would fall over as the first transaction has already done an update on that record.",
"username": "John_Sewell"
},
{
"code": "`db.getCollection(\"Test\").updateOne({_id:1}, {$set:{house:true}})`\n",
"text": "Hey John thanks so much for the reply, it helped me a lot! When you refer to the second update outside a transaction overwriting the transaction update do you mean this happens after the commit or the period between the session actions and commit?You wrote:If outside of that transaction I attempt to update the same document:The update hangs, it cannot complete as there is a transaction locking that document.\nIf I update another field that the transaction had not changed then the update will go throughSo if I retrieve or update a document in the session does it lock out non-transactional updates as well until that’s session abort or commit?Like let’s say in your example you update “name” in the non-transactional update that would go through but not be detectable when the transaction queries for the document in the session or do you mean another transactions uncommitted update would just not be visible?Do you think checking for document versioning right at the end of my transaction is sufficient to ensure data integrity or is it not necessary?",
"username": "Allan_Vu"
},
{
"code": "var session = db.getMongo().startSession()\nvar sessionColRef = session.getDatabase('TranTest').Test\nsession.startTransaction()\nsessionColRef.updateOne(\n{_id:1},\n{\n $set:{\n amazing:10\n }\n}\n)\nsessionColRef.find()\nsession.commitTransaction()\n\nvar session2 = db.getMongo().startSession()\nvar sessionColRef2 = session2.getDatabase('TranTest').Test\nsession2.startTransaction()\nsessionColRef2.updateOne(\n{_id:1},\n{\n $set:{\n amazing:false\n }\n}\n)\nsessionColRef2.find()\nsession2.commitTransaction()\n\ndb.getCollection(\"Test\").updateOne({_id:1}, {$set:{ttt:7}})\n\n",
"text": "I didnt test a read, but I suspect that just a read within a session would not affect a read outside of the session.\nI did see a question on here recently about forcing a lock on a record in preparation of a later update, but I imagine that’s a pretty niche use case, you want to have a light touch generally and not lock what you’re not updating now.I’d strongly recommend having a play as I did with a local instance, the only caveat is you’ll need to have a replicaset to use transactions, but that’s easy enough to setup:Then you can open two shells (or query windows in your tool of choice, I was using Studio3T free edition) and play.This is the script I was playing with:and in the other one:Just running the parts that I needed, so in the first window run up to before the commit is run and then in the second window you can play about with different updates and sessions etc.I’m not an expert on this, but it was something I’d not played about with enough that it was worth a play and get some hands on experience of using transactions. I have used them a while back for a migration project (node.js) where I was inserting batches of transactions and I needed to ensure that in the event of a failure, I could roll back the complete batch and re-try.",
"username": "John_Sewell"
},
{
"code": "db.accounts.insertOne({\n _id:1,\n email: '[email protected]',\n name: 'Jake',\n});\nconst session = db.getMongo().startSession()\nconst sessionColRef = session.getDatabase('Testing').accounts\nsession.startTransaction()\n\n\nsessionColRef.updateOne(\n{_id:1},\n{\n $set:{\n name: 'ses1Update'\n }\n}\n)\n\ndb.accounts.updateOne({_id:1}, {$set:{\n job: 'Interfering update'\n }\n})\n\n\nsessionColRef.updateOne(\n{_id:1},\n{\n $set:{\n job: 'ses1update'\n }\n}\n)\nsessionColRef.find()\nsession.commitTransaction()\n",
"text": "Hey John, so I tried it for myself and it seems to work they way I want it to but it seems when I update the field that is not being updated in mongodb it still is processed after the transaction. Which I think you said that the document would be updated if the fields didn’t collide but for me it seems it was locked anyways and the non-transactional update was updated later.So the end result for me was “job: ‘interfering update’”. The whole document is locked it seems and mongo processes the non-transactional update after the transaction commit if the document was already going to be updated in a transaction.",
"username": "Allan_Vu"
},
{
"code": "",
"text": "Trying again, I do seem to be getting the non-transaction locked when the trasnsaction has been started and an update sent for the same document. After the transaction completes, the update continues and updates the document as expected.\nIn the case of a different field being updated then both updates show in the final document, in the case of them both updating the same field then the blocked transaction update shows in the final output, having taken place after the lock was released.\nIn the event of both being within a transaction, the second update returns an error when it picks up that you’re trying to update a document that’s locked.So if you’re in an application using transactions, I guess the take-away is…use transactions, keep them short and have robust error handling to pick up these situations and a well defined process flow for how you deal with them!",
"username": "John_Sewell"
}
] | Does mongodb transactions implement optimistic concurrency/locking by default? | 2023-07-13T03:42:26.005Z | Does mongodb transactions implement optimistic concurrency/locking by default? | 815 |
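For completeness, the driver-level shape of "use transactions, keep them short, and handle errors" looks roughly like the following pymongo sketch. It is not from the thread; the URI, database, collection, and field names are illustrative only.

```python
from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # assumed URI
accounts = client["Testing"]["accounts"]

def update_account(session):
    # All writes in the callback run inside the same transaction snapshot.
    accounts.update_one({"_id": 1}, {"$set": {"name": "ses1Update"}}, session=session)
    accounts.update_one({"_id": 1}, {"$set": {"job": "ses1update"}}, session=session)

with client.start_session() as session:
    try:
        # with_transaction retries the callback on TransientTransactionError
        # and retries the commit on UnknownTransactionCommitResult.
        session.with_transaction(update_account)
    except OperationFailure as exc:
        # e.g. a write conflict with another transaction that committed first
        print("transaction aborted:", exc)
```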
null | [] | [
{
"code": "",
"text": "I’m planning to drop unused collections.\nBefore that, I checked the log messages and found two patterns of log messages below.Why do I get this log message when I don’t fully use the collection and what does it mean?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "collStats and getMore, especially for getMore, i’m not sure what automated operations can trigger getMore command.Are you sure really no one is ever using that collection? How frequent do you see these two messages?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi, @Kobe_W\nIt’s aperiodic, but it’s printed almost once a month.\nI confirmed that I don’t use it 100%.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "then likely it’s safe to drop. getMore can be called by like find, which can exist in backup related tools.collStats is for monitoring mostly.",
"username": "Kobe_W"
},
{
"code": "",
"text": "@Kobe_W\nWhat is this monitoring for?\nThere is no monitoring solutions",
"username": "Kim_Hakseon"
}
] | Q. Logs from unused collections | 2023-07-14T00:54:51.252Z | Q. Logs from unused collections | 462 |
null | [
"queries",
"crud"
] | [
{
"code": "> error: \n{\"message\":\"Cannot access member 'db' of undefined\",\"name\":\"TypeError\"}\nexports = async function() {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const salonsCollection = mongodb.db(\"clippersDB\").collection(\"salons\");\n\n try {\n await salonsCollection.updateMany({}, [\n {\n $inc: {\n waitTime: -1,\n \"barbers.$[].workingTime\": -1\n }\n }\n ]);\n\n console.log(\"Update operation completed successfully.\");\n } catch (error) {\n console.error(\"Error occurred during update operation:\", error);\n }\n};\nexports = async function() {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const client = mongodb.db(\"clippersDB\").getClient();\n const collection = client.db(\"clippersDB\").collection(\"salons\");\n\n // Fetch all salons\n const salons = await collection.find({}).toArray();\n\n // Update workingTime of barbers and waitTime of salons\n const updatedSalons = salons.map(salon => {\n const updatedBarbers = salon.barbers.map(barber => {\n const updatedWorkingTime = barber.workingTime > 0 ? barber.workingTime - 1 : 0;\n return { ...barber, workingTime: updatedWorkingTime };\n });\n const updatedWaitTime = salon.waitTime > 0 ? salon.waitTime - 1 : 0;\n return { ...salon, barbers: updatedBarbers, waitTime: updatedWaitTime };\n });\n\n // Update the salons in the database\n await Promise.all(updatedSalons.map(updatedSalon => collection.updateOne({ _id: updatedSalon._id }, updatedSalon)));\n\n console.log(\"WorkingTime and WaitTime updated successfully.\");\n};\n\ndb",
"text": "Hello guys, Please help.\nI am trying to set a trigger in my Atlas, unfortunately, I cannot set the trigger.\nI get this error:Here’s my function code:Although I created an app in Realm and set up a trigger there and it is working with the following code:Then why am I getting errors accessing the db database which exists?",
"username": "Topu_Rayhan"
},
{
"code": " const mongodb = context.services.get(\"mongodb-atlas\");\n",
"text": "The error seems to indicate that this is failing:I.e. the call to the context could not find a database link called “mongodb-atlas”.Are you sure the name is correct? Is the trigger within an Atlas application, if so you can check in the “Linked Data Sources” from the left have navigation area to view what’s configured and their name?\nimage1237×395 17.7 KB\nIn the above case, I’d use my-app-connection as the string to lookup from the line above.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks a lot!\nI thought all services were “mongodb-atlas”. Fixed.",
"username": "Topu_Rayhan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't set Triggers in Atlas | 2023-07-09T07:04:53.689Z | Can’t set Triggers in Atlas | 634 |
null | [
"queries",
"python",
"crud",
"database-tools",
"backup"
] | [
{
"code": "mongodumpmongorestoreSLEEP_TIME = 0.1\n\n# Get the current time\ncurrent_time = time.strftime(\"%Y-%m-%d %H:%M:%S\")\n\n# Start the timer\nstart_time = time.time()\n\n# Iterate over all collections in the source database\nfor coll_name in tqdm(src_db.list_collection_names(), desc=\"Importing collections and creating indexes\"):\n if coll_name == \"...\":\n print(f\"\\nCopying collection {coll_name}\")\n\n # Create a new collection in the destination database with the same name\n dst_coll = dst_db[coll_name]\n for name, index_info in src_db[coll_name].index_information().items():\n keys = index_info[\"key\"]\n if \"ns\" in index_info:\n del index_info[\"ns\"]\n del index_info[\"v\"]\n del index_info[\"key\"]\n dst_coll.create_index(keys, name=name, **index_info)\n\n # Iterate over all documents in the source collection\n coll_size = src_db[coll_name].count_documents({})\n CHUNK_SIZE = 100\n LIMIT = 50000\n cursor = src_db[coll_name].find(\n {\"...\": {\"$ne\": []}},\n batch_size=CHUNK_SIZE,\n limit=LIMIT\n )\n\n def yield_rows(cursor, chunk_size):\n \"\"\"\n Generator to yield chunks from cursor\n :param cursor:\n :param chunk_size:\n :return:\n \"\"\"\n chunk = []\n for i, row in enumerate(cursor):\n if i % chunk_size == 0 and i > 0:\n yield chunk\n del chunk[:]\n chunk.append(row)\n yield chunk\n\n chunks = yield_rows(cursor, CHUNK_SIZE)\n\n for chunk in tqdm(\n chunks, desc=\"Copying JSON documents in batches\", total=round(LIMIT / CHUNK_SIZE)\n ):\n operations = [\n pymongo.UpdateOne({\"_id\": doc[\"...\"]}, {\"$set\": doc}, upsert=True)\n for doc in chunk\n ]\n result = dst_coll.bulk_write(operations)\n\n sleep(SLEEP_TIME)\n\n# Print the last sync time and duration\nprint(f\"\\nLast sync time: {current_time}\")\nprint(\"Sync duration: {:.2f} seconds\".format(time.time() - start_time))\n\n# Close the connection\nsrc_client.close()\ndst_client.close()\n\n",
"text": "Hey all,\nI wanna sync all the collections of two mongoDB Atlas projects (staging and production) in databricks more than once in a day. So it is gonna be the replication of production to staging (updating the existed documents and adding the new entries). I noticed there are some options to do that e.g. mongodump and mongorestore but I noticed these are typically used for one-time backups and restorations, not for ongoing replication scenario.\nI am looking for the fastest and efficient way to do that, cause the database is quite large.\nI appreciate an helps based on that.\nMy code is in the following, which it is taking long time to run and that is not that much efficient:",
"username": "Nazila_Hashemi"
},
{
"code": "",
"text": "Hi @Nazila_Hashemi\nWhat products are you using from databricks?For continuous replication scenario using “MongoDB Spark Connector” in streaming mode could be a good pattern to utilize.How to Seamlessly Use MongoDB Atlas and Databricks Lakehouse TogetherYou could also use the changestreams directly depending on the use case: https://www.mongodb.com/docs/manual/changeStreams/",
"username": "Prakul_Agarwal"
},
{
"code": "# define the source and target (destination)\nmongo_source_uri = dbutils.secrets.get(\n \"keyvault\", \"...\"\n)\n\nmongo_target_uri = dbutils.secrets.get(\n \"keyvault\", \"...\"\n\nsrc_client = MongoClient(mongo_source_uri)\nsrc_db = src_client[\"collection\"]\n\ndst_client = MongoClient(mongo_target_uri)\ndst_db = dst_client[\"collection\"]\n",
"text": "Hi @Prakul_Agarwal,Thank you for your response.\nSorry, I guess I was not that much clear!\nI am using the Databricks notebook just to connect to MongoDB databases, my aim is synchronize two different projects on MongoDB Atlas.\nIn databricks:and the rest code is the same in the first post.\nSo Databricks is kind of a bridge to sync my both projects on MongoDB Atlas.",
"username": "Nazila_Hashemi"
},
{
"code": "",
"text": "I think something similar has been answered here",
"username": "Prakul_Agarwal"
}
] | Synchoronize two different MongoDB projects | 2023-07-10T14:37:38.569Z | Synchoronize two different MongoDB projects | 705 |
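A rough sketch of the change-stream approach mentioned in the reply above, using pymongo. It assumes the src_client/dst_client connections from the thread and placeholder database/collection names, and it omits error handling; the resume-token persistence is only hinted at in a comment.

```python
src_coll = src_client["mydb"]["mycollection"]   # assumed names
dst_coll = dst_client["mydb"]["mycollection"]

# Tail the source collection and replay each change on the target.
# full_document="updateLookup" returns the full post-image for updates.
with src_coll.watch(full_document="updateLookup") as stream:
    for change in stream:
        op = change["operationType"]
        if op in ("insert", "update", "replace"):
            doc = change["fullDocument"]
            dst_coll.replace_one({"_id": doc["_id"]}, doc, upsert=True)
        elif op == "delete":
            dst_coll.delete_one({"_id": change["documentKey"]["_id"]})
        # In production, persist change["_id"] (the resume token) so the
        # stream can be resumed after a restart instead of re-copying everything.
```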
null | [
"java",
"c-driver",
"spark-connector"
] | [
{
"code": "",
"text": "I am using Java mongo-spark-connector to query a collection. The collection has a field that is defined as TImestamp. I can read this collection properly using mongo-java-driver. WHen I try to create a RDD from this collection using Spark. I get an error BsonTimestamp is not serializable.Is there a workaround for this problem?",
"username": "Vinay_Avasthi2"
},
{
"code": "BsonTimestamp",
"text": "BsonTimestamp class in the MongoDB Java driver is not marked as serializable. When Spark tries to serialize objects to distribute them across the cluster, it requires that all objects are serializable.By converting the BsonTimestamp objects to a serializable format before creating the RDD, you can avoid the serialization error and process the collection using Spark.Let us know if this helps",
"username": "Prakul_Agarwal"
},
{
"code": "",
"text": "The exception is thrown before I have access to Document. Is there a way to intercept, or add my own Serializer in pipeline before it is read.",
"username": "Vinay_Avasthi2"
},
{
"code": "import org.apache.spark.SparkConf;\nimport org.apache.spark.api.java.JavaSparkContext;\nimport org.bson.BsonTimestamp;\nimport java.util.Date;\n\npublic class MongoSparkConnectorExample {\n public static void main(String[] args) {\n SparkConf conf = new SparkConf().setAppName(\"MongoSparkConnectorExample\").setMaster(\"local\");\n JavaSparkContext sc = new JavaSparkContext(conf);\n\n // Your existing code to read the collection using mongo-spark-connector\n // ...\n\n // Perform the transformation to convert BsonTimestamp to Date\n JavaRDD<Document> transformedRDD = mongoRDD.map(document -> {\n BsonTimestamp bsonTimestamp = (BsonTimestamp) document.get(\"timestamp\");\n Date timestamp = new Date(bsonTimestamp.getTime() * 1000L);\n document.put(\"timestamp\", timestamp);\n return document;\n });\n\n // Continue working with the transformed RDD\n // ...\n }\n}\n",
"text": "Try something on this line",
"username": "Prakul_Agarwal"
}
] | Timestamp giving error | 2023-06-01T09:28:52.842Z | Timestamp giving error | 972 |
null | [
"connecting",
"atlas"
] | [
{
"code": "from llama_index import download_loader\nimport os\n\nSimpleMongoReader = download_loader('SimpleMongoReader')\n\nhost = \"<host>\"\nport = \"<port>\"\ndb_name = \"<db_name>\"\ncollection_name = \"<collection_name>\"\n# query_dict is passed into db.collection.find()\nquery_dict = {}\nreader = SimpleMongoReader(host, port)\ndocuments = reader.load_data(db_name, collection_name, query_dict=query_dict)\n",
"text": "Hi All,\nI am new to Atlas MongoDB and I have recently created an account with a cluster and some data on it in collections. I am kind of developing a llamaIndex question-retrieval sample app and I plan to query the data directly from MongoDB. Since I am new to MongoDB I am finding some hard time to complete the required info for the host and port. Is there a way to provide some screenshots from an existing account where I could see where to take these values? The llamaIndex mongo loader as follows:",
"username": "Paulo_Chilela"
},
{
"code": "",
"text": "Hi Paulo,I hope you’re doing well.All the connection info you need is available in your database deployment.Click at the “connect” button, then in the next screen you select “Drivers” in “Connect to your application”, next you select “python” in the driver box and you should see how your connection string has to look like.I hope it helps.All the best,–Rodrigo",
"username": "logwriter"
},
{
"code": "",
"text": "Hi Rodrigo,Many thanks for your response. I followed your instructions and accessed. Could you please indicate from the connection string where to find the Host and the Port? I am still blind and unable to grasp it.\n\nScreenshot from 2023-07-12 22-06-22762×652 57.8 KB\n",
"username": "Paulo_Chilela"
},
{
"code": "",
"text": "Do you know if the download_loader support mongodb uri?Have you seen this MongoDB Atlas connector: MongoDB Atlas - LlamaIndex 🦙 0.7.6",
"username": "logwriter"
},
{
"code": "class SimpleMongoReader(BaseReader):\n def __init__(\n self,\n host: Optional[str] = None,\n port: Optional[int] = None,\n uri: Optional[str] = None,\n max_docs: int = 1000,\n ) -> None:\n",
"text": "Hi Paulo,There are 2 options to use mongo loader - 1) Use rhe ‘host’ & ‘port’ (more relevant for self-hosted mongoDB) , or 2) specify the uri field directly (more relevant for atlas)In your case please use the URI as it shows up on AtlasWould love to know more about what you are buildingYou can also use mongoDB for memory (docstore and vector store) - Build a ChatGPT with your Private Data using LlamaIndex and MongoDB | by Jerry Liu | LlamaIndex Blog | May, 2023 | MediumThis is how the SimpleMongoReader is definedhttps://github.com/emptycrown/llama-hub/blob/main/llama_hub/mongo/base.py",
"username": "Prakul_Agarwal"
},
{
"code": "",
"text": "Hello Prakul_Agarwal,Thanks for the reply, very useful here for the time being. Just to provide some background on the work I am kind of building here, it uses under the hood the paper that You and Liu published on medium along with the notebook. Essentially, what I am trying to come around here is the deployment phase on streamlit of the vector_response based on the query/question. I’ve planned a dedicated section where we solely upload each/multiple files directly to MongoDB without any execution, after that, we can route our queries at each specific dedicated MONGODB_DATABASE on the persisted documents stored at each database within the cluster. I think of in that way (I am not familiar with the gears under the speed process of the index) we could direct each query to the specific database and take less effort and more speed response to the client. Two calls here, 1. to upload files related to the nature of each database and 2. route the query to the specific database and force the vector_response to be bounded in indexing only on the docs stored under that specific database (not sure if it is possible though).The problem I am facing is talking with the database directly from MongoDB using host, port, collections, and like. Appreciate if you could indicate the best way to share the line code to get further support from your side whenever possible. I am really stuck here.\n\nScreenshot from 2023-07-13 11-47-401438×681 49.2 KB\n",
"username": "Paulo_Chilela"
},
{
"code": "",
"text": "Good luck on this project! Feel free to reach out to me at prakul (dot) agarwal (at) MongoDB (dot) com",
"username": "Prakul_Agarwal"
}
] | How to find host and port on MongoDB? | 2023-07-12T18:55:52.727Z | How to find host and port on MongoDB? | 1,433 |
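Putting the advice above into code: the loader accepts the Atlas connection string through its uri parameter (per the SimpleMongoReader signature quoted in the thread), so host and port are not needed. A short sketch; the URI and names are placeholders copied from the "Connect" dialog, not real values.

```python
from llama_index import download_loader

SimpleMongoReader = download_loader("SimpleMongoReader")

# Full Atlas SRV connection string as shown under Connect -> Drivers -> Python.
uri = "mongodb+srv://<username>:<password>@cluster0.xxxxx.mongodb.net/?retryWrites=true&w=majority"

reader = SimpleMongoReader(uri=uri)
documents = reader.load_data("<db_name>", "<collection_name>", query_dict={})
```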
null | [
"java",
"spark-connector",
"scala"
] | [
{
"code": "Mongo Atlas - 4.4.22\nSpark - 3.3\nSpark-Mongo Connector - 10.1.1\nDataproc Cluster - Master n2-highmem-16 (1), Worker n2-highmem-96 (5) -> [496 vCPU]\nNote: Also used standard nodes with similar configs\nspark-shell --conf 'spark.executor.extraJavaOptions=--add-exports=jdk.naming.dns/com.sun.jndi.dns=java.naming' --packages org.mongodb.spark:mongo-spark-connector_2.12:10.1.1\nimport org.apache.spark.sql.SparkSession\nimport com.mongodb.spark.sql._\nurl = mongodb+srv://user:pwd@host/?authSource=admin&readPreference=secondary\nval spark = SparkSession.builder().appName(\"Test\")\n .getOrCreate()\nval df = spark.read.format(\"mongodb\")\n .option(\"connection.uri\", uri)\n .option(\"database\", \"db\")\n .option(\"collection\", \"table\")\n .load()\ndf.show(false)\n",
"text": "I have created Spark-Scala application which uses Spark-Mongo Connector MongoDB Atlas hosted in AWS to GCP Dataproc. I have also used Cloud NAT Gateway to establish the connection. Below are the version detailsI was able to read the smaller collections having data of around 10 Million records and around 200GB in size. My challenge is to read a collection having 2.2TB in size and close to 2 Billion records. I have used all the possible Dataproc cluster combinations (max combination 500 vCPU) and am unable to process the data. In any of the cluster configurations, I could not see Spark triggering any job. The Dataproc job kept running for hours without Spark executing any tasks and I had to kill it. I am using the below sample code to process data.What additionally I can do here to process this huge Mongo data? I have also tried applying filters, but could not even get explained plan since it was taking too long (a few hours)",
"username": "Shruthi_Madumbu"
},
{
"code": "SamplePartitioner",
"text": "Hello @Shruthi_Madumbu ,\nIs the mongoDB cluster using any sharding? Have you tried using the “partitioners” in MongoDB spark connector?\nYou can partition your dataset using that and then be able to process the data in parallel across multiple workers in your Dataproc cluster.The Default partitioner is the SamplePartitioner.\nConfiguration information for the various partitioners:Let us know if this helps you get the processing started.",
"username": "Prakul_Agarwal"
},
{
"code": "import org.apache.spark.sql.SparkSession\nimport com.mongodb.spark.sql._\nurl = mongodb+srv://user:pwd@host/?authSource=admin&readPreference=secondary\nval spark = SparkSession.builder().appName(\"Test\")\n .getOrCreate()\nval df = spark.read.format(\"mongodb\")\n .option(\"connection.uri\", uri)\n .option(\"database\", \"db\")\n .option(\"collection\", \"table\") \n.option(\"spark.mongodb.input.partitioner\",\"com.mongodb.spark.sql.connector.read.partitioner.ShardedPartitioner\")\n.option(\"partitioner.options.partition.size\",\"100\")\n .load()\n\ndf.filter(col(\"owner\") === ownerId).filter(col(\"product\") === productId)\n",
"text": "Thanks for your reply Prakul. I did use ShardedPartitioner. I also used few filters to limit the data volume. Now I could atleast see that Spark job is being triggered. But the job is running very slow. It has all the cluster resources. However, the issue is from Mongo side. Every-time I run the Spark job, I see the Disk Utilization becomes 100% and the performance is degraded. Since I am only reading from secondary it is not affecting primary. Is there any better way of handling this issue ?",
"username": "Shruthi_Madumbu"
},
{
"code": "",
"text": "Seems like the reads are expensive and consuming all resources. Can you try ensuring that the filter is covered by an index. You can setup indexes using Atlas UI or cli and can also use hint in your queries to nudge the use of those indexes. If using the sharded partitioner then the shard key should also be part of the index.",
"username": "Prakul_Agarwal"
}
] | Unable to run GCP Dataproc job to read MongoDB Atlas on AWS | 2023-06-29T21:29:58.825Z | Unable to run GCP Dataproc job to read MongoDB Atlas on AWS | 735 |
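A small pymongo sketch of the indexing suggestion above. The field names owner and product come from the filter shown earlier in the thread; the index name, URI, and the shard-key remark are assumptions, not details from the original cluster.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb+srv://user:pwd@host/?authSource=admin")  # assumed URI
coll = client["db"]["table"]

# Compound index covering the filter pushed down by Spark; if the collection is
# sharded, an index prefixed with the shard key also helps the ShardedPartitioner.
coll.create_index(
    [("owner", ASCENDING), ("product", ASCENDING)],
    name="owner_product_idx",
)

# Verify the query plan uses the index rather than a collection scan.
plan = coll.find({"owner": "<ownerId>", "product": "<productId>"}).explain()
print(plan.get("queryPlanner", plan))  # output shape differs on sharded clusters
```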
null | [
"app-services-user-auth"
] | [
{
"code": "item.owner_id = app.currentUser.id",
"text": "When creating a traditional server, I set the user_id/owner_id field of a document on the server by reading the cookies of the (authenticated) POST request. This prevents clients from setting an arbitrary user_id.Can Realm Sync automatically set user_id?My goal is to let users use the app offline without logging in. (I would rather not use anonymous login to not collect data (privacy) and in case they are inactive for 90 days, after which the data will be deleted (maybe it’s just deleted in Atlas but not locally?)) They can decide later to login if they need to sync or share data.Since I wouldn’t be able to do item.owner_id = app.currentUser.id client-side if the user is not logged in, I am hoping Realm Sync can automatically add this field when syncing data.",
"username": "BPDev"
},
{
"code": "",
"text": "Hi, we do not automatically add in that data since not everyone would want to do that. We have discussed the idea of some sort of “computed field” that allows a function to be run when we receive an upload but we have yet to hear that the need for this is worth the complexity.We have people who do things similar to this where their app has a free tier that is only local realm and a paid tier that involves syncing data and they do this by just manually copying the objects and inserting them into a new synced realm. The easiest thing might be to implement this sort of logic where during this migration your code appends the user_id field.Let me know if the above works or if you have any other ideas. I would be happy to help think of solutions to your issue.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Should I be concerned about validating user_id? For example, if there is an API endpoint for the web version (not sure if that’s the case), someone could send POST requests with an injected user_id pretending to be someone else. However, guessing a valid user_id would be difficult (?).",
"username": "BPDev"
},
{
"code": " \"document_filters\": {\n \"read\": { \"owner_id\": \"%%user.id\" },\n \"write\": { \"owner_id\": \"%%user.id\" }\n }\n",
"text": "When you use permissions in Device Sync you can set them up with something like:This means that you can upload documents with the owner_id set to your user_id and we will verify that the value provided is indeed the user.id of the connected client. So you could figure out someone’s user.id and send documents with owner_id equal to that value, but our permission system would reject that since you are not logged in and authenticated as that %%user.id. Spoofing that is not possible.Let me know if that makes sense?\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "@ObservedRealmObjectrealm?.syncSession",
"text": "Makes sense! Thank you. Does the local version get deleted when server-side validation fails? (Or is there a way to tell that synchronization failed? I notice that @ObservedRealmObject has a realm?.syncSession field)",
"username": "BPDev"
},
{
"code": "",
"text": "In this case a compensating write is sent. I am not sure which SDK you are using but here are the swift docs on it: https://www.mongodb.com/docs/realm/sdk/swift/sync/write-to-synced-realm/#compensating-writes",
"username": "Tyler_Kaye"
},
{
"code": "userIdclass Item: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var userId: String\n @Persisted var name: String\n \n convenience init(name: String, userId: String? = nil) {\n self.init()\n self.name = name\n if let userId = userId {\n self.userId = userId\n }\n }\n}\nstruct ContentView: View {\n \n @ObservedObject var app = Realm.app\n \n var body: some View {\n if let config = app.createFlexibleConfiguration() {\n MainScreenSync()\n .environment(\\.realmConfiguration, config)\n .environmentObject(app)\n } else {\n MainScreen()\n .environmentObject(app)\n }\n }\n}\nstruct MainScreenSync: View{\n @EnvironmentObject var app: RealmSwift.App\n // @Environment(\\.realm) var syncedRealm\n @ObservedResults(Item.self) var syncedItems\n \n var body: some View {\n VStack {\n MainScreen()\n Text(app.currentUser?.description ?? \"not logged in\")\n }\n .onAppear {\n if let localRealm = try? Realm(), let user = app.currentUser {\n let localItems = localRealm.objects(Item.self)\n for item in localItems {\n // local -> synced\n let syncedItem = Item(value: item)\n syncedItem.userId = user.id\n $syncedItems.append(syncedItem)\n // delete local\n try? localRealm.write {\n localRealm.delete(item)\n }\n }\n }\n }\n }\n}\n",
"text": "My approach is to let userId be an empty string if there are no users.I change the configuration environment depending on whether there is a logged in user.Then, I rely on the SwiftUI syntax to easily access the synced realm.Full code",
"username": "BPDev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Setting the user_id/owner_id field of a document server-side | 2023-07-11T22:56:32.531Z | Setting the user_id/owner_id field of a document server-side | 700 |
null | [
"atlas-cluster",
"flutter"
] | [
{
"code": "",
"text": "I’m getting the following error when trying to connect to mongodb database in a flutter app:ConnectionException (MongoDB ConnectionException: Could not connect to xxxxxx.mongodb.net:27017There is no problem when I try to connect while I am using Wi-Fi, but when I connect to 4G this message appears.I’m out of ideas and trying several Stackoverflow solutions have taken me back to square one.",
"username": "jesus"
},
{
"code": "",
"text": "most likely, your URI is wrong. xxxxx.mongodb.net is a cluster address and you are trying to connect with mongodb:// rather than mongodb+srv://.less likely your 4G provider uses deprecated DNS software that cannot resolve SRV records, override the DNS config to use 8.8.8.8",
"username": "steevej"
},
{
"code": "",
"text": "my ISP force us to use their DNS, even if I use google DNS it doesn’t work,I’m using mongodb:// to connect because I can’t use mongodb+srv:// Except when I use a vpn the mongodb+srv:// works.The problem is that even mongodb:// does not work when I connect to 4G.Did you get my point?\nmongodb+srv:// I can use it if I’m using my home internet, it doesn’t work if I use 4GThe mongodb:// works when connected to the home internet only, but when using the 4g network, it does not work.it is not possible to connect to the database via 4G at all.",
"username": "jesus"
},
{
"code": "",
"text": "the full message is:",
"username": "jesus"
},
{
"code": "",
"text": "The country I live in has VERY restrictive internet policies.Most of the ports are closed and they use DPI to fully control the Internet.\nI’m in prison not a country.Is there a way to bypass it other than using a vpn?",
"username": "jesus"
},
{
"code": "",
"text": "If SRV records are not working you may try the long URI as supply by the Atlas UI for your cluster.",
"username": "steevej"
},
{
"code": "",
"text": "it works only when I use Wi-Fi but not 4G",
"username": "jesus"
}
] | OS Error: No address associated with hostname | 2023-07-08T12:50:15.243Z | OS Error: No address associated with hostname | 2,017 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 6.0.8 is out and is ready for production deployment. This release contains only fixes since 6.0.7, and is a recommended upgrade for all 6.0 users.Fixed in this release:6.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Maria_Prinus"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 6.0.8 is released | 2023-07-13T20:57:23.426Z | MongoDB 6.0.8 is released | 1,064 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 5.0.19 is out and is ready for production deployment. This release contains only fixes since 5.0.18, and is a recommended upgrade for all 5.0 users.Fixed in this release:5.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Maria_Prinus"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.19 is released | 2023-07-13T20:44:58.548Z | MongoDB 5.0.19 is released | 854 |
null | [
"queries",
"data-modeling"
] | [
{
"code": " {\"t\":{\"$date\":\"2023-03-15T16:02:14.635+05:30\"},\"s\":\"I\", \"c\":\"WRITE\", \"id\":51803, \"ctx\":\"conn5559\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"update\",\"ns\":\"db_name.XXXXXXX\",\"command\":{\"q\":{\"uid\":308793847},\"u\":{\"$push\":{\"ps\":{\"$each\":[{\"mid\":5109,\"aid\":1412,\"trid\":\"89461-5109-308793847-1412-230315072428\",\"guid\":\"3919037b-18ce-4973-ad00-1891bc7365e3\",\"st\":\"sent\",\"dt\":230315072428,\"adw\":4,\"ad\":230315,\"at\":72428}],\"$slice\":-5000}}},\"multi\":false,\"upsert\":false},\"planSummary\":\"IXSCAN { uid: 1 }\",\"keysExamined\":1,\"docsExamined\":1,\"nMatched\":1,\"nModified\":1,\"nUpserted\":0,\"keysInserted\":1,\"keysDeleted\":0,\"numYields\":1,\"queryHash\":\"B34121E2\",\"planCacheKey\":\"CFF4BBD8\",\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":289}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":290}},\"Global\":{\"acquireCount\":{\"w\":289}},\"Database\":{\"acquireCount\":{\"w\":289}},\"Collection\":{\"acquireCount\":{\"w\":289}},\"Mutex\":{\"acquireCount\":{\"r\":670}}},\"flowControl\":{\"acquireCount\":155,\"timeAcquiringMicros\":131},\"storage\":{\"data\":{**\"bytesRead\":64292681**,\"timeReadingMicros\":337574}},\"remote\":\"172.31.22.28:60864\",\"durationMillis\":103}}\n",
"text": "I got this below the log in the Mongodb slow query log. I have the Mongodb version running 5.0 with 3 shards in PSS mode.The query is running on the primary key and the total document size itself is only 4.4 Mb, But in the slow query log it’s showing 64MB data transferred, how can one document do so for my data transfer?can someone help me with this?",
"username": "Kathiresh_Nadar"
},
{
"code": "bytesRead0bytesReadbytesRead",
"text": "Hey @Kathiresh_Nadar,Welcome to the MongoDB Community forums Apology for the late reply.storage”:{“data”:{“bytesRead”:64292681The bytesRead is the number of bytes read by the operation from the disk to the cache. However, if the data is already in the cache, then the number of bytes read from disk could be 0.The query is running on the primary key and the total document size itself is only 4.4 Mb, But in the slow query log it’s showing 64MB of data transferred, how can one document do so for my data transfer?The bytesRead value may include more than just the queried documents since WiredTiger reads in units of pages, which can contain multiple documents. All documents on that page are read into the cache and included in the bytesRead value.Furthermore, if the index is not in the cache or is stale, WiredTiger reads several internal and leaf pages from the disk to reconstruct the index in the cache.Please refer to the Database Profiler Output - storage.data.bytesRead to read more about this.I hope it addresses your question. Let us know if you have any further questions.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi @Kushagra_Kesav ,Thanks for responding. I just did not looked back as it took too much time to respond. I thought i might not get any response.Coming to my point, we are seeing that the writes are taking a lot of time and we are writing to the primary key only so no more indexing can solve the problem. But i see that memory can be a problem as you mentioned that it brings lots of other records also to cache.We have 256Gb Ram on the server and if each document updates bring 200mb of data, then we will have ot take memory in TB’s.So how do i solve the problem, is there any mongodb fine tuning that can be done.Thanks\nKathiresh",
"username": "Kathiresh_Nadar"
},
{
"code": "",
"text": "problemwhat is the problem here? what issues do you see from the server metrics?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi @Kobe_W ,The writes are taking too much time, like for eg, even for an upsert into the documents takes 10 to 20 seconds. And we are doing the upsert in the primary key, so it should be fast i think.So how can we bring down the updates to less than 1 sec.Regards",
"username": "Kathiresh_Nadar"
},
{
"code": "",
"text": "it’s hard to say without more info.Anything in the end to end flow can slow down the whole path. resources like cpu/disk/mem, network conditions, connection pooling, etc all those can potentially be the cause of slow write.i would suggest you take a look at all available dashboards and try to narrow down the scope of the problem. e.g. it’s on client side or server side or network issue ? that’s why you need logging and all those metrics.",
"username": "Kobe_W"
}
] | Mongodb Logs what does bytesRead mean in slow query log | 2023-03-15T13:34:41.667Z | Mongodb Logs what does bytesRead mean in slow query log | 1,506 |
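To dig into where a single slow update spends its time, one option is to confirm the plan and watch WiredTiger cache pressure while the workload runs. A hedged pymongo sketch: the namespace and uid value come from the log line quoted above, everything else (URI, how the output is printed) is illustrative, and the explain/serverStatus output shape can differ on a sharded cluster.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI
db = client["db_name"]

# 1. Confirm the filter used by the update is a single-key IXSCAN,
#    as the slow query log already suggests ("planSummary":"IXSCAN { uid: 1 }").
plan = db["XXXXXXX"].find({"uid": 308793847}).explain()
print(plan.get("queryPlanner", plan))

# 2. Watch WiredTiger cache pressure; large documents pulled in page by page
#    show up as a high "pages read into cache" rate and as bytesRead in the slow log.
cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]
for key in ("maximum bytes configured",
            "bytes currently in the cache",
            "pages read into cache"):
    print(key, cache.get(key))
```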
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 5.0.19-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.18. The next stable release 5.0.19 will be a recommended upgrade for all 5.0 users.Fixed in this release:SERVER-71985 Automatically retry time series insert on DuplicateKey errorSERVER-74551 WriteConflictException unnecessarily logged as warning during findAndModify after upgrade to mongo 5.0SERVER-77018 Deadlock between dbStats and 2 index buildsSERVER-78126 For specific kinds of input, mongo::Value() always hashes to the same result on big-endian platformsWT-10253 Run session dhandle sweep and session cursor sweep more often5.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Maria_Prinus"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.19-rc0 is released | 2023-07-05T16:31:21.321Z | MongoDB 5.0.19-rc0 is released | 753 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.4.23 is out and is ready for production deployment. This release contains only fixes since 4.4.22, and is a recommended upgrade for all 4.4 users.Fixed in this release:4.4 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.23 is released | 2023-07-13T20:11:37.869Z | MongoDB 4.4.23 is released | 806 |
null | [] | [
{
"code": "",
"text": "Hi all,\nsuper new to Mongo and also to Databases.\nJust a quick question: Can I use my DataBase directly to answer api queries? Maybe just clicking something int the console?\nThanks\nParalosva",
"username": "Paralosva_Rosos"
},
{
"code": "",
"text": "You can use Atlas for minimal code apisFailing that youll need to get hands dirty with some code.Its not that clear what you actually want to do…",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thank you for answering JohnWell, I only need people to connect and READ the data. Same daily data, simple query, just some thousand people.Maybe it’s enough?\nBecase if I use code, I must place it somewhere…\nTks",
"username": "Paralosva_Rosos"
}
] | API question: Can I just set it in the console? | 2023-07-13T18:18:06.490Z | API question: Can I just set it in the console? | 175 |
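For a read-only, "no server code" setup, one low-code option at the time of this thread was the Atlas Data API, which can be enabled per project and then queried over HTTPS with an API key. The sketch below is an assumption-heavy illustration: the URL, app ID, key, cluster, database, and collection names are placeholders, and the exact endpoint should be copied from the Data API page in the Atlas UI.

```python
import requests

# Placeholders -- copy the real values from the Data API page in Atlas.
URL = "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/find"
API_KEY = "<data-api-key>"

resp = requests.post(
    URL,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "dataSource": "Cluster0",   # cluster name in Atlas (assumed)
        "database": "mydb",
        "collection": "daily_data",
        "filter": {},               # same daily data, simple query
        "limit": 100,
    },
)
print(resp.json())
```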
null | [
"connecting",
"configuration"
] | [
{
"code": "",
"text": "Hello TeamI configure mongodb enterprise edition in amazon ec2 instance\nand also configured ssl certificates and security authorization and after configuring ssl certificates\ni am not able to login to the mongo shell\ngetting below error:\nMongoServerSelectionError: unable to verify the first certificatecan anyone help me on thisThank you",
"username": "vamsi_Krishna4"
},
{
"code": "",
"text": "We’ve got this problem too. Did you get a solution? The mongodb we’re trying to connect to is also in ec2.",
"username": "Richard_Gaunt"
},
{
"code": "",
"text": "Were you able to resolve this?",
"username": "galenspikes"
}
] | MongoServerSelectionError: unable to verify the first certificate | 2022-01-20T16:25:45.627Z | MongoServerSelectionError: unable to verify the first certificate | 4,500 |
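The usual cause of "unable to verify the first certificate" is that the client cannot build the full certificate chain for the server (for example, a self-signed or internal-CA certificate on the EC2 host). A hedged sketch of pointing a driver at the CA file, written with pymongo even though the original posts do not say which driver or shell they used; the URI and paths are placeholders.

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:pwd@ec2-host.example.com:27017/?authSource=admin",
    tls=True,
    tlsCAFile="/path/to/ca.pem",          # CA that signed the server certificate
    # tlsAllowInvalidCertificates=True,   # last-resort workaround for testing only
)
print(client.admin.command("ping"))
```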
null | [] | [
{
"code": "",
"text": "Hi everyone.I want to create a centralized login for my apps.I’m using the MongoDB Atlas free-tier and Reactjs.I have got various applications and I want to create the same login system for all the apps.\nI am thinking in creating a separate cluster that will have configured the email & password provider. The problem is that if I create a custom authentication function I already need another authentication method. No?Can you inform me about how to do that? Which are the best practises?Thanks in advance,\nJuanjo Asensio García",
"username": "Juan_Jose_Asensio_Garcia"
},
{
"code": "",
"text": "The problem is that if I create a custom authentication function I already need another authentication method.i don’t understand what you mean by this.Maybe explain your high level use case and/or work flow first so everyone can understand it better.",
"username": "Kobe_W"
},
{
"code": "failed to execute source for 'node_modules/realm-web/dist/bundle.cjs.js': TypeError: Cannot access member 'name' of undefined\n\tat node_modules/realm-web/dist/bundle.cjs.js:4260:29(737)\n\tat require (native)\n\tat function.js:15:3(12)\n\tat <eval>:3:8(7)\n\tat <eval>:2:15(7)\n\tat native\n",
"text": "The problem is that I don’t know how to write a custom authentication function properly.\nI first thought in creating a custom authorization function that:But I get the following error:Do you know how to fix it?",
"username": "Juan_Jose_Asensio_Garcia"
},
{
"code": "realm-webprocess",
"text": "Hi @Juan_Jose_Asensio_Garcia,The app services function engine has some limitations on the dependencies it supports, and it appears that realm-web is using an unsupported process call so you cannot use it as a dependency for now. In the meantime, I would recommend using the client API to authenticate the user using this endpoint.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Thank you very much! I have fixed it.",
"username": "Juan_Jose_Asensio_Garcia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Centralized login for various apps | 2023-07-11T22:28:47.318Z | Centralized login for various apps | 289 |
null | [
"aggregation",
"java",
"kafka-connector"
] | [
{
"code": "java.lang.StringJsonToken.START_OBJECT",
"text": "i am facing some error on source connector config . my config is to collect data from multiple databases and collections from same mongodb host and publish to same topic .below is config i am using , but getting error< {\n“name” : “mongo-source”,\n“config” : {\n“batch.size” : “1000”,\n“connection.uri” : “mongodb://: @*********************:1025/?ssl”,\n“connector.class” : “com.mongodb.kafka.connect.MongoSourceConnector”,\n“key.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“pipeline”: “[ {“$match”: {$or: [ {“ns.db”: “uat_move5app”, “ns.coll”: “AccessToken”}, {“ns.db”: “uat_move5app”, “ns.coll”: “Account”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “Achievement”}, {“ns.db”: “uat_move5health”, “ns.coll”:“AppleRing”}, {“ns.db”: “uat_move5app”, “ns.coll”: “Application”}, {“ns.db”: “uat_move5app”, “ns.coll”: “AuditLog”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “Badge”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “Challenge”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “Code”}, {“ns.db”: “uat_move5app”, “ns.coll”: “Country”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “Goal”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “GoalReward”}, {“ns.db”: “uat_move5tracker”, “ns.coll”: “HealthNotification”}, {“ns.db”: “uat_move5health”, “ns.coll”: “HealthSummary”}, {“ns.db”: “uat_move5tracker”, “ns.coll”: “HealthTracker”}, {“ns.db”: “uat_move5app”, “ns.coll”: “Installation”}, {“ns.db”: “uat_move5cas”, “ns.coll”: “HPMember”}, {“ns.db”: “uat_move5cas”, “ns.coll”: “MoveKey”}, {“ns.db”: “uat_move5app”, “ns.coll”: “Muser”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “Participation”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “Program”}, {“ns.db”: “uat_move5notification”, “ns.coll”: “PushNotification”}, {“ns.db”: “uat_move5notification”, “ns.coll”: “PushResponse”}, {“ns.db”: “uat_move5notification”, “ns.coll”: “PushSubscription”}, {“ns.db”: “uat_move5queue”, “ns.coll”: “QueueError”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “Reward”}, {“ns.db”: “uat_move5app”, “ns.coll”: “RoleMapping”}, {“ns.db”: “uat_move5app”, “ns.coll”: “Role”}, {“ns.db”: “uat_move5queue”,“ns.coll”: “Task”}, {“ns.db”: “uat_move5queue”, “ns.coll”: “TaskConfig”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “UserBadge”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “UserCode”}, {“ns.db”: “uat_move5challenge”, “ns.coll”:“UserGoal”}, {“ns.db”: “uat_move5challenge”, “ns.coll”: “UserReward”}, {“ns.db”: “uat_move5app”, “ns.coll”: “UserState”}, {“ns.db”: “uat_move5message”, “ns.coll”: “DestinationMapping”}, {“ns.db”: “uat_move5message”, “ns.coll”: “FollowUpMapping”}, {“ns.db”: “uat_move5health-score”, “ns.coll”: “HealthProfile”}, {“ns.db”: “uat_move5health-score”, “ns.coll”: “HealthScore”}, {“ns.db”: “uat_move5health-score”, “ns.coll”: “HealthScoreDelta”}, {“ns.db”: “uat_move5health-score”, “ns.coll”: “ProviderAccount”}, {“ns.db”: “uat_move5health-score”, “ns.coll”: “SurveyQuestion”}, {“ns.db”: “uat_move5message”, “ns.coll”: “SystemMessage”}, {“ns.db”: “uat_move5message”, “ns.coll”: “UserMessage”},{“ns.db”: “uat_move5health-score”, “ns.coll”: “UserSurvey”}, {“ns.db”: “perf_move5edl”, “ns.coll”: “HealthSummary”}, {“ns.db”: “perf_move5edl”, “ns.coll”: “AppleRing”}, {“ns.db”: “perf_move5edl”, “ns.coll”: “UserReward”}, {“ns.db”: “perf_move5edl”, “ns.coll”: “UserGoal”}, {“ns.db”: “perf_move5edl”, “ns.coll”: “UserState”}, {“ns.db”: “perf_move5edl”, “ns.coll”: “Participation”}, {“ns.db”: “perf_move5edl”, “ns.coll”: “HealthScore”}, {“ns.db”: “perf_move5edl”, 
“ns.coll”: “Muser”}, {“ns.db”: “perf_move5edl”, “ns.coll”: “Account”} ] } } ]”,\n“topic.prefix”: “SG_uat_move5app.Installation”\n}\n}Error:\ncurl -X PUT -H “Content-Type: application/json” --data @./test.json http://localhost:8083/connectors/MongoSourceConnectorV1/config\n{“error_code”:500,“message”:“Cannot deserialize value of type java.lang.String from Object value (token JsonToken.START_OBJECT)\\n at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 1, column: 53] (through reference chain: java.util.LinkedHashMap[“config”])”}",
"username": "vasireddy_prasanth"
},
{
"code": "",
"text": "I faced a similar error here:",
"username": "Matteo_Tarantino"
},
{
"code": "",
"text": "Hi @vasireddy_prasanth,Seems like Kafka is throwing an error when trying to read the configuraiton.Please escape quotes in the pipeline config and that should help.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Hi Ross, Thanks for info\nmay i know which quotes you want me to escape",
"username": "vasireddy_prasanth"
},
{
"code": "“pipeline”: \"[ {\\\"$match\\\": {$or: [ {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"AccessToken\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"Account\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"Achievement\\\"}, {\\\"ns.db\\\": \\\"uat_move5health\\\", \\\"ns.coll\\\":\\\"AppleRing\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"Application\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"AuditLog\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"Badge\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"Challenge\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"Code\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"Country\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"Goal\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"GoalReward\\\"}, {\\\"ns.db\\\": \\\"uat_move5tracker\\\", \\\"ns.coll\\\": \\\"HealthNotification\\\"}, {\\\"ns.db\\\": \\\"uat_move5health\\\", \\\"ns.coll\\\": \\\"HealthSummary\\\"}, {\\\"ns.db\\\": \\\"uat_move5tracker\\\", \\\"ns.coll\\\": \\\"HealthTracker\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"Installation\\\"}, {\\\"ns.db\\\": \\\"uat_move5cas\\\", \\\"ns.coll\\\": \\\"HPMember\\\"}, {\\\"ns.db\\\": \\\"uat_move5cas\\\", \\\"ns.coll\\\": \\\"MoveKey\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"Muser\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"Participation\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"Program\\\"}, {\\\"ns.db\\\": \\\"uat_move5notification\\\", \\\"ns.coll\\\": \\\"PushNotification\\\"}, {\\\"ns.db\\\": \\\"uat_move5notification\\\", \\\"ns.coll\\\": \\\"PushResponse\\\"}, {\\\"ns.db\\\": \\\"uat_move5notification\\\", \\\"ns.coll\\\": \\\"PushSubscription\\\"}, {\\\"ns.db\\\": \\\"uat_move5queue\\\", \\\"ns.coll\\\": \\\"QueueError\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"Reward\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"RoleMapping\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"Role\\\"}, {\\\"ns.db\\\": \\\"uat_move5queue\\\",\\\"ns.coll\\\": \\\"Task\\\"}, {\\\"ns.db\\\": \\\"uat_move5queue\\\", \\\"ns.coll\\\": \\\"TaskConfig\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"UserBadge\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"UserCode\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\":\\\"UserGoal\\\"}, {\\\"ns.db\\\": \\\"uat_move5challenge\\\", \\\"ns.coll\\\": \\\"UserReward\\\"}, {\\\"ns.db\\\": \\\"uat_move5app\\\", \\\"ns.coll\\\": \\\"UserState\\\"}, {\\\"ns.db\\\": \\\"uat_move5message\\\", \\\"ns.coll\\\": \\\"DestinationMapping\\\"}, {\\\"ns.db\\\": \\\"uat_move5message\\\", \\\"ns.coll\\\": \\\"FollowUpMapping\\\"}, {\\\"ns.db\\\": \\\"uat_move5health-score\\\", \\\"ns.coll\\\": \\\"HealthProfile\\\"}, {\\\"ns.db\\\": \\\"uat_move5health-score\\\", \\\"ns.coll\\\": \\\"HealthScore\\\"}, {\\\"ns.db\\\": \\\"uat_move5health-score\\\", \\\"ns.coll\\\": \\\"HealthScoreDelta\\\"}, {\\\"ns.db\\\": \\\"uat_move5health-score\\\", \\\"ns.coll\\\": \\\"ProviderAccount\\\"}, {\\\"ns.db\\\": \\\"uat_move5health-score\\\", \\\"ns.coll\\\": \\\"SurveyQuestion\\\"}, {\\\"ns.db\\\": \\\"uat_move5message\\\", \\\"ns.coll\\\": \\\"SystemMessage\\\"}, {\\\"ns.db\\\": \\\"uat_move5message\\\", 
\\\"ns.coll\\\": \\\"UserMessage\\\"},{\\\"ns.db\\\": \\\"uat_move5health-score\\\", \\\"ns.coll\\\": \\\"UserSurvey\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"HealthSummary\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"AppleRing\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"UserReward\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"UserGoal\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"UserState\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"Participation\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"HealthScore\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"Muser\\\"}, {\\\"ns.db\\\": \\\"perf_move5edl\\\", \\\"ns.coll\\\": \\\"Account\\\"} ] } } ]\"\n\n",
"text": "Hi @vasireddy_prasanth,Its the pipeline configuration:",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Thanks Ross let me check",
"username": "vasireddy_prasanth"
}
] | Mongodb connector to Kafka | 2023-07-12T04:14:59.706Z | Mongodb connector to Kafka | 730 |
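A quick way to avoid the quote-escaping problem discussed in the thread above is to build the pipeline as a JavaScript array and let JSON.stringify produce the escaped string value that the connector expects. This is only a minimal sketch: the connection details are hypothetical and only two of the namespaces from the original pipeline are shown.

```javascript
// Build the connector config programmatically so the "pipeline" value is a
// correctly escaped JSON string rather than hand-written quotes.
const pipeline = [
  {
    $match: {
      $or: [
        { "ns.db": "uat_move5app", "ns.coll": "AccessToken" },
        { "ns.db": "uat_move5app", "ns.coll": "Account" },
      ],
    },
  },
];

const connectorConfig = {
  name: "mongo-source",
  config: {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://user:password@host:1025/?ssl=true", // hypothetical
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    pipeline: JSON.stringify(pipeline), // inner quotes are escaped here
    "topic.prefix": "SG_uat_move5app.Installation",
  },
};

// Write this to test.json and PUT it to the Connect REST API as in the thread.
console.log(JSON.stringify(connectorConfig, null, 2));
```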
[
"node-js"
] | [
{
"code": "",
"text": "This is the question that I was answering and got wrong:\nimage1178×509 12.3 KB\nLooking at the documentation for the MongoClient class:\nhttps://mongodb.github.io/node-mongodb-native/api-generated/mongoclient.htmlThe methods are available:The question making says that open() is not valid, am I missing something?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Hello @John_Sewell,Looking at the documentation for the MongoClient class:\nhttps://mongodb.github.io/node-mongodb-native/api-generated/mongoclient.html This link shows the older version of the documentation,The current version is 5.7, refer to main page and click on the latest version of the API,\nDirect link to that class,Documentation for mongodb",
"username": "turivishal"
},
{
"code": "",
"text": "Awesome, thanks for that, I managed to find a truly ancient version of the documentation!",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Question on Associate Developer Node.js Practice Exam | 2023-07-13T14:07:25.334Z | Question on Associate Developer Node.js Practice Exam | 638 |
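For reference, a minimal sketch of the current Node.js driver API that the answer above points to; the connection string and database name are placeholders, and the point is simply that connect, db and close are the MongoClient methods available today.

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017"); // assumed local deployment
  await client.connect();                  // valid MongoClient method
  const db = client.db("test");            // valid: returns a Db handle
  console.log(await db.command({ ping: 1 }));
  await client.close();                    // valid MongoClient method
}

main().catch(console.error);
```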
|
null | [
"mongodb-shell",
"security"
] | [
{
"code": "",
"text": "After enabling TLS/SSL i am able to connect to mongo shell remotely but unable to connect from inside the VM neither my microservices are able to connect. Can anyone please help?\nI am really stucked on this from very long time.Service Error: MongoNetworkError: unable to verify the first certificate\nMongo shell error: connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: unable to verify the first certificate",
"username": "Mohit_Kumar2"
},
{
"code": "",
"text": "We are getting the same issue",
"username": "Lisa_Caradonna"
}
] | SSL peer certificate validation failed: unable to verify the first certificate | 2021-04-14T19:20:26.866Z | SSL peer certificate validation failed: unable to verify the first certificate | 6,567 |
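The "unable to verify the first certificate" error usually means the client cannot build the full certificate chain, for example because an intermediate CA certificate is missing. Below is a minimal sketch with the Node.js driver; the hostname and file path are assumptions, and the same CA file can be passed to mongosh with --tlsCAFile.

```javascript
const { MongoClient } = require("mongodb");

// The CA file should contain the root CA plus any intermediate certificates
// that signed the server certificate.
const client = new MongoClient("mongodb://db.internal.example.com:27017/?tls=true", {
  tlsCAFile: "/etc/ssl/mongodb/ca-chain.pem",
});

async function main() {
  await client.connect();
  console.log(await client.db("admin").command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);
```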
null | [] | [
{
"code": "{\"log\": \"time=\\\"Jul 13 18:10:09\\\" level=fatal msg=\\\"Could not start storage\\\" error=\\\"connection() error occurred during connection handshake: auth error: unable to authenticate using mechanism \\\\\\\"SCRAM-SHA-256\\\\\\\": (AuthenticationFailed) Authentication failed.\\\"\\n\",\"stream\":\"stderr\",\"time\":\"2023-07-13T08:10:09.472843706Z\"}\nusing mongo_url \n \"mongo_url\": \"mongodb://DbOwner:E%5B%3Cj%5E%[email protected]:27017,svssp2001002ps.nbndc.local:27017,svssp2001003ps.nbndc.local:27017/tyk_analytics?replicaSet=mongo-replica\",\n \"mongo_driver\": \"mongo-go\",\n",
"text": "Hi I m trying to connect to MongoDB v 5.0.18 in the password when I use percentage encoding it is giving me an error asbut when I do not use percentage encoding it is working perfectly\nI m using the dashboard version 5.0.2 please help it is a bit urgent",
"username": "Anshul_Chopra"
},
{
"code": "import \"net/url\"\n...\npassword, err := url.QueryUnescape(\"p@sword%20\")\n if err != nil {\n password = \"p@sword%20\"\n }\n",
"text": "Hi @Anshul_Chopra,Welcome to the MongoDB Community!Based on the shared details, it appears that your password contains a special character that requires URL encoding.So, you have another option. You can use a code snippet in Golang to perform the encoding within your code.Sharing example code snippet for reference:Hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Mongodb error in v5.0.18 Authentication failed | 2023-07-13T14:03:45.115Z | Mongodb error in v5.0.18 Authentication failed | 524 |
null | [
"python",
"production"
] | [
{
"code": "ConfigurationError: Invalid SRV host",
"text": "We are pleased to announce the 4.4.1 release of PyMongo - MongoDB’s Python Driver. This release fixes the following bugs:See the changelog for a high-level summary of what is in this release or see the PyMongo 4.4.1 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongo 4.4.1 documentation \nSource: GitHub - mongodb/mongo-python-driver at 4.4.1 Thank you to everyone who contributed to this release!",
"username": "Shane"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | PyMongo 4.4.1 Released | 2023-07-13T14:55:59.818Z | PyMongo 4.4.1 Released | 1,082 |
null | [
"replication"
] | [
{
"code": "",
"text": "i want to run a singleNode replicaset in my development environment.\nShould i set voting count as 3 for my replica in this case or voting count should be default which is 1?",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "Hi @Divine_Cutler, the values for votes, if I’m not mistaken are 0 and 1.In the case of a single node replica set, since there is only a single node the vote count would need to be set at 1. Changing the vote count, even if it could be greater than 1, wouldn’t have any impact as it’s the only node that can vote. If the machine is running it’s automatically the primary node. If it’s down, well then it doesn’t matter how many votes it gets. While a single node replica set might be fine for development purposes, it is strongly cautioned against putting that into production.What are you trying to accomplish with having a single node replica set?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "@Doug_Duncan for testing changeEvents in mongodb",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "Hi @Divine_Cutler, you can follow this documentation to convert your stand alone to a single node replica set.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "If you going to create a repl set to test think you need atlest two members to witness replication and voting would only matter if there is more than one member.Else single node I dont think would be much different from a single mongod instance.",
"username": "Kirk-PatrickBrown"
},
{
"code": "",
"text": "@Kirk-PatrickBrown For what @Divine_Cutler is doing(change streams) using a replicaset of one is perfectly fine.But for a production system where you want some kind of HA, yes you need three nodes. PSA or PSS.",
"username": "chris"
},
{
"code": "",
"text": "@chris @Doug_Duncan yes, but i do have a doubt when comparing a singleNodeReplicaset with a 3node-replicaset.When i run a 3node replicaset node1,node2,node3 and if node1 goes down, then among node2,node3 a primary node is elected.\nScreenshot 2020-05-23 at 05.28.121124×352 36.3 KBWhen node2 goes down, then node3 remains as secondary.\nScreenshot 2020-05-23 at 05.34.33900×334 32.8 KBis it possible to have the node3 as a primaryNode when node1,node2 is down? if it is not possible, then i would like to know the reason on how is it possible for a singleNodeReplica set to act as a primaryNode.",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "@Divine_Cutler when you need to have a majority of the nodes up for a replica set to have a primary. This means in a three node replica set that two members must be available.Check out the Consider Fault Tolerance section of the Replica Set Architectures document.Having a single node running from a three node replica set is not the same as running a single node replica set.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "@Doug_Duncan thank you. i understand this post. but here is my doubt.Let’s say that i have a singleNodeReplicaSet - it accepts both read/write. then i add 2 additionalnodes to this singleNodeReplicaSet which becomes a 3NodeReplicaSet.So, Once a singleNodeReplicaSet that runs on its own is converted to 3NodeReplicaSet, i could never scaledown the 3NodeReplicaSet back to a singleNodeReplicaSet and make this singleNodeReplicaSet to accept read/write requests?",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "@Divine_Cutler, you can force reconfigure your remaining member to remove the downed nodes, but that’s not recommended unless absolutely necessary.If you’re running a three node replica set and two nodes go down I would first look into why I have two nodes down at the same time. While bad things can, and do, happen, it’s rare in my experience for two of the three nodes in a replica set to go down at the same time. Your time is better spent trying to bring the downed nodes back online. If you can get just one of them up and running you have a majority again and one will be promoted to PRIMARY status.Forcing your three node replica set back down to a single node replica set is very dangerous, especially in a production environment as you’ve lost any sort of HA. I would strongly caution against doing that.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "@Doug_Duncan thank you.i have one more question and it troubles me a little bit.Please assume this scenario.i have a 3nodeReplicaSet (node1,node2,node3). There are no problems in the configuration, so everything is functioning as expected.Assume, there are huge number number of write/update/delete operations going on.Then, at one point, the oplog collection in primaryNode(node1) reached the memory limit and as it is a capped collection, the old data in oplog collection gets overridden and the data from oplog gets synced successfully to the secondaryNodes(node2,node3) without any problems.Suddenly Node3 goes down for some reason(may be the developer shut it down), but the application is functioning successfully as node1,node2 are running fine. But still there are huge number of write/update/delete ongoing.My understanding of data sync between nodes in replicaSet is, the secondaryNodes replicate the data from primaryNode by reading the oplog collection of the primaryNode.Coming back to our scenario,\nAssume, that the oplog got filled completely thrice counting from the moment node3 was down till the moment node3 was started again. Now, if the node3 was started again, then",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "@Doug_Duncan i got answer for my above question.even though i went throught that article previously, i forgot about it yesterday",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "Hi,\nWhat is this tool name ?",
"username": "Kadir_USTUN"
},
{
"code": "",
"text": "While a single node replica set might be fine for development purposes, it is strongly cautioned against putting that into production.Can you please expand on why this is the case when all 3 nodes would be running on a single VPS anyway? What is the failure case that a single node replica does not catch that a 3-node replica does catch - when the context is a single physical server that regularly does a mongodump?Thank you.",
"username": "q3DM17"
},
{
"code": "",
"text": "Can you please expand on why this is the case when all 3 nodes would be running on a single VPS anyway?Running all the nodes of a replica set on the same piece of hardware is really not the best because you have a single point of failure.An upgrade is one scenario that multi-node replica set, even on single hardware, is something that can be done without down time. With a single node replica set you cannot.",
"username": "steevej"
},
{
"code": "",
"text": "I’m curious if @Doug_Duncan concurs…\nThis doesn’t really answer the question; I realize that the hardware is a single point of failure running with and without replicas (if they’re all contained on the same hardware). The question is: how or why is a single node replica more dangerous than mongodb running without replicas?If we only want to scale vertically in production, then why isn’t a single node replica usable in production (considering the same careful backup process we use with our single node non-replica mongodb)? The point is to be able to use transactions.",
"username": "q3DM17"
},
{
"code": "",
"text": "The question is: how or why is a single node replica more dangerous than mongodb running without replicas?This is not the same question and it is not what was said. What was said was that running a single node replica set was risky.A single node replica set IS NOT MORE dangerous than an instance without replication. They represents the SAME HIGH risk of losing data. Replication is needed for both transactions and change streams.",
"username": "steevej"
},
{
"code": "",
"text": "Sorry for the lack of clarity on the question.If that is true, then why is it officially OK to run mongodb in production without replicas and running a single node replica in production is listed as only for development environments / testing (everywhere I’ve seen it talked about so far)? Surely, then, it is a reasonable workaround in production for the transaction support issue?\nThanks again for entertaining my inquieries.",
"username": "q3DM17"
},
{
"code": "",
"text": "why is it officially OK to run mongodb in production without replicas and running a single node replica in production is listed as only for development environments / testingI did not know it was Okay to run mongod in production without replication. What I am aware is that the recommendation is PSS.",
"username": "steevej"
}
] | How to run a single node replica set? | 2020-05-12T23:42:20.643Z | How to run a single node replica set? | 20,152 |
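For the development use case discussed in the thread above, here is a minimal sketch of initiating a single-node replica set in mongosh. It assumes mongod was started with --replSet rs0 on the default host and port; adjust the member host to match your setup.

```javascript
// Run once against the freshly started mongod.
rs.initiate({
  _id: "rs0",
  members: [{ _id: 0, host: "localhost:27017" }],
});

// The single member elects itself primary, which is enough for change
// streams and transactions in development.
rs.status().members.map((m) => m.stateStr); // e.g. [ "PRIMARY" ]
```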
[] | [
{
"code": "",
"text": "Step No. 1 (Download, Mongo DB Database Tools From Download MongoDB Command Line Database Tools | MongoDB) Select package - msi.Step No. 2 ( Install all the Executable stand for (.EXE Files) Setup)Step No. 3 ( Copy the files from (C:\\Program Files\\MongoDB\\Tools\\100\\bin) only in case if your method of installed (default setup ) Some users change the MongoDB installation files location somewhere else in the system.Step No. 4 ( Then Paste the copied files to (C:\\Program Files\\MongoDB\\Server\\5.0\\bin)\nbe sure ( the name of the files are existed named by (1. mongoimport.exe and other files are optional required to that Scenario where you need to export some files ( example. 2. **mongoexport.exe to that case.)Step No. 5 ( In addition if you are coping the data from Excel sheet or importing file from example.JSON then ( Copy the work file example.JSON or contents.csv files to the same path or in same location (C:\\Program Files\\MongoDB\\Server\\5.0\\bin) .Step No. 6 ( Open the command prompt (run as a administrator) type (the addressing line. Example ( cd C:\\Program Files\\MongoDB\\Server\\5.0\\bin** ) .CD Command used for ( to Change the Directory ) Means that find that file in the given location.Step No. 7 ( mongoimport persons.json -d contactData -c contacts --jsonArray ) In my case take as example (your may different).-d (used for creating Database) and\n-c (used for the creating the collection inside the database)\n–jsonArray (used to import JSON files correctly to the database.Note: Majority of the Errors are created by the spell mistake (example created database by the name of (flights) and your typing mistakes are (filghts) please make sure to correct it.\nDB.1875×122 20.7 KB\n",
"username": "Muhammad_Feroz_Khan"
},
{
"code": "",
"text": "here is the result i get after following the step carefully C:\\Program Files\\MongoDB\\Server\\6.0\\bin>mongoimport -db mongo-exercises -collection courses -file exercise-data.json --jsonArray\n‘mongoimport’ is not recognized as an internal or external command,\noperable program or batch file.",
"username": "Dennis_Dillion"
},
{
"code": "",
"text": "Have you installed mongodb tools?\nYou can check under bin dir whether mongoimport executable exists or not\nIf it is installed it could be path problem\nYou need to add mongotools/bin to your path",
"username": "Ramachandra_Tummala"
}
] | Import Data into MongoDB (Using CMD for Windows OS users On local Database | 2022-02-06T12:01:32.526Z | Import Data into MongoDB (Using CMD for Windows OS users On local Database | 5,336 |
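One additional detail worth noting alongside the PATH advice above: mongoimport expects double-dash long options, so (assuming the tools directory is on the PATH) the command from the earlier reply would normally be written as `mongoimport --db mongo-exercises --collection courses --file exercise-data.json --jsonArray`, or with the short `-d`/`-c` flags shown in the original steps.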
|
[
"cxx"
] | [
{
"code": "hint.hppclient_session.hppkey_context.hppbsoncxx::v_noabi::page_with_curl: \nhint.hppclient_session.hppkey_context.hppbsoncxx::v_noabi::document\n",
"text": "am developing software based on C++ Mongodb driver 3.7.1 version.I’m confronting a compiler error:bsoncxx::document is ambiguousin hint.hpp, client_session.hpp,‘change_stream.hpp’ and key_context.hpp etc.Why does this happen and how do I fix it?additional info: when I useThe compiler error disappears. But I know I should change the package code in C++ MongoDB Driver.I am developing software based on C++ Mongodb driver 3.7.1 version.I’m confronting a compiler error:bsoncxx::document is ambiguousin hint.hpp, client_session.hpp,‘change_stream.hpp’ and key_context.hpp etc.Why does this happen and how do I fix it?additional info: when I useThe compiler error disappears. But I know I should change the package code in C++ MongoDB Driver.",
"username": "Fan_Rex"
},
{
"code": "",
"text": "Hello @Fan_Rex,Welcome to the MongoDB Community.bsoncxx::document is ambiguous,\nWhy does this happen and how do I fix it?The error message “bsoncxx::document is ambiguous” typically occurs in the compiler when there is a naming conflict in the code. In this case, it appears that there may be a conflict between different definitions or declarations of the bsoncxx::document type.I recommend checking for namespace conflicts in your code.Here are a couple of links to similar discussions that might be helpful for you:Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | "bsoncxx::document is ambiguous." error while using C++ Mongodb driver 3.7.1 | 2023-07-11T00:58:28.916Z | “bsoncxx::document is ambiguous.” error while using C++ Mongodb driver 3.7.1 | 522 |
|
null | [
"java",
"python",
"spark-connector",
"scala"
] | [
{
"code": "Py4JJavaError: An error occurred while calling o49.showString.\n: java.lang.NoSuchMethodError: org.apache.spark.sql.types.StructType.toAttributes()Lscala/collection/immutable/Seq;\nat com.mongodb.spark.sql.connector.schema.InternalRowToRowFunction.<init>\n...\nfrom pyspark.sql import SparkSession\n\n# Jars to pass to spark configuration through \"spark.driver.extraClassPath\" property\n\njars = [\n\"mongo-spark-connector_2.13-10.1.1.jar\",\n\"mongodb-driver-sync-4.10.0.jar\",\n\"mongodb-driver-core-4.10.0.jar\",\n\"bson-4.10.0.jar\",\n]\njar_path = \"/Users/matt/Downloads\"\nmongo_jar = \"\"\nfor jar in jars:\nmongo_jar += jar_path + \"/\" + jar + \":\"\n\n# Create a spark session\nuri = \"mongodb+srv://<username>:<pwd>@<cluster_network>/<database>\"\ndatabase = \"maps\"\ncollection = \"users\"\nspark = SparkSession.builder \\\n.appName(\"MongoDB Spark Connector\") \\\n.config(\"spark.driver.extraClassPath\", mongo_jar) \\\n.getOrCreate()\n\n# Read data from MongoDB\ndf = spark.read.format(\"mongodb\") \\\n.option(\"connection.uri\", uri) \\\n.option(\"database\", database) \\\n.option(\"collection\", collection) \\\n.load()\n\n# Print schema\ndf.printSchema() #It correctly print schema\n\n# Show rows\ndf.show() # It throws the error above\n",
"text": "Hi everyone,I’m trying to launch a spark JOB locally that connects to my production Atlas cluster (M20). For testing purposes, I have opened the cluster to the whole network (0.0.0.0).It seems to connect correctly, in fact when I create the dataframe of a collection and use the “df.printSchema()” method, the collection schema is printed correctly on the screen.However if I run other commands, such as “df.show()” I get this error of a mongoDB library (the spark connector):I’m using:Spark version: 3.4.1\nScala version: 2.12Jars passed to spark configuration:jars = [\n“mongo-spark-connector_2.13-10.1.1.jar”,\n“mongodb-driver-sync-4.10.0.jar”,\n“mongodb-driver-core-4.10.0.jar”,\n“bson-4.10.0.jar”,\n]For extreme clarity and trasparency, this is the code:",
"username": "Matteo_Tarantino"
},
{
"code": "",
"text": "SOLVEDIt worked by downgrading the mongodb jar version of the spark connector from “10.1.1” to “10.0.2”.",
"username": "Matteo_Tarantino"
},
{
"code": "",
"text": "Hi @Matteo_Tarantino,I see the issue:Scala version: 2.12\n“mongo-spark-connector_2.13-10.1.1.jar”,Spark is compiled using either Spark 2.12 or Spark 2.13Here you have mixed the versions and its causing the error. Updating to use the spark 2.12 jar will fix it eg:“mongo-spark-connector_2.12-10.1.1.jar”,Hope that helps,Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to read data from MongoDB Atlas Cluster (M20) using the MongoDB Spark Connector (10.1.1) | 2023-07-12T12:55:19.969Z | Unable to read data from MongoDB Atlas Cluster (M20) using the MongoDB Spark Connector (10.1.1) | 743 |
[
"aggregation",
"queries",
"data-modeling"
] | [
{
"code": "{\n First 2 properties are the Course Properties\n \"_id\": \"647666fb1296a965aa631cfa\",\n \"name\": \"Front end fundamentals\",\n \"s1\": true,\n \"topics\": [\n {\n \"topicId\": \"647856b01296a965aa6321cd\",\n \"s2\": true,\n \"_id\": \"647856b01296a965aa6321cd\",\n \"name\": \"UX Design\",\n \"courseId\": \"647666fb1296a965aa631cfa\",\n \"userId\": \"64766648ef43feb4849b2cea\",\n \"channels\": [\n {\n \"topicId\": \"647856b01296a965aa6321cd\",\n \"channelId\": \"647856cd1296a965aa63221e\",\n \"s3\": true,\n \"_id\": \"647856cd1296a965aa63221e\",\n \"name\": \"Introduction\",\n \"description\": \"\",\n \"courseId\": \"647666fb1296a965aa631cfa\",\n \"userId\": \"64766648ef43feb4849b2cea\",\n },\n {\n \"topicId\": \"647856b01296a965aa6321cd\",\n \"channelId\": \"64785a2a1de1c55ba02df099\",\n \"s3\": true,\n \"_id\": \"64785a2a1de1c55ba02df099\",\n \"name\": \"Lecture 1\",\n \"description\": \"\",\n \"courseId\": \"647666fb1296a965aa631cfa\",\n \"userId\": \"64766648ef43feb4849b2cea\",\n },\n\n\n {\n \"topicId\": \"647856b01296a965aa6321cd\",\n \"channelId\": \"43534tg4566trretwert23\",\n \"s3\": true,\n \"_id\": \"64785a2a1de1c55ba02df099\",\n \"name\": \"Lecture 1\",\n \"description\": \"\",\n \"courseId\": \"647666fb1296a965aa631cfa\",\n \"userId\": \"64766648ef43feb4849b2cea\",\n },\n \n {\n \"topicId\": \"647856b01296a965aa6321cd\",\n \"channelId\": \"64785a2a1de1c5dasfrewtdfsg35\",\n \"s3\": true,\n \"_id\": \"64785a2a1de1c55ba02df099\",\n \"name\": \"Lecture 1\",\n \"description\": \"\",\n \"courseId\": \"647666fb1296a965aa631cfa\",\n \"userId\": \"64766648ef43feb4849b2cea\",\n },\n ],\n \"createdAt\": \"2023-06-01T08:28:32.154Z\",\n \"updatedAt\": \"2023-06-01T08:28:32.154Z\",\n \"__v\": 14\n },\n\n {\n \"topicId\": \"647ef264e2c03f930e3b3854\",\n \"s2\": true,\n \"_id\": \"647856b01296a965aa6321cd\",\n \"name\": \"UX Design\",\n \"courseId\": \"647666fb1296a965aa631cfa\",\n \"userId\": \"64766648ef43feb4849b2cea\",\n \"channels\": [\n {\n \"topicId\": \"647ef264e2c03f930e3b3854\",\n \"channelId\": \"7893214hjbf801346y\",\n \"s3\": true,\n \"_id\": \"dasdf78032343nkjhkjnkj234\",\n \"name\": \"test 2:35\",\n \"description\": \"Teaching about xAPI Notifications once again\",\n \"courseId\": \"647666fb1296a965aa631cfa\",\n \"userId\": \"64766648ef43feb4849b2cea\",\n },\n {\n \"topicId\": \"647ef264e2c03f930e3b3854\",\n \"channelId\": \"23574934057934057\",\n \"s3\": true,\n \"_id\": \"648705fdf5cf921815ae5960\",\n \"name\": \"test 2:35\",\n \"description\": \"Teaching about xAPI Notifications once again\",\n \"courseId\": \"647666fb1296a965aa631cfa\",\n \"userId\": \"64766648ef43feb4849b2cea\",\n }\n ],\n }\n\n}\nconst BlockingNotifications = new Schema({\n userId: {\n type: Schema.Types.ObjectId,\n ref: \"user\",\n required: true,\n index: true,\n },\n courseId: {\n type: Schema.Types.ObjectId,\n required: true,\n ref: \"course\",\n index: true,\n },\n s1: { type: Boolean },\n\n topics: [\n {\n topicId: {\n type: Schema.Types.ObjectId,\n ref: \"topic\",\n },\n s2 : { type: Boolean },\n },\n ],\n channels: [\n {\n topicId: {\n type: Schema.Types.ObjectId,\n ref: \"topic\",\n },\n channelId: {\n type: Schema.Types.ObjectId,\n ref: \"channel\",\n },\n s3 :{ type: Boolean },\n }\n ],\n\n})\nuserId$lookup$map",
"text": "\nimage1176×889 150 KB\nContext: I need some tips to model the parts shaded in blue in the entity diagram shown above.The Entities User, Channel, Course and Topic and their relations are already modelled in the database by collections of the same name.\nThe are modelled in the following simple manner:Question: How can I now model the new many to many has Notification Settings For Relationship betweenas shown in the diagram.Note that this relation also contains an attribute called s1, s2, s3 for simplicity’s sake.The reason for asking this question is, that I have a Query that a User does everytime when he/she uses the application.Everytime a Course is fetched, I want to fetch :AND I want to send the result in the following manner to the Front end:\nBasically, Channels that belong to particular topic are put in an array and sent with the respective topics object. Example below for illustrative purposePossible Solution\nThis is the solution that I have come up with and it is working but I am not sure if this is the best possible way to model it.I introduced a new Collection called Notification Settings in my database.It has the following shape:A particular document of the above collection, would for a particular Course and a User, contain all the Topics, and Channels of the Course along with their respective attributes (s1, s2, s3) .When I make the query for a particular course, I first look for the document of the above collection with the userId equal to the User making the query.and using a aggregation pipeline, I perform a $lookup on all the Topics and Channels.Then once again using aggregation I arrange the channels into the respective topic to which they belong. I do this using the $map Operator. To figure out which channel belongs in which topic. The result looks like the first code snippet.I would like to know if this is the best possible way to model this data and Query this data.If needed I can also post the aggregation pipeline if interested.",
"username": "Ebrahim_Karjatwala"
},
{
"code": "",
"text": "Hi @Ebrahim_Karjatwala and welcome to MongoDB community forums!!Firstly, thank you for sharing all the information in detail.\nIn order to make the most accurate recommendation for the data modelling of the shared ER diagram, it would be greatly beneficial if you could provide additional information regarding the following:The are modelled in the following simple manner:Instead of adding reference to other entity, MongoDB gives you the leverage of using Embedding in MongoDB which would definitely help in efficient query processing.\nReferencing is however, one another way to create the relationships but in order to build it between the collections, but you might end up creating expensive operations like $lookup which in turn might end up being a slow query.MongoDB’s flexible data modelling allows us to design the database schema to optimise query execution time and efficiency, based on the specific use case. By carefully considering the data model, we can structure your data for efficient querying and performance.\nTo ensure the best performance, you can leverage MongoDB’s data modelling capabilities and discuss your use case requirements. This way, we can design a schema that aligns with your application’s needs and maximises query execution efficiency.This would make the query you wish to have more easier to fetch the data from the collections.This is the solution that I have come up with and it is working but I am not sure if this is the best possible way to model it.This could be one solution but this way your design might completely follow a relational approach and you might miss on feature and flexibility that MongoDB provides.However, in saying so, the data modelling always depends on the way the query has to be executed in the application.This is the solution that I have come up with and it is working but I am not sure if this is the best possible way to model it.Suggesting the best possible solution would be difficult without knowing how you’re expecting the data to grow over time.\nAs I mentioned earlier, if the collection is expected to grow significantly, you might need to evaluate the performance impact of your current approach as $lookup may become expensive in such conditions.For more assistance, you can also reach out to the MongoDB consulting who can guide you with the appropriate designs and implementation methods which would be best suited according to your use case.I would also recommend you to follow our MongoDB Courses and Trainings | MongoDB University for further understanding and learnings.Do reach out to us if you have any further queries.Regards\nAasawari",
"username": "Aasawari"
}
] | Need help in modelling the following Entity Diagram in MongoDB | 2023-07-08T16:10:33.048Z | Need help in modelling the following Entity Diagram in MongoDB | 586 |
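To make the final step described above concrete, here is a minimal mongosh sketch of the "arrange channels under their topic" stage. The collection name (notificationsettings) and the $match values are assumptions taken from the example document in the post, so treat them as placeholders for the real schema.

```javascript
db.notificationsettings.aggregate([
  {
    $match: {
      userId: ObjectId("64766648ef43feb4849b2cea"),
      courseId: ObjectId("647666fb1296a965aa631cfa"),
    },
  },
  {
    // Nest each channel under the topic whose topicId it references.
    $addFields: {
      topics: {
        $map: {
          input: "$topics",
          as: "t",
          in: {
            $mergeObjects: [
              "$$t",
              {
                channels: {
                  $filter: {
                    input: "$channels",
                    as: "c",
                    cond: { $eq: ["$$c.topicId", "$$t.topicId"] },
                  },
                },
              },
            ],
          },
        },
      },
    },
  },
  // Drop the now-redundant flat channels array.
  { $project: { channels: 0 } },
]);
```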
|
null | [
"replication"
] | [
{
"code": "",
"text": "I currently using the Mongodb atlas there are three nodes in Mongodb one was primary other two are belong too secondary and Replication sets .I want to convert that has single node i dont want other Replication node is that possible how?",
"username": "Jegadesh_A"
},
{
"code": "",
"text": "Hi @Jegadesh_A,Welcome to the MongoDB community.I currently using the Mongodb atlas there are three nodes in Mongodb one was primary other two are belong too secondary and Replication sets .I want to convert that has single node i dont want other Replication node is that possible how?If I understand correctly, you want to convert your MongoDB Atlas from a replica set to a single node. However, it is not possible to configure this change because “MongoDB Atlas” is a cloud database service managed by MongoDB. By default, in the free shared tier it spins up a 3 replica set member (PSS) that provides redundancy and high availability.If you are looking to run a single-node replica, you can do so either locally or on-premises.May I ask why you want to convert it to a single node? Can you help me understand your use case?Looking forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Converting MongoDB Atlas from Replica Sets to a Single Node | 2023-07-06T14:40:22.243Z | Converting MongoDB Atlas from Replica Sets to a Single Node | 504 |
[
"java",
"atlas-cluster"
] | [
{
"code": "import com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.MongoException;\nimport com.mongodb.ServerApi;\nimport com.mongodb.ServerApiVersion;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\n\npublic class connection {\n public static void main(String[] args) {\n String connectionString = \"mongodb+srv://*****:******@productmaster.weamfdx.mongodb.net/?retryWrites=true&w=majority\";\n\n ServerApi serverApi = ServerApi.builder()\n .version(ServerApiVersion.V1)\n .build();\n\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(connectionString))\n .serverApi(serverApi)\n .build();\n\n // Create a new client and connect to the server\n try (MongoClient mongoClient = MongoClients.create(settings)) {\n try {\n // Send a ping to confirm a successful connection\n MongoDatabase database = mongoClient.getDatabase(\"admin\");\n database.runCommand(new Document(\"ping\", 1));\n System.out.println(\"Pinged your deployment. You successfully connected to MongoDB!\");\n } catch (MongoException e) {\n }\n }\n }\n}\n",
"text": "Hello,\nI was trying to connect MongoDB with Netbeans using Java and Maven, for doing so, I followed these steps:But when I pasted the code, I’m having errors in the import statement that states that the ServerAPI class and the ServerAPIVersion class are missing from the package.Can anyone help me with this?\nScreenshot 2023-07-10 at 5.21.03 PM2356×694 123 KB\n",
"username": "Sneha_Patel1"
},
{
"code": "",
"text": "Can you share details of the artifact name on Maven as well as the Java driver version?",
"username": "Ashni_Mehta"
},
{
"code": "",
"text": "Hello,\nThank you for replying.\nPlease refer to this screenshot.\n\nScreenshot 2023-07-10 at 8.41.19 PM3584×1268 461 KB\nThe artifact id is: mongodb-driver-sync\nand the driver version mentioned is: 4.10.1",
"username": "Sneha_Patel1"
},
{
"code": "",
"text": "I’m seeing two driver dependencies – one on mongodb-driver (which is no longer maintained or updated by MongoDB) and one on mongodb-driver-sync (which is maintained and updated). What happens if you remove the first dependency?",
"username": "Ashni_Mehta"
},
{
"code": "",
"text": "So when I remove the first one, It starts giving me errors like, “getCollection” is not found, etc.",
"username": "Sneha_Patel1"
}
] | ServerAPI missing from package com.mongodb | 2023-07-10T11:51:19.812Z | ServerAPI missing from package com.mongodb | 640 |
|
null | [
"queries"
] | [
{
"code": "",
"text": "Hi All,I have configured alert under project and i am using email to notify the alert when thresholds are breached. whenever the threshold is breached i am receiving the mail subjectline as Alert - ProjectName - datetime but with this subjectline i wanted to add metrics and few more metrics and remove the datetime.Can anyone please guide me how can i achieve the above one",
"username": "akhil_yl"
},
{
"code": "",
"text": "Hi @akhil_yl,i am receiving the mail subjectline as Alert - ProjectName - datetime but with this subjectline i wanted to add metrics and few more metrics and remove the datetime.Can anyone please guide me how can i achieve the above oneIt is currently not possible to configure the title / subject of the email alerts. You can raise this as a form of feedback by creating a post on our feedback engine which the product team monitor and others can vote for.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you for the reply.",
"username": "akhil_yl"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mail Subject to be Changed | 2023-07-08T04:34:57.343Z | Mail Subject to be Changed | 328 |
null | [
"aggregation",
"serverless"
] | [
{
"code": "db.users.aggregate([\n {\n $lookup: {\n from: 'organizations',\n localField: 'organizationId',\n foreignField: '_id',\n as: 'organization',\n },\n },\n {\n $unwind: {\n path: '$organization',\n preserveNullAndEmptyArrays: true,\n }\n }\n ]);\n",
"text": "How many RPUs does it take in MongoDB aggregation, if multiple documents in the collection loopup to a single document in another collection?I have a serverless instance in MongoDB Atlas.I have two collections: Organizations and Users.Every User is part of one Organization i.e. organizationId is stored in every user document.Let’s say right now I have only one document in the Organization collection. How many RPUs does it take for the below aggregate?The result of the above query is a list of 100 users.If each document scan is 1 RPU under 4kb, if there is only a single document in the organization collection and all 100 users are part of it.1.) If each user lookup is considered.total RPUs = 100 RPUs (100 user documents) + 100 RPUs(for each lookup in organization) = 200 RPUs2.) Since there is only one organization.total RPUs = 100 RPUs (100 user documents) + 1 RPUs = 101 RPUsWhich one is the correct value, is it 200 or 101 RPU?",
"username": "B_Sai_Uttej"
},
{
"code": "",
"text": "I don’t believe i’ll be able to advise how much RPU this scenario would generate but I would recommend reading over my response in the following post for more information that may help : Serverless RPU model - #3 by Jason_TranIn saying so, there’s also a feedback for a Serverless RPU estimator post which you can vote on which I believe would help with your use case noted here.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | How many RPUs does it take in MongoDB aggregate, if multiple documents in the collection loopup to a single document in another collection? | 2023-07-12T02:49:59.721Z | How many RPUs does it take in MongoDB aggregate, if multiple documents in the collection loopup to a single document in another collection? | 773 |
null | [] | [
{
"code": "",
"text": "I could not receive the paste or enter the verification code i need help plz",
"username": "Ebenezer_Fraol"
},
{
"code": "atlascli",
"text": "Hi @Ebenezer_Fraol,What is this in regards to? E.g. Are you trying to authenticate atlascli and you aren’t seeing the code you need to enter?We’ll need more information to understand the context of this post.Regards,\nJason",
"username": "Jason_Tran"
}
] | Verification code | 2023-07-07T21:34:52.663Z | Verification code | 357 |
null | [
"react-native",
"flexible-sync",
"react-js"
] | [
{
"code": "import React from 'react';\nimport {AppProvider, RealmProvider, UserProvider} from '@realm/react';\n\nimport LoginComponent from './App/Components/Views/Login';\nimport DemoComponent from './App/Components/Views/Demo';\nimport LoadingSpinner from './App/Shared/LoadingSpinner';\nimport {TemplateCategorySchema} from './App/RealmModels/schema';\nimport {TemplateCategory} from './App/RealmModels/realmClasses';\nimport {OpenRealmBehaviorType} from 'realm';\n\nfunction App(): JSX.Element {\n const realmAccessBehavior: Realm.OpenRealmBehaviorConfiguration = {\n type: OpenRealmBehaviorType.DownloadBeforeOpen,\n timeOutBehavior: 'openLocalRealm',\n timeOut: 1000,\n };\n\n return (\n <AppProvider id={'dynamicforms-qa-jqtpx'}>\n <UserProvider fallback={LoginComponent}>\n <RealmProvider\n schema={[TemplateCategorySchema]}\n sync={{\n newRealmFileBehavior: realmAccessBehavior,\n existingRealmFileBehavior: realmAccessBehavior,\n onError: console.error,\n flexible: true,\n }}\n fallback={() => <LoadingSpinner />}>\n <DemoComponent />\n </RealmProvider>\n </UserProvider>\n </AppProvider>\n );\n}\n\nexport default App;\n",
"text": "HiI’m using @realm/react with flexible sync. My problem is after I login the app is stuck on the spinner from the fallback prop of the RealmProvider. However if I close and open the app again I see the correct data from DemoComponent.Here is my code:I’ve tried using initialSubscription / adding subscriptions after, using/not using newRealmFileBehavior/existingRealmFileBehavior, using createRealmContext etc.The spinner stays for minutes but if I close/open the app my data shows right away.I’m not sure what I’m doing differently from https://www.mongodb.com/docs/realm/sdk/react-native/sync-data/configure-a-synced-realm/#add-a-query-to-the-list-of-subscriptions.Thanks,",
"username": "Tam_Nguyen1"
},
{
"code": "",
"text": "Well this was a pbcak. In my LoginComponent I was calling app.logIn() twice while testing stuff and forgot about it. Once I removed the extra line it was fine.",
"username": "Tam_Nguyen1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | RealmProvider stuck in fallback spinner | 2023-07-11T19:34:05.604Z | RealmProvider stuck in fallback spinner | 631 |
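For anyone hitting the same symptom, here is a minimal sketch of a fallback login component that calls app.logIn exactly once; anonymous credentials are used purely as an assumption, so swap in whatever auth provider the app actually uses.

```javascript
import React from 'react';
import {Button} from 'react-native';
import Realm from 'realm';
import {useApp} from '@realm/react';

export default function LoginComponent() {
  const app = useApp();

  const handleLogin = async () => {
    // A single logIn call; once a user exists, UserProvider renders its
    // children and RealmProvider starts opening the synced realm.
    await app.logIn(Realm.Credentials.anonymous());
  };

  return <Button title="Log in" onPress={handleLogin} />;
}
```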
null | [
"data-modeling",
"connecting",
"atlas-device-sync",
"atlas-cluster",
"charts"
] | [
{
"code": "",
"text": "Hello everyone,I was working with MongoDB atlas and everything was working well but once I clicked on the charts tab it returns a blank white screen. I tried to reload the page many times, used different browsers and networks, and tried to log out and in many times, but I’m still getting the same blank screen. Does anyone has faced this issue before and how it can be solved?",
"username": "Hutaf"
},
{
"code": "",
"text": "Hi @Hutaf and welcome in the MongoDB Community !I’m logged in and I have access to my Charts dashboards at the moment. Maybe it’s related to some cookies or cache? Seems impossible as you tried different browsers but… Maybe try to clear the cache / cookies ?\nAre your browsers up-to-date? I’m using Google Chrome Version 102.0.5005.115 (Official Build) (64-bit)Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "@Hutaf,Could you please share your Atlas Project ID (it’s in the URL) so the Atlas engineers can investigate?Thanks,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Atlas Project ID (it’s in the URL)I have the same issue. this is my Atlas Project ID: 634441bf0fb5fd7e2341fcac.",
"username": "ESRAA_ALZAHRANI"
},
{
"code": "",
"text": "Hi all,Sorry for the delay, I was on extended paternity leave.Is this problem still happening?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "It is for me, and it’s April 2023.",
"username": "Sebastian_James"
},
{
"code": "",
"text": "If anyone is having this issue, please send your Atlas project ID so we can investigate. I’m not aware of any widespread issues though.",
"username": "tomhollander"
},
{
"code": "",
"text": "i am also facing same issue …how did you resolve it/",
"username": "Vishal_Mishra3"
}
] | Charts and dashboard - Blank white screen | 2022-06-16T12:18:53.448Z | Charts and dashboard - Blank white screen | 4,006 |
null | [
"production",
"c-driver"
] | [
{
"code": "mongoc_cursor_new_from_command_reply_with_optsserverId",
"text": "Announcing 1.24.2 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.No changes since 1.24.1. Version incremented to match the libmongoc version.Fixes:Thanks to everyone who contributed to this release.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB C Driver 1.24.2 Released | 2023-07-12T14:18:28.537Z | MongoDB C Driver 1.24.2 Released | 659 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hi there!\nWe get Query Targeting alerts from our mongo Atlas cluster, and rightly so, as I could spot one of our query was lacking an index, causing the entire document set to be scanned.However, looking at the Query Targeting metric, I still see less-than-ideal ratios, and so I would like to identify what queries are problematic. I have enabled diagnostic on some of our dbs, but from other posts on this forum/the doc, what the diagnostic/performance advisor/profiler show will be only slow queries, which is not the same thing as poorly targeting queries.For instance, the query I got the alert for still executed in under 30ms, so it never appeared on the profiler.What tools are there to identify queries (or at least the collections) that have a poor targeting performance ?",
"username": "Remi_Sormain"
},
{
"code": "{$gte: [{$divide: [\\\"$docsExamined\\\", \\\"$nreturned\\\"]}, 10000]}",
"text": "Hi @Remi_Sormain,Thank you for your question! You bring up an excellent point. As you mentioned, relying solely on a slowms threshold does not holistically capture inefficient operations, especially queries with poor query targeting performance that may still be completing within the slowms threshold.We are actively working on a project this quarter that will update how we profiler operations for Atlas Query Profiler/Performance Advisor. In an upcoming Atlas release, we will be updating Query Profiler/Performance Advisor to also show operations where query targeting ratio exceeds 10k.In the meantime, however, you can manually set your profiling levels to filter on operations that exceed a query targeting value by adding something like this to your profiler settings. Note, however, that you will need to disable the managed slowms so that your filter is not overwritten. Also, please consider any rate limiting that could occur on Query Profiler due to too many logged operations if you set the query targeting filter too low. The Query Profiler currently has a limit of ingesting 86400 slow query log lines per 24 hours per host.{$gte: [{$divide: [\\\"$docsExamined\\\", \\\"$nreturned\\\"]}, 10000]}Thanks!\nFrank",
"username": "Frank_Sun"
},
{
"code": "db.setProfilingLevel( 1, { filter: { $expr: { $gte: [{ $divide: ['$docsExamined', '$nreturned'] }, 500] } } } )\ndb.setProfilingLevel(1, { docsExamined: { $gte: 500 } }); \n",
"text": "Thanks a lot @Frank_Sun for your quick reply!I’m keen to see this profiler update on Atlas. Thanks for the guidance on the profiler filter, this is a real gem ! I struggled a bit to configure the filter via mongosh, is this right?For now I went with the following to be sure I didn’t filter out poorly targeting requests, given that I know none of my queries should result in too many documents scanned:",
"username": "Remi_Sormain"
},
{
"code": "",
"text": "Hi @Remi_Sormain,Yes! Your setProfilingLevel command looks great. One thing to note, though, if you run that command it will overwrite the slowms filter so you will not see operations over the slowms threshold. You can also verify your filters are set correctly by running the “db.getProfilingStatus()” command.Thanks!\nFrank",
"username": "Frank_Sun"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to identify bad targeting queries (Scanned/Returned ratio) | 2023-07-10T14:59:35.363Z | How to identify bad targeting queries (Scanned/Returned ratio) | 316 |
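Putting the two pieces of advice from the thread together, here is a minimal mongosh sketch of setting a query-targeting filter and then verifying it. The 500 threshold is simply the value used in the thread, and note that this replaces the managed slowms filter.

```javascript
// Profile operations whose scanned/returned ratio is at least 500.
db.setProfilingLevel(1, {
  filter: {
    $expr: { $gte: [{ $divide: ["$docsExamined", "$nreturned"] }, 500] },
  },
});

// Confirm the level and filter actually took effect.
db.getProfilingStatus();
```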
null | [
"node-js",
"sharding"
] | [
{
"code": "",
"text": "I am using sharded collection to stream data but it keep giving me cursorTimeOut even after putting addCursorFlag(“noCursorTimeout”, true).",
"username": "Aman_Saxena"
},
{
"code": "",
"text": "How are you setting that flag? What does your code look like? Also what version of the driver are you using?",
"username": "John_Sewell"
},
{
"code": "",
"text": "let cursor=await db.collection(“business”).find({}).sort({ _id: 1 }).addCursorFlag(“noCursorTimeout”, true);\nwhile (await cursor.hasNext()) {\n//process here\n}driver version is 4.5",
"username": "Aman_Saxena"
},
{
"code": "db.collection(“business”).find({},{timeout: false}).sort({ _id: 1 })\n",
"text": "Looking on SO, I did see a similar issue where they resolved it by passing the option as an option within the find:I’ve had the cursor expire several times when running large scripts, typically when using one collection to feed the script data to perform other actions.\nIn those cases I typically use filter (not skip) and limits (with sorting!) to ensure that my cursor is short lived, just in case it times out or a fallover happens or something.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Oh Ok, make sense, I will filter the data up and process it in small batches.\nThanks",
"username": "Aman_Saxena"
}
] | Cursor timeout in find stream() even after addCursorFlag in sharded collection | 2023-07-12T11:30:57.988Z | Cursor timeout in find stream() even after addCursorFlag in sharded collection | 559 |
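A minimal sketch of the batched approach mentioned at the end of the thread, using the Node.js driver and the collection from the question; the batch size is an assumption, and the loop is meant to run inside an async function.

```javascript
const batchSize = 1000;
let lastId = null;

for (;;) {
  // Each iteration opens a short-lived cursor, so no batch can time out
  // while earlier documents are still being processed.
  const filter = lastId === null ? {} : { _id: { $gt: lastId } };
  const batch = await db
    .collection("business")
    .find(filter)
    .sort({ _id: 1 })
    .limit(batchSize)
    .toArray();

  if (batch.length === 0) break;

  for (const doc of batch) {
    // process doc here
  }

  lastId = batch[batch.length - 1]._id;
}
```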
null | [] | [
{
"code": "bool DBClass::GetADoc(bsoncxx::document::value& resultdoc)\n{\n bool bRtn = false;\n\n mongocxx::uri uri(\"mongodb://localhost:27017\");\n mongocxx::client client(uri);\n mongocxx::database database = client[\"test_database\"];\n mongocxx::collection collection = database[\"test_collection\"];\n\n if(collection)\n {\n auto tmpDoc = collection.find_one();\n if(tmpDoc)\n {\n resultdoc = tmpDoc->view();\n bRtn = true;\n }\n }\n\n return bRtn;\n}\n",
"text": "I’m sure there’s a way, but I can’t find the proper way to populate my document in a method so I can work with it in other methods. Here’s the non-working version of what I’m trying to do.",
"username": "Randy_Culler1"
},
{
"code": "#include <iostream>\n#include <cstdint>\n#include <bsoncxx/builder/stream/document.hpp>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/client.hpp>\n#include <mongocxx/instance.hpp>\nusing bsoncxx::builder::stream::close_array;\nusing bsoncxx::builder::stream::close_document;\nusing bsoncxx::builder::stream::document;\nusing bsoncxx::builder::stream::finalize;\nusing bsoncxx::builder::stream::open_array;\nusing bsoncxx::builder::stream::open_document;\n\nint main(int, char**) {\n mongocxx::instance inst{};\n mongocxx::uri uri(\"mongodb+srv://findThief:[email protected]/?retryWrites=true&w=majority\");\n std::cout << \"Connecting to MongoDB Atlas ...\\n\";\n mongocxx::client conn{uri};\n auto collection = conn[\"test\"][\"cxxexample\"];\n auto find_one_result = collection.find_one({});\n if(find_one_result) {\n std::cout << bsoncxx::to_json(*find_one_result) << \"\\n\";\n }\n std::cout << \"Completed\\n\";\n}\nConnecting to MongoDB Atlas ...\n{ \"_id\" : { \"$oid\" : \"64abbab0bd85fb38a501e2b4\" }, \"filed_value\" : \"ABC\" }\nCompleted\n",
"text": "Hi @Randy_Culler1 and welcome to MongoDB community forums!!With the latest MongoDB driver version as 3.7.0, I tried the below code to view the documents of the collection.and the output was successfully printed as:Could you please try the above code and let us know if the above works for you. If not, could you please share the driver version along with the error message that you see while executing the above code.\nFor more, you can refer to the documentation for Tutorial for Mongocxx.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "bool DBClass::GetADoc(bsoncxx::document::value& resultdoc)\n{\n bool bRtn = false;\n\n mongocxx::uri uri(\"mongodb://localhost:27017\");\n mongocxx::client client(uri);\n mongocxx::database database = client[\"test_database\"];\n mongocxx::collection collection = database[\"test_collection\"];\n \n if (collection)\n {\n auto tmpDoc = collection.find_one({});\n if (tmpDoc)\n {\n resultdoc = tmpDoc.get();\n bRtn = true;\n }\n }\n\n return bRtn;\n}\n",
"text": "Sorry, I should have been more clear. My question is how to return a document value in a parameter in C++.I found a solution that seems to work:",
"username": "Randy_Culler1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Passing documents as parameters | 2023-07-07T12:41:42.355Z | Passing documents as parameters | 501 |
[
"atlas-functions"
] | [
{
"code": "exports = async function() {\n const from = 'FROM_EMAIL'\n const to = 'TO_EMAIL'\n\n // get email HTML \n const emailHtml = await context.http.get({\n url: 'https://www.typographicposters.com/newsletters/the-world-in-you'\n })\n const body = emailHtml.body.text()\n \n // send email\n const { SESv2Client, SendEmailCommand } = require('@aws-sdk/client-sesv2')\n \n const client = new SESv2Client({\n region: 'us-east-1',\n credentials: {\n accessKeyId: context.values.get('AwsSesKey'),\n secretAccessKey: context.values.get('AwsSesSecret'),\n },\n })\n const command = new SendEmailCommand({\n FromEmailAddress: from,\n Destination: {\n ToAddresses: [to],\n },\n Content: {\n Simple: {\n Subject: {\n Data: 'Test from Realm',\n Charset: 'UTF-8',\n },\n Body: {\n Html: {\n Data: body,\n Charset: 'UTF-8', // tried with ISO-8859-1, the garbled characters just change\n },\n },\n },\n },\n })\n await client.send(command)\n\n\n // for AWS SDK v2 here is the code\n // const Ses = require(\"aws-sdk/clients/ses\");\n \n // const client = new Ses({\n // region: 'us-east-1',\n // credentials: {\n // accessKeyId: context.values.get('AwsSesKey'),\n // secretAccessKey: context.values.get('AwsSesSecret'),\n // },\n // })\n // const params = {\n // Source: from,\n // Destination: { ToAddresses: [to] },\n // Message: {\n // Body: {\n // Html: {\n // Charset: \"UTF-8\",\n // Data: body\n // }\n // },\n // Subject: {\n // Charset: 'UTF-8',\n // Data: 'Test from Realm with AWS SDK v2'\n // }\n // }\n // }\n // await client.sendEmail(params).promise(); \n \n};\n",
"text": "Hello, I’ve using Realm to send AWS SES emails for 3 years now, from everything from user auth to newsletters. Starting on April 13th 2023 (or a few days before) all emails have garbled characters appearing everywhere, many strange characters appearing instead of accented characters, but sometimes the garbled text appears on spaces and common characters too!I’ve spent quite a lot of time debugging and narrowed it down at “possibly” the context.http.get method changed somehow, and now has problems with utf8 charset.I’d like to reassure that the error appeared without any changes on the Realm dependencies and no changes at all on our server. The only coincidence is that you upgraded the Atlas Shared clusters to MongoDB 6.0.5.I will share some screenshots to illustrate the using, and also a working code below.Before:\nAfter:\nfrom this HTML email: The world in you - typo/graphic postersBefore (error on spaces):\nAfter (error before spaces):\nBefore: (error before spaces):\nAfter: (error before spaces):\nI’m using AWS SDK v3 as dependency, but tried with v2 just now and the error is the same.\nTried with @aws-sdk/client-sesv2 3.261.0, 3.267.0, 3.67.0, neither work.\nTried with aws-sdk 2.1360.0, too.**What could be the cause? Realm Functions downgraded somehow? **\nIs there a charset issue on context.http now?Thanks!Here the code:",
"username": "andrefelipe"
},
{
"code": " // get email HTML \n const emailHtml = await context.http.get({\n url: 'https://www.typographicposters.com/newsletters/the-world-in-you'\n })\n const body = emailHtml.body.text()\n",
"text": "Did you confirm the jumbled text is not generated directly from the HTML response before the email is sent?Is the email jumbled for everyone who receives it, or did you only test with one recipient?",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "Yes, I did. The source HTML is fine.In fact I noticed the error from one internal admin email which is sent regularly. Was always fine, and then suddenly all emails have garbled text.Yes, I did test with different recipients too.",
"username": "andrefelipe"
},
{
"code": "",
"text": "Tested with a completely different source, from another server:\nhttps://www.eartheclipsed.com/newsletters/new-episode-available-todayBefore:\nAfter:\nAnd another, completely different source too:\nhttps://www.sp-arte.com/newsletters/casa-sp-arte-2/Before:\n\nScreenshot 2023-04-20 at 08.18.40696×844 161 KB\nAfter:\n\nScreenshot 2023-04-20 at 08.18.09723×831 167 KB\n—I really drained by options, as it’s not the AWS SES, nor it’s not the HTML sources, the only think in the middle is the context.http.get and the App Services Function itself.So that’s the reason of my guess:\ndid App Services changed recently? Is it using a different charset?",
"username": "andrefelipe"
},
{
"code": "await context.http.getcontext.http.getconsole.log",
"text": "await context.http.getSo if the HTML looks fine immediately after context.http.get what makes you think it is the culprit?\nDid you actually do a console.log right after this statement?",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "const emailHtml = await context.http.get({\n url: 'https://www.typographicposters.com/newsletters/atlas-test'\n // created this new URL with only that thank you note\n})\nconst body = emailHtml.body.text()\n\nconst command = new SendEmailCommand({\n FromEmailAddress: from,\n Destination: { ToAddresses: [to] },\n Content: {\n Simple: {\n Subject: {\n Data: 'Test from Realm',\n Charset: 'UTF-8',\n },\n Body: {\n Html: {\n Data: body, // <---------------\n Charset: 'UTF-8',\n },\n },\n },\n },\n })\n await client.send(command)\n",
"text": "Yes, here are the results:The console.log is indeed fine when looking at the Realm UI:\n\nScreenshot 2023-04-21 at 08.12.091451×644 272 KB\nBut very curious is this test, sending that paragraph inline to SES:\n\nScreenshot 2023-04-21 at 08.16.281562×343 31.2 KB\nThe email gets received correctly!\n\nScreenshot 2023-04-21 at 08.16.07946×173 25.4 KB\nAnd again, getting going back to the case, if I get the HTML from here:The email gets received with all the garbled characters again:\n\nScreenshot 2023-04-21 at 08.22.26962×525 51.9 KB\nAnd yes, that log at the Realm UI looks fine:\n\nScreenshot 2023-04-21 at 08.30.291614×610 272 KB\nOK, now, what could be problem?The source HTML is fine, and as I said, have being sending these emails for years. Just got the errors suddenly since last week.(I will reference this source HTML from now on, it’s shorter: Atlas Test - typo/graphic posters)",
"username": "andrefelipe"
},
{
"code": "context.http.get",
"text": "To close another possibility, tried removing all CSS and HTML head.But the result is the same:\n\nScreenshot 2023-04-21 at 08.54.111587×583 211 KB\n—\nNow take a look on here, this is the email source which was sent inline, where the characters were correct:\nScreenshot 2023-04-21 at 08.16.281562×343 31.2 KB\n\nScreenshot 2023-04-21 at 08.53.311596×320 145 KB\nSo the question is, who is fiddling with the characters? AWS or Atlas Functions?My understanding it’s not AWS, because from the test above, sending the HTML inline, the email got received just fine.Would be the Atlas Functions dependencies? Something at the transport level?\nPlease understand that it’s my guess, but makes sense somehow.Here is the final test. Sending the exact same HTML of the thank you note, which was inlined, but now getting with the context.http.get\n\nScreenshot 2023-04-21 at 09.07.071899×501 130 KB\n",
"username": "andrefelipe"
},
{
"code": "context.http.getexports = async function() {\n const response = await context.http.get({ url: \"https://www.example.com/users\" })\n // The response body is a BSON.Binary object. Parse it and return.\n return EJSON.parse(response.body.text());\n};\n",
"text": "Is context.http.get supposed to be used as a general purpose HTTP client like you’re using it?\nBased on review of the docs, I’m not so sure. In all the examples, BSON.Binary is the expected response.Also, look here:body: The binary-encoded body of the HTTP response.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "context.http.get",
"text": "Can you try to use axios or fetch instead of context.http.get?",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "fetchconst axios = require('axios');\n\nfailed to execute source for 'node_modules/axios/index.js': FunctionError: failed to execute source for 'node_modules/axios/lib/axios.js': FunctionError: failed to execute source for 'node_modules/axios/lib/core/Axios.js': FunctionError: failed to execute source for 'node_modules/axios/lib/core/dispatchRequest.js': FunctionError: failed to execute source for 'node_modules/axios/lib/adapters/adapters.js': FunctionError: failed to execute source for 'node_modules/axios/lib/adapters/http.js': FunctionError: failed to execute source for 'node_modules/axios/lib/helpers/formDataToStream.js': TypeError: Value is not an object: undefined\n\tat node_modules/axios/lib/helpers/formDataToStream.js:41:19(118)\ncontext.http.getresponse.body.text()",
"text": "Atlas functions doesn’t have fetch.Have Axios 1.3.6 is installed, but just by requiring it error out with:Like I said, I have being using context.http.get to get texts just fine, for more than 3 years. response.body.text() gets the body string.",
"username": "andrefelipe"
},
{
"code": "",
"text": "The latest version of Axios doesn’t appear to work with Atlas functions, per the below post. Version 1.2.0 appears to work fine.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "const url = 'https://www.typographicposters.com/newsletters/atlas-test'\nconst axios = require('axios').default;\n\nconst response = await axios.get(url);\nreturn response.data;\nresponse.data.toString()responseType: 'text'",
"text": "OK, installed Axios 1.2.0, but the response is coming as binary.Of course I tried many different params and methods and researched on Github too, but we are losing the point here. I don’t see the reason to start testing Axios now. The issue is in MongoDB Atlas Functions and as I said have being working for 3+ years.Now we need the MongoDB team to jump in this issue.Here is Axios code:And the binary response, tried response.data.toString() also responseType: 'text':\n\nScreenshot 2023-04-24 at 09.30.411516×110 25.2 KB\n@Try_Catch_Do_Nothing if you find real solutions, tested inside the Atlas Functions, let me know.",
"username": "andrefelipe"
},
{
"code": "context.http.get",
"text": "Now we need the MongoDB team to jump in this issue.So submit a support request then.\nIf it is an issue that was introduced within Atlas Functions, then I highly doubt a fix will be made for your specific use case (which is why I suggested an alternative like axios).The bottomline is, it’s not working now, so something changed to break the existing functionality. Was it context.http.get? Was it the aws library? You need to test different scenarios to pinpoint where the issue occurs.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "Hello @andrefelipe,Thank you for raising the concerns and discussing this. I’m looking into this. Please allow us some time.We appreciate your patience and understanding.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Everything right up to the email render is correct - the Web page us using UTF-8 and returning the ™ symbol as three bytes (Unicode character inspector: ™) E2 84 A2In your final email you can see these three bytes as â ¢ - This is Those same three bytes being rendered using either ISO 8859-1 or ISO 8859-15 the single byte representation of non ASCII characters we used before Unicode was a common thing. E2 is â and A2 is ¢ - 84 is not defined in there. (ISO/IEC 8859-15 - Wikipedia)The issue is that the Email is not correctly defining its contents as utf8 and so the rendered is falling back tio 8 bitl. At least that’s how it looks - if you look at the source of the email (a) is it really these three bytes or has it done some off translation to three unicode characters and (b) what charset is actually defined in the raw email content.",
"username": "John_Page"
},
{
"code": "",
"text": "It’s possible the issue is that when calling the SES endpoint the Request is using ISO8859-1 rather then UTF 8, then the web service would think you were sending it three characters. That seems the most likely issue (we will see this is the internal email contents are quite different) .Wh you send your Explicit test - you arent sending al the HTML you are only sending part of it (the text part) so it’s possible that somwhere in all the rest of the HTML is a problem.",
"username": "John_Page"
},
{
"code": "\nexports = async function (emailAddress, Content, Subject) {\n\n\n const ses = context.services.get('email').ses();\n const emailHtml = await context.http.get({\n url: 'https://www.typographicposters.com/newsletters/atlas-test'\n // created this new URL with only that thank you note\n})\nconst body = emailHtml.body.text()\n\n const result = await ses.SendEmail({\n Source: \"[email protected]\",\n Destination: {ToAddresses: [\"[email protected]\"]},\n Message: {\n Body: {\n Html: {\n Charset: \"UTF-8\",\n Data: body\n }\n },\n Subject: {\n Charset: \"UTF-8\",\n Data: \"trst Email\"\n }\n }\n }).then(r => console.log(EJSON.stringify(r))).catch(e => console.log(e));\n\n};\n",
"text": "I just tested this using the deprecated built-in SES client and it worked correctly.",
"username": "John_Page"
},
{
"code": "@aws-sdk/client-sesv2",
"text": "Thank you very much @John_Page for narrowing the issue and sharing your knowledge.Yes, using the deprecated built-in SES client solved the issue and I already changed my App to use that for now.But it will be deprecated on August 1st. What’s the solution going forward? I understand when you said that when calling the SES endpoint the Request is using ISO8859-1. But how to overcome this?Note:\n— I’ve tried both AWS SDK JS v2 and v3, both have the same issue. Is this an App Service’s Dependency issue?\n— Sorry for saying again, but I used to send emails just fine using the solution I shared above (with AWS SDK v3 as dependency with @aws-sdk/client-sesv2 package) why suddenly it stoped working?Thank you very much again.",
"username": "andrefelipe"
},
{
"code": "",
"text": "I suspect this is an issue with a dependancy - Looks like in the config for the SDK you can set an optional _HttpHandler value which AWS describe as “Fetch in browser and Https in Nodejs.” - I’ve asked the app services development team to comment on why / if that might be passing the wrong content-type charset in the request.If you can see a way to explicity set it to @aws-sdk/node-http-handler | AWS SDK for JavaScript v3 it might help.",
"username": "John_Page"
},
{
"code": "@aws-sdk/node-http-handler",
"text": "I quickly browsed the @aws-sdk/node-http-handler but didn’t find a charset setting yet, will try to give attention later today.Again, thanks for your effective support.",
"username": "andrefelipe"
}
] | Garbled text from context.http.get as source of AWS SES emails | 2023-04-19T13:11:46.058Z | Garbled text from context.http.get as source of AWS SES emails | 2,273 |
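A small standalone Node.js illustration of the mojibake John describes in the thread above: the UTF-8 bytes for ™ (E2 84 A2) read back with a single-byte Latin-1 decoder produce exactly the kind of garbled pairs seen in the received emails. It only demonstrates the decoding mismatch, not where in the App Services/SES chain the wrong charset was applied:

```js
// UTF-8 encodes "™" as the three bytes 0xE2 0x84 0xA2
const bytes = Buffer.from("™", "utf8");
console.log(bytes); // <Buffer e2 84 a2>

// Decoded with the correct charset, the original character comes back:
console.log(bytes.toString("utf8")); // ™

// Decoded as Latin-1 (ISO 8859-1), each byte becomes its own character:
// 0xE2 -> â, 0xA2 -> ¢, and 0x84 is an unprintable control byte,
// which matches the "â ¢" garbling observed in the emails.
console.log(bytes.toString("latin1"));
```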
|
null | [
"node-js",
"react-native",
"typescript",
"one-to-one-relationship"
] | [
{
"code": "",
"text": "What is the typescript syntax for adding a user-defined object to a List field in realm for react-native? The list is in a parent object (i.e. there is a one-to-many relationship between the parent object and the list of child objects within the parent object.I have looked quite extensively through the documentation but this particular use-case doesn’t seem to be clearly described as far as I can find.Thanks in advance!",
"username": "Supi_Twist"
},
{
"code": "",
"text": "@Supi_Twist Our documentation on one-to-many relationships should be able to cover the use case you have described.",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to add elements to a List in a document for realm / react-native / typescript? | 2023-07-12T10:06:14.917Z | How to add elements to a List in a document for realm / react-native / typescript? | 621 |
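For anyone landing on this thread later, a minimal TypeScript sketch of the one-to-many pattern the linked docs describe (hypothetical Parent/Child schema names, written against a recent realm-js version; the same code works with the React Native SDK): create the child inside a write transaction and push it onto the parent's list property.

```typescript
import Realm from "realm";

class Child extends Realm.Object<Child> {
  name!: string;

  static schema: Realm.ObjectSchema = {
    name: "Child",
    properties: { name: "string" },
  };
}

class Parent extends Realm.Object<Parent> {
  name!: string;
  children!: Realm.List<Child>;

  static schema: Realm.ObjectSchema = {
    name: "Parent",
    properties: { name: "string", children: "Child[]" },
  };
}

async function run() {
  const realm = await Realm.open({ schema: [Parent, Child] });

  const parent = realm.write(() => realm.create(Parent, { name: "parent" }));

  // Adding a child to the parent's list must happen inside a write transaction.
  realm.write(() => {
    const child = realm.create(Child, { name: "first child" });
    parent.children.push(child);
  });

  realm.close();
}
```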
null | [] | [
{
"code": "InvalidSession Error - invalid session: error finding user for endpoint.\n",
"text": "Hi,\nI’m trying to post to a Mongo Atlas database using a short life JSON token. It’s been working fine for months for different users but this week (10/7/23) it’s failing - I’ve set up new projects, new devices, new account - same problem:-\nMongo is throwing anUnder my Data API - advanced settings - I’ve enabled User Settings - Create User Upon Authentication.\nUnder Authentication I’ve enabled Custom JWT Authentication - Provider enabled, RS256 and uploaded my public key. Saving changes. I’ve tried disabling and re-enabling “Create User Upon Authentication”\nI’ve verified the JWT with jwt.io and looks correct.\nUnder App Users → Users list is empty - as the error says.\nAnyone have any ideas why Mongo is not creating a new user as requested?\nThanks,\nKeith",
"username": "Keith_Bramley"
},
{
"code": "",
"text": "Turns out it was our library code. We had a max token lengh that as too short. No ones owned up to making a change our side so not sure if it was our code that grew a longer token or a Mongo change. Working now.",
"username": "Keith_Bramley"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Invalid session: error finding user for endpoint | 2023-07-11T11:36:55.779Z | Invalid session: error finding user for endpoint | 769 |
null | [
"node-js",
"atlas-cluster"
] | [
{
"code": "Error al conectar al cliente migueltejera_ecommerce Error: querySrv ECONNREFUSED _mongodb._tcp.cluster-mt.d1yad.mongodb.net\n at QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:254:17) {\n errno: undefined,\n code: 'ECONNREFUSED',\n syscall: 'querySrv',\n hostname: '_mongodb._tcp.cluster-mt.d1yad.mongodb.net'\n}\n0.0.0.0/0 (including my current IP address)",
"text": "Hello,I’m new to MongoDB Atlas and have been struggling with a persistent error for quite some time now. Despite my efforts to find a solution, I haven’t been successful, and it’s becoming increasingly frustrating. I would like to emphasize that when I connect to my home Wi-Fi, everything works fine without any issues. However, when I try to connect using my phone’s hotspot or any other Wi-Fi network, I encounter this error consistently.I have already configured my Network Access to 0.0.0.0/0 (including my current IP address) and have attempted various troubleshooting steps, but none of them have yielded positive results.If someone could kindly assist me with this problem, I would greatly appreciate it. I am using Node.js for my application.Thank you so much in advance for your help.",
"username": "Miguel_Tejera"
},
{
"code": "8.8.8.88.8.4.4",
"text": "Hey @Miguel_Tejera,Welcome to the MongoDB Community forums I would like to emphasize that when I connect to my home Wi-Fi, everything works fine without any issues. However, when I try to connect using my phone’s hotspot or any other Wi-Fi network, I encounter this error consistently.I recommend checking if there are any firewall or network restrictions on the Wi-Fi networks where the error occurs. However, it seems like the DNS issue, try using Google’s DNS 8.8.8.8 and 8.8.4.4. Please refer to the Public DNS for more details.Apart from this, please refer to this response and try using the connection string from the connection modal that includes all three hostnames instead of the SRV record.If it returns a different error, please share that error message here.In addition to the above, I would recommend also checking out the Atlas Troubleshoot Connection Issues documentation.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Dear Kushagra,I sincerely appreciate your prompt response. I wanted to inform you that I have managed to find a solution to the connection problems I was experiencing. Although I’m not entirely sure how it started working, I tried numerous troubleshooting steps and eventually achieved success. However, I am still encountering issues with my phone hotspot. It seems that there might be a connection between my phone’s firewall settings or dynamic IP configuration and this problem. Rest assured, I will continue my efforts to resolve this.Once again, I extend my heartfelt gratitude to you for your assistance. I will certainly update you if I am able to establish a connection using my phone.Wishing you a fantastic day ahead.Kindly,\nMiguel",
"username": "Miguel_Tejera"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | ECONNREFUSED node:internal/dns/promises:254:17 | 2023-07-08T14:18:54.382Z | ECONNREFUSED node:internal/dns/promises:254:17 | 660 |
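For reference, the "connection string that includes all three hostnames" mentioned above is the standard (non-SRV) format, which avoids the querySrv DNS lookup that was failing. A hedged sketch with the Node.js driver (the hostnames, replica set name and credentials below are placeholders; copy the exact string from the Atlas connection dialog):

```js
const { MongoClient } = require("mongodb");

// mongodb:// (not mongodb+srv://), listing every replica-set member explicitly
const uri =
  "mongodb://<user>:<password>@" +
  "cluster-mt-shard-00-00.d1yad.mongodb.net:27017," +
  "cluster-mt-shard-00-01.d1yad.mongodb.net:27017," +
  "cluster-mt-shard-00-02.d1yad.mongodb.net:27017" +
  "/?ssl=true&replicaSet=<replicaSetName>&authSource=admin&retryWrites=true&w=majority";

async function run() {
  const client = new MongoClient(uri);
  await client.connect();
  console.log("Connected without an SRV lookup");
  await client.close();
}

run().catch(console.error);
```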
null | [
"aggregation"
] | [
{
"code": "",
"text": "I was trying to build a small watcher on the mongodb to monitor the querries and performance. I went ahead with the currentOp aggregation pipeline and use that to get the status at regular cadence and analyse it based on querries running with correct index and time taken by querries as well to see potential scope to optimise them.\nI am checking planSummary field to check for index and i saw a few results with planSummary:null\nI understand what COLSCAN and IXSCAN mean but i am not able to understand what null is.\nFound this in documentation but its not much help:https://www.mongodb.com/docs/manual/reference/command/currentOp/#mongodb-data-currentOp.planSummary\nCan someone please help me understand what this means?",
"username": "Bhavesh_Navandar"
},
{
"code": "currentOpnull",
"text": "Any command that filters data should have an associated plan. Can you share an example of a currentOp that has a null plan?",
"username": "alexbevi"
},
{
"code": "",
"text": "Hey,\nJust took another look, those are insert commands. Forgot to filter those out. I think I am not handling the planSummary correctly.",
"username": "Bhavesh_Navandar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What does planSummary null mean? | 2023-07-11T06:03:52.548Z | What does planSummary null mean? | 349 |
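For anyone building a similar watcher, a small mongosh sketch of the kind of filter that skips those null entries: inserts and other commands that don't filter data carry no plan, so only keep operations where planSummary is present.

```js
// Run against the admin database; requires privileges to see other users' operations.
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: false } },
  // Keep only operations that actually report a query plan (COLLSCAN, IXSCAN, ...)
  { $match: { planSummary: { $exists: true, $ne: null } } },
  { $project: { op: 1, ns: 1, planSummary: 1, secs_running: 1, command: 1 } }
]);
```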
null | [] | [
{
"code": "",
"text": "I recently installed Debian 12 to ‘check it out’. One of the important tools for me is MongoDB Community Server. Unfortunately, there is no Debian 12 precompiled binary available (yet). I tried compiling from source but got an error about 15 minutes into the compilation process. Wondering whether anyone has had success building a binary for Debian 12 or whether one is already publicly available.Thanks.",
"username": "Hilkiah_Lavinier"
},
{
"code": "",
"text": "I see 7.0-enterprise listing Debian 12 in Supported Platforms so I’m guessing you’ll see it soon for community too.In the meantime you could run it in a containerised install on Debian 12",
"username": "chris"
},
{
"code": "",
"text": "Thanks Chris. I honestly didn’t think of doing that. Already have docker setup on the computer so that wouldn’t be a problem.Thanks again.",
"username": "Hilkiah_Lavinier"
},
{
"code": "",
"text": "You can install MongoDB from the “bullseye” (Debian 11) repository on Debian 12, but you’ll need to also add bullseye to your sources.list to get the libssl1.1 package on which it depends (Debian 12 provides libssl3 which is not ABI-compatible with libssl1.1). It’s a bit hackish but it does work well in my own tests (and is simpler than recompiling or going through a container).But I do hope we’ll get pre-compiled binaries for Debian 12 (“bookworm”) soon enough so we don’t have to do those kind of tricks.",
"username": "Gael_Le_Mignot"
},
{
"code": "# /etc/apt/sources.list.d/mongo-db-repo.list \n#deb https://repo.mongodb.org/apt/debian bullseye/mongodb-org/6.0 main\ndeb https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse \n\nroot@......:~# ldd /usr/bin/mongod\n\tlinux-vdso.so.1 (0x00007ffed03db000)\n\tlibcurl.so.4 => /lib/x86_64-linux-gnu/libcurl.so.4 (0x00007f30853d3000)\n\tliblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f30853a4000)\n\tlibresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f3085393000)\n\tlibcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007f307e400000)\n\tlibssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x00007f30852ea000)\n\tlibm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f307e921000)\n\tlibgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f30852c8000)\n\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f307e21f000)\n\t/lib64/ld-linux-x86-64.so.2 (0x00007f3085488000)\n\tlibnghttp2.so.14 => /lib/x86_64-linux-gnu/libnghttp2.so.14 (0x00007f3085299000)\n\tlibidn2.so.0 => /lib/x86_64-linux-gnu/libidn2.so.0 (0x00007f307e8f0000)\n\tlibrtmp.so.1 => /lib/x86_64-linux-gnu/librtmp.so.1 (0x00007f307e8d1000)\n\tlibssh2.so.1 => /lib/x86_64-linux-gnu/libssh2.so.1 (0x00007f307e88e000)\n\tlibpsl.so.5 => /lib/x86_64-linux-gnu/libpsl.so.5 (0x00007f307e20b000)\n\tlibgssapi_krb5.so.2 => /lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007f307e1b9000)\n\tlibldap-2.5.so.0 => /lib/x86_64-linux-gnu/libldap-2.5.so.0 (0x00007f307e15a000)\n\tliblber-2.5.so.0 => /lib/x86_64-linux-gnu/liblber-2.5.so.0 (0x00007f307e14a000)\n\tlibzstd.so.1 => /lib/x86_64-linux-gnu/libzstd.so.1 (0x00007f307e08e000)\n\tlibbrotlidec.so.1 => /lib/x86_64-linux-gnu/libbrotlidec.so.1 (0x00007f307e081000)\n\tlibz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f307e062000)\n\tlibunistring.so.2 => /lib/x86_64-linux-gnu/libunistring.so.2 (0x00007f307deac000)\n\tlibgnutls.so.30 => /lib/x86_64-linux-gnu/libgnutls.so.30 (0x00007f307dc00000)\n\tlibhogweed.so.6 => /lib/x86_64-linux-gnu/libhogweed.so.6 (0x00007f307de63000)\n\tlibnettle.so.8 => /lib/x86_64-linux-gnu/libnettle.so.8 (0x00007f307dbb2000)\n\tlibgmp.so.10 => /lib/x86_64-linux-gnu/libgmp.so.10 (0x00007f307db31000)\n\tlibkrb5.so.3 => /lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007f307da57000)\n\tlibk5crypto.so.3 => /lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007f307de36000)\n\tlibcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007f307e884000)\n\tlibkrb5support.so.0 => /lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007f307de28000)\n\tlibsasl2.so.2 => /lib/x86_64-linux-gnu/libsasl2.so.2 (0x00007f307da3a000)\n\tlibbrotlicommon.so.1 => /lib/x86_64-linux-gnu/libbrotlicommon.so.1 (0x00007f307da17000)\n\tlibp11-kit.so.0 => /lib/x86_64-linux-gnu/libp11-kit.so.0 (0x00007f307d8e3000)\n\tlibtasn1.so.6 => /lib/x86_64-linux-gnu/libtasn1.so.6 (0x00007f307d8ce000)\n\tlibkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007f307de21000)\n\tlibffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x00007f307d8c2000)\n",
"text": "Hello,\nYou can add MongoDB official Ubuntu 22.04LTS (jammy) repository for Mongodb-server 6.0.x (community edition). Ubuntu 22.04LTS and Debian 12 uses libssl3.x and Debian 12 meet dependencies for Ubuntu 22.04LTS mongodb-server packages.I upgraded my mongod-db VM from Debian11 → Debian12 and it works like a charm !Tomasz Jeliński\nSenior IT Specialist\nPalac Kultury Zaglebia",
"username": "Tomasz_Jelinski"
}
] | Mongo 6.x on Debian 12 | 2023-06-24T10:43:40.524Z | Mongo 6.x on Debian 12 | 8,242 |
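For completeness, the containerised route mentioned earlier in the thread is a couple of commands once Docker is installed on Debian 12 (the image tag and volume name below are just an example):

```sh
# Run MongoDB 6.0 Community in a container, persisting data in a named volume
docker run -d --name mongodb \
  -p 27017:27017 \
  -v mongodb-data:/data/db \
  mongo:6.0

# Connect from the host
mongosh "mongodb://localhost:27017"
```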
null | [] | [
{
"code": "",
"text": "Hi, Trying to find the list of courses which I have passed in MongoDB university, could not find it.\nPlease help",
"username": "farideh_gorji"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
},
{
"code": "",
"text": "Hey @farideh_gorji,You can go to the following link MongoDB University Dashboard to view all your completed courses and the Proof of completion.Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | List of My Courses passed In MongoDB University | 2023-05-11T14:11:22.869Z | List of My Courses passed In MongoDB University | 778 |
[
"migration"
] | [
{
"code": "",
"text": "\nimage1548×718 25.6 KB\nhelloAn error occurred during the migration and the work was stopped. The above error has occurred. Please tell me what to do.thank you",
"username": "Park_49739"
},
{
"code": "",
"text": "Hi @Park_49739 and welcome to MongoDB community forums!!The error message says that, the collection on which aggregation query is applied and the collection name mentioned for $merge is the same.You can try using a different collection name for $merge operation.\nHowever, starting from MongoDB version 4.4 you can use the same name for both collection. Hence, to use the same name, make sure, you are on version 4.4 and above.\nPlease refer to the documentation for more informations.Let us know if that works for you.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Relational Migrator has stopped | 2023-07-07T08:23:27.329Z | Relational Migrator has stopped | 609 |
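To make the suggestion above concrete, a minimal mongosh sketch of an aggregation that writes its output into a different collection with $merge (collection and field names are placeholders); on MongoDB 4.4+ the same pipeline may also target the collection it reads from:

```js
db.source_coll.aggregate([
  { $match: { status: "active" } },   // whatever transformation the migration applies
  {
    $merge: {
      into: "target_coll",            // a different collection avoids the pre-4.4 restriction
      on: "_id",
      whenMatched: "merge",
      whenNotMatched: "insert"
    }
  }
]);
```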
|
null | [
"queries",
"node-js"
] | [
{
"code": "",
"text": "I am trying to simulate a seat reservation flow by a student. When the student tries to reserve a seat, certain fields in two documents (student document and study space document, both from different collection) should be updated. I can connect to my database, and the study space document can get updated. However, when trying to update my student data document, I keep getting this errorMongoNetworkError: connection 6 to 34.231.146.63:27017 closedMy code was working yesterday so I’m not sure why it failed today…Please help…\nHere is my stack overflow question: node.js - MongoNetworkError: connection 5 to 34.231.146.63:27017 closed - Stack OverflowIt seems that I am unable to edit only, as I am still able to retrieve that student document.",
"username": "Hui_Ying_Khoo"
},
{
"code": "",
"text": "Hi @Hui_Ying_Khoo and welcome to MongoDB community forums!!Firstly, its always easier for us and other community users to help is all the details are posted on the same thread which avoid the toggle between two forums.\nHowever, to understand the concern in more details, it would be helpful if you could share the following information:My code was working yesterdayWas there any changes made to the code during this interval?\n3. As mentioned in the post, the code gets stuck, can you establish the connect to the database outside the application, using shell or Compass?\n4. Also, do you see any error messages during this course of retrying the write operation?\n5. The IP 34.231.146.63 is the MongoDB server IP but from the screenshots attached, it seems that you are trying to make connection with Atlas. Can you confirm if this is Atlas or a local database connection?\n6. Can you confirm if the update happens in the database but is not reflected in the code?\n7. Finally, can you share the mongoose code where you are trying to make the connection to the database?Regards\nAasawari",
"username": "Aasawari"
}
] | MongoNetworkError: connection 6 to 34.231.146.63:27017 closed | 2023-07-09T06:59:05.221Z | MongoNetworkError: connection 6 to 34.231.146.63:27017 closed | 419 |
null | [
"kafka-connector"
] | [
{
"code": "{\n \"name\": \"MONGO_SO\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"errors.log.include.messages\": \"true\",\n \"publish.full.document.only\": \"false\",\n \"tasks.max\": \"1\",\n \"change.stream.full.document\": \"updateLookup\",\n \"collection\": \"coupon\",\n \"key.converter.schemas.enable\": \"false\",\n \"topic.prefix\": \"\",\n \"database\": \"\",\n \"poll.await.time.ms\": \"5000\",\n \"connection.uri\": \"\",\n \"name\": \"MONGO_SOU\",\n \"value.converter.schemas.enable\": \"false\",\n \"copy.existing\": \"true\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"errors.log.enable\": \"true\",\n \"key.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"poll.max.batch.size\": \"1000\"\n }\n} \n",
"text": "So I am using Mongo kafka connector as a source and here is my configI want my multiple collections to migrate. According to documentation it looks like only one collection is supported.Any way to include more than one collection in the config? Any thing to change in config to achieve this ?",
"username": "R_C"
},
{
"code": "pipeline=[{\"$match\": {\"ns.coll\": {\"$regex\": \"/^(coll1|coll2)$/\"}}}]",
"text": "Hi @R_C ,See the documentation: multiple sources it has exactly what you are looking for:pipeline=[{\"$match\": {\"ns.coll\": {\"$regex\": \"/^(coll1|coll2)$/\"}}}]Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "what about multiple DBs and multple collections",
"username": "vasireddy_prasanth"
}
] | How to add more than one collection in Mongo source config? | 2022-11-15T10:54:18.329Z | How to add more than one collection in Mongo source config? | 2,068 |
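On the follow-up question about multiple databases and collections: the same pipeline idea extends to matching on both ns.db and ns.coll; leave the connector's database and collection settings empty so the change stream is opened against the whole deployment. A hedged sketch (database and collection names are placeholders):

```json
"pipeline": "[{\"$match\": {\"$or\": [ {\"ns.db\": \"db1\", \"ns.coll\": {\"$in\": [\"collA\", \"collB\"]}}, {\"ns.db\": \"db2\", \"ns.coll\": \"collC\"} ]}}]"
```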
[
"data-modeling",
"lebanon-mug"
] | [
{
"code": "",
"text": "\nNew MongoDB Features1920×1080 162 KB\nJoin us for an extraordinary dive into these cutting-edge features and redefine what’s possible with MongoDB. Don’t wait, secure your spot now! Date & Time: Wednesday 12 | July 2023 | 07:00 PM - 09:30 PM (CEST) Location: Online Via ZoomSeats are Limited !! Register Now and Secure your free spot. ",
"username": "eliehannouch"
},
{
"code": "",
"text": "Hello amazing people, Based on numerous community requests, we are pleased to announce that the event has been moved to Saturday, July 15th, from 7:00 to 9:30 pm CEST. This change provides an extended registration window, allowing more individuals to join and benefit from this valuable opportunity. Don’t miss out on the chance to participate!Register Now and Spread the Message !!\nhttps://bit.ly/mongodb-tech-marvels",
"username": "eliehannouch"
}
] | LEBANON MUG: A Dive into the Latest MongoDB Tech Marvels | 2023-06-30T19:09:16.866Z | LEBANON MUG: A Dive into the Latest MongoDB Tech Marvels | 1,258 |
|
null | [
"realm-web"
] | [
{
"code": "const email = \"[email protected]\";\nconst password = \"Pa55w0rd!\";\nawait app.emailPasswordAuth.registerUser({ email, password });\n",
"text": "I am working on a react application using realm Web SDK.I have a register page with the email and password input fields. When submitting the credentials, I am faced with ‘TypeError: Load failed’ when catching the error.To verify if my app is working, I have went to view my application app users, and no user entry is created. Have also done all checks such as making sure Email/Password authentication is enabled, and allowing automatic confirmation of users. My app has been set up properly, with my application ID.Have even went to the extend of using the documentation example to register a new account, but I am still faced with the same error message:Taken from: https://www.mongodb.com/docs/realm/web/manage-email-password-users/Can anyone please advise on this issue? Thanks so much in advance!",
"username": "hittt"
},
{
"code": "const onSubmit = async (e: any) => {\n try{\n e.preventDefault()\n const form = e.target;\n const formData = new FormData(form);\n const formJson = Object.fromEntries(formData.entries());\n const user = await data.emailPasswordSignup(formJson.email, formJson.password);\n if (user) {\n redirectNow();\n }\n }\n catch (error) {\n alert(error);\n }\n};\n",
"text": "For anyone who encounter the same problem as me:You need to include e.preventDefault(). By default, the browser will send the form data to the current URL and refresh the page. You can override that behaviour by calling e.preventDefault().My data wasn’t sent because this line wasn’t called. Subsequently, read the form data with new FormData(e.target).For example:Please read: <input> – ReactThanks for reading.",
"username": "hittt"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to register/login new user account via manage email/password authentication (TypeError: Load failed) | 2023-07-11T13:54:28.786Z | Unable to register/login new user account via manage email/password authentication (TypeError: Load failed) | 779 |
[
"aggregation",
"charts"
] | [
{
"code": "[\n {\n $lookup: {\n from: \"hot_store_configs\",\n let: {\n id: \"$_id\",\n createdAt: \"$hot_store_configs.createdAt\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\"$merchantId\", \"$$id\"],\n },\n },\n },\n {\n $sort: {\n createdAt: 1,\n },\n },\n ],\n as: \"storesLookup\",\n },\n },\n {\n $addFields: {\n \"stores.firstStoreCreatedAt\": {\n $first: \"$storesLookup.createdAt\",\n },\n },\n]\n",
"text": "I’m creating a report to track an onboarding process, where for each user we can see the date they created their first store/campaign etc.In the aggregation for this chart view, I’ve defined the firstStoreCreatedAt field by first using a lookup stage to join my stores collection, sorting by createdAt in the process, and then taking the first element.In the output documents of this chart view, if a user has never created a store, the firstStoreCreatedAt field doesn’t exist. In these cases, I want the “First Store At” field in my report to be blank, rather than showing “Invalid Date”.I feel as though maybe I’m missing a null check or something, any help would be much appreciated!",
"username": "Fred_Soper"
},
{
"code": "[\n {\n $addFields: {\n dateString: {\n $ifNull: [{ $dateToString: { format: \"%d/%m/%Y\", date: \"$time\" } }, \"\"]\n }\n }\n }\n]\n",
"text": "Hi Fred,Unfortunately currently there is no good way in app to customise the display of non-validate date for a date type field. We have a table chart enhancement on our product roadmap, so if there is any idea/feedback, please feel free to submit in our feedback engine hereHowever to work around it, there are two ways you can achieve this.\nOption 1: use the query bar in the chart builder which you can run a custom aggregation for this particular chart to convert the date to string and display empty string for null values:Option 2: Given that you are already using the Charts view, you can also add above pipeline segment into your view aggregation pipeline.When creating the chart, use the newFields created with string type to display in the table.The difference between above options is that option 1 will apply to a given chart and option 2 will apply to the entire view.The caveat of using the workaround is that you will lose the date type in which means if you want to use dashboard filter with one of those field, it will be presented with a string filter rather than a date filter. If the table is just for displaying all the record, above workaround should achieve what you need.Another way I can think of is to assign null value with an early date e.g. 01 January, 1970 00:00:00, and use conditional formatting to make date earlier than x to display with white text color (we don’t support transparent text color). The issue with this approach is that as you hover onto the row, the text will be visible and it also will show up in dark theme if you are embedding the chart/dashboard, though it does preserve the date type.",
"username": "James_Wang1"
}
] | How to show missing dates as blank rather than "Invalid date"? | 2023-07-11T10:41:49.362Z | How to show missing dates as blank rather than “Invalid date”? | 650 |
|
null | [
"kafka-connector"
] | [
{
"code": "{\n \"name\": \"<connector_name>\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"batch.size\": \"1000\",\n \"transforms\": \"dropPrefix\",\n \"database\": \"<db_name>\",\n \"collection\": \"\",\n \"copy.existing.pipeline\": \"[{\\\"$match\\\": {\\\"ns.coll\\\": {\\\"$regex\\\": /^(\\\"<collection_1>|<collection_2>|<collection_3>\\\")$/}}}]\",\n \"pipeline\": \"[{\\\"$match\\\": {\\\"ns.coll\\\": {\\\"$regex\\\": /^(\\\"<collection_1>|<collection_2>|<collection_3>\\\")$/}}}]\",\n \"key.converter.schemas.enable\": \"false\",\n \"output.json.formatter\": \"com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson\",\n \"connection.uri\": \"<connection_uri>\",\n \"name\": \"<connector_name>\",\n \"topic.creation.default.partitions\": \"3\",\n \"topic.creation.default.replication.factor\": \"3\",\n \"value.converter.schemas.enable\": \"false\",\n \"transforms.dropPrefix.type\": \"org.apache.kafka.connect.transforms.RegexRouter\",\n \"transforms.dropPrefix.replacement\": \"<topic_name>\",\n \"transforms.dropPrefix.regex\": \"(.*)<db_name>(.*)\",\n \"copy.existing\": \"true\",\n \"value.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"key.converter\": \"org.apache.kafka.connect.storage.StringConverter\"\n }\n}\n",
"text": "Hello! We’re trying to get messages from the three collections in one DB via one connector. Pipeline in our config is similar to a documentation:Connector starts successfully, but that’s all, no messages are coming to the topic. Can anyone tell us what exactly we are doing wrong, please?",
"username": "AGorshkov"
},
{
"code": "",
"text": "Is there anything in the kafka connect log?if you remove the pipeline, copy.existing.pipeline does it capture events?also try removing “collection”:\"\" since you only need to specify the database in your scenario.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "Thank You for response! Answering your questions:",
"username": "AGorshkov"
},
{
"code": "name”: “<connector_name>”,\n“config”: {\n“connector.class”: “com.mongodb.kafka.connect.MongoSourceConnector”,\n“database”: “<db_name>”,\n“connection.uri”: “<connection_uri>”,\n“name”: “<connector_name>”,\n“value.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“key.converter”: “org.apache.kafka.connect.storage.StringConverter”\n}\n",
"text": "let’s start with the minimum connector config and go from there,see if that generates events.",
"username": "Robert_Walters"
},
{
"code": " {\n \"name\" : \"<connector_name>\",\n \"config\" : {\n \"batch.size\" : \"1000\",\n \"connection.uri\" : \"<connection.uri>\",\n \"connector.class\" : \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"copy.existing\" : \"true\",\n \"database\" : \"<db_name>\",\n \"key.converter\" : \"org.apache.kafka.connect.storage.StringConverter\",\n \"key.converter.schemas.enable\" : \"false\",\n \"name\" : \"<connector_name>\",\n \"output.json.formatter\" : \"com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson\",\n \"pipeline\" : \"[ { $match: { \\\"ns.coll\\\": { \\\"$in\\\": [\\\"<collection_1>\\\", \\\"<collection_2>\\\", \\\"<collection_3>\\\" ] } } } ]\",\n \"transforms\" : \"dropPrefix\",\n \"transforms.dropPrefix.regex\" : \"(.*)<db_name>(.*)\",\n \"transforms.dropPrefix.replacement\" : \"<topic_name>\",\n \"transforms.dropPrefix.type\" : \"org.apache.kafka.connect.transforms.RegexRouter\",\n \"value.converter\" : \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter.schemas.enable\" : \"false\"\n }\n }\n",
"text": "After lot of tests the connector with following configuration is working:The only major difference is a different pipeline format. So, there’s another question - what can be wrong with the pipeline version from the documentation or there’s another root cause of this issue?",
"username": "AGorshkov"
},
{
"code": "",
"text": "Small update. With the configuration from above after restarting Kafka Connect node because of Out Of Memory issue some kind of topic re-init happens, all historical messages has been re-uploaded to the topic. What could have caused this?\nThank you.",
"username": "AGorshkov"
},
{
"code": "",
"text": "you have copy.existing set to true so that will copy all the existing data in the collection before opening the change stream and processing the current events.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "Is there any solution to bypass messages duplication and avoid messages lose, except of using 2 connectors (with copy.existing:true and without)? We need all existing data, but don’t need to duplicate it, because there’re lot of such data and reuploading causes issues.",
"username": "AGorshkov"
},
{
"code": "java.lang.StringJsonToken.START_OBJECT",
"text": "HI\ni am facing similar thing on source connector config . my config is to collect data from multiple databases and collections from same mongodb host and publish to same topic .below is config i am using , but getting error< {\n“name” : “mongo-source”,\n“config” : {\n“batch.size” : “1000”,\n“connection.uri” : “mongodb://:@*********************:1025/?ssl”,\n“connector.class” : “com.mongodb.kafka.connect.MongoSourceConnector”,\n“key.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“pipeline”: “[ {\"$match\": {$or: [ {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"AccessToken\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"Account\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"Achievement\"}, {\"ns.db\": \"uat_move5health\", \"ns.coll\":\"AppleRing\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"Application\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"AuditLog\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"Badge\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"Challenge\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"Code\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"Country\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"Goal\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"GoalReward\"}, {\"ns.db\": \"uat_move5tracker\", \"ns.coll\": \"HealthNotification\"}, {\"ns.db\": \"uat_move5health\", \"ns.coll\": \"HealthSummary\"}, {\"ns.db\": \"uat_move5tracker\", \"ns.coll\": \"HealthTracker\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"Installation\"}, {\"ns.db\": \"uat_move5cas\", \"ns.coll\": \"HPMember\"}, {\"ns.db\": \"uat_move5cas\", \"ns.coll\": \"MoveKey\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"Muser\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"Participation\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"Program\"}, {\"ns.db\": \"uat_move5notification\", \"ns.coll\": \"PushNotification\"}, {\"ns.db\": \"uat_move5notification\", \"ns.coll\": \"PushResponse\"}, {\"ns.db\": \"uat_move5notification\", \"ns.coll\": \"PushSubscription\"}, {\"ns.db\": \"uat_move5queue\", \"ns.coll\": \"QueueError\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"Reward\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"RoleMapping\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"Role\"}, {\"ns.db\": \"uat_move5queue\",\"ns.coll\": \"Task\"}, {\"ns.db\": \"uat_move5queue\", \"ns.coll\": \"TaskConfig\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"UserBadge\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"UserCode\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\":\"UserGoal\"}, {\"ns.db\": \"uat_move5challenge\", \"ns.coll\": \"UserReward\"}, {\"ns.db\": \"uat_move5app\", \"ns.coll\": \"UserState\"}, {\"ns.db\": \"uat_move5message\", \"ns.coll\": \"DestinationMapping\"}, {\"ns.db\": \"uat_move5message\", \"ns.coll\": \"FollowUpMapping\"}, {\"ns.db\": \"uat_move5health-score\", \"ns.coll\": \"HealthProfile\"}, {\"ns.db\": \"uat_move5health-score\", \"ns.coll\": \"HealthScore\"}, {\"ns.db\": \"uat_move5health-score\", \"ns.coll\": \"HealthScoreDelta\"}, {\"ns.db\": \"uat_move5health-score\", \"ns.coll\": \"ProviderAccount\"}, {\"ns.db\": \"uat_move5health-score\", \"ns.coll\": \"SurveyQuestion\"}, {\"ns.db\": \"uat_move5message\", \"ns.coll\": \"SystemMessage\"}, {\"ns.db\": \"uat_move5message\", \"ns.coll\": \"UserMessage\"},{\"ns.db\": \"uat_move5health-score\", \"ns.coll\": \"UserSurvey\"}, {\"ns.db\": \"perf_move5edl\", 
\"ns.coll\": \"HealthSummary\"}, {\"ns.db\": \"perf_move5edl\", \"ns.coll\": \"AppleRing\"}, {\"ns.db\": \"perf_move5edl\", \"ns.coll\": \"UserReward\"}, {\"ns.db\": \"perf_move5edl\", \"ns.coll\": \"UserGoal\"}, {\"ns.db\": \"perf_move5edl\", \"ns.coll\": \"UserState\"}, {\"ns.db\": \"perf_move5edl\", \"ns.coll\": \"Participation\"}, {\"ns.db\": \"perf_move5edl\", \"ns.coll\": \"HealthScore\"}, {\"ns.db\": \"perf_move5edl\", \"ns.coll\": \"Muser\"}, {\"ns.db\": \"perf_move5edl\", \"ns.coll\": \"Account\"} ] } } ]”,\n“topic.prefix”: “SG_uat_move5app.Installation”\n}\n}Error:\ncurl -X PUT -H “Content-Type: application/json” --data @./test.json http://localhost:8083/connectors/MongoSourceConnectorV1/config\n{“error_code”:500,“message”:“Cannot deserialize value of type java.lang.String from Object value (token JsonToken.START_OBJECT)\\n at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 1, column: 53] (through reference chain: java.util.LinkedHashMap[\"config\"])”}",
"username": "vasireddy_prasanth"
}
] | MongoDB Kafka source connector pipeline for multiple collections isn't working | 2021-08-25T15:02:25.882Z | MongoDB Kafka source connector pipeline for multiple collections isn’t working | 6,289 |
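On the last error in the thread above: the deserialization failure ("Cannot deserialize value of type java.lang.String from Object value … LinkedHashMap[\"config\"]") may simply be about how the config is submitted rather than the pipeline itself. PUT /connectors/<name>/config expects only the flat key/value config map, while the {"name": …, "config": {…}} wrapper is what POST /connectors takes; the curly quotes in the pasted JSON would also need to be plain straight quotes. A hedged sketch of the two request shapes (connector name, URI and pipeline are placeholders):

```sh
# POST /connectors takes the wrapper object with "name" and "config"
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
  -d '{"name": "mongo-source", "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "<uri>",
        "pipeline": "[{\"$match\": {\"ns.db\": {\"$in\": [\"db1\", \"db2\"]}}}]"
      }}'

# PUT /connectors/<name>/config takes only the flat config map
curl -X PUT -H "Content-Type: application/json" http://localhost:8083/connectors/mongo-source/config \
  -d '{
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "<uri>",
        "pipeline": "[{\"$match\": {\"ns.db\": {\"$in\": [\"db1\", \"db2\"]}}}]"
      }'
```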
null | [
"atlas-device-sync",
"realm-web"
] | [
{
"code": "",
"text": "We are looking into integrating realm sync into our existing web app, along with our native phone apps. Our use case is that users can log into our website and edit realtime data collaboratively with other users, along with logging into the phone version and doing the same with offline support.My question is if the Realm Web SDK supports this type of use case on the web end. There doesn’t seem to be much information on if implementing realm sync into a website is possible, although I did find the collection.watch() method to listen for changes. Is this the right way to go about this?",
"username": "Jordan_Ellis"
},
{
"code": "",
"text": "Hi Jordan,The Web SDK has just introduced capability where it supports Device Sync as a preview feature!Regards\nManny",
"username": "Mansoor_Omar"
}
] | Does Realm Sync support creating realtime websites? | 2022-10-09T17:07:35.894Z | Does Realm Sync support creating realtime websites? | 2,028 |
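A minimal sketch of the collection.watch() approach mentioned in the question above (app ID, database and collection names are placeholders). This gives the browser live change events from Atlas, which covers the collaborative-editing part of the use case, though it is not the same as full offline-first sync:

```js
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-app-id>" });

async function watchSeats() {
  const user = await app.logIn(Realm.Credentials.anonymous());

  const seats = user
    .mongoClient("mongodb-atlas") // name of the linked data source
    .db("booking")
    .collection("seats");

  // watch() returns an async generator of change events
  for await (const change of seats.watch()) {
    console.log(change.operationType, change.fullDocument);
  }
}

watchSeats().catch(console.error);
```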
null | [
"dot-net"
] | [
{
"code": "",
"text": "I want to create a webapp using the same data that’s being used by my apps (Realm SDK, .NET), what’s the best way to go? Just upload the .realm file to the web hosting or should I be considering direct connection to Atlas, if so, what’s the best practice?",
"username": "Movsar_Bekaev"
},
{
"code": "",
"text": "It depends on whether you want server side rendering or not. If you do, then you can sync the Realm file on the server and use it as a sort of a local cache to return data from. Alternatively, you can also use the GraphQL API to fetch data directly from the client.",
"username": "nirinchev"
},
{
"code": "",
"text": "Yes but how each scenarios will affect performance, also, if I decide to encrypt the realm database, I think there is a limitation on concurrent opening of encrypted file, which potentially can result in an error, so I’m looking for the best practice in such situation.And yes, the rendering is going to be on server.",
"username": "Movsar_Bekaev"
},
{
"code": "usingNito.AsyncEx",
"text": "The limitation on concurrent access of encrypted/synchronized Realms concerns accessing them from different processes. If your webserver runs a single process, which is what you typically do with .NET, you should be fine.Adding encryption to the database is likely to impact performance by 5-10%.One thing to keep in mind when it comes to ASP.NET Core is that you don’t get a synchronization context installed on the thread handling the request pipeline, so you should always wrap accessing the Realm instance/objects in a using block and not use async code within that block. If you do need to run some async code (e.g. to wait for notifications or synchronization), you will need to install a context - for example, by using Nito.AsyncEx.In terms of using Realm for server-side rendering, it has both benefits and drawbacks. One benefit is that it optimizes transfer of data from Atlas and caches that data locally. This means if you get a lot of client requests for the same objects, you’ll be able to respond much faster than if you had to query MongoDB every time. There are two downsides to consider though:",
"username": "nirinchev"
},
{
"code": "",
"text": "I see, then what about user management? I want to allow user registration / log in, but as I understand this info is stored in mongodb-realm folder which is one folder near my website files, can it manage different users from different devices? If so, how?",
"username": "Movsar_Bekaev"
},
{
"code": "",
"text": "@nirinchev what’s the point of having multiple users if it’s a website with normal Realm? It will always use the last logged in user’s credentials, right? Or should I check in AllUsers and manage access through custom data etc? Because now when I log in, the website shows my credentials on any other device whether I logged in there or not",
"username": "Movsar_Bekaev"
},
{
"code": "app.AllUsersFlexibleSyncConfigurationapp.AllUsersapp.AllUsers",
"text": "Hm… user management is going to be somewhat tricky, particularly with load balanced sites. My guess is that you still want to roll out your own authentication mechanism - e.g. when a client logs in, you relay their credentials to Atlas App Services and obtain a user. Then, you issue an access and refresh tokens signed with your own keys and send those to the client device. When a client device makes a request, you find the user in app.AllUsers and then pass that user to the FlexibleSyncConfiguration.Now the challenge comes when we introduce load balancing. Then a client would not be guaranteed to reach the same web server on every request, meaning a user that logged in on server A would not be available in app.AllUsers on server B. There are two approaches for this:Note that the current SDK API are not a great fit for this use case, but we’d be happy to make small adjustments to support it. Notably, right now, app.AllUsers returns an array, which would be expensive if you have thousands of users. Similarly, we don’t have an API to create a user from a refresh token, but that can easily be added.",
"username": "nirinchev"
},
{
"code": "",
"text": "Yes, that’s what I thought too, it would be really great if you do some APIs to make these things easier, like switching to another user without getting the whole list for instance as you mentioned. But thank you, this is valuable information.",
"username": "Movsar_Bekaev"
},
{
"code": "",
"text": "Hi Movsar,The WEB SDK now supports Device Sync as a preview feature!Regards\nManny",
"username": "Mansoor_Omar"
}
] | What's the best way to use Realm Sync data on a website? | 2023-01-19T15:01:12.685Z | What’s the best way to use Realm Sync data on a website? | 1,412 |
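To make the ASP.NET Core advice in the thread above concrete, a hedged C# sketch (the Item class, subscription and user are placeholders; it assumes an already-authenticated Realms.Sync.User): keep Realm access inside a using scope, and install a synchronization context with Nito.AsyncEx when async Realm calls are needed on a request thread.

```csharp
using System.Collections.Generic;
using System.Linq;
using MongoDB.Bson;
using Nito.AsyncEx;
using Realms;
using Realms.Sync;

public class Item : RealmObject
{
    [PrimaryKey, MapTo("_id")]
    public ObjectId Id { get; set; } = ObjectId.GenerateNewId();

    public string Owner { get; set; }

    public string Name { get; set; }
}

public static class RealmRequestHelper
{
    // Called from a request handler; "user" is an already-logged-in App Services user.
    public static IReadOnlyList<string> GetItemNames(User user, string owner)
    {
        var config = new FlexibleSyncConfiguration(user)
        {
            // Subscribe to whatever data this server actually needs; adjust the query.
            PopulateInitialSubscriptions = (realm) =>
            {
                realm.Subscriptions.Add(realm.All<Item>());
            }
        };

        // ASP.NET Core request threads have no SynchronizationContext; AsyncContext
        // installs one so async Realm APIs and notifications behave as expected.
        return AsyncContext.Run(async () =>
        {
            using var realm = await Realm.GetInstanceAsync(config);

            return realm.All<Item>()
                        .Where(i => i.Owner == owner)
                        .ToList()
                        .Select(i => i.Name)
                        .ToList();
        });
    }
}
```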
null | [
"queries",
"node-js",
"mongoose-odm",
"graphql",
"realm-web"
] | [
{
"code": "",
"text": "What is the best way to define a use case of using the Realm WebSDK vs Atlas using Mongoose etc. For a web application.This was asked in another area, and it actually has me stumped with the versatility Atlas itself has just existing with the Node.JS Driver, Mongoose, Functions, GraphQL, etc.Where does the WebSDK fit where you’d use it over the other typical methods of using MongoDB/Atlas in say MERN or MEAN etc.? I suppose it can share a place in the MEAN or MERN stacks, but in what situations would the WebSDK excel over just typical Atlas work?I want to be clear, this is not a “Use Realm as a Piñata” question, I’m actually being sincere and literal. As I’ve used the WebSDK in a 3D Website to count bubbles being popped by a user, etc. But it doesn’t have the ability to sync so it has to send the data to Atlas in batches, but also that’s how Mongoose does things too. Or Atlas functions unless you build a listener function and use triggers.I’m legitimately trying to find use cases where the WebSDK goes above and beyond, or really stands out from other traditional web development services that everyone else learns to do.As this question again was posed by someone else, and me and oa lot of others are stumped on how to answer this.",
"username": "Brock"
},
{
"code": "",
"text": "I would like to know this too because we use MERN right now where we work and I would like to know what advantage if any this Web SDK offers over Mongoose.",
"username": "UrDataGirl"
},
{
"code": "",
"text": "There are a few things the Realm Web SDK can do that other tools cannot: for example, calling a function or using the GraphQL API.If you’re just looking for data access, the Realm SDK and a MongoDB driver serve similar purposes, but depending on your app needs (or what other apps you’re building) you might find it easier to think of your data model in terms of Realm Objects instead of MongoDB documents.More information on the Realm Web SDK here: https://www.mongodb.com/docs/realm/web/quickstart/And if you’re not familiar, you may be interested in checking out the MongoDB node driver (you might find it more usable than Mongoose): https://www.mongodb.com/docs/drivers/node/current/",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "Hi All,Device Sync for the realm-web SDK is now available as a preview feature!Regards\nManny",
"username": "Mansoor_Omar"
}
] | Realm WebSDK vs Atlas - Definitive Use Cases - Looking for advice | 2023-04-15T16:50:27.610Z | Realm WebSDK vs Atlas - Definitive Use Cases - Looking for advice | 1,034 |
null | [
"realm-web"
] | [
{
"code": "",
"text": "Hi,I’m evaluating offline-first options and MongoDB Realm and AWS Amplify DataStore are the 2 more interesting solutions for my case.\nSadly while DataStore supports local databases for Web and other platforms, such as RN, Android, etc, MongoDB Realm doesn’t, according to this statement on the Web SDK page:“The Web SDK does not support creating a local database or using sync. Instead, web apps built with Realm use GraphQL or the Query API to query data stored in Atlas.”Is there a near future plan for this to be added to the Realm features?Thank you in advance for the attention.",
"username": "Ricardo_Montoya"
},
{
"code": "",
"text": "Hi @Ricardo_Montoya,Realm has offline-first SDKs including React Native, Android, Swift, .NET, Node.js, and alpha SDKs for Kotlin Multiplatform and Flutter. These SDKs can be used offline-only (no sync) or offline-first with Realm Sync providing bidirectional sync between local device storage and a MongoDB Atlas cluster.The Realm Web SDK focuses on online browser-based use cases:The MongoDB Realm Web SDK enables browser-based applications to access data stored in MongoDB Atlas and interact with MongoDB Realm services like Functions and authentication. The Web SDK supports both JavaScript and TypeScript applications.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank. you for the answer @Stennie,So you mean off-line first for web is not considered a use case for Real Sync?In Amplify DataStore I can save, query, delete and filter before sync, and the code can run across iOS, Android and Web with React Native using Expo. The Web version deployed uses IndexedDB for the DataStore and for Native it uses sqlite AFAIK. It could also be possible to create a storage engine adapter for other DBs.Could you confirm please that in the case of Realm there are no plans to offer similar functionality to work across platforms? Thank you in advance.PD: Here is the link to DataStore info regarding How it Works in case you are interested in having a look at it.\nhttps://docs.amplify.aws/lib/datastore/how-it-works/q/platform/js/#model-data-locally",
"username": "Ricardo_Montoya"
},
{
"code": "npmnpm",
"text": "Hi @Ricardo_Montoya ,So you mean off-line first for web is not considered a use case for Real Sync?Per my earlier note, Realm has offline-first SDKs and there are cross-platform Realm SDKs for different languages including a Node.js SDK and a React Native SDK that are comparable to Amplify DataStore. Local storage uses a Realm database and Realm Sync enables bi-directional sync to MongoDB Atlas.The Realm Web SDK that you originally referenced is a solution for online browser-based applications that run in pure client-side JavaScript and do not have an app server (i.e. running solely in a web browser environment, no Node.js, npm packages, or app server deployment). I don’t believe this is a sync use case covered by Amplify as the first step appears to be installing npm packages.If you are building cross-platform applications with React Native and Expo, you would use the React Native SDK: Build an Offline-First React Native Mobile App with Expo and Realm.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for the answer @Stennie_X ,I’ll check out the RN SDK and get back to you in a few days to report if it worked cross platform in iOS, Android and Web using Expo, since that’s how I’m using Amplify DataStore.Thank you for the attention ",
"username": "Ricardo_Montoya"
},
{
"code": "",
"text": "Hi All,Device Sync for the realm-web is now available as a preview feature!Regards\nManny",
"username": "Mansoor_Omar"
}
] | When is local database support and sync be available for Web? | 2022-02-02T18:41:25.406Z | When is local database support and sync be available for Web? | 5,476 |
null | [
"atlas-device-sync",
"realm-web"
] | [
{
"code": "",
"text": "It seems like React Native has sync support but not angular? https://docs.mongodb.com/realm/sdk/react-native/examples/sync-changes-between-devices/We are disappointed that angular has no support for sync. Are there any workarounds?",
"username": "Suresh_Batta"
},
{
"code": "",
"text": "Yes, I am also struggling to enable ‘SYNC’ offline feature of mongoDB RealM. No information found about specifically ANGULAR framework! You can go thru the doc https://docs.mongodb.com/realm/web/mongodb/#std-label-web-mongodb-watch and implement REALTIME data in your app.",
"username": "TANMOY_GHOSH"
},
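For the real-time approach mentioned above, a rough realm-web sketch of watching a collection could look like this (the App ID, database, and collection names are placeholders; note that this streams change events from Atlas, it is not an offline local database):

import * as Realm from "realm-web";

const app = new Realm.App({ id: "myapp-abcde" }); // placeholder App ID

async function watchChanges() {
  const user = await app.logIn(Realm.Credentials.anonymous());
  const collection = user
    .mongoClient("mongodb-atlas") // default linked data source name
    .db("mydb")
    .collection("mycollection");

  // Async iterator over change events (insert/update/delete/...).
  for await (const change of collection.watch()) {
    console.log(change.operationType, change.fullDocument);
  }
}

watchChanges().catch(console.error);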
{
"code": "",
"text": "Correct the realm-web SDK does not support the realm database or sync which is called out in the docs here - https://docs.mongodb.com/realm/web/We are investigating what it would take to bring these capabilities to realm-web in the future but it is a long-term project",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks for your reply. I planned to move my next project to ‘mongodb realm’ to get ‘OFFLINE SYNC’ feature, but after getting your reply in the community I am really disappointed.\nAnyways, looking forward to get the feature available in web sdk as soon as possible.",
"username": "TANMOY_GHOSH"
},
{
"code": "",
"text": "We are in the same boat. Would love to see Angular supported for offline capabilities. We will have to look at alternatives now. Thank you!",
"username": "Lazmeister"
},
{
"code": "",
"text": "Hi everybody,@Ian_Ward having sync in angular would be great. Please bump this feature in the development queue… ",
"username": "Armando_Marra"
},
{
"code": "",
"text": "We are investigating what it would take to bring these capabilities to realm-web in the future but it is a long-term projectAny news on this? Working with synched objects would be great!",
"username": "Runar_Jordahl"
},
{
"code": "",
"text": "Device Sync for the WEB SDK is now available as a preview feature!Regards\nManny",
"username": "Mansoor_Omar"
}
] | Angular Realm Sync | 2021-03-26T21:47:18.806Z | Angular Realm Sync | 5,956 |
null | [
"atlas-device-sync",
"realm-web"
] | [
{
"code": "",
"text": "Any plan to support Reaml Sync in your Web SDK?\nIf yes, how soon?Thanks a lot for the great work!",
"username": "Gilles_Yvetot"
},
{
"code": "",
"text": "Hi @Gilles_Yvetot we are in the process of researching how this might work. Do mind providing your use-case for how sync with the web SDK would be valuable for you?",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "@Sumedha_Mehta1 thanks for the quick reply.I have a very specific use case where I have an electron app and a web extension that needs to be kept in sync.But I have seen some more classic use cases in my previous job where we were using Firebase to update forms in real time for collaboration. Like a very very simple google doc.",
"username": "Gilles_Yvetot"
},
{
"code": "",
"text": "Hi everybody,@Sumedha_Mehta1 , I’m interested in having sync enabled in Web SDK because we are developing a collaborative platform using react-native for the mobile part (iOS and Android) and Angular as web front-end application.It would be nice if both app and web-app where able to use the same sync logic.Are there any updates on this possibility since the original date of this post?",
"username": "Armando_Marra"
},
{
"code": "",
"text": "Hi All,Device Sync for the WEB SDK is now available as a preview feature!Regards\nManny",
"username": "Mansoor_Omar"
}
] | Any plan to support Realm Sync in your Web SDK | 2021-08-02T18:03:00.818Z | Any plan to support Realm Sync in your Web SDK | 4,338 |
null | [] | [
{
"code": "",
"text": "Still want to follow the question Realm Web SDK and sync option.When will it supported, your competitor google Firestore and amplify datastore implemented this feature. If you can’t support this feature, lots of developers probably will not choose realm, feel like incomplete database.Please report to realm product manager this problem and solve this as soon as possible. It’s a not feature should be ignored. Thank you",
"username": "Ruofei_Lyu"
},
{
"code": "",
"text": "Hi Ruofei,Device Sync for the WEB SDK is now available as a preview feature!Regards",
"username": "Mansoor_Omar"
}
] | Realm Web SDK and sync option follow up | 2022-03-26T03:21:07.569Z | Realm Web SDK and sync option follow up | 1,902 |
null | [] | [
{
"code": "",
"text": "Hi all…how to restore files from downloaded snapshot backups in mongo db atlas ??\nSo I once had a cluster in mongodb that had data, then I downloaded the backup results…\nI plan to use this file to restore to a new cluster. But when I look for it in the restore menu, it’s not there for uploading files. how to restore the file ??Thanks",
"username": "Nur_Homsan"
},
{
"code": "",
"text": "Hello @Nur_Homsan ,Welcome to The MongoDB Community Forums! I saw you have not had a response to this topic yet, were you able to find an answer to your question?Please take a look at below thread as it explains certain scenarios to restore data from one cluster to another.Let me know in case you face any issue, I will be happy to assist! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi Tarun ,Thanks for the links …\nMy problem is solve right now …Thanks",
"username": "Nur_Homsan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Restore from download backup snapshot | 2023-06-30T02:32:03.355Z | Restore from download backup snapshot | 406 |
null | [
"java",
"atlas-cluster",
"atlas"
] | [
{
"code": "v5.0.18Timed out after 60000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@74123110. Client view of cluster state is {type=REPLICA_SET, servers=[{address=testcluster-shard-00-00.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=testcluster-shard-00-01.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=testcluster-shard-00-02.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}]\nConnectionString connectionString = new ConnectionString(url);\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .applyToClusterSettings(\n builder -> builder.serverSelectionTimeout(1, TimeUnit.MINUTES)\n )\n .build();\n MongoClient client = MongoClients.create(settings);\n return client;\n",
"text": "When trying to connect to a v5.0.18 M10 Tier Atlas cluster (populated only with sample dataset) through a public subnet without Privatelink or VPC Peering set up I get this error:It used to be at 30000 ms but I doubled it in case the connection attempt didn’t have enough time to find the server. Here’s a code snippet for how I’m handling the connection in my application:I’ve also made sure that the network access settings on the Atlas project include the IP address as well as the security group from which the application is being executed. Is there something I’m not taking into account here? Any advice would be very helpful.",
"username": "Julio_Montes_de_Oca"
},
{
"code": "M0/M2/M5M10Timed out after 60000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@74123110. Client view of cluster state is {type=REPLICA_SET, servers=[{address=testcluster-shard-00-00.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=testcluster-shard-00-01.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=testcluster-shard-00-02.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}]<cluster name>-pl-<number>-lbia-prod-regional-cluste-pl-0-lb.fu9ds.mongo.com",
"text": "Hello @Julio_Montes_de_Oca ,I saw that you haven’t got a response to this topic yet, were you able to find a solution?\nIf not, then could you please confirm and share few things for me to understand your use-case better?Timed out after 60000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@74123110. Client view of cluster state is {type=REPLICA_SET, servers=[{address=testcluster-shard-00-00.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=testcluster-shard-00-01.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=testcluster-shard-00-02.ihdih.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}]The most common reason for someone to get this error if a DAO(Data Access Object) tries to connect using an invalid mongo URI. Why it’s invalid can be many different reasons, but some common ones are:Attaching a few documents and threads you can refer to troubleshoot/fix connection error.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hello,Thank you for your response. I will answer your questions:I hope these details help.\nThank you.",
"username": "Julio_Montes_de_Oca"
},
{
"code": "Timed out after 30000 ms while waiting for a server that matches >ReadPreferenceServerSelector{readPreference=primary}.\nNo server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description\nmaxConnectionTimeoutMSmaxConnectionTimeoutMS100000maxConnectionLifeTimemaxConnectionIdleTimemaxConnectionLifeTimemaxConnectionIdleTime",
"text": "Hello @Julio_Montes_de_Oca ,As you are able to connect to cluster even from your EC2 instance so it should not be any network/connectivity issues. Moreover, your URL seems correct and the error seems to come from the driver/application side.I would recommend you to upgrade your MongoJava-sync-driver v4.7.2 to MongoJava-sync-driver v 4.10.2 as there are a lot of bug fixes, improvements and new features available.Have you tried below recommended methods provided in the Timeout Error section of Java Sync Connection troubleshooting?Sometimes when you send messages through the driver to the server, the messages take a while to respond. When this happens, you might receive an error message similar to one of the following error messages:If you receive one of these errors, try the following methods to resolve the issue.The maxConnectionTimeoutMS option indicates the amount of time the Java driver waits for a connection before timing out. The default value is 10000. You can increase this value or set it to 0 if you want the driver to never timeout.Consider setting maxConnectionLifeTime and maxConnectionIdleTime. These parameters configure how long a connection can be maintained with a MongoDB instance. For more information about these parameters, see Connection Pool Settings.You might have too many open connections. The solution to this is described under Error Sending Message.If you are using an older version of Java, you might need to manually install some certificates as described under Error Sending Message.",
"username": "Tarun_Gaur"
}
] | Application using MongoJava Sync driver v4.7.2 and public subnet can't connect to Atlas Cluster (Timeout) | 2023-06-29T01:04:13.064Z | Application using MongoJava Sync driver v4.7.2 and public subnet can’t connect to Atlas Cluster (Timeout) | 1,010 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "After signing in with Apple, the user.profile object has all nil properties. If email and name is not automatically set with the values provided by the Apple sign in, how can I set those values?",
"username": "Madalin_Sava"
},
{
"code": "User",
"text": "From my understanding, there are a limited numbers of providers (Facebook, Google and a Custom JWT) that populate the profile data (aka user metadata). See Authentication Provider MetadataIf you’re using the Swift SDK, see User Metadata - Swift SDK, noting the followingYou cannot edit user metadata through a User object.So now you know what you can’t do, what you can do is leverage Realms Custom User Data system which allows the developer to define what custom data goes with each user and can be easily read in and manipulated when a user authenticates.Also, I believe Apple only shares user information such as the display name with apps the first time a user signs in and I don’t know if we have access that through Realm’s Apple AuthPerhaps someone else can add more info.",
"username": "Jay"
}
] | User profile after login with Apple is empty | 2023-07-11T13:54:16.637Z | User profile after login with Apple is empty | 585 |
[
"node-js",
"mongoose-odm",
"react-js"
] | [
{
"code": "function AdminBulkUpdate() {\n\nconst [data, setData] = useState([]);\n\nconst handleFileUpload = (e) => {\n const reader = new FileReader();\n reader.readAsBinaryString(e.target.files[0]);\n reader.onload = (e) => {\n const data = e.target.result;\n const workbook = XLSX.read(data, { type: \"binary\" });\n const sheetName = workbook.SheetNames[0];\n const sheet = workbook.Sheets[sheetName];\n const parsedData = XLSX.utils.sheet_to_json(sheet);\n setData(parsedData);\n };\n}\n\nreturn (\n <Row className=\"m-5\">\n\n <Col md={2}>\n <AdminLinksComponent />\n </Col>\n <Col>\n <input \n type=\"file\" \n accept=\".xlsx, .xls\" \n onChange={handleFileUpload} \n />\n\n {data.length > 0 && (\n <table>\n <thead>\n <tr>\n {Object.keys(data[0]).map((key) => (\n <th key={key}>{key}</th>\n ))}\n </tr>\n </thead>\n <tbody>\n {data.map((row, index) => (\n <tr key={index}>\n {Object.values(row).map((value, index) => (\n <td key={index}>{value}</td>\n ))}\n </tr>\n ))}\n </tbody>\n </table>\n )}\n </Col>\n </Row>\n );\n}\n\nexport default AdminBulkUpdate;\nconst mongoose = require(\"mongoose\")\nconst Review = require (\"./ReviewModel\")\nconst imageSchema = mongoose.Schema({\n path: {type: String, required: true}\n})\n\nconst productSchema = mongoose.Schema({\n name: {\n type: String,\n required: true,\n unique: true,\n },\n description: {\n type: String,\n required: true,\n },\n category: {\n type: String,\n required: true,\n },\n count: {\n type: Number,\n required: true,\n },\n price: {\n type: Number,\n required: true\n },\n rating: {\n type: Number\n },\n reviewsNumber: {\n type: Number,\n },\n sales: {\n type: Number,\n default: 0\n },\n attrs: [ \n {key: {type: String}, value: {type: String}}\n ], \n images: [imageSchema],\n reviews: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: Review,\n }\n ],\n \n}, {\n timestamps: true,\n})\n\n\nconst Product = mongoose.model(\"Product\", productSchema)\nmodule.exports = Product\n",
"text": "I’m using the MERN stack for a e-commerce project I’m working on. I need to manually update the prices of several products using a excel spreadsheet, once the provider sends it to me. Since there are many products, I need to perform a bulk update with the information provided on the spreadsheet. Some prices my vary, other will remain the same. This is something I’ll need to perform every month (as prices vary from month to month) I’ve managed to create a page that shows the values on the spreadsheet. That is currently working.The Problem: I haven’t been able to save the new information (update the price field) on my MongoDB collection (called “products”)I’d like to keep it as simple as possible. So far I’m trying to do this from the front end… Perhaps I need to do it from the back end? Add code to my “Product.Controller” file?After some reading, I’ve decided to go with the xlsx library, but I’m open to suggestions.Here is a sample of my MongoDB Collection:\nCollection773×300 25.4 KBHere is the code I’m using to “show” the spreadsheet information:This is my Product Model (Schema):The expected outcome is: When I receive a spreadsheet with the new product’s prices, I’ll upload such file from the front end and update the price field on MongoDB. This needs to happen on demand.Hope you can help me with this. Thanks.",
"username": "espiralverde_N_A"
},
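One possible back-end shape for this, sketched as a hypothetical Express route that reuses the existing Product model; it assumes the front end POSTs the parsed rows as JSON and that each row has name and price columns:

// Hypothetical route file; assumes express.json() body parsing is enabled.
const express = require("express");
const Product = require("../models/ProductModel"); // path is an assumption
const router = express.Router();

router.post("/api/products/bulk-price-update", async (req, res) => {
  try {
    const rows = req.body; // e.g. [{ name: "Nectarines", price: 3.5 }, ...]
    const ops = rows.map((row) => ({
      updateOne: {
        filter: { name: row.name },                     // match by unique product name
        update: { $set: { price: Number(row.price) } },
      },
    }));
    const result = await Product.bulkWrite(ops);        // one round trip for all rows
    res.json({ matched: result.matchedCount, modified: result.modifiedCount });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;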
{
"code": "",
"text": "Hello there,I believe it would be more convenient to assist you if you could provide reproducible code, such as a GitHub repository that we can clone locally.",
"username": "Carl_Champain"
}
] | Update specific field in collection from excel using react | 2023-07-10T22:26:28.828Z | Update specific field in collection from excel using react | 639 |
|
null | [
"aggregation",
"views"
] | [
{
"code": "",
"text": "Hi team,\nWe have a requirement to move some documents from one collection to another. We are evaluating the $merge operator in the aggregation pipeline to solve this problem. This operator is working as expected but is causing our MongoDB CPU usage to spike.\nIs there a way to control this? For example, can we run this query on a single core or limit the resources that it uses somehow.?\nWe are ok with running this query in the background, but it should not cause any spikes or discrepancies in production environments.",
"username": "Kiran_Sunkari"
},
{
"code": "",
"text": "What options do you have on the $merge? I tested locally with a simple merge within the same DB but to a different collection and the CPU jumped a couple of % on the mongo process./editAlso do you see the same CPU spike when running a mongoimport or something similar, or a $out?",
"username": "John_Sewell"
},
{
"code": "db.oldCollection.aggregate([\n {\n \"$match\": {\n \"key1\": \"DefaultValue\"\n }\n },\n {\n $merge: {\n into: \"newCollection\",\n on: [\"key1\", \"key2\"],\n whenMatched: \"keepExisting\",\n whenNotMatched: \"insert\"\n }\n }\n])\n",
"text": "I also tested writing to different collections. I need this only. As per the documentation, I did not find any options in the $merge operator to control the resources. However, can I use $merge to move data from one collection to another in production?\nIf $merge uses all the available resources to move the data, I can not use it in the production but if it is limited to certain resources then vertical scaling will work. Is there a way to limit the resources that $merge uses or it will use fixed percentage of resources?\nand my merge query:",
"username": "Kiran_Sunkari"
},
{
"code": "",
"text": "I’m not sure how to limit it or why it’s consuming so much resource to be honest.We use $merge extensively in all environments up to and including production to export data between collections and databases or into the same collection as an update and have seen no performance issues.For context, we’re not running on-prem at the moment though but on Atlas.Does using $out also cause this cpu spike? Do you have indexes on key1 and key2?",
"username": "John_Sewell"
},
{
"code": "",
"text": "we have a compound index on {“key1”:1, “key2”: 1} on new collection.",
"username": "Kiran_Sunkari"
},
{
"code": "",
"text": "You may always add a $limit:N stage after the $match so that you merge only N documents at a time. You just need to call your aggregation more often but at least you may reduce the CPU spikes.",
"username": "steevej"
}
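A rough mongosh sketch of that batching idea, reusing the fields from the query above (the batch size and pause are illustrative, and whenMatched: "keepExisting" keeps re-running a batch harmless):

// Copy matching documents in fixed-size slices to spread the load over time.
const BATCH = 5000;
const filter = { key1: "DefaultValue" };
const total = db.oldCollection.countDocuments(filter);

for (let skip = 0; skip < total; skip += BATCH) {
  db.oldCollection.aggregate([
    { $match: filter },
    { $sort: { _id: 1 } },   // stable order so each slice is distinct
    { $skip: skip },
    { $limit: BATCH },
    {
      $merge: {
        into: "newCollection",
        on: ["key1", "key2"],
        whenMatched: "keepExisting",
        whenNotMatched: "insert"
      }
    }
  ]);
  sleep(1000);               // mongosh helper: pause between batches
}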
] | $merge in controlled way | 2023-07-11T13:44:13.995Z | $merge in controlled way | 531 |
null | [
"ruby",
"mongoid-odm"
] | [
{
"code": "",
"text": "Mongoid 8.1.1 is a patch release in the 8.x series. It fixes the following issue:It also corrects the following documentation error:",
"username": "Jamis_Buck"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | Mongoid 8.1.1 Released | 2023-07-11T17:51:20.516Z | Mongoid 8.1.1 Released | 545 |
null | [
"data-modeling",
"react-native"
] | [
{
"code": "userId",
"text": "I have a List Model (like a ToDo List).\nI want to connect this List Model with a User so they can access (only) their own List’s.I have now 3 options:Option 1 is right now my preferred way to go, because option 2 seems not to be live (docs say CustomUserData is stale/updates slowly) and I want that the User can directly access the List’s he created.I search for an efficient solution and I want to add later the feature that a User can share a List with other User.[React Native, I am quite new to Realm]Thank You for your Feedback in advance ",
"username": "Milan_Doe"
},
{
"code": "",
"text": "I would recommend going with Option 3. Add a userID to every List document, and then allow the user to just sync on lists they own.If you want to use a different user identifier that’s not userID (like email), option 2 is also perfectly fine. As long as you stipulate that a user’s identifier / email doesn’t change, you could attach that identifier to every list object and user custom user data to associate a user document to the list that the user owns.",
"username": "Sudarshan_Muralidhar"
},
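As an illustration of Option 3 with the React Native SDK and Flexible Sync; the model, field names, and query below are assumptions for the sketch, not taken from the original post:

import Realm from "realm";

// Hypothetical List model that stores the owner's user id on every document.
const ListSchema = {
  name: "List",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    title: "string",
    ownerId: "string",
  },
};

async function openUserRealm(app) {
  const user = app.currentUser;
  const realm = await Realm.open({
    schema: [ListSchema],
    sync: { user, flexible: true },
  });

  // Subscribe only to the Lists this user owns; server-side rules should
  // enforce the same restriction.
  await realm.subscriptions.update((mutableSubs) => {
    mutableSubs.add(realm.objects("List").filtered("ownerId == $0", user.id));
  });

  return realm;
}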
{
"code": "",
"text": "Thank you for your Answer! I will choose Option 2.",
"username": "Milan_Doe"
}
] | How should i connect a User with a Model? | 2023-07-09T20:56:27.547Z | How should i connect a User with a Model? | 531 |
null | [
"queries",
"dot-net",
"text-search"
] | [
{
"code": "",
"text": "Hi,\nIn reference to my previous discussion on the same topic - Atlas Search in C# application using MongoDB Driver ExtensionIn order to use atlas search in C# application I must refer to the MongoDB.Labs.Search library which is in beta state for a long time. Any idea when will it be released?",
"username": "Prajakta_Sawant1"
},
{
"code": "",
"text": "Prajakta,Most of the Atlas Search functionality has been incorporated to the C# driver, with fluent methods for most of the Search operators (see docs). Notably, search indices cannot be configured from the C# driver, though we anticipate releasing support for this in Q3. In the mean time, search indices can be configured via the Atlas UI or CLI.Please let me know if I can help answer any additional questions!Best,\nPatrick",
"username": "Patrick_Gilfether1"
}
] | Reopening - Atlas Search in C# application using MongoDB Driver Extension | 2023-07-11T06:54:06.549Z | Reopening - Atlas Search in C# application using MongoDB Driver Extension | 503 |
null | [
"ruby",
"mongoid-odm"
] | [
{
"code": "",
"text": "Mongoid 8.0.5 is a patch release in the 8.0 series with one bug fix:",
"username": "Jamis_Buck"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | Mongoid 8.0.5 Released | 2023-07-11T15:52:35.150Z | Mongoid 8.0.5 Released | 560 |
null | [
"replication",
"configuration",
"storage"
] | [
{
"code": "replication:\n replSetName: rs0\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: ...\\MongoDB\\Server\\6.0\\data\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: ...\\Program Files\\MongoDB\\Server\\6.0\\log\\mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1 #, machineIP\n\nsecurity:\n authorization: enabled\n \n#replication:\n# replSetName: rs0\n \n#operationProfiling:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n",
"text": "Hello, I apologize If I put this topic in the wrong category.\nI am learning MongoDB and I can run my service fine without replication.\nMy mongod.exe Version is v6.0.7 and MongoDBCompass.exe is v1.38.2But when I add to my cfg file the following linesand try to restart the service I get the Error 1503.What I did so far:Am I missing something crucial?I would appreciate the help, bellow is my edited config file.Thanks,\nP. Costamy cfg",
"username": "P_Costa"
},
{
"code": "",
"text": "Hi @P_CostaHow are you starting mongod? Are you staring it as a service of r invoking it on the command line. The Windows Eventlog may have information if the Windows Service has failed to start.I note that you have authentication enabled. As such a internal membership authentication method also needs to be used, many installations will use keyfile authentication but X590 is also available.",
"username": "chris"
},
{
"code": "",
"text": "Hi @chris\nI started as a Service, the error that I got is “A timeout was reached (30000 milliseconds) while waiting for the MongoDB Server (MongoDB) service to connect.”\nThanks, I will check out the internal membership authentication maybe it will solve it.",
"username": "P_Costa"
},
{
"code": "",
"text": "So I made a keyfile and now my service starts.\nNow I just need to figure out the final steps to put replications working with authentication and keyfile.Thanks @chris.",
"username": "P_Costa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Trying to start MongoDB Service with Replication but getting Error 1503 | 2023-07-10T16:48:05.243Z | Trying to start MongoDB Service with Replication but getting Error 1503 | 574 |
null | [
"atlas-search"
] | [
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"client\": {\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"type\": \"string\"\n },\n \"items\": {\n \"dynamic\": true,\n \"fields\": {\n \"item1\": {\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"type\": \"string\"\n },\n \"item2\": {\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"type\": \"string\"\n }\n },\n \"type\": \"embeddedDocuments\"\n },\n \"shipping_code\": {\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"type\": \"string\"\n }\n }\n }\n}\n",
"text": "I want to create an index in atlas for some files, and one of them is an array items, with some fileds. This is the json.I wnand to search for these fields, for this reason I use dynamic: false, so I’ve added this json in atlas search in JSON editor and i don’t get results for item1, item2, for the items of the\narray. Is the json ok?, is something missing in the json?",
"username": "javier_ga"
},
{
"code": "$search",
"text": "Hi @javier_ga,I wnand to search for these fields, for this reason I use dynamic: false, so I’ve added this json in atlas search in JSON editor and i don’t get results for item1, item2, for the items of the\narray. Is the json ok?, is something missing in the json?Can you provide the following details so that I can better assist:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "already solved, changing “type”: “embeddedDocuments” to “type”: “document”",
"username": "javier_ga"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Create serach index lucene.whitespace for fields in an array | 2023-07-07T11:21:55.609Z | Create serach index lucene.whitespace for fields in an array | 479 |
null | [
"crud"
] | [
{
"code": "",
"text": "Whenever I use the updateMany function in mongo shell to update the records in cosmosdb Mongodb API it shows me the MongoServerError expected type object but found array. but my documents are not array type",
"username": "Aditya_Sharma9"
},
{
"code": "",
"text": "What’s the exact command you’re running?",
"username": "John_Sewell"
}
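While waiting for the exact command, one possible cause worth checking: an update supplied as an array is treated as an aggregation-pipeline update, which MongoDB accepts but the Cosmos DB MongoDB API may reject with a type error. The collection and field names below are purely illustrative:

// Pipeline-style update: the second argument is an ARRAY of stages.
db.items.updateMany(
  { status: "active" },
  [ { $set: { updatedAt: "$$NOW" } } ]
);

// Document-style update: the second argument is a plain update document.
db.items.updateMany(
  { status: "active" },
  { $set: { reviewed: true } }
);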
] | MongoServerError expected type object but found array. cosmos db mongodb | 2023-07-11T10:34:22.528Z | MongoServerError expected type object but found array. cosmos db mongodb | 522 |
null | [
"java",
"production",
"kotlin",
"scala"
] | [
{
"code": "",
"text": "The 4.10.2 MongoDB Java & JVM Drivers has been released.Reference documentationThe documentation hub includes extensive documentation of the 4.10 driver.Java DriversKotlin DriversScala DriverYou can find a full list of bug fixes here.You can find a full list of improvements here.You can find a full list of new features here.",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Java Driver 4.10 Released | 2023-07-11T11:00:55.357Z | MongoDB Java Driver 4.10 Released | 1,998 |
null | [
"replication",
"sharding",
"configuration"
] | [
{
"code": "systemLog:\n destination: file\n path: /var/log/mongodb/mongos.log\n logAppend: true\nprocessManagement:\n fork: true\nnet:\n bindIp: router1\nsharding:\n configDB: test-configsrv-replica/cfg1:27019\n{\"t\":{\"$date\":\"2023-06-28T07:58:03.147+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333222, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM received error response\",\"attr\":{\"host\":\"127.0.0.1:27019\",\"error\":\"HostUnreachable: Error connecting to 127.0.0.1:27019 :: caused by :: Connection refused\",\"replicaSet\":\"test-configsrv-replica\",\"response\":{}}}\n{\"t\":{\"$date\":\"2023-06-28T07:58:03.147+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"test-configsrv-replica\",\"host\":\"127.0.0.1:27019\",\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"Error connecting to 127.0.0.1:27019 :: caused by :: Connection refused\"},\"action\":{\"dropConnections\":true,\"requestImmediateCheck\":false,\"outcome\":{\"host\":\"127.0.0.1:27019\",\"success\":false,\"errorMessage\":\"HostUnreachable: Error connecting to 127.0.0.1:27019 :: caused by :: Connection refused\"}}}}\n",
"text": "Hi,\nI am doing sharding with 1 config server, 1 shard and a mongos Instance.I have setup the config server and shard as single member replica set and both are runnning on their IP with port 27019 and 27018 respectively. (I tested using telnet to check connectivity too)I have my mongos.conf as followsAnd I have mapped cfg1 with IP address of config server and router1 as IP of mongos instance itself in /etc/hosts and I am strangely facing this issue while running mongosI am not sure why this is trying to connect 127.0.0.1:27019 because in mongos it is cfg1 and its correctly mapped in hosts with Different IP AddressAny help would be appreciated! Thanks",
"username": "Jay_Bhanushali1"
},
{
"code": "db.hello()",
"text": "You initialised the configserver replSet using 127.0.0.1 as the hostname. The server discovery protocol uses the provided hostnames as seed and will query the replset for its members. Running db.hello() is basically the same thing a driver will do during this discovery.Best to always use a hostname for the members and every host client/mongos/configSvr/shardSvr need to be able to resolve(and connect) those names.",
"username": "chris"
},
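In mongosh terms, initialising the config server replica set with a resolvable hostname instead of localhost could look like this, reusing the names from the question (run it against the config server instance):

// mongosh --host cfg1 --port 27019
rs.initiate({
  _id: "test-configsrv-replica",
  configsvr: true,
  members: [
    { _id: 0, host: "cfg1:27019" }  // a name every other node can resolve
  ]
});

// Verify the member is advertised as cfg1:27019 rather than 127.0.0.1:27019.
rs.status().members.map(m => m.name);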
{
"code": "",
"text": "Yes, that is correct. I was running configSrv with only localhost binding. Thanks @chris !",
"username": "Jay_Bhanushali1"
}
] | Error connecting to 127.0.0.1:27019 :: caused by :: Connection refused | 2023-06-28T07:59:38.732Z | Error connecting to 127.0.0.1:27019 :: caused by :: Connection refused | 763 |
null | [
"sharding",
"mongodb-shell"
] | [
{
"code": "mongoshrs.status()systemLog:\n destination: file\n path: /var/log/mongodb/mongos.log\n logAppend: true\nprocessManagement:\n fork: true\nnet:\n bindIp: router1\nsharding:\n configDB: test-configsrv-replica/cfg1:27019\nroot@configsrv3-ubuntu22-04lts-scpu-1gb-cdg1-1:~# sudo systemctl status mongos\n● mongos.service - MongoDB Shard Router\n Loaded: loaded (/etc/systemd/system/mongos.service; enabled; vendor preset: enabled)\n Active: active (running) since Thu 2023-06-29 12:06:31 UTC; 1s ago\n Main PID: 190600 (mongos)\n Tasks: 16 (limit: 1101)\n Memory: 14.4M\n CPU: 17ms\n CGroup: /system.slice/mongos.service\n ├─190600 /usr/bin/mongos --config /etc/mongos.conf\n ├─190601 /usr/bin/mongos --config /etc/mongos.conf\n └─190602 /usr/bin/mongos --config /etc/mongos.conf\nroot@configsrv3-ubuntu22-04lts-scpu-1gb-cdg1-1:~# mongosh --host xxx.xxx.xxx.xxx\nCurrent Mongosh Log ID: 649d758964c015107f2b959a\nConnecting to: mongodb://xxx.xxx.xxx.xxx:27017/?directConnection=true&appName=mongosh+1.10.1\nMongoNetworkError: connect ECONNREFUSED xxx.xxx.xxx.xxx:27017\n",
"text": "Hi,\nI am working on configuring one config server, shard and a mongos router. I have been able to setup config server and shard and they are running on their respective IP addresses. I am able to mongosh into both of them and rs.status() seems to give correct result. I have mongos running on different instance however unable to mongosh into it. Below is mongos.conf I am usingStatus of mongos serverI have router1 (mapped to IP of mongos instance itself) and cfg1 mapped in /etc/hosts with correct IP addresses\nBelow is the errorAny help would be appreciated, Thanks!",
"username": "Jay_Bhanushali1"
},
{
"code": "",
"text": "Hi @Jay_Bhanushali1,\nI think the parameter that isn’ t set correctly Is bindip.From the documentation:Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "systemctl mongos startmongos --config <path-to-config>",
"text": "Hi @Fabio_Ramohitaj ,\nIt was setted up correctly however somehow the mongos service which I was runnning using systemctl mongos start was different that running mongos using mongos --config <path-to-config> . The second option worked for me which was by using the mongos in command line. Thanks!",
"username": "Jay_Bhanushali1"
}
] | MongoNetworkError: connect ECONNREFUSED <mongos instance IP Address> | 2023-06-29T12:19:41.315Z | MongoNetworkError: connect ECONNREFUSED <mongos instance IP Address> | 648 |
null | [] | [
{
"code": "",
"text": "In Relational Databases, it is necessary to create multiple tables to have a healthy system with good data normalization, so that at the end of the day, the database has 5 related tables, for example. However, in MongoDB, it is possible to consolidate all 5 tables into a single Collection.So, the question remains:\nWhen should I create a new Collection?",
"username": "Gabriel_Pavao"
},
{
"code": "",
"text": "Hey @Gabriel_Pavao,Welcome to the MongoDB Community forums When should I create a new Collection?In MongoDB, collections are used to group related documents and provide an organized structure for storing data. It is recommended to create a new collection when there is a logical separation or when the data in the collection has a different structure or purpose. Doing so allows us to maintain data organization and improves query performance.When considering whether to create a new collection, think about the distinct characteristics or access patterns of the data. For example, if you have an e-commerce application that stores information about both products and customers. In this case, you might create separate collections for “products” and “customers” to maintain the different attributes and relationships associated with each entity.On the other hand, if a customer has multiple shipping addresses, you can choose to embed the shipping addresses as an array within the “customer” document. This avoids the need to create a separate “shipping addresses” collection.Also if you’re starting your MongoDB journey, I would recommend the following resources:These resources will provide you with valuable insights for designing effective MongoDB schemas.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
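To make the e-commerce example concrete, a small mongosh sketch (collection and field names are illustrative only):

// Separate collections for entities with distinct structure and access patterns.
db.products.insertOne({
  name: "Espresso machine",
  price: 249.99,
  category: "kitchen"
});

// Data that is always read together with its parent is embedded rather than
// split into its own collection.
db.customers.insertOne({
  name: "Ana Silva",
  email: "ana@example.com",
  shippingAddresses: [
    { label: "home", street: "12 Rua Alta", city: "Lisbon" },
    { label: "work", street: "3 Av. Central", city: "Lisbon" }
  ]
});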
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | When do I need to create a new Collection? | 2023-07-06T15:48:13.221Z | When do I need to create a new Collection? | 420 |
null | [
"aggregation"
] | [
{
"code": "connection.uri: \n",
"text": "HiWe are already using kafka source connector (1.9.1).\nSometimes i need to copy all the data from scratch. I#M using “startup.mode: copy_existing” and this works so far.\nSometimes i want copy all the data but only some selected ones. So i tried the filter option described here\nCopy Existing Data (I used the same example)My connector looks like this:topic.prefix: mongo\ndatabase: \ncollection: kafkatest\ntopic.suffix: dev\noutput.json.formatter: com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson\nkey.converter: org.apache.kafka.connect.storage.StringConverter\nkey.converter.schemas.enable: false\nvalue.converter: org.apache.kafka.connect.storage.StringConverter\nvalue.converter.schemas.enable: false\npublish.full.document.only: true\noffset.partition.name: kafkatestdev.12\nstartup.mode: copy_existing\nstartup.mode.copy.existing.pipeline: [{ “$match”: { “country”: “Mexico” } }]\nerrors.tolerance: none\npipeline:\n[{“$match”:{“operationType”:{“$in”:[“insert”, “update”]}}},{“$project”: {“_id”:1,“fullDocument”:1,“ns”:1,“documentKey”:1}}]Without the startup.pipeline i get all data but with the startup.pipeline i dont get any existing entries.\nDoes anyone has an idea whats wrong?Thank you",
"username": "Walter_Olligeschlager"
},
{
"code": "",
"text": "Fix for anyone who got the same problem.startup.mode.copy.existing.pipeline: ‘[{ “$match”: { “country”: “Mexico” } }]’I used single quotes for the pipeline.",
"username": "Walter_Olligeschlager"
}
] | Kafka copy_existing in combination with pipeline doesnt work | 2023-03-14T14:56:47.971Z | Kafka copy_existing in combination with pipeline doesnt work | 851 |
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "$sort$near$searchcreatedAt$nearshouldfilterpivotconst shopName = \"test-inc\";\nconst query = 'Smith';\nconst size = 25;\nconst page = 0;\n\nconst aggregation = [\n {\n $search: {\n index: \"checks\",\n compound: {\n $filter: {\n {\n phrase: {\n query: shopName,\n path: \"shopName\",\n },\n },\n },\n $should: [\n {\n text: {\n query: searchTerm,\n path: [\n \"firstName\",\n \"lastName\",\n \"phone\",\n \"email\",\n \"orderId\",\n ],\n },\n },\n {\n near: {\n path: \"createdAt\",\n origin: new Date(),\n pivot: 86400000 * 7,\n score: { boost: { value: 999 } },\n },\n },\n ],\n }\n },\n {\n $skip: parseInt(page) * parseInt(size),\n },\n {\n $limit: parseInt(size),\n },\n {\n $facet: {\n customers: [{\n $project: { \n createdAt: 1, \n name: 1,\n email: 1\n }\n }],\n },\n {\n $set: { meta: \"$$SEARCH_META\" },\n },\n];\n\n",
"text": "Hi,I have a small (~38 megabytes) search index with a relatively simple query, however it is not performant with some queries with customer accounts that have a large amount of data indexed. The search will take longer than 60 seconds, which times out the HTTP request serving the frontend.I’m trying to remove the $sort operation from the aggregation as recommended here: https://www.mongodb.com/docs/atlas/atlas-search/performance/query-performance/#-sort-aggregation-stage-usageHowever, I’m a bit confused on how to use $near within the $search operation.For context, I’m trying to sort based on a date field (createdAt), and the results should always be in descending order.But I’m not sure if $near should be within a should clause or filter clause, and I’m also unsure how to use the pivot option to sort in a descending order:",
"username": "Dylan_Pierce"
},
{
"code": "",
"text": "Hi @Dylan_Pierce ,The following is relatively new and should help you here:Sort you Atlas Search results by date, number, and string fields.Regards,\nJason",
"username": "Jason_Tran"
},
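A trimmed-down sketch of the original pipeline using that sort option (the collection name is assumed, createdAt must be indexed as a date field, and this replaces the near/boost workaround rather than reproducing the full original query):

db.checks.aggregate([
  {
    $search: {
      index: "checks",
      compound: {
        filter: [{ phrase: { path: "shopName", query: "test-inc" } }],
        must: [
          {
            text: {
              path: ["firstName", "lastName", "phone", "email", "orderId"],
              query: "Smith"
            }
          }
        ]
      },
      // Sort inside $search on the indexed date field; no separate $sort stage.
      sort: { createdAt: -1 }
    }
  },
  { $limit: 25 }
]);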
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to use $near as a sort within a search query? | 2023-06-05T14:08:31.492Z | How to use $near as a sort within a search query? | 626 |
null | [
"queries",
"golang"
] | [
{
"code": "batchSize = 1000FindOptionscursor.ID == 0cursor.Next()number of documents < batchSize",
"text": "Hello,I have been using mongodb v1.11.1 with golang 1.19.\nI was doing some data type migration, and I ran into something.In some environment, I had to change data type over 100 000 documents. Since everything is running is pods, I couldn’t load in memory 100 000 documents so I used batchSize = 1000 in the FindOptions struct.And I was using the method find in a infinite loop and breaking if cursor.ID == 0.\nThe thing is I was doing the check before using cursor.Next() so some clients had a number of documents < batchSize and the cursor was considered dead and I wasn’t updating any of their document.Is that a wanted behavior so put the ID of cursor = 0 when number of documents is < batchSize ?FYI, when number of documents > batchSize everything is good !",
"username": "Dylan_Dinh"
},
{
"code": "",
"text": "Hey @Dylan_Dinh,Thank you for reaching out to the MongoDB Community forums!To better understand the problem, may I ask you the following questions:Looking forward to hearing back from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "number of documents < batchSizebatchSize := int32(1000)\n\topts := &options.FindOptions{\n\t\tBatchSize: &batchSize,\n\t}\n\tfor {\n\t\tcursor, err := db.Collection(pushedNotificationCollection).Find(context.Background(), bson.M{\"payload\": bson.M{\"$type\": \"binData\"}}, opts)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif cursor.ID() == 0 {\n\t\t\tbreak\n\t\t}\n\n\t\tfor cursor.Next(context.Background()) {\n\t\t\tvar ai pushednotification.AlarmInfo\n\t\t\tvar pn oldUserPushedNotification\n\n\t\t\tif err = cursor.Decode(&pn); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\terr = json.Unmarshal(pn.Payload, &ai)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tnewUpn := buildNewUserPushedNotifFromOld(pn, ai)\n\n\t\t\t_, err = db.Collection(coll).ReplaceOne(context.Background(), cursor.Current, newUpn)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\nnumber of doc < batch sizeif cursor.ID() == 0 {\n\t\t\tbreak\n\t\t}\nnumber of doc < batch sizethen cursor.ID() == 0db.version()",
"text": "Hi @Kushagra_Kesav,Code snippet :Is it the right way to do that, the fastest as possible ?\nDoing that before calling ReplaceOne when number of doc < batch size will in fact break the for loop and you miss some documents :Question is, why when number of doc < batch size then cursor.ID() == 0, I feel like it has to be to value 0 when there is no documents at all.Maybe I shouldn’t use a for loop so I could get rid of that break call but I have memory limitation on my pod, so loading 100 000 documents is not possible.",
"username": "Dylan_Dinh"
},
{
"code": "",
"text": "Where in the documentation does it say that cursor being zero indicates that the cursor does not contain any documents?",
"username": "John_Sewell"
}
] | Size of retrieved documents < batch size | 2023-06-26T08:26:01.520Z | Size of retrieved documents < batch size | 757 |
null | [
"queries",
"indexes"
] | [
{
"code": "",
"text": "Hello Folks,I’m just getting started with MongoDB and I am currently exploring the concept of indexes.I’m working on a music application, where users can search songs based on different tags (such as genre, mood, artist, bpm, etc.), totaling around ten. What makes it a bit more complex is that the users have the flexibility to choose the different search filters / fields dynamically.To illustrate, here are two examples queries a user may request:Example 1db.songs.find( { genres: { $in: [rock, pop] } }, { moods: happy } )Example 2db.songs.find( { voice: female }, { moods: { $in: [happy, sad] } }, { bpm: { $gt: 90 } } )I’d like to optimize these queries using indexes, but given the dynamic nature of the queries, I am unsure about the best approach to create the necessary index or indexes.Is it a viable strategy to create a separate index for each searchable field?Alternatively, should I create a single index including all the searchable fields? I fear this may be problematic because I have several fields that are arrays (for instance, a song can be tagged with multiple moods / genres). As far as I know, MongoDB can accommodate only one multikey (array) index. Therefore, a single index featuring all the fields wouldn’t be viable from my understanding.What would you recommend as a solution for my scenario?I appreciate any help and guidance!Thanks a lot.Best,\nValerio",
"username": "Valerio_Velardo1"
},
{
"code": "",
"text": "Its hard to give a best answer here.But you can take a look at this mysql - Create sql indexes for complex filtering - Stack OverflowIt has some good ideas on what are needed and what are not. Principle is only create indexes when necessaryAnyways the correct solution depends on your application needs.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Looks like the Attribute Pattern.",
"username": "steevej"
},
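A rough sketch of what the Attribute Pattern could look like for the songs collection (the tags are re-modelled as key/value pairs; field names are adapted from the question and this is only one possible design):

// Store the searchable tags as key/value pairs...
db.songs.insertOne({
  name: "Example song",
  attrs: [
    { k: "genre", v: "rock" },
    { k: "genre", v: "pop" },
    { k: "mood", v: "happy" },
    { k: "voice", v: "female" },
    { k: "bpm", v: 120 }
  ]
});

// ...so one compound multikey index covers every filter combination.
db.songs.createIndex({ "attrs.k": 1, "attrs.v": 1 });

// Dynamic filters become $elemMatch clauses over the same index.
db.songs.find({
  $and: [
    { attrs: { $elemMatch: { k: "voice", v: "female" } } },
    { attrs: { $elemMatch: { k: "mood", v: { $in: ["happy", "sad"] } } } },
    { attrs: { $elemMatch: { k: "bpm", v: { $gt: 90 } } } }
  ]
});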
{
"code": "",
"text": "Many thanks @Kobe_W The resource you shared is really helpful.",
"username": "Valerio_Velardo1"
},
{
"code": "",
"text": "Didn’t know about the Attribute Pattern. Thank you for sharing it @steevej !",
"username": "Valerio_Velardo1"
}
] | MongoDB Indexes to Optimize Dynamic Query | 2023-07-11T00:00:14.281Z | MongoDB Indexes to Optimize Dynamic Query | 550 |
null | [] | [
{
"code": "mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\nActive: failed (Result: core-dump) since Sat 2023-07-01 12:28:44 UTC; 9s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 49592 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=dumped, signal=ILL)\nMain PID: 49592 (code=dumped, signal=ILL)\n\nJul 01 12:28:41 ubuntu systemd[1]: Started MongoDB Database Server.\nJul 01 12:28:44 ubuntu systemd[1]: mongod.service: Main process exited, code=dumped, status=4/ILL\nJul 01 12:28:44 ubuntu systemd[1]: mongod.service: Failed with result 'core-dump'.\n",
"text": "Hardware: Raspberry Pi 3B\nOS: Ubuntu 20.04 LTSHey I need help. I am very desperated.I want to run mongoDB on my Raspberry Pi 3B. I run it with Ubuntu 20.04.6 LTS and mongoDB version 4.4, as sooo many thread mention, that the minimum recuired processor architecture is not fulfilled by any Pi (so far)When I want to start mongoDB I get this errorI installed mongoDB exactly following instructions of this tutorial from mongodb.com (Install & Configure MongoDB on the Raspberry Pi | MongoDB) where the instructions absolutely pay attention of fulfilling all known requirements. I tried this official (?) tutorial, since all previous attempts produces the same error (the same I am still confronting with). But nothing helps …As I mentioned before I had this issue all the time before. I have to fight it for almost a month (it is a private project and I have really not so much time for it after work). And because no solution on the web helped me, I reinstalled Ubuntu completely on my Pi and did it exactly with the commands of the tutorial I linked above. And it still does not work.I really don’t know what is wrong … I don’t know, why it is such a big issue (there are soooo many other threads already open) but without any solution. I really like mongoDB and this Pi will be the first tiny version of a server I plan to be put into a NAS one time.But if it will not work, no matter what I try, I have to use another database framework which is able to save documents even with an older cpu. I really don’t know why it got this limitations, so it is not backward compatible … I mean … isn’t it just saving and getting documents? :-/",
"username": "chris080G"
},
{
"code": "",
"text": "It makes me so sad that mongoDB is not backward compatible but also that it seems that no one seems to know any solution. I really like mongoDB, but since there are errors with such a big impact in such an early phase of my project, I cannot imagine running mongoDB in my productive system …",
"username": "chris080G"
},
{
"code": "",
"text": "Hi @chris080G,In my personal capacity I have built MongoDB from source and published binaries on Github here. While I have not attempted to run the binaries on a Pi 3, you can give it a try and report back.The Illegal instruction error you are hitting is well documented here and on StackOverflow. Search for the many threads of people hitting the same issue (I have included a random sampling).The storage engine of MongoDB, WiredTiger, is driving the requirement for a microarchitecture that exceeds that of Raspberry Pi’s and this is not going to change anytime soon.Feel free to raise an issue on the Github repo if there is an issue with the binaries or procedure to run them.I hope this helps get you unstuck.",
"username": "Matt_Kneiser"
}
] | mongoDB Active: failed (Result: core-dump) | 2023-07-07T08:07:50.254Z | mongoDB Active: failed (Result: core-dump) | 1,408 |
null | [] | [
{
"code": "",
"text": "I’m new to Mongo but have many years of experience in the business intelligence world.\nOur team was acquired by another company. This company uses MongoDB as their primary database.\nData events they are usually handled by a message queue in this case Google PubSub, from there they insert events into Mongo.\nAny new application that would like to consume those events will use the Topic (pubsub) instead of retrieving the data from Mongo.\nManagement made it clear that they don’t want us to do any interaction directly with Mongo. They are worried about performance being affected in the servers, slowing down the response of our primary application that end users depend.I don’t know if Mongo offers the option of using a secondary replica that we can use for reporting purposes.\nThat we can hit it as much as we want (batch loads, changestreams,etc) without having any effects to the primary mongo server.What are our options?",
"username": "Marbin_Diaz-Diaz"
},
{
"code": "",
"text": "Search for reporting in this link:You can setup a hidden replicaset member for reporting so you do not hit the primary, you could also set secondary read preference to not hit the primaries for a reporting engine.I’ve not needed to use this as our system is low enough volume that reporting from the primaries does not cause an issue but on several of the mongo conferences I’ve been on recently this has been mentioned as a use case for reporting.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks.\nI will look into it.\nOur prod env is high volume, saturated with transactions. That explains why management is so jealous about it.",
"username": "Marbin_Diaz-Diaz"
},
{
"code": "",
"text": "A hidden node can not be reached by read queries from clients. To server those “specific traffic” from a “specific node”, you can try using tag sets",
"username": "Kobe_W"
},
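For example, with the Node.js driver the reporting workload can be steered with read preference tags in the connection string; the nodeType:ANALYTICS tag shown is the one Atlas applies to its analytics nodes, while a self-managed replica set would use whatever tags were configured on its members (the URI and names are placeholders):

const { MongoClient } = require("mongodb");

// Placeholder URI; readPreferenceTags routes reads to members carrying the tag.
const uri =
  "mongodb+srv://user:pass@cluster.example.mongodb.net/reports" +
  "?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS";

async function run() {
  const client = new MongoClient(uri);
  await client.connect();
  const purchases = await client
    .db("reports")
    .collection("events")
    .find({ type: "purchase" })
    .toArray();
  console.log(purchases.length);
  await client.close();
}

run().catch(console.error);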
{
"code": "",
"text": "one thing you have to be aware is that a hidden node, being member of the replica set, receive the same write load as the other nodes. with the extra index and reporting work load it needs to be sized correctly.",
"username": "steevej"
},
{
"code": "",
"text": "a little correction is in order, the hidden nodes cannot be reached via a replica set connection string but it is reachable with a direct non-replica set connection string. that is how you are able to created reporting specific indexes that are not replicated to the other members of the replica set.",
"username": "steevej"
},
{
"code": "",
"text": "I realised my original link was to archived documentation, this seems to be the more recent version:",
"username": "John_Sewell"
},
{
"code": "",
"text": "Doesn’t this fit better in our case?Enhance the performance of your analytics workloads on MongoDB Atlas by choosing appropriately sized analytics tiers for dedicated nodes. Read on to learn more about analytics node tiers.How about heavy workloads on an analytic node?\nWill they affect the whole cluster?",
"username": "Marbin_Diaz-Diaz"
},
{
"code": "",
"text": "Doesn’t this fit better in our case?It does if you are on Atlas. But I suspect they are just the same as hidden replica set nodes as mentioned before but manageable via the Atlas GUI and CLI.How about heavy workloads on an analytic node?An analytic node handles the same write load as the other nodes of the cluster. If your usual traffic is mostly writes, the analytic node needs to be bigger as to handle the extra analytic reads. If your usual traffic is mostly reads, the analytic node might be smaller unless the analytic reads are high.Since yourprod env is high volume, saturated with transactionsit sounds like your traffic is mostly writes, so you might need a bigger node.",
"username": "steevej"
}
] | Use MongoDB for reporting | 2023-07-03T18:43:24.917Z | Use MongoDB for reporting | 475 |
null | [
"queries"
] | [
{
"code": "{\n \"itemType\": \"fruit\",\n \"name\": \"nectarines\",\n \"displayName\": \"Nectarines\",\n \"zones\": [\n {\n \"zone\": \"10\",\n \"dates\": {\n \"plant\": [ \"January\", \"February\"],\n \"harvest\": [\"June\", \"July\", \"August\"]\n }\n },\n {\n \"zone\": \"11\",\n \"dates\": {\n \"plant\": [\"December\", \"January\"],\n \"harvest\": [\"May\",\"June\",\"July\"]\n }\n },\n {\n \"zone\": \"12\",\n \"dates\": {\n \"plant\": [\"November\", \"December\"],\n \"harvest\": [\"April\", \"May\",\"June\"]\n }\n },\n {\n \"zone\": \"13\",\n \"dates\": {\n \"plant\": [\"October\", \"November\"],\n \"harvest\": [ \"March\",\"April\", \"May\"]\n }\n }\n ]\n}\n{\n \"itemType\": \"fruit\",\n \"name\": \"oranges\",\n \"displayName\": \"Oranges\",\n \"zones\": [\n {\n \"zone\": \"10\",\n \"dates\": {\n \"plant\": [\"February\",\"March\",\"April\"],\n \"harvest\": [\"November\",\"December\", \"January\", \"February\", \"March\", \"April\"]\n }\n },\n {\n \"zone\": \"11\",\n \"dates\": {\n \"plant\": [\"January\",\"February\",\"March\"],\n \"harvest\": [\"October\",\"November\", \"December\", \"January\", \"February\", \"March\"]\n }\n },\n {\n \"zone\": \"12\",\n \"dates\": {\n \"plant\": [\"December\", \"January\", \"February\"],\n \"harvest\": [\"September\", \"October\", \"November\", \"December\", \"January\", \"February\"]\n }\n },\n {\n \"zone\": \"13\",\n \"dates\": {\n \"plant\": [\"November\", \"December\", \"January\"],\n \"harvest\": [\"August\", \"September\", \"October\", \"November\", \"December\", \"January\"]\n }\n }\n ]\n}\nzonesnectarinesoranges{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"zones\": {\n \"dynamic\": true,\n \"type\": \"embeddedDocuments\"\n }\n }\n }\n}\ncompoundmustfilterzones[\n {\n $search: {\n index: \"searchMarketItems\",\n embeddedDocument: {\n path: \"zones\",\n operator: {\n compound: {\n filter: [\n {\n text: {\n path: \"zones.zone\",\n query: \"11\",\n },\n },\n ],\n must: [\n {\n text: {\n path: \"zones.dates.harvest\",\n query: \"july\",\n },\n },\n ],\n },\n },\n },\n },\n },\n];\n",
"text": "Hello! I’m new to Atlas search and trying to wrap my head around compound operators. I have a collection with documents like this:And then in my client I have a search form with two fields - one for “zone” and one for “month”. I want to return query results based on the zones data for each document - for example, if I searched for all items which are harvested in July in Zone 11, nectarines would be returned but not oranges.I indexed this collection like this:I have been struggling with compound, must, and filter but so far I’ve only been able to return documents which include the queried zone and harvest month in ANY values in the zones array, not specifically only those where the zone and harvest month are property values in the same object.Here’s what I thought would work, but doesn’t:Any help would be appreciated!",
"username": "Max_MacMillan"
},
{
"code": "",
"text": "I was able to create an aggregation pipeline that works, (stages of $unwind, then $match, then $unwind, then $match) but I’d still like to know if this is possible using $search?",
"username": "Max_MacMillan"
},
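Max only describes the working pipeline in words; for completeness, the $unwind/$match sequence presumably looks something like this (the collection name is a guess).

```javascript
db.marketItems.aggregate([
  { $unwind: '$zones' },                          // one document per zone entry
  { $match: { 'zones.zone': '11' } },             // keep the requested zone
  { $unwind: '$zones.dates.harvest' },            // one document per harvest month
  { $match: { 'zones.dates.harvest': 'July' } }   // keep the requested month
])
// Note: the output is the unwound shape; a $group on _id would be needed
// to reassemble the original documents.
```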
{
"code": "$search{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"zones\": {\n \"fields\": {\n \"dates\": {\n \"fields\": {\n \"harvest\": {\n \"type\": \"string\"\n }\n },\n \"type\": \"document\"\n },\n \"zone\": {\n \"type\": \"string\"\n }\n },\n \"type\": \"embeddedDocuments\"\n }\n }\n }\n}\n$searchdefaultdb.collection.aggregate({\n $search: {\n index: 'default',\n embeddedDocument: {\n path: 'zones',\n operator: {\n compound: {\n filter: [\n {\n text: {\n path: 'zones.zone',\n query: '11'\n }\n }\n ],\n must: [\n {\n text: {\n path: 'zones.dates.harvest',\n query: 'July'\n }\n }\n ]\n }\n }\n }\n }\n})\n[\n {\n _id: ObjectId(\"64ab6ba0220ed429db5aa12d\"),\n itemType: 'fruit',\n name: 'nectarines',\n displayName: 'Nectarines',\n zones: [\n {\n zone: '10',\n dates: {\n plant: [ 'January', 'February' ],\n harvest: [ 'June', 'July', 'August' ]\n }\n },\n {\n zone: '11',\n dates: {\n plant: [ 'December', 'January' ],\n harvest: [ 'May', 'June', 'July' ]\n }\n },\n {\n zone: '12',\n dates: {\n plant: [ 'November', 'December' ],\n harvest: [ 'April', 'May', 'June' ]\n }\n },\n {\n zone: '13',\n dates: {\n plant: [ 'October', 'November' ],\n harvest: [ 'March', 'April', 'May' ]\n }\n }\n ]\n }\n]\n",
"text": "Hi @Max_MacMillan,I assume since you got it working via your above mentioned stages, you have a desired output already. Could you provide that output here just so that I can verify if it is possible with $search?In the meantime, with the 2 sample documents you provided in my test environment, I had the following index definition:I then performed the following $search (my test index called default):Which gave the following output:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran Thanks so much for this - yes, that search solution works and the field mapping is the key I was missing. I thought that dynamic mapping everything would “just work” but if I update my index with your solution I get the same results - ‘nectarines’ and not ‘oranges’. Much appreciated!",
"username": "Max_MacMillan"
},
{
"code": "",
"text": "Nice one - Thanks for marking the solution + updating the post as well! ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to return results only where two properties of a nested object match | 2023-07-08T03:38:34.251Z | How to return results only where two properties of a nested object match | 547 |
null | [] | [
{
"code": "",
"text": "Hi,My MongoDB version is 5.0.18.\nmongod.log is under /var/log/mongodb and data is located in /var/lib/mongo.\nThey are mounted on totally different partitions.Several days ago, slow queries flooded the mongod.log and exhausted the disk space.\nThen, as the title, mongod crashed.It looks like by design, butIs there anyway to prevent mongod from crash like this?mongod crashing is a disaster in anyway, but losing mongod.log doesn’t hurt so much.",
"username": "Hailin_Hu"
},
{
"code": "",
"text": "Hi @Hailin_Hu,\nCan you do a custom script, where you can define the limit of maximum size of logs and retetion of them.Regards",
"username": "Fabio_Ramohitaj"
},
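As a sketch of the kind of script suggested above (paths, port, and thresholds are assumptions): mongod does expose a logRotate admin command, so an external job can watch the file size, ask the server to rotate, and then prune the rotated files, keeping the log partition from filling up.

```javascript
// check-log.js - run periodically (cron/systemd timer); paths are examples only
const fs = require('fs');
const { MongoClient } = require('mongodb');

const LOG_PATH = '/var/log/mongodb/mongod.log';
const MAX_BYTES = 512 * 1024 * 1024; // rotate once the current log exceeds 512 MB

async function rotateIfNeeded() {
  if (fs.statSync(LOG_PATH).size < MAX_BYTES) return;
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    // mongod closes the current log file and reopens/renames it
    await client.db('admin').command({ logRotate: 1 });
  } finally {
    await client.close();
  }
  // rotated files (mongod.log.<timestamp>) can then be compressed or deleted
}

rotateIfNeeded().catch(console.error);
```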
{
"code": "",
"text": "Mongodb has no built in support for logrotate.You have to use some external tools to manage log files",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thank you, guys.\nI’m working on logrotate to manage the disk space better.But it would be great if mongod can keep alive when it failed to log.",
"username": "Hailin_Hu"
}
] | Writing to log file failed, aborting application | 2023-07-10T08:59:39.433Z | Writing to log file failed, aborting application | 416 |
null | [
"unity"
] | [
{
"code": "",
"text": "This is the official place to continue the discussion for our GameDev series!ICYMI: Here’s our kickoff stream.Check it out and then head back here to continue the discussion, share some ideas, or bring us your questions!",
"username": "yo_adrienne"
},
{
"code": "",
"text": "Will it be available on Linux and Steam? ",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Stretch goals for sure! ",
"username": "yo_adrienne"
},
{
"code": "",
"text": " for Linux / Steam",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @yo_adrienne, had a quick a question about the Plummy Game you guys developed over Season 1. Thanks a lot for the amazing series btw.\nI am trying to replicate most of your steps, but was really interested in working with my game through WebGL. For that, I am not sure if the backend you and Nic designed using Main.js works. I think Karen’s strategy using webhooks would be easier via Realm. I know you guys want to address this in Season 2, but could you give like a short summary of what I would need to change to incorporate WebHooks instead of a backend?",
"username": "Owais_Hamid"
},
{
"code": "main.js",
"text": "Hey @Owais_Hamid! Thanks for checking out the series and for posting on the Community Forums!Always so happy to see more peeps get into game development Yes! Webhooks can definitely replace the Node API we initially created. To do that, you’d create some Realm Functions and Webhooks that you’d interact with instead of calling a self-hosted backend (like we did with our main.js file.Check out this walkthrough of how to configure a service webhook in Realm.If you have any questions about the walkthrough, feel free to come back here and ask them and I’ll do my best to help you through them!",
"username": "yo_adrienne"
},
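For anyone following along, an incoming-webhook-style Realm/App Services function generally has the shape below; the database, collection, and payload fields are placeholders, not the actual Plummy Game schema.

```javascript
// HTTP endpoint / service webhook function - illustrative only
exports = async function (payload, response) {
  const body = JSON.parse(payload.body.text()); // e.g. { playerId, score } from the game

  await context.services
    .get('mongodb-atlas')
    .db('plummyGame')          // placeholder database name
    .collection('scores')      // placeholder collection name
    .insertOne({ ...body, createdAt: new Date() });

  response.setStatusCode(201);
  response.setBody(JSON.stringify({ ok: true }));
};
```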
{
"code": "",
"text": "Thanks for the prompt response, @yo_adrienne, the resource you provided clarified a few things, but for the most part, the code Karen wrote to configure a service with a webhook worked well.\nI think I will write a github gist that details the main steps to connecting MongoDB with Unity through Realm sometime soon to clarify it further for others.\nThanks again for the response.",
"username": "Owais_Hamid"
}
] | GameDev Episode 1: Designing a Strategy for Building a Game with MongoDB and Unity | 2020-09-15T16:25:53.874Z | GameDev Episode 1: Designing a Strategy for Building a Game with MongoDB and Unity | 5,033 |
null | [
"queries",
"schema-validation"
] | [
{
"code": "{\n \"_id\": ObjectID('123'),\n \"test\": {\n \"a\":1,\n \"b\":1\n }\n},\n{\n \"_id\": ObjectID('456'),\n \"test\": {\n \"a\":1\n }\n{\n \"_id\": ObjectID('123'),\n \"test\": {\n \"a\":1,\n \"b\":1\n }\n}\ndb.col.find({\"test\": {\"$gt\": {\"a\": 1} }})",
"text": "We have one use case. Let’s suppose I have two documents as given below.Now I want those result whose “test” field has property other than “a” or in another way, I want those objects which have multiple keys/properties in a “test” field or to check the size of an object greater than 1So, the result will be:I tried to make a query for the above output as shown below and it working as expecteddb.col.find({\"test\": {\"$gt\": {\"a\": 1} }})So, is that the right thing to do? Any downside of doing it? We want to lever-age indexes as well considering that we have indexes on ‘test’ fieldPlease let me know your inputs on this.Thanks.",
"username": "Aamod_Pisat"
},
{
"code": "db.col.find( { ''test.a': { $exists: true } )db.col.find( { ''test.b': { $exists: true } )\"_id\": ObjectID('123')",
"text": "Hello @Aamod_Pisat, welcome to the MongoDB Community forum!You can use the $exists query operator to check if a field exists or not - within a document. For example, the querydb.col.find( { ''test.a': { $exists: true } )returns both the documents from your example collection.db.col.find( { ''test.b': { $exists: true } )returns only one document, the one with \"_id\": ObjectID('123').",
"username": "Prasad_Saya"
},
{
"code": "$gt",
"text": "Hi @Prasad_Saya, Thanks for your response. But I am not expecting that. $exists will return if the key is present or not. What I want is I would like to get that any other exists than given key in object field or I want is if that object has multiple keys or not i.e size of object length should be greater than 1.\nSo, for that, we used $gt operator. So, is that the right thing to do, Any downside of doing as we want to leverage indexes as well?",
"username": "Aamod_Pisat"
},
{
"code": "$gt",
"text": "What I want is I would like to get that any other exists than given key in object field or I want is if that object has multiple keys or not i.e size of object length should be greater than 1.\nSo, for that, we used $gt operator. So, is that the right thing to doThat is not the right query for it.You have to write an aggregation query and use the $objectToArray aggregation operator to convert the object as an array and then find the length of the array. If the length of the array is greater than 1, then there are more than one field in the object. See the example from the documentation explains how to convert the object to an array.",
"username": "Prasad_Saya"
},
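Spelling out the suggestion above as a pipeline (collection and field names from the thread): convert the object to an array of key/value pairs and keep documents where that array has more than one element. Note that an expression like this generally cannot use a regular index on the field, which matches what Aamod reports further down.

```javascript
db.col.aggregate([
  {
    $match: {
      $expr: {
        // $objectToArray turns { a: 1, b: 1 } into [ { k: 'a', v: 1 }, { k: 'b', v: 1 } ]
        $gt: [
          { $size: { $objectToArray: { $ifNull: ['$test', {}] } } },
          1
        ]
      }
    }
  }
])
```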
{
"code": "",
"text": "Okay thanks, @prasad. Using above, if I have index on ‘test’ field. So, will it leverage that or will this have any performance impact on querying documents",
"username": "Aamod_Pisat"
},
{
"code": "",
"text": "@Aamod_Pisat, I am not sure about that (I think it will not use the index). You can check if a query is using an index or not by using the explain on the query. The explain method generates a query plan which will have information about index usage by the query.",
"username": "Prasad_Saya"
},
{
"code": "$objectToArraydb.col.find({\"test\": {\"$gt\": {\"a\": 1} }})",
"text": "HI @Prasad_Saya, We tried working with aggregation query using $objectToArray but the query doesn’t use indexes as it’s taking COLSCAN even though I have an index on that field.\nBut below query which I shared is taking index i.e. IXSCAN. So, hopefully, there won’t be any downside to doing this.db.col.find({\"test\": {\"$gt\": {\"a\": 1} }})Let us know your thoughts on this.Thanks.",
"username": "Aamod_Pisat"
},
{
"code": "test\"test.a\"1\"test.a\"",
"text": "Hello @Aamod_Pisat, as I had mentioned earlier, you cannot use that query to count the number of fields in the test object (or sub-document ). The query you had posted doesn’t the number of fields - it only finds documents where the \"test.a\" value is greater than 1.Yes, the index defined on the \"test.a\" will be used in the query - as you had mentioned there is an IXSCAN from the query plan.Please note that the two queries have different purposes.",
"username": "Prasad_Saya"
},
{
"code": "db.collection.find({$nor:[{\n \"$jsonSchema\":{\n \"properties\":{\n \"test\":{\n \"type\":\"object\",\n\t \"properties\":{\"a\":{}},\n\t \"additionalProperties\": false\n }\n }\n }\n}]})",
"text": "Hello @Aamod_Pisat, and welcome to the community,\nyou can use $jsonSchema as an alternative to @Prasad_Saya 's solution;",
"username": "Imad_Bouteraa"
},
{
"code": "{\n \"$match\": {\n \"parameters_fields\": {$gte: {$bsonSize: 5}},\n// \"parameters_fields\": {\"$size\": 1},\n// \"parameters_fields\": {$gte: {\"$size\": 0}},\n }\n}\n",
"text": "Hi @Aamod_Pisat, Mongo stores json objects as bson, and so, the way to measure this value is using the key $bsonSize where the minimum size for an empty object is 5. Below is an example of…",
"username": "Jonathan_Calderon_Centeno"
}
] | Query on non empty object field having multiple keys or to check size of object greater than 1 | 2021-04-15T13:21:38.340Z | Query on non empty object field having multiple keys or to check size of object greater than 1 | 21,010 |
null | [
"node-js"
] | [
{
"code": "const { MongoClient } = require('mongodb');\nconst uri = 'mongodb://localhost:27017';\n\nfunction connectToDB() {\n const client = new MongoClient(uri);\n\n try {\n\n client.connect();\n setTimeout(() => {\n console.log(`\\x1b[32m[DONE]\\x1b[0m db connection established.`)\n }, 200);\n return client.db('serverDatabase')\n\n\n } catch (error) {\n console.error('error:', error);\n throw error;\n }\n}\n\nmodule.exports = connectToDB();\n",
"text": "I’m using a synchronized database connection for my project and I’m wondering does it cause trouble? Let me explain, how i use this. When the application starts life it connects to the database and it is using one connection until application dies. I prefered this because I need to export database to other documents for db queries. This is my code:",
"username": "Quarse"
},
{
"code": "",
"text": "Im not sure what you mean by “synchronized” and “connection” here.But i believe it’s a general practice to maintain only one such instance (db/client, or similar) and share it with multiple threads.",
"username": "Kobe_W"
},
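The single shared client described above is usually written with an awaited connect; here is a sketch reworking the snippet from the question (same URI and database name, but this is one possible shape, not the only one).

```javascript
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
let db;

async function connectToDB() {
  if (!db) {
    await client.connect();            // wait until the connection pool is ready
    db = client.db('serverDatabase');
    console.log('[DONE] db connection established.');
  }
  return db;                           // every caller shares the same instance
}

module.exports = connectToDB;          // usage: const db = await connectToDB();
```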
{
"code": "",
"text": "That was my main purpose in doing this. This isn’t async function so I’m wondering would it be a problem because the client connection is not async",
"username": "Quarse"
},
{
"code": "",
"text": "I’m wondering would it be a problem because the client connection is not asyncNo, people always do this. If you see this as a problem, then something wrong with your code/service/deployment/…etc.that being said, some drivers may have async support. (e.g. i recall java does).",
"username": "Kobe_W"
}
] | Synchronized Connection on MongoDB | 2023-07-09T19:09:25.704Z | Synchronized Connection on MongoDB | 501 |
null | [
"cxx",
"c-driver"
] | [
{
"code": "",
"text": "Hi,I have some images size varying from 100KB to 500KB(for example). How to store this image in MongoDB and how to read it back. I didn’t find an example in the mongodb git hub repo.\nMany artiicles suggested to store the image as binary in mongodb, however I’m not sure if it works or not. Can you please post a sample code to achieve this.Thanks",
"username": "Raja_S1"
},
{
"code": "",
"text": "How about GridFS?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks for your suggestion however I don’t want to use GridFs because I don’t want to store my data in two different formats like one in collection which contains basic information and another collection that have image data and then referring it back to the main collection.",
"username": "Raja_S1"
},
{
"code": "",
"text": "If they are smaller than 16mb this article talks about using bindata to do it in the document. It is using nodejs but maybe it’ll give you some inspiration.Storing Images in MongoDB Using NodeJS | by KRISHNA KISHORE V | Medium.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thanks. I will go through it and update this thread.",
"username": "Raja_S1"
},
{
"code": "",
"text": "Unfortunately there is no break through for this problem.\nI’m able to store the data in bsoncxx::types::b_binary format and able to retrieve it. However cannot convert this binary data to image format. Generally char* data will be used to create an image. In this case, its of type uint8_t*. Don’t know how to move forward.",
"username": "Raja_S1"
},
{
"code": "",
"text": "I’ve seen several posts on SO about converting the binary file to a vector and then storing that in a binary field in the database, and similar in reverse.\nC++ is not my primary language (or a distant second) so cannot test this without a lot more effort, if that’s your primary language then perhaps give that a whirl?",
"username": "John_Sewell"
}
] | Mongocxx and bsoncxx Image store and retrieve | 2023-07-07T10:06:19.865Z | Mongocxx and bsoncxx Image store and retrieve | 748 |
[
"aggregation",
"python",
"atlas",
"weekly-update",
"ruby"
] | [
{
"code": "",
"text": "It’s FRIDAY! You know what that means…Each week, we bring you the latest and greatest from our Developer Relations team — from blog posts and YouTube videos to meet-ups and conferences — so you don’t miss a thing.Everything you see on Developer Center is by developers, for developers. This is where we publish articles, tutorials, and beyond. How to Use PyMongo to Connect MongoDB Atlas with AWS Lambda by Anaiya RaisinghaniThis tutorial will take you through how to properly set up an Atlas cluster, connect it to AWS Lambda using MongoDB’s Python Driver, write an aggregation pipeline on our data, and return our wanted information.Other ShoutoutsGetting Started with MongoDB Atlas and Ruby on Rails by Luce CarterOur Community Champion @Nuri_Halperin also wrote a blog post about how to increase security by limiting access to Atlas.Every month, all across the globe, we organize, attend, speak at, and sponsor events, meetups, and shindigs to bring the DevRel community together. Here’s what we’ve got cooking:Seville MUG: July 12th 2023, 10:00am – 11:30am, (GMT-07:00) Pacific Time\nMelbourne MUG: July 12th 2023, 12:00am, (GMT-07:00) Pacific Time\nIndia vMUG: July 14th 2023, 10:00pm – July 15th 2023, 1:30am, (GMT-07:00) Pacific Time\nAba MUG: July 15th 2023, 3:00am, (GMT-07:00) Pacific Time\nAuckland MUG: July 18th 2023, 10:00pm – July 19th 2023, 12:30am, (GMT-07:00) Pacific Time\nNebraska.Code(): Jul 19, 2023 - Jul 21, 2023\nvMUG EMEA: July 25th 2023, 2:00am – 4:00am, (GMT-07:00) Pacific TimeRecently in Sydney, Kheang Ly, who’s the CEO of OWNA, spoke about the tech founder’s journey. @tomhollander, a MongoDB lead product manager, demonstrated the relational migrator. Many thanks to our event leaders, Wan Bachtiar and @Markus_Thielsch, for making it all happen.\nSydney MUG2048×1536 280 KB\n\nSydney MUG3610×1179 510 KB\n\nSydney MUG800×1066 109 KB\nOver in London, Svitlana Gavrylova from Google Cloud joined our MUG to discuss how MongoDB and Google Cloud collaborate to create even better solutions for your next big project. Thomas Chamberlain from DWP discussed the process of creating a job search engine microservice.And MongoDB’s own Sam Brown demoed data localization in MongoDB, which is a crucial aspect of data management in today’s business landscape. Sam discussed how you can scale MongoDB globally and still provide localized access for either performance or data residency concerns. Great effort by our MUG leader, @Sani_Yusuf.\nLondon MUG931×699 107 KB\n\nLondon MUG930×697 106 KB\n\nLondon MUG523×697 39.9 KB\nAnd at our Frankfurt MUG — led by Tim Bidenkapp, @Nicole_Wesemeyer, and @Prabakaran_Dhanasekaran — @Sascha_Dittmann demonstrated how easy it is to leverage Google’s BigQuery and MongoDB Atlas. Philipp Weyer showcased how MongoDB vector search capabilities will enable us to easily build customer experience features. And Praba and Raul Rincones demonstrated MongoDB’s reliability by explaining the raft protocol.\nFrankfurt MUG1478×1108 197 KB\nMongoDB is heading out on a world tour to bring the best, most relevant content directly to you! Join us to connect with MongoDB experts, meet fellow users building the next big thing, and be among the first to hear the latest announcements. Register now.\n.local1200×627 78.3 KB\nUse code DEVELOPERFAM50 to secure 50% off your ticket!If reading’s not your jam, you might love catching up on our podcast episodes with @Michael_Lynn and @Shane_McAllister. 
In our newest episode, we sit down with Niall Maher and talk about InnerSource, a software development strategy that applies open-source practices to proprietary code. In this episode, we meet Niall Maher and talk about InnerSource, which is a software development strategy that applies open source practices to proprietary code. InnerSource can help establish an open source culture within an organization while... Additionally, you can join Community Enthusiast Justin Jenkins live next week on the podcast, where we’re talking about unlocking career opportunities with MongoDB. Not listening on Spotify? We got you! We’re also on Apple Podcasts, PlayerFM, Podtail, and Listen Notes. (We’d be forever grateful if you left us a review.) Have you visited our YouTube channel lately? We have a podcast on MongoDB TV where we sit down with David Neal, a Principal Developer Advocate for Pluralsight, and discuss visual storytelling. Additionally on YouTube, Anaiya and Luce live streamed to show us how to build and deploy with Flask, MongoDB Atlas, and Azure App Service. Check out the replay! Don’t forget that if you weren’t able to keep up with .local NYC live, you can catch all the replays on YouTube. Allen Cordrey, our Dallas MUG leader, sat down to discuss how at MongoDB World last year, he found Will Atwood at the Community Cafe and ended up hiring him. He also shares his plans for the Dallas MongoDB User Group. @Justin_Poveda, our NYC MUG leader, also spoke about his experiences with MUGs and the community as a whole. Be sure you subscribe so you never miss an update. Also, keep an eye on all of our upcoming live streams. We’re always bringing you fresh and exciting content. That’ll do it for now, folks! Like what you see? Help us spread the love by tweeting this update or sharing it on LinkedIn.",
"username": "Megan_Grant"
},
{
"code": "",
"text": "MongoDB Local skips Denver, the #4 hitech region in North America ",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Sorry we won’t be in Colorado this year Jack - We’ll let our team know about your request ",
"username": "Veronica_Cooley-Perry"
}
] | The Index (Formerly the Weekly Update) #123 (July 7, 2023): PyMongo, Building Flask Apps, and Becoming Superhuman | 2023-07-07T16:00:27.278Z | The Index (Formerly the Weekly Update) #123 (July 7, 2023): PyMongo, Building Flask Apps, and Becoming Superhuman | 994 |
|
null | [] | [
{
"code": "",
"text": "Hi I am snehasish. recently I signed up for the mongo DB student certification program as a part of the github developer pack. I completed one of the developer paths and received my discount voucher, however at check out the discount voucher is not getting applied specifically it seems that apply button itself is not working. I tried it in multiple browsers in my PC but the same issue seems to persist. Can anybody from the Community help me out.",
"username": "SNEHASISH_BASU1"
},
{
"code": "",
"text": "Hi @SNEHASISH_BASU1 and welcome to MongoDB community forums!!Could you kindly contact @[email protected] with the screenshots and the error message that you encounter when using the coupon?Best regards,\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thank you for assisting. Actually the problem got solved when I tried applying the coupon using incognito mode.",
"username": "SNEHASISH_BASU1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to apply discount voucher | 2023-07-09T07:14:10.519Z | Unable to apply discount voucher | 671 |
null | [
"mongodb-shell",
"transactions",
"field-encryption",
"storage"
] | [
{
"code": "1. {\"t\":{\"$date\":\"2023-07-10T09:40:31.249+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n2. {\"t\":{\"$date\":\"2023-07-10T09:40:32.812+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n3. {\"t\":{\"$date\":\"2023-07-10T09:40:32.813+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n4. {\"t\":{\"$date\":\"2023-07-10T09:40:32.815+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n5. {\"t\":{\"$date\":\"2023-07-10T09:40:32.816+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n6. {\"t\":{\"$date\":\"2023-07-10T09:40:32.816+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n7. {\"t\":{\"$date\":\"2023-07-10T09:40:32.816+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n8. {\"t\":{\"$date\":\"2023-07-10T09:40:32.817+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":14916,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-CFHKUQA\"}}\n9. {\"t\":{\"$date\":\"2023-07-10T09:40:32.817+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n10. {\"t\":{\"$date\":\"2023-07-10T09:40:32.817+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.5\",\"gitVersion\":\"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n11. {\"t\":{\"$date\":\"2023-07-10T09:40:32.818+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 22621)\"}}}\n12. {\"t\":{\"$date\":\"2023-07-10T09:40:32.818+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n13. {\"t\":{\"$date\":\"2023-07-10T09:40:32.819+02:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory C:\\\\data\\\\db\\\\ not found. 
Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n14. {\"t\":{\"$date\":\"2023-07-10T09:40:32.820+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n15. {\"t\":{\"$date\":\"2023-07-10T09:40:32.820+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n16. {\"t\":{\"$date\":\"2023-07-10T09:40:32.820+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n17. {\"t\":{\"$date\":\"2023-07-10T09:40:32.820+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n18. {\"t\":{\"$date\":\"2023-07-10T09:40:32.820+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n19. {\"t\":{\"$date\":\"2023-07-10T09:40:32.820+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n20. {\"t\":{\"$date\":\"2023-07-10T09:40:32.820+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n21. {\"t\":{\"$date\":\"2023-07-10T09:40:32.821+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n22. {\"t\":{\"$date\":\"2023-07-10T09:40:32.821+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n23. {\"t\":{\"$date\":\"2023-07-10T09:40:32.821+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n24. {\"t\":{\"$date\":\"2023-07-10T09:40:32.821+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n25. {\"t\":{\"$date\":\"2023-07-10T09:40:32.821+02:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n26. {\"t\":{\"$date\":\"2023-07-10T09:40:32.821+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n27. {\"t\":{\"$date\":\"2023-07-10T09:40:32.822+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n28. {\"t\":{\"$date\":\"2023-07-10T09:40:32.822+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n29. {\"t\":{\"$date\":\"2023-07-10T09:40:32.822+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n30. {\"t\":{\"$date\":\"2023-07-10T09:40:32.822+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n31. {\"t\":{\"$date\":\"2023-07-10T09:40:32.822+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n32. 
{\"t\":{\"$date\":\"2023-07-10T09:40:32.822+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n33. {\"t\":{\"$date\":\"2023-07-10T09:40:32.822+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n34. {\"t\":{\"$date\":\"2023-07-10T09:40:32.822+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:41.507+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:41.509+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.003+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.005+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.006+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.006+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.006+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.007+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":15436,\"port\":27017,\"dbPath\":\"D:/data/db\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-CFHKUQA\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.007+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.007+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.5\",\"gitVersion\":\"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.008+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 22621)\"}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.008+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", 
\"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"D:\\\\data\\\\db\"}}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.019+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3538M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.426+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":407}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.427+02:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.763+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.765+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.767+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuidDisposition\":\"provided\",\"uuid\":{\"uuid\":{\"$uuid\":\"d41ca9aa-fb0d-42b3-89c3-802f960f8241\"}},\"options\":{\"uuid\":{\"$uuid\":\"d41ca9aa-fb0d-42b3-89c3-802f960f8241\"}}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.937+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"d41ca9aa-fb0d-42b3-89c3-802f960f8241\"}},\"namespace\":\"admin.system.version\",\"index\":\"_id_\",\"ident\":\"index-1-3248176528316502644\",\"collectionIdent\":\"collection-0-3248176528316502644\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.939+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":20459, \"ctx\":\"initandlisten\",\"msg\":\"Setting featureCompatibilityVersion\",\"attr\":{\"newVersion\":\"6.0\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.939+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"setFCV\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.940+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.941+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.942+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.944+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-07-10T09:45:43.945+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.430+02:00\"},\"s\":\"W\", \"c\":\"FTDC\", \"id\":23718, \"ctx\":\"initandlisten\",\"msg\":\"Failed to initialize Performance Counters for 
FTDC\",\"attr\":{\"error\":{\"code\":179,\"codeName\":\"WindowsPdhError\",\"errmsg\":\"PdhAddEnglishCounterW failed with 'Das angegebene Objekt wurde nicht auf dem Computer gefunden.'\"}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.431+02:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"D:/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.435+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.startup_log\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"b2d8b14b-79b0-473b-bc76-1ca7e002c891\"}},\"options\":{\"capped\":true,\"size\":10485760}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.622+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"b2d8b14b-79b0-473b-bc76-1ca7e002c891\"}},\"namespace\":\"local.startup_log\",\"index\":\"_id_\",\"ident\":\"index-3-3248176528316502644\",\"collectionIdent\":\"collection-2-3248176528316502644\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.624+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.625+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.634+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20712, \"ctx\":\"LogicalSessionCacheReap\",\"msg\":\"Sessions collection is not set up; waiting until next sessions reap interval\",\"attr\":{\"error\":\"NamespaceNotFound: config.system.sessions does not exist\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.636+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.635+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"config.system.sessions\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"abf2aa7f-71db-4490-99b6-3f820a4a0793\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.636+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.856+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"abf2aa7f-71db-4490-99b6-3f820a4a0793\"}},\"namespace\":\"config.system.sessions\",\"index\":\"_id_\",\"ident\":\"index-5-3248176528316502644\",\"collectionIdent\":\"collection-4-3248176528316502644\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.857+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done 
building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"abf2aa7f-71db-4490-99b6-3f820a4a0793\"}},\"namespace\":\"config.system.sessions\",\"index\":\"lsidTTLIndex\",\"ident\":\"index-6-3248176528316502644\",\"collectionIdent\":\"collection-4-3248176528316502644\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-07-10T09:45:44.858+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"config.system.sessions\",\"command\":{\"createIndexes\":\"system.sessions\",\"v\":2,\"indexes\":[{\"key\":{\"lastUse\":1},\"name\":\"lsidTTLIndex\",\"expireAfterSeconds\":1800}],\"ignoreUnknownIndexOptions\":false,\"writeConcern\":{},\"$db\":\"config\"},\"numYields\":0,\"reslen\":114,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":5}},\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":5,\"w\":1}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":5}},\"Global\":{\"acquireCount\":{\"r\":5,\"w\":1}},\"Database\":{\"acquireCount\":{\"r\":4,\"w\":1}},\"Collection\":{\"acquireCount\":{\"r\":5,\"w\":1}},\"Mutex\":{\"acquireCount\":{\"r\":8}}},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":223}}\n",
"text": "I have downloaded a .zip folder called mongodb-win32-x86_64-windows-6.0.5\\bin and inside of it, I have mongod.exe and mongos.exe\nMy aim to work with mongodb from the command line.\nI want to start the mongod server and then make queries with mongosh.exe\nHowever, when I type mongod.exe in the windows terminal, I get the following output (It tells me nothing).I then tried to launch\nmongod.exe --dbpath=“D:\\data\\db”\nThen I get the outputI don’t know wheter that means that I actually managed to start a server and that it is listening now on port 27017When I start another terminal and launch\nmongos “mongodb://localhost:27017”\nI get the error\nError parsing command line: too many positional options have been specified on the command line\ntry ‘mongos --help’ for more informationIf so, how can I connect to that server now with mongosh.exe.\nAs already mentioned, I only want to use command line tools and don’t find the right tutorial for some reason.",
"username": "Florian_Ingerl"
},
{
"code": "",
"text": "you have to read a little, you log actually literally says that it is waiting for connections on port:27017 and address:127.0.0.1if you had try mongosh, it would have workwhy do you try mongos?",
"username": "steevej"
}
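Put concretely, the two terminals would look roughly like this (note that mongosh is a separate download from the server .zip, which only ships mongod and mongos):

```
:: terminal 1: start the server, pointing at an existing data directory
mongod.exe --dbpath="D:\data\db"

:: terminal 2: connect with the shell (not mongos, which is the sharded-cluster router)
mongosh.exe "mongodb://127.0.0.1:27017"
```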
] | Start MongoDB server and connect to it using command line tools (only mongod.exe and mongos.exe for windows) | 2023-07-10T07:54:54.728Z | Start MongoDB server and connect to it using command line tools (only mongod.exe and mongos.exe for windows) | 715 |
null | [] | [
{
"code": "[\n {\n \"_id\": ObjectId(\"6499d7eb72fb2552774c9f80\"),\n \"name\": \"Product 1\",\n \"openingStock\": 10\n },\n {\n \"_id\": ObjectId(\"6499d81a72fb2552774c9f82\"),\n \"name\": \"Product 2\",\n \"openingStock\": 10\n },\n {\n \"_id\": ObjectId(\"6499d83d72fb2552774c9f84\"),\n \"name\": \"Product 3\",\n \"openingStock\": 20\n },\n {\n \"_id\": ObjectId(\"6499d86e72fb2552774c9f86\"),\n \"name\": \"Product 4\",\n \"openingStock\": 15\n }, \n ]\n[\n {\n \"_id\": ObjectId(\"64a559f68d79acbc66d96fdf\"),\n \"products\": [\n {\n \"product\": ObjectId(\"6499d7eb72fb2552774c9f80\"),\n \"qty\": 3\n },\n {\n \"product\": ObjectId(\"6499d83d72fb2552774c9f84\"),\n \"qty\": 3\n },\n \n ]\n },\n {\n \"_id\": ObjectId(\"64a559da8d79acbc66d96fde\"),\n \"products\": [\n {\n \"product\": ObjectId(\"6499d7eb72fb2552774c9f80\"),\n \"qty\": 3\n },\n {\n \"product\": ObjectId(\"6499d83d72fb2552774c9f84\"),\n \"qty\": 1.5\n },\n \n ]\n }\n ]\n[\n {\n \"_id\": ObjectId(\"64a5b540ffcbb3b942ccaae8\"),\n \"products\": [\n {\n \"product\": ObjectId(\"6499d81a72fb2552774c9f82\"),\n \"qty\": 2\n },\n {\n \"product\": ObjectId(\"6499d7eb72fb2552774c9f80\"),\n \"qty\": 3.3\n }\n ]\n }\n ]\n[\n {\n \"_id\": ObjectId(\"6499d7eb72fb2552774c9f80\"),\n \"name\": \"Product 1\",\n \"stock\": 7.3\n },\n {\n \"_id\": ObjectId(\"6499d81a72fb2552774c9f82\"),\n \"name\": \"Product 2\",\n \"stock\": 12\n },\n {\n \"_id\": ObjectId(\"6499d83d72fb2552774c9f84\"),\n \"name\": \"Product 3\",\n \"stock\": 15.5\n },\n {\n \"_id\": ObjectId(\"6499d86e72fb2552774c9f86\"),\n \"name\": \"Product 4\",\n \"stock\": 15\n }\n]\n",
"text": "I was trying to build simple billing application. I have store product opening quantity, and sales and purchases along with product id. I need product wise current stock of all products on Products collection. PlaygroundProducts collectionSales CollectionPurchase CollectionMy expected result looks like bellow.Please help me out. Thank you.",
"username": "Pallab_Kole"
},
{
"code": "",
"text": "You are using data in a bit of a relational way here. Why mot keep the product collection updated as you go?\nOther than thaat use lookup, unwind and group to summarise it.Actually can probably just do lookups and then a reduce…If you draw a blank i can try and setup an example tomorrow.",
"username": "John_Sewell"
},
{
"code": "db.getCollection(\"Product\").aggregate([\n{\n $lookup:{\n from:'Purchases',\n localField:'name',\n foreignField:'name',\n as :'Purchases'\n }\n},\n{\n $lookup:{\n from:'Sales',\n localField:'name',\n foreignField:'name',\n as :'Sales'\n }\n},\n{\n $project:{\n currentQty:{\n $subtract:[\n {$sum:'$Purchases.qty'},\n {$sum:'$Sales.qty'}\n ]\n }\n }\n}\n])\n",
"text": "Had a play, something like this?Mongo playground: a simple sandbox to test and share MongoDB queries onlineObviously ensure appropriate indexes are available for the lookups.",
"username": "John_Sewell"
}
] | Calculate product stock from sales and purchase | 2023-07-09T13:26:42.205Z | Calculate product stock from sales and purchase | 291 |
null | [
"queries",
"compass"
] | [
{
"code": "",
"text": "Hi everyone.\nI’m using Atlas MongoDB. I have added a data source starting from a HTTP(S) URL pointing to a CSV file and I have create a collection inside a VirtualDatabase. Everything works well infact I’m able to query this collection both from Compass or with custom code from VS Code. Unfortunately I can’t read data from inside a linked function associated to an HTTP Endpoint. This is the source of my function:\nexports = async function() {\nconst serviceName = ‘mongodb-atlas’;\nconst myDataLake = context.services.get(serviceName);\nconst myDB = myDataLake.db(‘VirtualDatabase0’);\nreturn await myColl.find({first_name: “Lissie”});\n}\nUnfortunately the result is always an empty collection (the query is right - tested on VS Code).\nHoping someone can help me.Bye.\nJohn",
"username": "John_Dole"
},
{
"code": "myCollVirtualDatabase0exports = async function() {\n const serviceName = 'fdi';\n const myDataLake = context.services.get(serviceName);\n const myDB = myDataLake.db('airbnb');\n const myColl = myDB.collection('listingsAndReviews') /// <--- Additional line referencing the collection name\n return await myColl.findOne({});\n}\nresult> result: \n{\n \"_id\": \"23715192\",\n \"listing_url\": \"https://www.airbnb.com/rooms/23715192\",\n \"name\": \"Porto Downtown Luxury Studio\",\n ...\n}\n",
"text": "exports = async function() {\nconst serviceName = ‘mongodb-atlas’;\nconst myDataLake = context.services.get(serviceName);\nconst myDB = myDataLake.db(‘VirtualDatabase0’);\nreturn await myColl.find({first_name: “Lissie”});\n}Is this missing myColl? When I tried to execute your lines of code but with reference to my own test data sources, I got back the following error:{“message”:“‘myColl’ is not defined”,“name”:“ReferenceError”}You have a link to the database VirtualDatabase0 but the query is being run without a collection specified.I wrote something similar on my test environment function which had included a reference to an existing collection in the federated instance which returned an expected document:result from the above function being run in my test environment:Note: shortened the result for readabilityRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "find()findOne()exports = async function() {\n const serviceName = 'fdi';\n const myDataLake = context.services.get(serviceName);\n const myDB = myDataLake.db('airbnb');\n const myColl = myDB.collection('listingsAndReviews')\n return await myColl.find({'_id':'23715192'});\n}\n> result (JavaScript): \nEJSON.parse('[{\"_id\":\"23715192\",\"listing_url\":\"https://www.airbnb.com/rooms/23715192\",\"name\":\"Porto Downtown Luxury Studio\",...\n",
"text": "Also quickly did a find() (instead of findOne()) for the same document which ran as well:Result:",
"username": "Jason_Tran"
},
{
"code": "const serviceName = 'fdi';",
"text": "const serviceName = 'fdi';Dear Jason,\nthanks a lor for your prompt reply.\nIn the operation of cut&paste of my code snippet I forgot the line:const myColl = db.Collection(‘Test_CSV_Collection’);Anyway, the result is always the same either with find() and findOne() method, with or without filters. This is my result obtained by using “Run” from the funciont edito window:ran at 1688980963940\ntook 529.584588ms\nresult:\n\nresult (JavaScript):\nEJSON.parse(‘’)I noticed that you call your service name as ‘fdi’. In my case the service name is ‘mongdb-atlas’. While this linked datasource of my app has is pointing to the “cluster” instance, I created a new linked source that refers to the Federated Database Instance that, infact, is the environment where the external data source is linked.\nBy changing the service name everything worked.I appreaciated your help. Thanks a lot. ",
"username": "John_Dole"
},
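For future readers, the working shape John describes looks roughly like this; 'federated-instance' stands in for whatever name the linked Federated Database Instance data source has in the App Services UI.

```javascript
exports = async function () {
  // Link that points at the Federated Database Instance, not the regular cluster
  const fdi = context.services.get('federated-instance'); // placeholder name
  const coll = fdi.db('VirtualDatabase0').collection('Test_CSV_Collection');
  return await coll.find({ first_name: 'Lissie' });
};
```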
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Querying Federated Database Instance from Functions #2 | 2023-07-09T15:45:30.165Z | Querying Federated Database Instance from Functions #2 | 617 |
null | [] | [
{
"code": "const mongoose=require('mongoose');\n\nmongoose.connect('mongodb://localhost:27017/shortner',{\n useCreateIndex:true,\n useNewUrlParser:true,\n useUnifiedTopology:true\n}).then(()=>{\n console.log(\"connection is successfull\");\n}).catch((e)=>{\n console.log(\"no connection \");\n});\nconst express= require('express');\nrequire(\"../main/db/conn\");\nconst app=express();\nconst port= process.env.PORT || 8000;\napp.set('view engine','ejs');\napp.get('/',(req,res)=>{\n res.render('index',{name:'sonai'})\n});\n\napp.listen(port,()=>{\n console.log(\"server is starting\")\n});\n",
"text": "my code:I tried connecting mongodb with mongoose but its not working. I am getting “no connection” in my console.I wrote this in other file and required it in my index.js file.Here is code of my index.js file:Please if anyone can help me with this.",
"username": "Janhabi_Mukherjee"
},
{
"code": "",
"text": "Hi,\nAre you running your database server? If so, how?",
"username": "santimir"
},
{
"code": "",
"text": "It happens on the local server if you do not install Mongosh or not upgrade it.",
"username": "Sumit_Pathak"
}
] | Mongoose.connect is not working | 2022-01-18T18:07:43.161Z | Mongoose.connect is not working | 9,263 |
null | [
"queries"
] | [
{
"code": "{\n id: <uuid> // currently this is primary key or unique key\n otherIds: \n [ \n { type: \"abc\", value: \"<longNumber>\"},\n { type: \"abc2\", value: \"<longNumber>\"} \n { type: \"abc3\", value: \"<longNumber>\"} \n ]\n}\n",
"text": "Hi Team,\nmy data model is something likeNow I have id which is primary key\nI want Mongo to reject the calls if otherIds have got same ids\nIs it possible to put unique constrainst on array nested models ?? Is it performant ??\nOther Solution I am thinking is to derive id from these otherIds and id is already primary key so Mongo would reject it\nWhat are recommended ways by Mongo Team to derive id from array of nested structures ??\nThanks,\nGheri.",
"username": "Gheri_Rupchandani1"
},
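One way to express the "derive the id from otherIds" idea is to hash a canonical form of the array and use it as _id, so a second document with the same set of ids fails on the primary-key constraint; below is a sketch in Node.js (the hashing scheme is an assumption for illustration, not a MongoDB feature).

```javascript
const crypto = require('crypto');

function deriveId(otherIds) {
  // Sort so the same set of { type, value } pairs always produces the same hash
  const canonical = otherIds
    .map(({ type, value }) => `${type}:${value}`)
    .sort()
    .join('|');
  return crypto.createHash('sha256').update(canonical).digest('hex');
}

async function insertEntity(collection, otherIds) {
  // A duplicate set of otherIds collides on _id and the insert fails with E11000
  return collection.insertOne({ _id: deriveId(otherIds), otherIds });
}
```

If the requirement is instead to reject any document that shares even a single value, a unique (multikey) index on otherIds.value is the usual tool, since unique indexes on array fields are enforced across documents per element.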
{
"code": "",
"text": "Hey Team,\nAny feedback ??",
"username": "Gheri_Rupchandani1"
}
] | Can we create unique constraints on array field? | 2023-06-30T09:26:52.143Z | Can we create unique constraints on array field? | 334 |
null | [
"connecting"
] | [
{
"code": "",
"text": "Hello community, I’m having an issue trying to connect my application or Atlas to an external database in Windows as a MV in Paralles. I’m getting a TimeOut error (3000). I try the same connection in my mac, and works perfect and the same for a physical windows PC. So I supouse that is something is blocking the connection in Paralles. I try it with multiple network configs but nothing.Do you guys have any idea of what is going wrong here?",
"username": "Sandy_Gonzales"
},
{
"code": "",
"text": "Hey @Sandy_Gonzales,Thank you for reaching out to the MongoDB Community forums.an external database in WindowsCould you please provide more details regarding the “external database” you mentioned? As I understand I think that you’re encountering issues connecting to your MongoDB Atlas cloud cluster from an application running on Parallels. Let me know if I understood correctly.Also, could you share which OS is running on your Parallels? Also, please provide more details about the application you’re attempting to connect to.I try the same connection in my mac, and works perfect and the same for a physical windows PC.Can you share how you tried connecting to your MongoDB Atlas cluster? Did you use mongo shell, Compass, or any MongoDB driver for the connection?I’m getting a TimeOut error (3000)May I ask from where you are getting this error message “TimeOut error (3000)”?So I supouse that is something is blocking the connection in Paralles. I try it with multiple network configs but nothing.Have you whitelisted the IP address in your MongoDB Atlas - Network tab?Looking forward to hearing back from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Mongo connection to externaldb inside Paralles | 2023-07-07T16:29:12.613Z | Mongo connection to externaldb inside Paralles | 544 |
null | [] | [
{
"code": "",
"text": "I am wondering if I am eligible for a 100% discount on the MongoDB DBA certification exam since I am a recent graduate and unable to afford the expenses.As a recent graduate, I am interested in pursuing the MongoDB DBA certification to improve my skills and increase my employability in the field of database management. However, the cost of the certification exam may be a barrier for me due to my current financial situation. Therefore, I am inquiring about the possibility of receiving a full discount on the exam.I understand that the availability of discounts and incentives may vary depending on factors such as the policies of MongoDB and other organizations. I am hoping to receive clarification on the eligibility requirements for such discounts and any necessary documentation that may need to be provided to prove my recent graduation and financial hardship.Receiving a discount on the certification exam would greatly assist me in achieving my career goals and improving my financial situation. It would also provide me with valuable skills and knowledge in the field of database management, making me a competitive candidate for future job opportunities.In addition to seeking financial assistance for the certification exam, I am also exploring other ways to improve my skills and knowledge in the field of database management, such as online courses and self-study. I am committed to pursuing all available options to achieve my career goals and improve my financial situation.",
"username": "Zinkal_Desai"
},
{
"code": "",
"text": "Hey @Zinkal_Desai,Please email our MongoDB certification team at [email protected]. They will be happy to help you out.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can I have 100% discount if I am recently passed out | 2023-06-16T07:53:26.377Z | Can I have 100% discount if I am recently passed out | 923 |
null | [
"node-js",
"replication"
] | [
{
"code": "{\n \"message\":\"connection timed out\",\n \"name\":\"MongoServerSelectionError\",\n \"stack\":\"MongoServerSelectionError: connection timed out\n \\n at Timeout._onTimeout (…/node_modules/mongodb/src/sdam/topology.ts:567:30)\n \\n at listOnTimeout (node:internal/timers:569:17)\n \\n at processTimers (node:internal/timers:512:7)\"\n},\n\"msg\":\"connection timed out\",\"time\":\"2023-07-06T16:23:22.142Z\",\n\"v\":0\n}\n{\"t\":{\"$date\":\"2023-07-06T18:23:22.093+02:00\"},\n\"s\":\"I\",\n\"c\":\"NETWORK\",\n\"id\":22943,\n\"ctx\":\"listener\",\n\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:41396\",\"uuid\":\"0bc27e00-4dcc-4b3d-98d9-5fd4b8f58128\",\"connectionId\":8,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-07-06T18:23:22.104+02:00\"},\n\"s\":\"I\", \"c\":\"NETWORK\",\n\"id\":22944,\n\"ctx\":\"conn8\",\n\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:41396\",\"uuid\":\"0bc27e00-4dcc-4b3d-98d9-5fd4b8f58128\",\"connectionId\":8,\"connectionCount\":1}}\nnet:\n ipv6: true\n bindIp: '127.0.0.1'\n port: 27017\nstorage:\n journal:\n enabled: true\nsecurity:\n authorization: 'disabled'\nnew MongoClient('mongodb://127.0.0.1:27017', { directConnection: true })",
"text": "I’m trying to connect to a local instance of MongoDB 6.0.6 (no replica set) via the MongoDB Node client. I’m getting the following error:the corresponding MongoDB log output is:I’m using the following config for the MongoDB isntance:and the node client options are:\nnew MongoClient('mongodb://127.0.0.1:27017', { directConnection: true })",
"username": "Sebastian_Luksic"
},
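A small diagnostic sketch (not from the original poster), assuming MongoDB Node driver 4.x, as the stack trace suggests, and the same local mongod on 127.0.0.1:27017. MongoServerSelectionError carries the final topology description on err.reason, which usually names the per-server failure (connection reset, auth, TLS) rather than just “connection timed out”; lowering serverSelectionTimeoutMS makes that surface sooner.

```js
// Sketch only: assumes the MongoDB Node driver 4.x and a local mongod on 127.0.0.1:27017.
const { MongoClient } = require('mongodb');

async function ping() {
  const client = new MongoClient('mongodb://127.0.0.1:27017', {
    directConnection: true,
    serverSelectionTimeoutMS: 5000, // fail fast instead of waiting out the default
  });
  try {
    await client.connect();
    await client.db('admin').command({ ping: 1 });
    console.log('connected');
  } catch (err) {
    // err.reason holds the TopologyDescription; its per-server entries usually
    // show the underlying error (e.g. connection reset, auth failure) behind the timeout.
    console.error(err.name, err.message);
    console.error(err.reason);
  } finally {
    await client.close();
  }
}

ping();
```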
{
"code": " bindIp: '127.0.0.1'\nbindIp''",
"text": "Hey @Sebastian_Luksic,Thank you for reaching out to the MongoDB Community forums.I suspect the format of the configuration file is not correct, as the bindIp does not need to be enclosed in single quotes ''. I recommend referring to the Configuration Options in the MongoDB docs to learn more about it.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | MongoServerSelectionError with Node client and local installation on Ubuntu 20.04 | 2023-07-06T16:44:58.808Z | MongoServerSelectionError with Node client and local installation on Ubuntu 20.04 | 421 |