image_url (string, 113–131 chars, nullable) | tags (list) | discussion (list) | title (string, 8–254 chars) | created_at (string, 24 chars) | fancy_title (string, 8–396 chars) | views (int64, 73–422k)
---|---|---|---|---|---|---
null | [
"atlas-data-lake"
]
| [
{
"code": "",
"text": "Do you have an ETA for supporting DataLake on Azure? Is there an option to sign for a private beta?",
"username": "Thomas_Pare"
},
{
"code": "",
"text": "Hello @Thomas_Pare ,We don’t yet have a date for Atlas Data Federation or Atlas Data Lake on Azure, but you can follow along on progress here and be notified when it becomes available.feedback.mongodb.comBest,\nBen",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| DataLake on Azure? | 2022-07-26T14:54:30.969Z | DataLake on Azure? | 2,504 |
null | [
"atlas-search"
]
| [
{
"code": "",
"text": "I am working on implementing a Full-Text Search on Dynamic Documents (Documents that have no fixed fields, Fields are dynamic). I had an issue with defining indexes since the fields are dynamic no fixed path can be given to the index.Anyway, I tried to use the Wildcard query with Dynamic Index enabled. But, When I do that I get MongoError: index is not allowed in this atlas tier. I couldn’t find any index level limitations for Free Tier in docs.Does anyone cloud help me, please?",
"username": "Banujan_Balendrakumar"
},
{
"code": "",
"text": "Do you mind sharing your index definition and query?You can read about our free tier limitations here.",
"username": "Elle_Shwer"
}
]
| MongoError: index is not allowed in this atlas tier | 2022-07-28T04:45:19.283Z | MongoError: index is not allowed in this atlas tier | 1,662 |
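A minimal sketch of the approach discussed above, for reference: an Atlas Search index with dynamic mappings plus a wildcard query across all indexed fields. The index name, collection name, and search term are assumptions.

```javascript
// Search index definition with dynamic mappings (created via the Atlas UI or API):
// { "mappings": { "dynamic": true } }

// Wildcard query across all dynamically indexed fields:
db.items.aggregate([
  {
    $search: {
      index: "default",              // assumed index name
      wildcard: {
        query: "mongo*",             // assumed search term
        path: { wildcard: "*" },     // search every indexed field
        allowAnalyzedField: true
      }
    }
  }
])
```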
null | [
"aggregation"
]
| [
{
"code": "",
"text": "I am trying to update the memberOfGroups array only if none of the element has the matching value. not sure why its taking even the condition is not satisfy, I can not give condition in find and use positional because I need to update more array inside document. so I need to utilize aggregation pipeline.\nI can not use addtoset as there may be few field which are different , i can rely only on unique id.Please try here.Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "Shyam_Sohane"
},
{
"code": "memberOfGroups._id$pushmemberOfGroupsdb.principals.update(\n { \n _id: \"67448af8-a68b-4d08-8948-2cddca57d708\",\n \"memberOfGroups._id\": {\n $ne: \"ba93384d-d18a-4b36-9a24-7d3ebb1619d8\"\n }\n },\n {\n $push: {\n memberOfGroups: {\n _id: \"ba93384d-d18a-4b36-9a24-7d3ebb1619d8\",\n name: \"test group\"\n }\n }\n }\n)\n$in$memberOfGroups._id$memberOfGroupsdb.principals.update(\n { _id: \"67448af8-a68b-4d08-8948-2cddca57d708\" },\n [{\n $set: {\n memberOfGroups: { $ifNull: [\"$memberOfGroups\", []] }\n }\n },\n {\n $set: {\n memberOfGroups: {\n $cond: [\n { $in: [\"ba93384d-d18a-4b36-9a24-7d3ebb1619d8\", \"$memberOfGroups._id\"] },\n \"$memberOfGroups\",\n {\n $concatArrays: [\n \"$memberOfGroups\",\n [\n {\n _id: \"ba93384d-d18a-4b36-9a24-7d3ebb1619d8\",\n name: \"test group\"\n }\n ]\n ]\n }\n ]\n }\n }\n }\n])\n",
"text": "Hello @Shyam_Sohane,I would suggest you use a normal query instead of update with aggregation pipeline, here is a query,PlaygroundFixed issues in your query,Playground",
"username": "turivishal"
},
{
"code": "",
"text": "Awesome, thanks Vishal. I was comparing opposite. Thanks.",
"username": "Shyam_Sohane"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Update aggregate pipeline is updating array even though I am trying to give a condition | 2022-07-28T05:13:00.362Z | Update aggregate pipeline is updating array even though I am trying to give a condition | 1,715 |
null | [
"replication"
]
| [
{
"code": "",
"text": "HI, I would like to know, Is it possible to have one primary node and the one secondary node and again on the same secondary node having arbitery running?",
"username": "Rajitha_Hewabandula"
},
{
"code": "",
"text": "It is possible to do this yes, but definitely not recommended. The reason for this is, if the machine that the secondary is on fails, then you have lost two nodes from the replicaset: the secondary and the arbiter. If this should happen, then your current primary will step down and you will no longer will be able to write any data. It is best to have all nodes on separate machines, and even better if in separate data centers.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Replication - one primary and then having secondary and arbiter on same node. possible? | 2022-07-28T10:11:06.531Z | Replication - one primary and then having secondary and arbiter on same node. possible? | 1,386 |
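For reference, a minimal sketch of initiating a three-member replica set with the arbiter on its own machine, as recommended above; the hostnames and set name are placeholders.

```javascript
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "host-a.example.net:27017" },                    // data-bearing
    { _id: 1, host: "host-b.example.net:27017" },                    // data-bearing
    { _id: 2, host: "host-c.example.net:27017", arbiterOnly: true }  // arbiter on a separate machine
  ]
})
```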
null | [
"queries"
]
| [
{
"code": "db.Sales.find(\n{\n saleRows: {\n $elemMatch: {\n $expr:{\n $ne: ['$currentAmountIncludingTax',{$multiply:['$quantity','$currentPriceIncludingTax']}]\n }\n }\n }\n}\n);\n {\n _id: 'f4a3c536-9e6f-4219-9654-b92ff4645594',\n saleRows: [\n {\n rowId: '1',\n creationDate: '2021-10-06T13:20:48.163Z',\n rowType: 'PRODUCT',\n saleRowType: 'ORDER',\n quantity: 1,\n initialPriceIncludingTax: '39.95',\n currentPriceIncludingTax: '39.95',\n paidPriceIncludingTax: '39.95',\n initialAmountIncludingTax: '39.95',\n currentAmountIncludingTax: '39.95',\n paidAmountIncludingTax: '29.95',\n appliedDiscounts: [], \n }\n ]\n }\n",
"text": "Hi there,I have a collection with large number of documents (1.2M).\nEvery document contains a nested array of objects.I need to find wrong field values <-> amount ≠ quantity * priceI get this error “$expr can only be applied to the top-level document”Do you have some ideas ?",
"username": "emmanuel_bernard"
},
{
"code": "currentAmountIncludingTaxcurrentPriceIncludingTax",
"text": "Hi @emmanuel_bernard,I need to find wrong field values <-> amount ≠ quantity * priceJust wanting to clarify something about the above. The sample document you have given has the currentAmountIncludingTax and currentPriceIncludingTax values as strings. Is this the correct format?Additionally, assuming those fields are converted to integers in some manner, I presume you would not want the sample document returned. Is this correct? If so, would you be able to provide 3-4 sample documents and verify which ones should / shouldn’t be returned.It may be also helpful to provide further context or use case details with regards to this command.Looking forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "[\n {\n _id: 'f4a3c536-9e6f-4219-9654-b92ff4645594',\n saleRows: [\n {\n rowId: '1',\n quantity: 1,\n currentPriceIncludingTax: 39.95,\n currentAmountIncludingTax: 39.95\n }\n ]\n },\n {\n _id: 'f4a3c536-9e6f-4219-9654-b92ff46455945,\n saleRows: [\n {\n rowId: '1',\n quantity: 1,\n currentPriceIncludingTax: 24.95,\n currentAmountIncludingTax: 24.95\n },\n {\n rowId: '2',\n quantity: 1,\n currentPriceIncludingTax: 17.95,\n currentAmountIncludingTax: 29.95\n }\n ]\n },\n {\n _id: 'f4a3c536-9e6f-4219-9654-b92ff4645596',\n saleRows: [\n {\n rowId: '1',\n quantity: 1,\n currentPriceIncludingTax: 14.95,\n currentAmountIncludingTax: 14.95\n }\n ]\n }\n, {\n _id: 'f4a3c536-9e6f-4219-9654-b92ff4645597',\n saleRows: [\n {\n rowId: '1',\n quantity: 1,\n currentPriceIncludingTax: 20.95,\n currentAmountIncludingTax: 24.95\n }\n ]\n }\n]\n",
"text": "Hi @Jason_Tran,Context = when inserting documents in the collection, there was an error in calculating the amount.\nI need to retun the documents where the amount value is wrong and then correct them.Here is a sample of 4 documents, the 2nd document (error on rowId = 2) and the 4th document (error on rowId = 1) should be returned.An aggregation pipeline with $match and $reduce (or $map or $filter) is a solution, but I’m wondering if $elemMatch applied on saleRows could do the job ?Regards,Emmanuel",
"username": "emmanuel_bernard"
},
{
"code": "$filter.find()$expr$elemMatchsaleRows/// variable y set to the following:\n\nDB> y\n{\n '$filter': {\n input: '$saleRows',\n as: 'saleRow',\n cond: {\n '$ne': [\n {$round:[{ '$toDouble': '$$saleRow.currentAmountIncludingTax' },2]},\n {$round:[{\n '$multiply': [\n { '$toDouble': '$$saleRow.quantity' },\n { '$toDouble': '$$saleRow.currentPriceIncludingTax' }\n ]\n },2]}\n ]\n }\n }\n}\n\n/// Aggregation using `filter` from above, projecting the mismatches. Used `$round` to try and avoid binary rounding error\n\nDB> db.Sales.aggregate({$addFields:{mismatches:y}},{$project:{mismatches:1}})\n[\n { _id: ObjectId(\"62e1be991bb9515b8fbd4fe7\"), mismatches: [] },\n {\n _id: ObjectId(\"62e1be991bb9515b8fbd4fe8\"),\n mismatches: [\n {\n rowId: '2',\n quantity: 1,\n currentPriceIncludingTax: 17.95,\n currentAmountIncludingTax: 29.95\n }\n ]\n },\n { _id: ObjectId(\"62e1be991bb9515b8fbd4fe9\"), mismatches: [] },\n {\n _id: ObjectId(\"62e1be991bb9515b8fbd4fea\"),\n mismatches: [\n {\n rowId: '1',\n quantity: 1,\n currentPriceIncludingTax: 20.95,\n currentAmountIncludingTax: 24.95\n }\n ]\n },\n { _id: ObjectId(\"62e1dd7d1bb9515b8fbd4feb\"), mismatches: [] }\n]\n$round{\n _id: ObjectId(\"62e1dd7d1bb9515b8fbd4feb\"),\n saleRows: [\n {\n rowId: '1',\n quantity: 1,\n currentPriceIncludingTax: 24.95,\n currentAmountIncludingTax: 24.95\n },\n {\n rowId: '2',\n quantity: 3,\n currentPriceIncludingTax: 17.95,\n currentAmountIncludingTax: 53.85\n }\n ]\n }\n",
"text": "Hi @emmanuel_bernard - Thanks for providing those details.Use of the aggregation pipeline was something I was going to suggest. I used $filter to identify all the mismatches. The example aggregation provided only identifies mismatches and doesn’t modify them. Correcting them would require an extra / seperate step.An aggregation pipeline with $match and $reduce (or $map or $filter) is a solution, but I’m wondering if $elemMatch applied on saleRows could do the job ?I’m wondering is there a particular reason you’re after using a .find() with $expr and $elemMatch over the aggregation solution you mentioned? I have not tried this method myself yet but it may not be possible due to the sub-documents within the saleRows array.Example aggregation:Note: I added another document to my test environment which was not supposed to be returned as it is equal after the multiplication but due to binary rounding details, I had added the $round to try and avoid this.Additional document which was added:Please note this may not be the exact desired output you want here but the above would show the mismatches and you can alter the pipeline to better suit your use case if required.As with any of these operations / suggestions, please thoroughly test this in a test environment to verify it suits all your use cases and requirements.In saying so, it would possibly be faster (and more straightforward) to fetch all the documents and then proceed to perform the calculations on the client side. Maybe for simplicity with expressing the operation required / getting the desired output whilst also doing the calculation and query in a singular operation, then aggregation might be better suited.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_TranThanks for your answer and all the explainations !I tested your aggregation in a test environment and it works fine With relational databases I’m used to query using WHERE clause, but aggregation pipeline in MongDB is another way to query.Great job !RegardsEmmanuel",
"username": "emmanuel_bernard"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB query for nested array of objects with $elemMatch and $expr | 2022-07-20T08:35:33.619Z | MongoDB query for nested array of objects with $elemMatch and $expr | 3,614 |
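As a follow-up to the thread above, a rough mongosh sketch of the separate correction step that was mentioned but not shown: recomputing the amount for mismatched rows. Collection and field names follow the sample documents, and it assumes the numeric (not string) form of the values.

```javascript
db.Sales.find().forEach(doc => {
  let changed = false;
  const fixedRows = doc.saleRows.map(row => {
    // Recompute the expected amount, rounded to 2 decimals to sidestep binary rounding noise
    const expected = Math.round(row.quantity * row.currentPriceIncludingTax * 100) / 100;
    if (row.currentAmountIncludingTax !== expected) {
      changed = true;
      return Object.assign({}, row, { currentAmountIncludingTax: expected });
    }
    return row;
  });
  if (changed) {
    db.Sales.updateOne({ _id: doc._id }, { $set: { saleRows: fixedRows } });
  }
});
```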
null | [
"node-js"
]
| [
{
"code": "",
"text": "Hello,I have actually two questions regarding the v4 of the driver:Since isConnected is no longer used, I notice that there are some libraries that check the isConnected. Is there any alternative for MongoClient.isConnected, or with what should I replace the isConnected check?Is it safe to use the nodejs mongodb driver v4 + mongo 5 already in production, or is it recommended to wait for it few weeks/months?Thanks and best regards\nTony",
"username": "Anton_Tonchev"
},
{
"code": "",
"text": "I came across the same question. How do I check if the connection is still alive ?\n(without creating a disconnect event and updating a variable)\nI would like to use isConnected alike variable to check before getting an conn_error.Or should I use a better approach ?\nThanks in advance for the feedback",
"username": "Guillermo_Lo_Coco"
},
{
"code": "",
"text": "I’m in the same situation.\nI don’t know how to check if a connection is still active using the driver v4.\nAny of you @Anton_Tonchev or @Guillermo_Lo_Coco found out a way to solve this?Thank you in advance.",
"username": "Ricardo_Montoya"
},
{
"code": "",
"text": "I still did not solve it. Lets see",
"username": "Anton_Tonchev"
},
{
"code": "",
"text": "const { MongoClient } = require(‘mongodb’)\nconst db = { connected: false }db.client = new MongoClient(‘mongodb+srv://user:pass@server/?retryWrites=true&writeConcern=majority’)\ndb.client.on(‘open’, _=>{ db.connected=true, log(now()+‘DB connected.’) })\ndb.client.on(‘topologyClosed’, _=>{ db.connected=false, log(now()+‘DB disconnected.’) })I am using this.",
"username": "Guillermo_Lo_Coco"
},
{
"code": "client.isConnected()let cachedDb: Db;\nlet client: MongoClient;\n\nexport const connectToDatabase = async () => {\n \n if (cachedDb && client?.isConnected()) {\n console.log(\"Existing cached connection found!\");\n return cachedDb;\n }\n console.log(\"Aquiring new DB connection....\");\n try {\n // Connect to our MongoDB database hosted on MongoDB Atlas\n\n client = await MongoClient.connect(MONGODB_URI, {\n useNewUrlParser: true,\n useUnifiedTopology: true, \n });\n\n // Specify which database we want to use\n const db = await client.db(DB_NAME);\n\n cachedDb = db;\n return db;\n } catch (error) {\n console.log(\"ERROR aquiring DB Connection!\");\n console.log(error);\n throw error;\n }\n};\nlet cachedDb: Db;\nlet client: MongoClient;\n\nexport const connectToDatabase = async () => {\n \n if (cachedDb) {\n console.log(\"Existing cached connection found!\");\n return cachedDb;\n }\n console.log(\"Aquiring new DB connection....\");\n try {\n // Connect to our MongoDB database hosted on MongoDB Atlas\n\n client = await MongoClient.connect(MONGODB_URI);\n\n // Specify which database we want to use\n const db = await client.db(DB_NAME);\n\n cachedDb = db;\n return db;\n } catch (error) {\n console.log(\"ERROR aquiring DB Connection!\");\n console.log(error);\n throw error;\n }\n};\n",
"text": "Thank you @Guillermo_Lo_Coco. \nI’ve found this 2 links related to the removal of the client.isConnected()and how it now works. From what I could understand, the connection/reconnection is now practically managed internally for every operation, so there is no need to try to reconnect manually.clicking on the “list here” text points to a link about the Unified Topology Design that explains the reasoning behind the removal of many 3.x commands. Here is the relevant section in my opinion:I was using code like this for reusing connection from AWS Lambda, as recommended from MongoDB doc called “Best Practices Connecting From AWS Lambda” (http:// docs . atlas . mongodb . com/best-practices-connecting-from-aws-lambda/ sorry, I can’t post more than 2 links per post so… ):But after reading the referenced links I changed it to something like this:Hope it helps anyone trying to know about alternatives or why the commands were removed.",
"username": "Ricardo_Montoya"
},
{
"code": "",
"text": "@Ricardo_Montoya so basically, you just ignore the isConnected, assuming that it will be always true? I kind a suspected that this would be also the case. But I am still not sure especially if you have a pool and want to know which clients can still be reused, or need to be destroyed.The solution of @Guillermo_Lo_Coco looks nice. I think we can even attack to the client an isConnected boolean in the client event listener, this would do perfectly the job.",
"username": "Anton_Tonchev"
},
{
"code": "let cachedDb: Db;\nlet client: MongoClient;\nUncaught SyntaxError: Unexpected token ':'let cachedDb, Db;\nlet client, MongoClient;\n",
"text": "Always on the lookout for modern javascript syntax, I was puzzled about the above lines.\nAfter a quick web search and pasting into Runkit and onto the command line REPL, I realized those lines are not new and improved code, but rather incorrect syntax which generates error message Uncaught SyntaxError: Unexpected token ':', I suspect what was intended was a comma instead of a colon, as follows.Noted for those who might spend time on it like I did …",
"username": "Joe_Devlin"
},
{
"code": "",
"text": "Sorry @Joe_Devlin , I didn’t mention it was Typescript code.",
"username": "Ricardo_Montoya"
},
{
"code": "",
"text": "A post was split to a new topic: Connection errors after upgrading to MongoDB v4 Node.js driver",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
]
| mongo.isConnected alternative for node driver v 4+ | 2021-07-28T08:04:29.528Z | mongo.isConnected alternative for node driver v 4+ | 18,638 |
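A condensed sketch of the two approaches discussed in this thread: tracking a boolean from client events, and actively verifying the connection with a ping. The event names follow the earlier posts; the connection string is a placeholder.

```javascript
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb+srv://user:pass@server/?retryWrites=true&w=majority');
let isConnected = false;

client.on('open', () => { isConnected = true; });
client.on('topologyClosed', () => { isConnected = false; });

// Active check: issue a ping and treat a failure as "not connected"
async function isAlive() {
  try {
    await client.db('admin').command({ ping: 1 });
    return true;
  } catch (err) {
    return false;
  }
}
```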
null | [
"app-services-data-access"
]
| [
{
"code": "%%user.id_id_iduser_id{\n \"_id\": \"%%user.id\"\n}\nid_idid",
"text": "I’m having trouble creating a document rule for matching the %%user.id (which I believe is a string) to an ObjectId field (_id).The scenario is documents that have _ids which are ObjectId conversions of the authenticated user_id.Here is the rule that does not work:However, if I make an id field that is a string of the same value and adjust the rule from _id to id, it works.Should I just make my document _id’s strings, or is there a way to compare an ObjectId to a string in a rule?",
"username": "Matt_Jennings"
},
{
"code": "id%%user{\n \"id\": \"%%user.id\"\n}\n{\n \"_id\": \"ObjectId(%%user.id)\"\n}\n",
"text": "My solution is to make a string id field that equals the _id so I can compare it to %%user.It would be nice if I could do something likebut this will work for now.",
"username": "Matt_Jennings"
},
{
"code": "",
"text": "@Matt_Jennings Yes you can either use a string as an Id or use a function to perform the conversion and compare for you -https://docs.mongodb.com/realm/functions/call-a-function/index.html#call-from-a-json-expression",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hello all,\nThanks for opening this subject.\nI’m trying to click on the link shared by Ian, but it seems no more available.\nDo you have an other link for this docs ?\nThank in advance,\nKévin",
"username": "SAProcket_software"
}
]
| Comparing strings to ObjectIds in Rules | 2020-06-26T01:20:45.734Z | Comparing strings to ObjectIds in Rules | 8,566 |
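Since the link above is dead, here is a rough illustration of the function-based approach Ian describes. The expression wiring and the ownsDocument function are assumptions rather than content from the thread, so verify against the current App Services documentation before relying on it.

```javascript
// Rule expression (JSON) that calls a function to compare the ObjectId _id with the string user id:
// { "%%true": { "%function": { "name": "ownsDocument", "arguments": ["%%user.id", "%%root._id"] } } }

// Hypothetical App Services function "ownsDocument":
exports = function (userIdString, docId) {
  // docId arrives as an ObjectId; compare its hex string with the user id string.
  // Alternatively: return docId.equals(new BSON.ObjectId(userIdString));
  return docId.toString() === userIdString;
};
```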
[
"node-js",
"mongoose-odm",
"database-tools"
]
| [
{
"code": "mongoimport --db mongo-exercises --collection courses --drop --file exercise-data.json --jsonArray",
"text": "I’m an aspiring programmer studying MongoDB as my first database experience. I’m following the “Coding with Mosh” Node.js full course, which includes MongoDB as the database (it’s a paid course so I won’t bother linking the tutorial).\nWindows 10\nGit Bash\nMongoDB version 6.0.0\nMongoose 5.13.14There are a couple threads on this topic from 2020 and 2021 but they were different OS, different problems and different solutions that didn’t help me.I have been instructed to import a JSON file to my server, which I’m running locally, using the following command:\nmongoimport --db mongo-exercises --collection courses --drop --file exercise-data.json --jsonArrayI make sure that I’m in the directory containing the “exercise-data.json” when I run the command.Stack overflow answers to this exact problem noted that newer versions of MongoDB don’t include the dev tools necessary for this feature, and that they must be downloaded fromCommand line tools available for working with MongoDB deployments. Tools include mongodump, mongorestore, mongoimport, and more. Download now.\nSo I downloaded them via the MSI option to the default file path, and then as instructed I added that filepath in environment variables. I have filepaths in place for both mongodb and the dev tools:\nOnce I had added the paths and it still didn’t work I restarted my system and it still doesn’t work. It still says: mongoimport: command not found.I would be grateful for any guidance.",
"username": "Benjamin_Mason"
},
{
"code": "",
"text": "One thing I did notice in the documentation for mongoimport is it doesn’t list MongoDB 6.0 among the versions that support mongoimport:\nmongoimport compatibility797×323 19.8 KB\nBut it doesn’t make any sense for MongoDB 6.0 to drop support for mongoimport so surely this is just because the documentation hasn’t been updated since 6.0 release. Surely I don’t have to downgrade to version 5 just for mongoimport?",
"username": "Benjamin_Mason"
},
{
"code": "",
"text": "Never mind I figured it out. My filepath to the tools was wrong. There was another directory in the 100 folder that I had missed.Now I’m no longer getting the “command not found” error.",
"username": "Benjamin_Mason"
},
{
"code": "",
"text": "Hey @Benjamin_Mason glad you solved the path issue I would also like to suggest some free MongoDB University courses to reinforce your learning, if you’re interested. In particular, M001 MongoDB Basics and M100 MongoDB for SQL Pros if you come from SQL land. We also have M220JS MongoDB for JavaScript Developers.Welcome and I hope you have a great learning experience Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongoimport: command not found (Windows 10, GitBash, localhost) | 2022-07-24T17:34:20.698Z | Mongoimport: command not found (Windows 10, GitBash, localhost) | 3,679 |
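For anyone hitting the same error in Git Bash, a sketch of the fix described above: point PATH at the bin folder inside the tools' versioned directory and rerun the import. The exact install path (including the "100" version folder) is an assumption; check your own installation.

```bash
# Add the Database Tools bin directory (note the extra version folder, e.g. "100") to PATH:
export PATH="$PATH:/c/Program Files/MongoDB/Tools/100/bin"

mongoimport --db mongo-exercises --collection courses --drop --file exercise-data.json --jsonArray
```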
null | []
| [
{
"code": "",
"text": "Hi,I have installed MongoDB on Ubunttu 20.04 using tgz file. Unfortunately, I am unable to start the MongoDB service (which I have created) and do we need to create a mongodb user manually? Can somebody kind me through?",
"username": "Helena_N_A"
},
{
"code": "",
"text": "No need to create mongodb user\nWhat error are you getting when you start the service?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Ramachandra,Just that the service is always inactive(dead) with no error.",
"username": "Helena_N_A"
},
{
"code": "",
"text": "What does service status show?\nThere should be some error or message in your mongod.log or syslog",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "No I dont see the error msg from mongod.log. Just that the service doesnt run , it becomes inactive.\nDo you have any format for the service creation and also do we have to set the environment variables?",
"username": "Helena_N_A"
},
{
"code": "",
"text": "Check this thread.You may get sample service file for Ubuntu also from GitHub",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I tried to follow most of the steps. Except that I use another user instead of mongodb.\nUnfortunately my service always starts and becomes dead (inactive) with no error msg.\nTrying to check permission n everything. Could not find anything abnormal",
"username": "Helena_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Issues creating MongoDB service on Ubuntu server | 2022-07-18T10:23:55.554Z | Issues creating MongoDB service on Ubuntu server | 1,789 |
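Since the thread points at a sample service file without reproducing it, here is a minimal systemd unit sketch for a tgz installation. All paths and the user/group are assumptions and must match the actual install and the ownership of the data and log directories.

```ini
# /etc/systemd/system/mongod.service
[Unit]
Description=MongoDB Database Server
After=network-online.target
Wants=network-online.target

[Service]
User=mongodb
Group=mongodb
ExecStart=/usr/local/mongodb/bin/mongod --config /etc/mongod.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving it, `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now mongod` starts the service, and `journalctl -u mongod` usually shows why a unit goes inactive when mongod.log stays empty.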
null | []
| [
{
"code": "",
"text": "Hi myself Jack, currently holding the position of Marketing Manager in Hexon Digital. Feel free to connect with me on Website: https://hexondigital.ca/",
"username": "jack_Harry"
},
{
"code": "",
"text": "Welcome to the community @jack_HarryI hope you find the forums useful for your MongoDB journey Kevin",
"username": "kevinadi"
}
]
| Myself Introduction | 2022-07-25T05:55:42.826Z | Myself Introduction | 2,362 |
null | [
"replication"
]
| [
{
"code": "",
"text": "I have a replicaset in version 4.4 With a index on the _id field. With the growing data, we have decided to shard the collection.\nWe need to create a hased index to support hashbased sharding for our collection.\nWith sharding, would mongo require both _id index and_id_hashed index. ?\nAfter sharding can we delete the _id index otherwise it would keep on consuming extra space. ?Thanks in advance for the help",
"username": "Ishrat_Jahan"
},
{
"code": "_id_id_hashed_id",
"text": "Hi @Ishrat_JahanNo you cannot delete the default _id index.With sharding, would mongo require both _id index and_id_hashed index.Yes. They are used differently: the _id index is used to prevent duplicate documents containing the same primary key to be inserted (this is universal in MongoDB, sharded cluster or otherwise), and the _id_hashed index is the shard key (this is specific to a sharded cluster).However I’m curious: what is the content of your _id field? Is it the auto-generated ObjectId, or something custom? Why do you need it to be hashed to serve as the shard key? Is it because it’s monotonically increasing?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "We are storing orders in the mongo store. The _id is the orderId. Each orderId has a timestamp component + UUID + some suffix. Its not monotonically increasing, it will be random. The sharding technique that we intend to use is hashed and mongo requires a hashed index to support this. https://www.mongodb.com/docs/manual/core/hashed-sharding/#hashed-sharding",
"username": "Ishrat_Jahan"
},
{
"code": "",
"text": "Its not monotonically increasing, it will be randomThis may be a little late in your development timeline, but typically hashed shard key is used to solve the issue of “hot shard” or “hot chunk” (where all inserts basically just go into one shard/chunk, limiting the parallelization offered by sharding) due to a monotonically increasing shard key.Since you have an almost-random _id, you should not have this issue. I’m curious, have you tried experimenting with sharding using just your _id as the shard key, and found hashed sharding is a better solution?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "If we just use the _id, without hashed, it would be a range based sharding, right ?\nAnd range based sharding is suitable for scenarios where queries involve contiguous values. Also, it said that the hash based sharding should be used when a random distribution of the data is to be achieved",
"username": "Ishrat_Jahan"
},
{
"code": "_id_id",
"text": "without hashed, it would be a range based sharding, right ?Yes it’s called range sharding, but as far as I know it basically means that the shard key supports range queries. In contrast, hashed sharding does not support range queries.range based sharding is suitable for scenarios where queries involve contiguous values.Yes but I don’t think it’s limited to this application (queries for coniguous values). You can definitely use it for non-range queries as well.hash based sharding should be used when a random distribution of the data is to be achievedThis is true. However from your description of your _id field, it appears that it already is semi-random (at least I think it is, due to the use of UUID). However I can’t really say for sure that it’s truly non-monotonic due to the use of timestamp in the key as well. One way to know is to simulate how the sharded cluster will behave over time using simulated workloads. If the _id field does not create any hotspots in a shard/chunk after an extended simulation, then I think it’s a valid alternative to a hashed key.Sorry I know this is not what you’re asking and this has went off-topic. I’m just trying to provide alternative thoughts Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "I will give this a try.\nBut thanks for your input, it was really helpful.Regards,\nIshrat",
"username": "Ishrat_Jahan"
}
]
| Can I delete the _id index after adding a hashed index? | 2022-07-26T08:14:05.350Z | Can I delete the _id index after adding a hashed index? | 3,425 |
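To make the discussion concrete, a sketch of sharding the orders collection on a hashed _id; the database and collection names are assumptions.

```javascript
sh.enableSharding("orders_db")

// For a non-empty collection, the hashed index must exist before sharding:
db.getSiblingDB("orders_db").orders.createIndex({ _id: "hashed" })

// The default { _id: 1 } index remains and cannot be dropped.
sh.shardCollection("orders_db.orders", { _id: "hashed" })
```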
null | [
"aggregation"
]
| [
{
"code": "",
"text": "Hello,I have been reviewing this tutorial for “How to Automate Continuous Data Copying from MongoDB to S3” and would like to better understand how it works from a resiliency perspective.In the example, the realm function uses an aggregation pipeline to find all documents created in the last 60 seconds, and sends them all to S3. The function is then called by a trigger that runs every 60 seconds.What would happen in the scenario that the function failed to successfully execute? (ie S3 is unavailable, etc). Am I correct in my understanding that all documents created during the outage would never be sent to S3 as there is no queueing mechanism or use of the op log? For my use cause, failing to copy even a single document to S3 is unnacceptable, so I’m curious if it’s possible for this solution to work for me.Thanks!\nChristian",
"username": "Christian_Deiderich"
},
{
"code": "",
"text": "Hi @Christian_Deiderich welcome to the community!What would happen in the scenario that the function failed to successfully execute? (ie S3 is unavailable, etc). Am I correct in my understanding that all documents created during the outage would never be sent to S3 as there is no queueing mechanism or use of the op log?Yes I believe you are correct. This is my understanding as well, due to the same reasons you have identified.Having said that, I don’t think the blog post was written with resiliency in mind it’s more like a proof of concept that this is possible. It’s also creating a new parquet file every minute (which contains the last minute of data in MongoDB), which may or may not be what you want. If this has been running for a while, I imagine you’ll have a lot of files in that bucket.Best regards\nKevin",
"username": "kevinadi"
}
]
| Resiliency of data federation output to S3 | 2022-07-25T18:38:37.682Z | Resiliency of data federation output to S3 | 1,453 |
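For context, the tutorial referenced above boils down to a scheduled trigger function roughly like the sketch below; the service, database, collection, bucket, and region names are placeholders, and the exact $out-to-S3 options should be checked against the current Data Federation docs. As discussed in the thread, if a run fails, that window of documents is simply skipped unless you track a watermark of the last successful export yourself.

```javascript
exports = function () {
  // Federated database instance linked to the app (name is an assumption)
  const events = context.services.get("federated-instance").db("mydb").collection("events");

  const pipeline = [
    // Only documents created in the last 60 seconds — a failed run loses this window
    { $match: { createdAt: { $gte: new Date(Date.now() - 60 * 1000) } } },
    {
      $out: {
        s3: {
          bucket: "my-bucket",
          region: "us-east-1",
          filename: "events/" + Date.now() + "/",
          format: { name: "parquet" }
        }
      }
    }
  ];

  return events.aggregate(pipeline).toArray();
};
```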
[
"replication",
"storage"
]
| [
{
"code": "# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\njournal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\nsecurity:\n authorization: \"enabled\"\n keyFile: \"/home/mongo_key/mongodb.key\"\n#operationProfiling:\n\n#replication:\nreplication:\n replSetName: \"kubic-test\"\n\n#sharding:\n",
"text": "Hi, I’m installing mongodb 5.0 community edition on Ubuntu Linux 20.04v, but I keep having problems creating replica set.There are two phenomena related to replica set generation that we are currently experiencing.\nKakaoTalk_20220727_183904394974×442 117 KB\nThere is only one physical server that you want to apply mongodb to. There is one node and one cluster each, and the name is the same, so only the primary is composed of a replica set.I set the mongod.conf file as below.And I executed the following commands to create a replica set.Modify mongod.conf\nreplication:\nreplSetName: “kubic-test”# mongod --config /etc/mongod.confmongodb restart\n# service mongod restartcreate replica set after connecting to mongo\n$ mongo\n> rs.initiate()",
"username": "yjyj989812"
},
{
"code": "",
"text": "Remove authorization:enabled from your config file\nkeyfile does both internal member authentication and access controlAlso why you are starting mongod with config file?\nOnce you edit your config file and restart the service the new params should take effect\nYou should have only one mongod running either manually started one or the one which is started by service",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Once authorization: enabled was removed and a cluster-related role was added to the user role, a replica set was created.To talk about the reason for running Mongod with config file, I thought I had to write service start to run Mongod. I didn’t know that mongod --config was also a method of running mongod, and I knew that I had to apply the mongod --config command to apply the conf change.",
"username": "yjyj989812"
}
]
| Mongodb 5.0 replica set is not created during mongodb first installation | 2022-07-27T11:01:34.196Z | Mongodb 5.0 replica set is not created during mongodb first installation | 2,461 |
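A brief sketch of the access-control sequence behind the resolution above: with a keyFile configured (which implies authorization), initiate the set and create an administrative user via the localhost exception. The user name, password, and roles are placeholders.

```javascript
// In mongosh on the server itself (the localhost exception applies while no user exists):
rs.initiate()

use admin
db.createUser({
  user: "admin",
  pwd: "change-me",
  roles: [ "root" ]   // or a narrower set such as clusterAdmin + userAdminAnyDatabase
})

// Reconnect as that user; rs.status() should then show the single-member replica set.
```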
null | [
"replication",
"change-streams"
]
| [
{
"code": "For a replica set, you can issue the open change stream operation from any of the data-bearing members.",
"text": "Hi, reading https://www.mongodb.com/docs/manual/changeStreams/#open-a-change-stream, it mentions:For a replica set, you can issue the open change stream operation from any of the data-bearing members.which seems to suggest that we can consume change stream from a replica (secondary node). Some questions around that:Any related example or documentation would also be helpful! Thanks",
"username": "Yang_Wu1"
},
{
"code": "",
"text": "I believe the default behavior is: “allowing read change stream from any node within a replset”, and there is no special config to enforce read only from primary or secondary.If that’s the case, do we have any special events being sent out if a node being promoted / demoted at all? Or for change stream API, the “role” of a node (primary vs secondary) is not being exposed to clients at all?",
"username": "Yang_Wu1"
},
{
"code": "",
"text": "The read preference is at the client driver level. It is not specific to change stream. You specify the read preference when you establish the connection between the client driver and the server. The change stream will use what ever read preference the connection has.To use other than the default read preference see:",
"username": "steevej"
},
{
"code": "secondarysecondary",
"text": "Thanks @steevej , some follow ups:",
"username": "Yang_Wu1"
},
{
"code": "readPreference",
"text": "Some more context: when we build connection, we pass in a list of server addresses: debezium/MongoClients.java at main · debezium/debezium · GitHubOne of them is primary and I wonder what would happen if we set readPreference to secondary while we are building a connection with the primary node?",
"username": "Yang_Wu1"
},
{
"code": "",
"text": "Related ^After the connection is built and there is a failover (e.g., promote a secondary to primary), would the change stream be re-created automatically to another secondary node?",
"username": "Yang_Wu1"
},
{
"code": "",
"text": "I have no idea.Hopefully, someone will jump in the thread.",
"username": "steevej"
},
{
"code": "",
"text": "Hi guys. I have the same situation. When Primary and secondary change, streaming does not output any changes, and only work when change back.Does anyone found how to solve it?",
"username": "Mykhailo_Kuznietsov"
},
{
"code": "uri = \"mongodb://localhost:27017,localhost:27018,localhost:27019/test?replicaSet=replset&readPreference=secondary\"\nconn = pymongo.MongoClient(uri)\ndb = conn.test\ncursor = db.test.watch()\nwhile 1:\n print(next(cursor))\nreadPreference=secondary",
"text": "Hi @Mykhailo_Kuznietsov welcome to the community!I did a simple, straightforward test for this. Basically I created a 3-node replica set, and connected a simple Python script to them:Note the option readPreference=secondary in the connection string above.I then inserted a couple of documents to the collection to make sure the script works.Then I killed the primary, wait for the new primary to be elected, then inserted more documents.The script stays running and outputting the newly inserted documents as expected.If this is not working as above for you, could you post:Best regards\nKevin",
"username": "kevinadi"
}
]
| Consume change streams from secondary nodes | 2022-05-17T03:54:25.663Z | Consume change streams from secondary nodes | 4,646 |
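For completeness, the same experiment sketched with the Node.js driver, since the earlier questions in the thread were about setting read preference at the connection level; hostnames and names are placeholders.

```javascript
const { MongoClient } = require('mongodb');

const uri = 'mongodb://host1:27017,host2:27017,host3:27017/test?replicaSet=replset&readPreference=secondary';
const client = new MongoClient(uri);

async function run() {
  await client.connect();
  const stream = client.db('test').collection('test').watch();
  for await (const change of stream) {
    console.log(change);   // should keep printing across primary step-downs, as in the Python test above
  }
}

run().catch(console.error);
```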
null | [
"queries",
"dot-net",
"python"
]
| [
{
"code": "",
"text": "Simply put, my question lies in the title:Details below.The application has to save 1.22MB of data every 100 milliseconds, more than that causes latencies in my system which is very serious. Without using compression it takes ~140ms, with compression it takes ~80ms but the system runs many services so there could be delays, so even the ~80ms is not enough. With compression the transfer speed is ~15MBps at best, but the maximal transfer speed I achieved (using a file transfer to the mongo server computer) gave 50MBps.Moreover, I checked the bandwidth using simple client-server python scripts (on the same Mongo service port), I received a 30MBps transfer speed for transferring 2MB (Which is more than the data I’m required to insert uncompressed) in a while loop . Each send took ~2ms.I would like some insights if you have any.Thanks.",
"username": "DanielLyubin_N_A"
},
{
"code": "",
"text": "Hey @DanielLyubin_N_A welcome to the community!Ideally MongoDB would perform as much as the hardware allows, as well as any database product in the market. It’ll be a strange database product indeed that deliberately creates artificial limits to performance However I’d like to clarify some things:MongoDB v4.4 community edition, running on Windows 10 Virtual MachineAny reason why it needs to run inside a virtual machine? In light of the timing requirements, have you tried giving the virtual machine more resources and see if it improves the situation?the system runs many services so there could be delaysIs the MongoDB server sharing the hardware with other resource-intensive apps?Without using compression it takes ~140ms, with compression it takes ~80msHow are you measuring these times? Is it from the app side, or the server side, or others? I’m thinking that if it’s from the app side, then the time would be app+database. Would it be possible to isolate the timings from only the app side and only the database side so we know which one needs improvements?Also what is the topology: are you running MongoDB as a standalone, or a replica set, or a sharded cluster?In general, I would recommend you setup MongoDB according to the production notes and operations checklist for best results. Please ensure that the MongoDB server is running by following those recommendations so the server is set up for success and not inadvertently given a suboptimal setting that will hinder it.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "mongodmongod",
"text": "Any reason why it needs to run inside a virtual machine? In light of the timing requirements, have you tried giving the virtual machine more resources and see if it improves the situation?It runs on a cloud service at work similar to Azure or AWS, so the resources are not the issue. Moreover, performance on said VM is better than on another VM with better resources (but different disk configuration).Is the MongoDB server sharing the hardware with other resource-intensive apps?No.How are you measuring these times? Is it from the app side, or the server side, or others? I’m thinking that if it’s from the app side, then the time would be app+database. Would it be possible to isolate the timings from only the app side and only the database side so we know which one needs improvements?I did the following test: I wrote 60MB using the driver, it took ~10s. I checked mongod logs and noticed that the insert (slow query) took ~400ms. I also checked the network I/O for mongod using the Performance tool and saw that during the inserts the speed was ~70KBps which is weird since my speed calculation gives a better result.Also what is the topology: are you running MongoDB as a standalone, or a replica set, or a sharded cluster?Standalone.In general, I would recommend you setup MongoDB according to the production notes and operations checklist for best results. Please ensure that the MongoDB server is running by following those recommendations so the server is set up for success and not inadvertently given a suboptimal setting that will hinder it.I have already checked these and the hardware configuration is adequate.",
"username": "DanielLyubin_N_A"
},
{
"code": "mongod",
"text": "Thanks for the details. A couple of pointers that may be helpful:I checked mongod logs and noticed that the insert (slow query) took ~400ms.Ideally we should not see any slow queries in the logs. In my experience, the most typical reason for a database’s slow operation are:Usually I would suggest to determine the main issue in that order. That is, if everything else is tuned to perfection, then maybe the hardware need improvements.during the inserts the speed was ~70KBps which is weird since my speed calculation gives a better result.If I understand correctly, the Python script was not writing to disk (which the database server must do), so perhaps that’s one avenue for investigation.Maybe the output of mongostat could be helpful here: it provides a high level overview of how stressed the server is.Standalone.For production deployment, I would encourage you to use a replica set. A standalone should only be used for development purposes.Also you might be interested in checking out the free MongoDB University course M201 MongoDB Performance which touches on different performance considerations.Best regards\nKevin",
"username": "kevinadi"
}
]
| Is there a bandwidth/speed limitation with MongoDB? | 2022-07-26T17:47:56.560Z | Is there a bandwidth/speed limitation with MongoDB? | 2,945 |
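One way to isolate driver-plus-server time from the rest of the application, as discussed above, is to time a single unordered insertMany of a comparable payload. A rough Node.js sketch; the document count, payload size, and names are placeholders to be matched to the real workload.

```javascript
const { MongoClient } = require('mongodb');

async function timeInsert() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const coll = client.db('bench').collection('samples');

  // Roughly 1.2 MB split across 1000 documents of ~1.2 KB each
  const docs = Array.from({ length: 1000 }, (_, i) => ({ i, payload: 'x'.repeat(1200) }));

  const start = Date.now();
  await coll.insertMany(docs, { ordered: false });
  console.log(`insertMany took ${Date.now() - start} ms`);

  await client.close();
}

timeInsert().catch(console.error);
```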
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "{\n \"_id\": 17995,\n \"dates\": [\n \"2022-05-05T00:00:00.000+00:00\",\n \"2022-05-09T00:00:00.000+00:00\",\n \"2022-05-31T00:00:00.000+00:00\"\n ]\n}\nmatch: {\n $gte: ISODate(\"2022-05-25T00:00:00.000+00:00\")\n ...\n}\n\"2022-05-09T00:00:00.000+00:00\", \"2022-05-31T00:00:00.000+00:00\"\"2022-05-31T00:00:00.000+00:00\"\"2022-05-09T00:00:00.000+00:00\"",
"text": "I have a document like this:I need to query by date but keep the previous index. For example:Should return \"2022-05-09T00:00:00.000+00:00\", \"2022-05-31T00:00:00.000+00:00\".\nBecause the range matches \"2022-05-31T00:00:00.000+00:00\" and \"2022-05-09T00:00:00.000+00:00\" is the previous one. How to do that with the aggregation framework?",
"username": "pseudo.charles"
},
{
"code": "\"2022-05-25T00:00:00.000+00\"\"2022-05-09T00:00:00.000+00:00\", \"2022-05-31T00:00:00.000+00:00\"\"2022-05-31T00:00:00.000+00:00\"\"2022-05-09T00:00:00.000+00:00\"\"2022-05-09T00:00:00.000+00:00\"$matchdate$toDatefilteredDates$filter\"dates\"ISODate(\"2022-05-25T00:00:00.000Z\"DB> db.dates.aggregate({\n '$addFields': {\n filteredDates: {\n '$filter': {\n input: '$dates',\n as: 'date',\n cond: {\n '$gte': [\n { '$toDate': '$$date' },\n ISODate(\"2022-05-25T00:00:00.000Z\")\n ]\n }\n }\n }\n }\n})\n/// Output:\n[\n {\n _id: 17995,\n dates: [\n '2022-05-05T00:00:00.000+00:00',\n '2022-05-09T00:00:00.000+00:00',\n '2022-05-31T00:00:00.000+00:00'\n ],\n filteredDates: [ '2022-05-31T00:00:00.000+00:00' ] /// <--- Filtered dates\n }\n]\n$filter$addFields$toDatedates",
"text": "Hi @pseudo.charles,match: {\n$gte: ISODate(“2022-05-25T00:00:00.000+00:00”)\n…\n}The above indicates greater than or equal to \"2022-05-25T00:00:00.000+00\" (25th of May 2022). However, below you wrote:Should return \"2022-05-09T00:00:00.000+00:00\", \"2022-05-31T00:00:00.000+00:00\" .\nBecause the range matches \"2022-05-31T00:00:00.000+00:00\" and \"2022-05-09T00:00:00.000+00:00\" is the previous oneThe first date mentioned is \"2022-05-09T00:00:00.000+00:00\" (9th of May 2022). This is not greater than or equal to the initial date mentioned in your $match example.Additionally, the “dates” mentioned inside the date array are string values. Is this expected? In the example below, without converting these values to dates using $toDate no documents are returned in the filteredDates array.To clarify, can you advise on the expected / desired output documents?In the meantime, please see the below example aggregation that uses $filter to retrieve the “dates” within the \"dates\" array that are greater than or equal to ISODate(\"2022-05-25T00:00:00.000Z\":For your reference for the above example aggregation:Please test this thoroughly in a test environment to ensure it meets your requirements / suits your use cases as I have only used the sample document you provided.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Array of dates, match date range, including the previous index | 2022-06-10T22:06:42.869Z | Array of dates, match date range, including the previous index | 3,152 |
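Since the accepted answer filters the matches but does not pull in the preceding element, here is a rough sketch of one way to keep each matching date together with the date just before it, by walking the array positions with $reduce. It is untuned (consecutive matches will repeat elements), the collection name is an assumption, and field names follow the sample document.

```javascript
db.coll.aggregate([
  {
    $addFields: {
      matchedWithPrevious: {
        $reduce: {
          input: { $range: [0, { $size: "$dates" }] },   // iterate over array positions
          initialValue: [],
          in: {
            $cond: [
              { $gte: [ { $toDate: { $arrayElemAt: ["$dates", "$$this"] } },
                        ISODate("2022-05-25T00:00:00.000Z") ] },
              { $concatArrays: [
                  "$$value",
                  { $cond: [
                      { $gt: ["$$this", 0] },
                      [ { $arrayElemAt: ["$dates", { $subtract: ["$$this", 1] }] },
                        { $arrayElemAt: ["$dates", "$$this"] } ],
                      [ { $arrayElemAt: ["$dates", "$$this"] } ]
                  ] }
              ] },
              "$$value"
            ]
          }
        }
      }
    }
  }
])
```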
null | [
"aggregation",
"queries"
]
| [
{
"code": "{\n\t\"_id\" : ObjectId(\"6294b9d51093daae951e1803\"),\n\t\"Source\" : \"Dynatrace\",\n\t\"Service_Host\" : \"SERVICE-4796458781\",\n\t\"Metric\" : \"builtin:service.waitTime\",\n\t\"Metric_Time\" : 1653913980000,\n\t\"Metric_Value\" : 2358371,\n\t\"Time\" : ISODate(\"2022-05-30T18:03:00.000+05:30\")\n},\n\n/* 4 createdAt:30/05/2022, 18:04:29*/\n{\n\t\"_id\" : ObjectId(\"6294b9d51093daae951e1802\"),\n\t\"Source\" : \"Dynatrace\",\n\t\"Service_Host\" : \"SERVICE-4796458781\",\n\t\"Metric\" : \"builtin:service.waitTime\",\n\t\"Metric_Time\" : 1653913920000,\n\t\"Metric_Value\" : 1024414,\n\t\"Time\" : ISODate(\"2022-05-30T18:02:00.000+05:30\")\n},\n\n/* 5 createdAt:30/05/2022, 18:04:29*/\n{\n\t\"_id\" : ObjectId(\"6294b9d51093daae951e1801\"),\n\t\"Source\" : \"Dynatrace\",\n\t\"Service_Host\" : \"SERVICE-4796458781\",\n\t\"Metric\" : \"builtin:service.waitTime\",\n\t\"Metric_Time\" : 1653913860000,\n\t\"Metric_Value\" : 711079,\n\t\"Time\" : ISODate(\"2022-05-30T18:01:00.000+05:30\")\n},\n\n/* 6 createdAt:30/05/2022, 18:04:29*/\n{\n\t\"_id\" : ObjectId(\"6294b9d51093daae951e1800\"),\n\t\"Source\" : \"Dynatrace\",\n\t\"Service_Host\" : \"SERVICE-4796458781\",\n\t\"Metric\" : \"builtin:service.waitTime\",\n\t\"Metric_Time\" : 1653913800000,\n\t\"Metric_Value\" : 1427719,\n\t\"Time\" : ISODate(\"2022-05-30T18:00:00.000+05:30\")\n},\n\n/* 7 createdAt:30/05/2022, 18:04:29*/\n{\n\t\"_id\" : ObjectId(\"6294b9d51093daae951e17ff\"),\n\t\"Source\" : \"Dynatrace\",\n\t\"Service_Host\" : \"SERVICE-4796458781\",\n\t\"Metric\" : \"iowaitTime\",\n\t\"Metric_Time\" : 1653913740000,\n\t\"Metric_Value\" : 568758,\n\t\"Time\" : ISODate(\"2022-05-30T17:59:00.000+05:30\")\n},\n\n/* 8 createdAt:30/05/2022, 18:04:29*/\n{\n\t\"_id\" : ObjectId(\"6294b9d51093daae951e17fe\"),\n\t\"Source\" : \"Dynatrace\",\n\t\"Service_Host\" : \"SERVICE-4796458781\",\n\t\"Metric\" : \"memory\",\n\t\"Metric_Time\" : 1653913680000,\n\t\"Metric_Value\" : 809901,\n\t\"Time\" : ISODate(\"2022-05-30T17:58:00.000+05:30\")\n},\n\n/* 9 createdAt:30/05/2022, 18:04:29*/\n{\n\t\"_id\" : ObjectId(\"6294b9d51093daae951e17fd\"),\n\t\"Source\" : \"Dynatrace\",\n\t\"Service_Host\" : \"SERVICE-4796458781\",\n\t\"Metric\" : \"network\",\n\t\"Metric_Time\" : 1653913620000,\n\t\"Metric_Value\" : 1128014,\n\t\"Time\" : ISODate(\"2022-05-30T17:57:00.000+05:30\")\n},\n\n/* 10 createdAt:30/05/2022, 18:04:29*/\n{\n\t\"_id\" : ObjectId(\"6294b9d51093daae951e17fc\"),\n\t\"Source\" : \"Dynatrace\",\n\t\"Service_Host\" : \"SERVICE-4796458781\",\n\t\"Metric\" : \"waitTime\",\n\t\"Metric_Time\" : 1653913560000,\n\t\"Metric_Value\" : 1580566,\n\t\"Time\" : ISODate(\"2022-05-30T17:56:00.000+05:30\")\n}\n\n",
"text": "sir/mami want all unique “Metric” for given “Service_host” in particular “Metric_Time” range\ncan i get query for thatplease send solution if possible",
"username": "Pramod_Bhat1"
},
{
"code": "$match\"Service_host\"\"Metric_Time\"$group\"Metric\"",
"text": "Hi @Pramod_Bhat1 - Welcome to the community.i want all unique “Metric” for given “Service_host” in particular “Metric_Time” rangeI’m not entirely sure this will get you the desired output but based off the above, you can possibly think about using the following in an aggregation pipeline:If you require further assistance, can you provide the following information.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| I want all unique “Metric” for given “Service_host” in particular “Metric_Time” range | 2022-05-30T14:07:10.649Z | I want all unique “Metric” for given “Service_host” in particular “Metric_Time” range | 1,093 |
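A minimal sketch of the $match plus $group approach suggested above; the collection name and the time-range bounds are assumptions taken from the sample documents.

```javascript
db.metrics.aggregate([
  {
    $match: {
      Service_Host: "SERVICE-4796458781",
      Metric_Time: { $gte: 1653913560000, $lte: 1653913980000 }
    }
  },
  { $group: { _id: "$Metric" } }
])

// Equivalent without the aggregation framework:
db.metrics.distinct("Metric", {
  Service_Host: "SERVICE-4796458781",
  Metric_Time: { $gte: 1653913560000, $lte: 1653913980000 }
})
```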
null | [
"dot-net"
]
| [
{
"code": "x => user.HasDoStuff.Contains(thingImTryingToDo)\nx => thingImTryingToTouch.IsLocked\n{ \\\"IsLocked\\\" : { \\\"$ne\\\" : true }, \n\\\"$or\\\" : [\n\t{ \\\"_id\\\" : { \\\"$type\\\" : -1 } }, \n\t{ \\\"$and\\\" : [\n\t\t{ \\\"_id\\\" : ObjectId(\\\"62d705692f27e4d2887c9760\\\") }, \n\t\t{ \\\"_id\\\" : { \\\"$type\\\" : -1 } }] }\n\t\t]\n",
"text": "TLDR: It seems like the c# driver is converting LINQ statements to “true/false” type statements which then causes the mongoDB c# driver to think I have a duplicate filter.I am building some expressions in the format of Expression<Func<T, bool>> in order to control access to data within MongoDB using the C# driver, latest version as of this post.Depending on various decisions within the application I end up with a List<Expression<Func<T, bool>>> of different filters I want to use.Example:etc…My code builds each of these based on various functions and checks for security. If I combine each individual Expression<Func<T, bool>> object with filter.And(newExpression) it works perfectly. No issues.However, if I turn them into a FilterDefinition and use “currentFilter &= newFilter”, I ultimately end up with something likeIgnoring the specific details of my situation, the key piece here is that I end up with 2 sections of { “_id” : { “$type” : -1 } }, which the mongo driver then complains about, exception to upsert/get/whatever. That part makes sense, it doesn’t like the exact duplicate query.Soooooo…my core question is:Why do two false statements such as x => user.Contains(allowedRole) and x => user.CompanyID == “XXXX” both end up as { “_id” : { “$type” : -1 } }?I can reproduce this with a ~=20 line unit test independent of any specific application code or logic.",
"username": "Mark_Mann"
},
{
"code": "",
"text": "Side question: any update on if LINQ support for .Intersect() will be supported for Expression<Func<T, bool>>?\nYes I know I can use FilterDefinition AnyIn, but it would be nice if intersect could convert that statement",
"username": "Mark_Mann"
}
]
| Strange Behavior with C# Driver, FilterDefintion<T> and LINQ | 2022-07-28T01:53:56.714Z | Strange Behavior with C# Driver, FilterDefintion<T> and LINQ | 2,287 |
null | [
"data-modeling",
"flutter"
]
| [
{
"code": "6 │ @RealmModel()\n7 │ class _Account {\n │ ━━━━━━━━ in realm model for 'Account'\n8 │ late _Person? person;\n │ ^^^^^^^^ _Person? is not a realm model type\n_Personperson.dart",
"text": "I’m using Flutter Realm SDK. I have two Dart classes that are used as Realm object schema. When these two classes are placed in two different Dart files generation gives error like below._Person is defined in person.dart which is imported here.But when placed within the same file generation succeed. Does that mean all Realm object schema classes have to be placed in the same Dart file?",
"username": "Tembo_Nyati"
},
{
"code": "",
"text": "Hi,\nModels can be in different files if they don’t reference each other. If you need to reference a model from another file make sure to use $ sign for the model class name instead of underscore sign.\nSo in your example it will be\n@RealmModel()\n$Person {\n}and then in another file\n@RealmModel()\nclass _Account {\nlate $Person? person;\n}In general prefer using underscore for the models. This hides your model classes and only the generated Realm classes are visible throughout the application. This means prefer having models that reference each other in the same library.cheers",
"username": "Lyubomir_Blagoev"
},
{
"code": "",
"text": "I stumbled on the same thing as I’m trying out Realm Flutter SDK.I would like to hide my model classes but:It would be nice if there was a way to hide models but still be able to use different files.\nI’m VERY new to Dart but I’ve been contemplating if this can’t be accomplished with part/part of.Regards,\nJimisola",
"username": "Jimisola_Laursen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Do Realm object schema classes have to be in the same Dart file? | 2022-07-25T17:25:56.298Z | Do Realm object schema classes have to be in the same Dart file? | 3,116 |
null | [
"aggregation"
]
| [
{
"code": "{\n _id: ISODate(\"2022-07-24\"),\n week: 30,\n productCounts: [\n {\n id: 34528,\n count: 8\n },\n {\n id: 34556,\n count: 6\n }\n ]\n}\n{\n _id: ISODate(\"2022-07-25\"),\n week: 30,\n productCounts: [\n {\n id: 34528,\n count: 4\n },\n {\n id: 34556,\n count: 6\n }\n ]\n}\n{\n _id: { week: 30 },\n productCounts: [\n {\n id: 34528,\n count: 12\n },\n {\n id: 34556,\n count: 12\n }\n ]\n}\n[\n {\n $unwind: {\n path: '$productCounts'\n }\n }, \n {\n $group: {\n _id: 'id',\n totalCount: {\n $sum: '$count'\n }\n }\n }\n]\n",
"text": "Say I have two documents that contain sales information.Document 1Document 2My goal is to “merge” these two documents to produce another document like this:I am trying to apply the concept from Example 1, Alternative 1 on this page: https://www.mongodb.com/docs/manual/reference/map-reduce-to-aggregation-pipeline/At the moment, my aggregation pipeline looks like this:How can I use the aggregation pipeline to create a single document containing the results from the group stage?",
"username": "Suray_T"
},
{
"code": "_id: 'id',$sum: '$count'_id : \"$productCounts.id\"\n$sum: '$productCounts.count'\n",
"text": "I think the only issues with your pipeline is that you are using_id: 'id',and$sum: '$count'rather thanandTo understand, remove the $group stage entirely and analyze the result of $unwind.",
"username": "steevej"
},
{
"code": "",
"text": "I just notice you wanted _id : week, which won’t be fixed with the changes.I bookmarked back your post to work on it later.",
"username": "steevej"
},
{
"code": "unwind = { \"$unwind\" : \"$productCounts\" }\ngroup_week_id = {\n '$group': {\n _id: { week: '$week', id: '$productCounts.id' },\n count: { '$sum': '$productCounts.count' }\n }\n }\ngroup_week = {\n '$group': {\n _id: { week: '$_id.week' },\n productCounts: { '$push': { id: '$_id.id', count: '$count' } }\n }\n }\npipeline = [ unwind , group_week_id , group_week ]\ndb.sales.aggregate( pipeline )\n",
"text": "I think I nailed it.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Grouping objects in array fields and saving the results to a document | 2022-07-26T21:27:28.537Z | Grouping objects in array fields and saving the results to a document | 2,513 |
null | [
"node-js",
"mongoose-odm",
"atlas-cluster",
"schema-validation",
"next-js"
]
| [
{
"code": "MONGODB_URI =\n 'mongodb+srv://XXX:[email protected]/XXXX?retryWrites=true&w=majority';\n\nimport mongoose from 'mongoose';\n\nconst connection = {};\n\nasync function connect() {\n\n if (connection.isConnected) {\n\n console.log('already connected');\n\n return;\n\n }\n\n if (mongoose.connections.length > 0) {\n\n connection.isConnected = mongoose.connections[0].readyState;\n\n if (connection.isConnected === 1) {\n\n console.log('use previous connection');\n\n return;\n\n }\n\n await mongoose.disconnect();\n\n }\n\n const db = await mongoose.connect(process.env.MONGODB_URI);\n\n console.log('new connection');\n\n connection.isConnected = db.connections[0].readyState;\n\n}\n\nasync function disconnect() {\n\n if (connection.isConnected) {\n\n if (process.env.NODE_ENV === 'production') {\n\n await mongoose.disconnect();\n\n connection.isConnected = false;\n\n } else {\n\n console.log('not disconnected');\n\n }\n\n }\n\n}\n\nfunction convertDocToObj(doc) {\n\n doc._id = doc._id.toString();\n\n doc.createdAt = doc.createdAt.toString();\n\n doc.updatedAt = doc.updatedAt.toString();\n\n return doc;\n\n}\n\nconst db = { connect, disconnect, convertDocToObj };\n\nexport default db;\n",
"text": "Hey Guys,\nfrom one day to another my .env File seems to have an issue…\nI have published the exact same code on vercel and defined the same Environment Variables and it works without a problem.The message i am getting:MongoParseError: Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”my env file:My db.js file:i hope someone can point me in the right direction…\nThanks",
"username": "Alexander_Redder"
},
{
"code": "require('dotenv').config();\n\nprocess.env.USER_ID // \"239482\"\nprocess.env.USER_KEY // \"foobar\"\nprocess.env.NODE_ENV // \"development\"\n",
"text": "Don’t you need an import or require to use variables from .env file?Examples I have seen look like:See https://nodejs.dev/learn/how-to-read-environment-variables-from-nodejs",
"username": "steevej"
},
{
"code": " const db = await mongoose.connect('mongodb+srv://XXX:[email protected]/XXXX?retryWrites=true&w=majority');\n\n",
"text": "Hi there,thanks for your reply. I have seen this aswell. However its not necessary… since my Live-Page with the same Code is working right now.\n…Even weirder: Yesterday it suddenly worked… now it doesnt work anymore (talking about npm run dev only…)\nmy live page is living happily ever after…I followed the following tutorial to set it up:\nyoutube-TutorialDo you maybe have another idea what the reason might be ?EDIT:I now found out that when i paste my MONGODB_URI string directly into my db.js file… so it says:then it works fine… so maybe i need to find a different workaround to get the string correctly?..\nI cant get the “dotenv” module to work … can you give me another hint where i would add the “require…” ?… When i console.log(process.env.MONGODB_URI) just before the original line; i am getting my MONGODB_URI Link as a string: “…”\ni dont understand why it cant use it in the next line right after it thanks a lot in advance",
"username": "Alexander_Redder"
},
{
"code": "",
"text": "If you look at the examples from the link I posted. You will see that the value is within double quotes and on the same line. I would try that.",
"username": "steevej"
},
{
"code": "",
"text": "alright, so i managed to solve it. Dont ask me why it works like that now…All i had to do: take the “” away …Thanks for your help!",
"username": "Alexander_Redder"
},
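For anyone landing here with the same symptom, a minimal sketch of a working setup (cluster host, credentials and database name are placeholders; this assumes Next.js, which loads .env files automatically, so no dotenv import is needed):

// .env - value on one line, no quotes, no semicolon
MONGODB_URI=mongodb+srv://user:[email protected]/mydb?retryWrites=true&w=majority

// db.js - fail fast if the variable did not load, instead of passing undefined to mongoose
import mongoose from 'mongoose';

async function connect() {
  if (!process.env.MONGODB_URI) {
    throw new Error('MONGODB_URI is not defined - check your .env file');
  }
  return mongoose.connect(process.env.MONGODB_URI);
}

export default connect;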
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| NextJs .env File problem: MongooseError: The `uri` parameter to `openUri()` must be a string, got "undefined". Make sure the first parameter to `mongoose.connect()` or `mongoose.createConnection()` is a string | 2022-07-25T11:41:06.274Z | NextJs .env File problem: MongooseError: The `uri` parameter to `openUri()` must be a string, got “undefined”. Make sure the first parameter to `mongoose.connect()` or `mongoose.createConnection()` is a string | 19,481 |
null | [
"data-modeling"
]
| [
{
"code": "",
"text": "I’m fairly new to document databases. 25 years on relational. In relational, you could count on the model changing at some point. DDL change script in Dev. Test. Staging. Release to Prod. Done.\nHere in MongoDB, I understand you simply insert into collection X a document with a different shape. But in order not to leave a steaming pile of different documents, I understand it is wise to use a field to version the schema kept in the collection. Then the code has to have logic control to handle the version differences.What are your recommendations for a schema versioning field, something that has worked well for you? As in:\nfield name?\nfield datatype as number or string?\nif string, what format or numbering scheme within the string has worked well for you?",
"username": "Bill_Coulam"
},
{
"code": "",
"text": "Some reading at Building with Patterns: The Schema Versioning Pattern | MongoDB Blog.field nameAlthough, I do not use mongoose, I know they use __v as the schema version field name. I like that as it start with _ like _id and used the same name.In the blog I shared, they used schema_version which I found too verbose.field datatype as number or string?You want to use something that you can easily match related code. With git it could be a commit number, a branch, a tag …I like a value that increases so that if document:__v is smaller than code:__v, I know I might need to update.",
"username": "steevej"
}
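To make the idea concrete, a small illustrative sketch (the __v field name, the field reshaping and the lazy-migration flow are just one option, not a fixed convention):

// documents record the schema version they were written with
db.users.insertOne({ _id: 1, name: "Ada", phone: "555-1234", __v: 1 })

// reading code can detect an old version and migrate lazily
const doc = db.users.findOne({ _id: 1 })
if (doc.__v < 2) {
  // reshape to the current schema and bump the version in one update
  db.users.updateOne(
    { _id: 1 },
    { $set: { contact: { phone: doc.phone }, __v: 2 }, $unset: { phone: "" } }
  )
}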
]
| Recommendation for document version field? | 2022-07-27T19:14:01.672Z | Recommendation for document version field? | 2,533 |
[
"compass",
"schema-validation"
]
| [
{
"code": "",
"text": "Hi there.\nMy objective is to upgrade an existing mongo database to enforce schemas. Reading this blog, it seems like version 5 actually gives good error messages on insert/update.But how do I get these error messages for existing data?This section from compass is to no help (for complex objects that is):\nScreenshot 2022-07-26 at 09.57.401109×111 14.6 KB\n",
"username": "Alex_Bjorlig"
},
{
"code": "db.collection.validate()",
"text": "Hi @Alex_BjorligYou can validate existing data using db.collection.validate(): db.collection.validate() also validates any documents that violate the collection’s schema validation rules.Hope this helps.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "\"validateOutput\": {\n \"ns\": \"21risk.sessions\",\n \"nInvalidDocuments\": 0,\n \"nrecords\": 181856,\n \"nIndexes\": 2,\n \"keysPerIndex\": {\n \"_id_\": 181856,\n \"expires_1\": 181856\n },\n \"indexDetails\": {\n \"_id_\": {\n \"valid\": true\n },\n \"expires_1\": {\n \"valid\": true\n }\n },\n \"valid\": true,\n \"repaired\": false,\n \"warnings\": [],\n \"errors\": [],\n \"extraIndexEntries\": [],\n \"missingIndexEntries\": [],\n \"corruptRecords\": [],\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": \"7125046445631078401\"\n },\n \"signature\": {\n \"hash\": \"Kq8B4EiRRx5Q7A7Wk1faUpcRfUk=\",\n \"keyId\": {\n \"low\": 2,\n \"high\": 1646856462,\n \"unsigned\": false\n }\n }\n },\n \"operationTime\": {\n \"$timestamp\": \"7125046445631078401\"\n }\n }\nvalidateOutput = await db.admin().validateCollection(col.name);full: trueimport Ajv from 'ajv';\nconst ajv = new Ajv({ strict: false });\nconst validate = ajv.compile(validator.$jsonSchema);\n const result = validate(failedDoc);\n if (!result) {\n docSchemaErrors = validate.errors;\n }\n",
"text": "I don’t seem to get schema validation erros as output, just something like this:With the node.js driver, I call the validate like this validateOutput = await db.admin().validateCollection(col.name);But I don’t seem to have an option for setting full: true - as described in the docs here.Workaround:Currently I simply validate the schema with ajv:@Stennie_X Just tagging you, because I can see in my forum searches that you seem to be the schema validation expert And my AJV workaround does not seem to work well in edge cases",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "For now I actually found it that the most pragmatic approach is simply to create a temporary collection, enforce schema validation, and then try to insert the document giving errors.\nWorks surprisingly well, and gives more stable results than using ajv.",
"username": "Alex_Bjorlig"
},
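A rough sketch of that temporary-collection trick with the Node.js driver (collection and variable names are placeholders; on MongoDB 5.0+ the write error should carry the detailed validation information):

// scratch collection that enforces the same $jsonSchema as the real one
await db.createCollection('tmp_validation', { validator: { $jsonSchema: mySchema } });
try {
  // inserting the suspect document surfaces the server-side validation error
  await db.collection('tmp_validation').insertOne(failedDoc);
} catch (err) {
  console.log(err.errInfo); // details about which rule the document violated
} finally {
  await db.collection('tmp_validation').drop();
}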
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can you get good error messages for schema validaiton, on existing data? | 2022-07-26T07:58:06.621Z | Can you get good error messages for schema validaiton, on existing data? | 2,809 |
|
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "I am struggling to find a way to create a query that will allow me to match one of three fields (field_A, field_B, field_C) and then alter the field that was matched on, using $addFields.Example Document:\n{\nfield_A: 5,\nfield_B: 10,\nfield_C: 15\n}If I check for a document that has a value of 10 in one of those three fields, I’d like it to match the document above and then allow me to alter the proper field that was matched on with $addFields, in this case it would be field_B. Is this possible?",
"username": "John_Melton"
},
{
"code": "",
"text": "So something around the logic of:If match field_A, then do this\nIf match field_B, then do this\nIf match field_C, then do this\netc.",
"username": "John_Melton"
},
{
"code": "{ \"$or\" : [ {\"field_A\":10} , {\"field_B\":10} , {\"field_C\":10} ] }\n[ { \"$set\" : {\n \"field_A\" : { \"$cond\" : [ \"$eq\" : [ \"$field_A\" , 10 ] , new_value , \"$field_A\" ] }\n \"field_B\" : { \"$cond\" : [ \"$eq\" : [ \"$field_B\" , 10 ] , new_value , \"$field_B\" ] }\n \"field_C\" : { \"$cond\" : [ \"$eq\" : [ \"$field_C\" , 10 ] , new_value , \"$field_C\" ] }\n} } ]\n",
"text": "I would approach that with:The query part of the update is obviously:And the update with aggregation could be:You requirement was not clear about what to do if more than one field matches, so my solution update all the fields that match.",
"username": "steevej"
}
]
| Mongo Query to Match on one of three fields and then alter the field that was matched on | 2022-07-27T03:39:16.652Z | Mongo Query to Match on one of three fields and then alter the field that was matched on | 1,768 |
null | [
"aggregation"
]
| [
{
"code": "{subject : \"abc-value\",\n update_time: \"20220720 23:00:01\"},\n{subject : \"abc-key\",\n update_time: \"20220720 23:00:01\"},\n$match: \n{\n $and: [\n {\n subject: {\n $ne: null\n }\n },\n {\n subject: {\n $ne: {\n $regex: RegExp(\"\\\\/-key$\\\\/i\")\n }\n }\n }\n ]\n}\n",
"text": "HelloData is like this,I want to exclude null and “*-key” in subject fieldI tried as below but it didn’t workanyone help please\nthank you",
"username": "kIL_Yoon"
},
{
"code": "",
"text": "Rather than And you should use Or operator as you are trying to exclude both the records null and subject contains -key.",
"username": "Shyam_Sohane"
},
{
"code": "$match: \n{\n $or: [\n {\n subject: {\n $ne: null\n }\n },\n {\n subject: {\n $ne: {\n $regex: RegExp(\"\\\\/-key$\\\\/i\") <- not working here\n }\n }\n }\n ]\n}\n",
"text": "thanks for the reply Shyam_Sohanei changed like this as you saidcould you please fix or give me advice?",
"username": "kIL_Yoon"
},
{
"code": "$regex: RegExpdb.collection.find({$or: [{\"subject\" : { $not : { $regex : /.*-key/i }}}, {\"subject\" : { $not : { $eq : null }}}]})",
"text": "$regex: RegExpplease try below.",
"username": "Shyam_Sohane"
},
{
"code": "db.collection.find({$or: [{\"subject\" : { $not : { $regex : /.*-key/i }}}, {\"subject\" : { $not : { $eq : null }}}]})\n{\"subject\" : { $not : { $regex : /.*-key/i }}}\n{ $match: { $and: [ { subject: { $ne: null } }, { subject: { $not : { $regex: \"-key$\" } } } ] }}\n{ $match: { \"subject\" : { $ne: null , $not : { $regex: \"-key$\" } } } }\n",
"text": "Rather than And you should use Or operatorNot really, not at least usingThere is a major flaw in the logic since the above seems to match all documents.For all documents with subject=null,will be true, as the $regex will be false, and $not:false will be true.For all documents with subject:*-key, $eq:null will be false, so $not:false will be true.@kIL_Yoon, you were pretty close to the correct solution. The only issue was the use of RegEx() function call and the use of $ne rather than $not.Try with:You may even forgo the explicit $and and use the implicit and version:",
"username": "steevej"
},
{
"code": "",
"text": "but thats what he asked.I want to exclude null and “*-key” in subject field. If we have only regex need to check if it returns null or not.",
"username": "Shyam_Sohane"
},
{
"code": "",
"text": "Yes correct it has to be and else not null will match -key as well. My bad.",
"username": "Shyam_Sohane"
}
]
| How to filter some pattern in aggregate pipeline? | 2022-07-27T03:51:27.919Z | How to filter some pattern in aggregate pipeline? | 1,376 |
null | []
| [
{
"code": "",
"text": "Hi all,When do we expect mongodb 5 to be certified on RHEL 9.Getting gpg error. Is it something to do with SHA1 or SHA256?Please help. Thanks.David",
"username": "David_Tran"
},
{
"code": "",
"text": "Hi.We are also keen to know if MongoDB 5.0 and / or 6.0 will be certified for EL9 (in our case RL9) distributions.I see there is no EL9 repo in MongoDB Repositories.Thanks.",
"username": "INVADE_International"
},
{
"code": "",
"text": "I see that the mongodb-mongosh package for 5.0 and 6.0 has now appeared in an EL9 repo on S3.",
"username": "INVADE_International"
}
]
| Installing mongodb 5 on redhat 9 | 2022-06-23T09:24:29.224Z | Installing mongodb 5 on redhat 9 | 1,999 |
null | [
"indexes"
]
| [
{
"code": "db.example.createIndex({resourceID: 1}, {unique: 1});\ndb.example.createIndex({idempotencyKey: 1}, {unique: 1});\ndb.example.insertOne({\"resourceID\": \"some UUID\", \"idempotencyKey\": \"idempotencyUUID\"});\n// Add more\ndb.example.insertOne({\"resourceID\": \"some UUID\", \"idempotencyKey\": \"idempotencyUUID\"});\n\nE11000 duplicate key error collection: db.example index: resourceID_idx dup key: { resourceID: \"someUUID\" }idempotencyKey_idxresourceID_idx",
"text": "I have two unique indexes in the collection: for example, an index on resourceId and an index on idempotencyKey.I get error like: E11000 duplicate key error collection: db.example index: resourceID_idx dup key: { resourceID: \"someUUID\" }When executing the second query, both indexes are violated. But what if, for the logic of the program, it is necessary that checking the uniqueness of one index would be a priority? For example, idempotencyKey_idx instead resourceID_idx .How to solve this problem?",
"username": "presto78"
},
{
"code": "",
"text": "You need a compound index.",
"username": "steevej"
},
{
"code": "",
"text": "compound index is not suitable, since we have to maintain uniqueness for each attribute",
"username": "presto78"
},
{
"code": "",
"text": "Then, I do not understand why the 2 indices is an issue.If you insert a duplicate resourceID it fails.\nIf you insert a duplicate idempotencyKey it fails.The only issue I can see is that if you insert a document that have both duplicate resourceID and duplicate idempotencyKey, you get a E11000 on only one. I would say that this is a nice optimization offered by the server. It tells you it fails on the first index fail rather than trying all the possible indices and reporting on all the failures.If it is really important to know if they other also fails, you might always query the server if a document exists with the potential duplicate that is not reported.Personally, I would not want to incur the cost of testing all the unique indices that fails. Most of the time, knowing that 1 condition fails is sufficient.",
"username": "steevej"
},
{
"code": "db.example.createIndex({resourceID: 1}, {unique: 1});\ndb.example.createIndex({idempotencyKey: 1}, {unique: 1});\ndb.example_idempotency.createIndex({idempotencyKey: 1});\ndb.example.insertOne({\"resourceID\": \"some UUID\", \"idempotencyKey\": \"idempotencyUUID\"});\n// Add more\n// BEGIN TRANSACTION\ndb.example_idempotency.insertOne({\"idempotencyKey\": \"idempotencyUUID\"}); // additional priority check\ndb.example.insertOne({\"resourceID\": \"some UUID\", \"idempotencyKey\": \"idempotencyUUID\"});\n// END TRANSACTION\n",
"text": "Thanks for the discussion!I agree that in case of checking all indexes during data insertion it is not optimal.But the fact is that in my case, if the uniqueness of different indexes is violated, I need to generate different errors for the system. And in the case when there is a simultaneous violation of two indexes, I need to give priority to an error for one of them.Getting information about uniqueness using a query is a bad idea, since the state of the database may change between such a request and a data insertion request.I see a possible solution to pre-insert in one transaction into a separate collection of the index key, which is more priority.For example:In this case I can control uniqueness with priority.\nThe solution doesn’t look super-good, but I haven’t found another one.",
"username": "presto78"
},
{
"code": "",
"text": "Like you wroteThe solution doesn’t look super-goodI’ll follow up if I think of something.",
"username": "steevej"
}
]
| Priority for checking a unique index | 2022-07-26T13:56:42.548Z | Priority for checking a unique index | 2,350 |
null | [
"indexes"
]
| [
{
"code": "",
"text": "Hi (-:I’ve always thought that you should absolutely never query mongo (in a live production app) without an index satisfying it (partially or full).But if I have a collection that I need to query on a certain field not very frequently, but the writes are way more intensive.To give some context. It’s a USERS collection saving user profiles. During login/Register we need to query the user by email, but this happens only once (or a little more) during the lifetime of the user. On the other hand, writes are being made all the time to that collections.To emphasize the problem, let’s say we now need to support login by googleId, twitterId, and appleId. So I would need to create 3 more indexes just to find the correct user document during login. Those index would sure hurt regarding deletes,updates and inserts.We checked the query, and without an index, it takes about 2-4 seconds. Around 1.4 million docs are in this collection. So regarding the user experience that would be fine for us.In your mongo schema designs do you often go by this way of balancing between “must-have” indexed and “better to let it full scan” queries or do you absolutely satisfy any query with some index?Any help would be much appreciated (-:",
"username": "yaron_levi"
},
{
"code": "{\n userId : \"1\",\n\"auth\" : [ { \"k\" : \"googleId\" , \"v\" : \"xxx\" },\n { \"k\" : \"twitterId\" , \"v\" : \"yyy\" }]\n...\n}\n{ auth.k : 1, auth.v: 1}db.user.findOne({\"auth\" : {$elemMatch : { k: \"googlId\", v :\"xxx\"}}})\n",
"text": "Hi @yaron_levi ,There are design patterns that allow you to minimize the index footprint/number even for a large number of query variations. For example the attribute pattern.Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.Using this example you might have the following document:The index is on { auth.k : 1, auth.v: 1}Query can beTy",
"username": "Pavel_Duchovny"
},
{
"code": " ",
"text": " A-M-A-Z-I-N-G I’ll take a look…",
"username": "yaron_levi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| When it's advised NOT to created an index on a field? | 2022-07-27T13:26:09.408Z | When it’s advised NOT to created an index on a field? | 1,733 |
null | []
| [
{
"code": "{ \n\t\"_id\" : ObjectId(\"000ac1da28d48426a46ba80b\"), \n\t\"sentDate\" : ISODate(\"2022-05-10T19:49:00Z\"), \n\t\"rootId\" : UUID(\"1623178c-9a7b-4e4f-a49c-6dda5cc4f47f\"), \n\t\"vehicleId\" : UUID(\"ff4fc2e9-9dae-4cb5-8834-c654fc39faa6\"),\n\t\"serialNumber\" : \"AB00016987\"\n\t\"points\" : [{ \n\t\t\"point\" : { \n\t\t\t\"type\" : \"Point\", \n\t\t\t\"coordinates\" : [82.655769000000006, -17.650956999999998] \n\t\t}\n\t}] \n}\ndb.sampleData.createIndex( { rootId: 1, vehicleId: 1, sentDate: -1 } );\nOR\ndb.sampleData.createIndex( { rootId: 1, serialNumber: 1, sentDate: -1 } );\n",
"text": "Please find the schema of a mongodb document.A collection has few billions documents. We have to create an index to improve the read operation and have to be careful for write operation as well.Which index would be the best per MongoDB? Any suggestion on this.Thanks much!",
"username": "AARZOO_MANOOSI"
},
{
"code": "",
"text": "Hi @AARZOO_MANOOSI ,Which index is better depanding on type of queries and their operators … Just by looking on a sample documents we can never know.Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Depanding on your server version , you may consider different strategies. Prior to 4.4 the main recommendation was Building massive indexes in MongoDB is best done in a rolling. Manner over replica sets:But post 4.4 it depends on how impactful is the optimized index build. Read more here:Ty",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Sorry got busy and couldn’t visit this thread. We are using M50 cluster and sharded as well.",
"username": "AARZOO_MANOOSI"
},
{
"code": "",
"text": "Hi @AARZOO_MANOOSI ,Which index to create is based on your queries that you need, perhaps you need booth if you use different queries.For atlas sharded enviroments we recommend using the rolling index builds.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
]
| Best index creation strategy | 2022-05-12T00:08:58.402Z | Best index creation strategy | 1,785 |
[]
| [
{
"code": "",
"text": "Hello everyone!I have a mongodb collection where I need the json “key” as consecutive numberI want “this” to be 1 2 3 4 … for every document; so document one has “1” in field “this”Hope you can help me!",
"username": "Fabian_Klaus"
},
{
"code": "",
"text": "Hi @Fabian_Klaus ,It sounds like you need some kind of a sequence ability with auto increment. If you use Atlas you can use:Learn how to implement auto-incremented fields with MongoDB Atlas triggers following these simple steps.The change you will need is to use the incremented value as a key of your document and not a specific value…\nThanks",
"username": "Pavel_Duchovny"
}
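If you are not on Atlas, or prefer to do it in application code, a common alternative is a small counters collection; this is only a sketch and the collection/field names are illustrative:

// atomically increment and return the next value of a named sequence
function getNextSequence(name) {
  return db.counters.findOneAndUpdate(
    { _id: name },
    { $inc: { seq: 1 } },
    { upsert: true, returnNewDocument: true }
  ).seq;
}

// use the sequence value as the consecutive number when inserting
db.mycollection.insertOne({ this: getNextSequence("this"), other: "fields" })

Note that a crashed or retried insert can leave gaps in the sequence, so treat the numbers as unique and increasing rather than strictly gap-free.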
]
| DB json structure key as consecutive number | 2022-07-27T13:32:21.302Z | DB json structure key as consecutive number | 1,118 |
|
[
"node-js",
"mongoose-odm",
"atlas-cluster"
]
| [
{
"code": "MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. \nOne common reason is that you're trying to access the database from an IP that isn't whitelisted. \nMake sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\n at NativeConnection.Connection.openUri (/myapp/node_modules/mongoose/lib/connection.js:819:32)\n at /myapp/node_modules/mongoose/lib/index.js:379:10\n at /myapp/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/myapp/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (/myapp/node_modules/mongoose/lib/index.js:1224:10)\n at Mongoose.connect (/myapp/node_modules/mongoose/lib/index.js:378:20)\n at Object.module.exports.connect (/myapp/config/db.js:5:10)\n at Server.<anonymous> (/myapp/index.js:75:14)\n at Object.onceWrapper (node:events:513:28) {\n reason: TopologyDescription {\n at promiseOrCallback (/myapp/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (/myapp/node_modules/mongoose/lib/index.js:1224:10)\n at Mongoose.connect (/myapp/node_modules/mongoose/lib/index.js:378:20)\n at Object.module.exports.connect (/myapp/config/db.js:5:10)\n at Server.<anonymous> (/myapp/index.js:75:14)\n at Object.onceWrapper (node:events:513:28) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-tszcvnh-shard-00-01.efxgtyi.mongodb.net:27017' => [ServerDescription],\n 'ac-tszcvnh-shard-00-02.efxgtyi.mongodb.net:27017' => [ServerDescription],\n 'ac-tszcvnh-shard-00-00.efxgtyi.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-ew9q6w-shard-0',\n logicalSessionTimeoutMinutes: undefined\n },\n code: undefined\n}\ndnsPolicydefaultClusterFirstWithHostNetservers-public-ip/32k3sDebian 10serviceLoadBalancerDeploymentapiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: some-name\nspec:\n selector:\n matchLabels:\n app: some-name\n template:\n metadata:\n labels:\n app: some-name\n spec:\n dnsPolicy: ClusterFirstWithHostNet\n containers:\n - name: some-name\n image: me/myimg\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 3000\n env:\n env-variables-here\nServiceapiVersion: v1\nkind: Service\nmetadata:\n name: node-service\nspec:\n selector:\n app: some-name\n type: LoadBalancer\n ports:\n - port: 3000\n targetPort: 3000\n nodePort: 30001\n\n",
"text": "I’m a complete beginner to k8s, this is my first deployment. I have a NodeJS server that connects to MongoDB Atlas. I deployed it to k8s but it doesn’t connect to Atlas.I’m getting the following error in pod logsI tried setting the dnsPolicy to default and ClusterFirstWithHostNet both didn’t work.My Atlas Network access is as follows, I’ve added a lot of possible ip’s in hope of getting 1 running\nThe whitened out ip is my servers-public-ip/32There are API calls to other public API’s like weathermap in the app and they work fine.I’m using k3s binary on a Debian 10 machine.The service type is LoadBalancer.Following is my Deployment configFollowing is my Service config",
"username": "Wilfred_Almeida"
},
{
"code": "",
"text": "Does anything (especially towards the end) of this StackOverflow question help?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "No, the ports 27015-17 are open. When I run the code from IDE it runs fine and connects to Atlas.",
"username": "Wilfred_Almeida"
},
{
"code": "",
"text": "Hi! Is k3s running on your local machine? If so then…When I run the code from IDE it runs fine and connects to Atlas.…would suggest it’s not the IP whitelisting.",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "By IDE I mean VSCode, I have a debian vm as a server so to test it I installed vs code on it. I’ve tried running from vs code and docker and in both cases it connects. Only when I do from k8s it doesn’t connect.",
"username": "Wilfred_Almeida"
},
{
"code": "",
"text": "Okay, good news is that it’s definitely not the whitelisting.Are you using our the Atlas Kubernetes Operator? Or did you create the org/project etc in Atlas yourself and copy out the connection string?",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "Have you got network policies set up?",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "No, I’m not using the operator. I created the project in Atlas and copied the connection string from there.",
"username": "Wilfred_Almeida"
},
{
"code": "",
"text": "I don’t know, I’ve done just the basic stuff to get k8s running. It must all be on default.",
"username": "Wilfred_Almeida"
},
{
"code": "",
"text": "Hmm. I’m not sure…sorry I’d suggest raising a support ticket (top right in Atlas - “Get help”) and they can help you look into it in more detail.Though I suspect the answer is more specific to Kubernetes than Atlas.",
"username": "Dan_Mckean"
}
]
| Cannot connect to Atlas from Kubernetes pod | 2022-07-24T07:48:34.747Z | Cannot connect to Atlas from Kubernetes pod | 5,235 |
|
null | [
"kafka-connector"
]
| [
{
"code": "",
"text": "I’m using the mongodb source and sink connectors. It’s very easy to produce the change streams to a kafka topic per each collection on a single mongodb source connector (which has no collection field). But, I couldn’t find out how to consume the change streams to a mongodb collection per each kafka topic. Is it not possible? or am I missing something?",
"username": "inhyeok_Kim"
},
{
"code": "",
"text": "You need to specify the MongoDB CDC Event handler at the sink. ( com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler)See the \" Selective Replication in MongoDB\" section in this blog post MongoDB Connector for Apache Kafka 1.4 Available Now | MongoDB BlogI’m in the process or writing tutorials for the kafka connector, there is one that is already done that covers this:https://github.com/mongodb-university/kafka-edu/tree/main/tutorials/3-SelectiveReplication",
"username": "Robert_Walters"
},
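For reference, a sink connector configuration for one topic-to-collection route might look roughly like this (connector name, topic, URI and database/collection values are placeholders; you would create one such sink per topic/collection pair):

{
  "name": "mongo-sink-collection-a",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "source.db.collectionA",
    "connection.uri": "mongodb://user:pass@host:27017",
    "database": "targetDb",
    "collection": "collectionA",
    "change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler"
  }
}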
{
"code": "",
"text": "URL seems to be broken, can you please share the updated URL for Kafka-education topic",
"username": "Balaji_Mohandas"
},
{
"code": "",
"text": "The updated link to the tutorial is -https://github.com/mongodb-university/kafka-edu/tree/main/docs-examples/examples/v1.7/cdc-tutorial",
"username": "Robert_Walters"
}
]
| Multiple mongodb collections and kafka topics using kafka sink connector | 2021-03-02T05:19:30.269Z | Multiple mongodb collections and kafka topics using kafka sink connector | 4,853 |
null | [
"aggregation",
"spark-connector"
]
| [
{
"code": "",
"text": "Spark 3.2.1, mongodb 4.4.15, Spark connector 10.xI can use Spark Structure Streaming to read from mongodb change stream for some tables. But there is a big table on mongodb that cause OOM.I use .option(“aggregation.allowDiskUse”, “true”) but it’s still the same. Even though according to the documentation: https://www.mongodb.com/docs/spark-connector/current/configuration/read/ that value should be true by default.Error message\ncom.mongodb.MongoCommandException: Command failed with error 292 (QueryExceededMemoryLimitNoDiskUseAllowed): 'Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt inTo overcome this issue, I filter the change steam with .option(“spark.mongodb.read.aggregation.pipeline”, “[{‘$match’: {createdAt: {$gte: new ISODate(‘2022-07-17’)}}}]”) or {createdAt: {$gte: ISODate(‘2022-07-17’)}}. Now the streaming job is running but there is no data in the change stream.",
"username": "khang_pham"
},
{
"code": "",
"text": "Did you set the option via \" `spark.mongodb.read.aggregation.allowDiskUse\" or just “aggregation.allowDiskUse” ?",
"username": "Robert_Walters"
},
{
"code": "",
"text": "I tried both and none of them work.",
"username": "khang_pham"
},
{
"code": "",
"text": "are you using a free tier of MongoDB Atlas?",
"username": "Robert_Walters"
},
{
"code": "",
"text": "No I’m using enterprise version",
"username": "khang_pham"
},
{
"code": "",
"text": "We filed https://jira.mongodb.org/browse/SPARK-355 to investigate as it might be a bug with the connector not enabling this parameter for some queries. Thank you for raising the question!",
"username": "Robert_Walters"
},
{
"code": "",
"text": "@khang_pham The fix is now checked in to 10.0.3, it is available here Central Repository: org/mongodb/spark/mongo-spark-connector/10.0.3.",
"username": "Robert_Walters"
}
]
| Spark structure streaming with mongodb change stream: allowDiskUse doesn't work | 2022-07-19T15:17:18.303Z | Spark structure streaming with mongodb change stream: allowDiskUse doesn’t work | 3,692 |
null | [
"android"
]
| [
{
"code": "code 999: Unable to resolve host \"realm.mongodb.com\": No address associated with hostnamecode 998: An unexpected error occurred while sending the request: No such host is known",
"text": "I am logging in with email and password on Xamarin Forms\nIt works on the emulator but on a realy device i am getting this exception\ncode 999: Unable to resolve host \"realm.mongodb.com\": No address associated with hostnameChanging the HttpClientHandler as described here gives me the following exception:\ncode 998: An unexpected error occurred while sending the request: No such host is knownThe device has Android 10 (API 29)How can i fix this?",
"username": "Johannes_Tscholl"
},
{
"code": "HttpClientHandler",
"text": "Hi Johannes,The issue with the HttpClientHandler that you’re linking should not be related to your issue.Andrea",
"username": "Andrea_Catalini"
},
{
"code": "RealmApp = Realms.Sync.App.Create(appId);\n await RealmApp.LogInAsync(Credentials.EmailPassword(emailentry.Text, passwordentry.Text));\n",
"text": "2.3. Both are running Android 10 (API 29)\n4. Unfortunately dont have another one",
"username": "Johannes_Tscholl"
},
{
"code": "",
"text": "Those 2 lines look correct.\nCould you share a minimal repro project that fails in the same way?",
"username": "Andrea_Catalini"
}
]
| Xamarin - /No such host is known - on Device only | 2022-07-26T14:07:08.833Z | Xamarin - /No such host is known - on Device only | 2,936 |
[
"indexes"
]
| [
{
"code": "",
"text": "I’m getting the Duplicate key error collection message. As far as I know this issue is because there is a duplicate key in the collection. However, I do not have a duplicate key, nor do I have anything (database, collection, or key) in my data that says (classroom.class_code: null). Not sure how to fix this. It happens when I try to upload data to the server. I’m tried from the website and from Atlas (from a CVS file as well as a single item).\n",
"username": "david_h"
},
{
"code": "",
"text": "remove index classroom.class_code to resolve this",
"username": "MAY_CHEAPER"
},
{
"code": "",
"text": "It looks like you might have some documents in the collection that have a null value for classroom.class_code. In this case it would be a duplicate key as the index value would also be null.As MAY_CHEAPER mentioned, removing the index would get rid of this error. But I’m assuming you have the index there for a reason. If you need to keep the index then look into making it a partial index - https://www.mongodb.com/docs/manual/core/index-partial/With a partial index you could write a filter to only create indexes where classroom.class_code is not null. But bear in mind, these records will not be indexed.",
"username": "Brad_Palmer"
},
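A sketch of what such a partial unique index could look like, assuming class_code is stored as a string and the collection is called classrooms (both of those are assumptions; adjust to your data):

// drop the existing unique index first, then recreate it so it only covers
// documents where classroom.class_code actually holds a string value
db.classrooms.createIndex(
  { "classroom.class_code": 1 },
  { unique: true, partialFilterExpression: { "classroom.class_code": { $type: "string" } } }
)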
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Duplicate key key error collection - I know what it is but I don't have duplicate keys | 2022-07-27T06:26:46.459Z | Duplicate key key error collection - I know what it is but I don’t have duplicate keys | 6,183 |
|
null | []
| [
{
"code": "mappedUsers = [\n {\n groupName: 'Group 1',\n users: [\n 'Nick',\n 'John',\n ]\n },\n {\n groupName: 'Group 2',\n users: [\n 'Mark',\n 'Samuel',\n ]\n }\n]\n\ndata = [\n {\n date: '22/13/07',\n results: [\n {\n user: 'Nick',\n result: 5\n },\n {\n user: 'Mark',\n result: 7\n }\n ]\n },\n {\n date: '22/13/06',\n results: [\n {\n user: 'John',\n result: 6\n },\n {\n user: 'Helga',\n result: 9\n }\n ]\n },\n]\nresults = [\n {\n date: '22/13/07',\n results: [\n {\n user: 'Group 1', <-- replaced\n result: 5\n },\n {\n **user: 'Group 2', <-- replaced\n result: 7\n }\n ]\n },\n {\n date: '22/13/06',\n results: [\n {\n **user: 'Group 1', <-- replaced\n result: 6\n },\n {\n **user: 'Helga', <-- NOT replaced\n result: 9\n }\n ]\n },\n]\n",
"text": "Hi, community,I struggle with replacing values while getting data.\nLet’s imagine we have two collections:I want to replace users with their groups. If a user doesn’t belong to any group we get his name. Same table with just mapped users:How can I replace those values?",
"username": "Nick_Elovsky"
},
{
"code": "db.data.aggregate( [ { '$unwind': '$results' }, \n{\n '$lookup': {\n from: 'mappedUsers',\n localField: 'results.user',\n foreignField: 'users',\n as: 'relationship'\n }\n}, \n{\n '$project': {\n 'results.user': {\n '$cond': {\n if: { '$gte': [ { '$size': '$relationship' }, 1 ] },\n then: '$relationship.groupName',\n else: '$results.user'\n }\n },\n date: 1,\n 'results.result': 1\n }\n}, \n{$group:\n {_id:'$date',\n results:\n {$push:'$results'}}\n}\n]\n)\n[\n {\n _id: ObjectId(\"62df743966c81fed8894c2e2\"),\n date: '22/13/07',\n results: { result: 5, user: [ 'Group 1' ] }\n },\n {\n _id: ObjectId(\"62df743966c81fed8894c2e2\"),\n date: '22/13/07',\n results: { result: 7, user: [ 'Group 2' ] }\n },\n {\n _id: ObjectId(\"62df743966c81fed8894c2e3\"),\n date: '22/13/06',\n results: { result: 6, user: [ 'Group 1' ] }\n },\n {\n _id: ObjectId(\"62df743966c81fed8894c2e3\"),\n date: '22/13/06',\n results: { result: 9, user: 'Helga' }\n }\n]\n",
"text": "Hi @Nick_Elovsky and welcome to the community!!Could you advise further context or the use case details for the desired results you have specified? Additionally, have you considered perhaps doing the replacements on the application end?Also, if you still wish to achieve using aggregation, here is the step by step aggregation stages to achieve the above to an extent.The above aggregation includes four stages and the output response would look like:Please perform thorough testing to verify this suits your use case(s) or requirements. Additionally, you may want to view the $merge documentation which can allow you to write the results of the aggregation pipeline to a specified collection.Let us know if you have any further questions.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Replacing field while taking data | 2022-07-13T18:31:10.944Z | Replacing field while taking data | 995 |
null | [
"data-modeling"
]
| [
{
"code": "{\n \"_id\": \"123\",\n \"timestamp\": 1628632419,\n \"myArray\": [\n {\n \"myNestedArray\": [\n {\n \"name\": \"NestedName1\",\n \"value\": \"1\"\n },\n {\n \"name\": \"NestedName2\",\n \"value\": \"2\"\n },\n {\n \"name\": \"NestedName3\",\n \"value\": \"3\"\n }\n ],\n \"type\": \"MyType\",\n \"name\": \"MyName\",\n \"propertyA\": \"A\",\n \"propertyB\": \"B\",\n \"propertyC\": \"C\",\n \"propertyD\": \"D\",\n \"propertyE\": \"E\",\n },\n ...\n ]\n}\ncollection.createIndex({\n 'myArray.type': 1,\n 'myArray.myNestedArray.name': 1,\n 'myArray.myNestedArray.value': 1,\n })\n{a:[1,2], b:[8,9]}\n{ab:[[1,8], [1,9], [2,8], [2,9]]}\nmyArray\"type\": \"MyType\",\n\"name\": \"MyName\",\n\"myNestedArray0\": {\n \"name\": \"NestedName1\",\n \"value\": \"1\"\n},\n\"myNestedArray1\": {\n \"name\": \"NestedName2\",\n \"value\": \"1\"\n},\n...\ncollection.createIndex({\n 'myArray.type': 1,\n 'myArray.myNestedArray0.name': 1,\n 'myArray.myNestedArray0.value': 1,\n 'myArray.myNestedArray1.name': 1,\n 'myArray.myNestedArray1.value': 1,\n ...\n })\n",
"text": "i have the following document structure:With that, I want to create an index like that:This results in:cannot index parallel arraysI read through the documentation and I understand where the problem is. Now my question is, what is a good structure for my document, in order that my indexing is working?I found the approach to structure from:to:But as I see this approach for my situation, the objects under myArray are too complex.I was also thinking about moving the array indices as own properties like:But this feels wrong and is also not really flexible, furthermore the indexing would be a fix number like:I’m new to mongoDB and it would be great, if someone experienced could give me an advice on my problem.",
"username": "Stephan"
},
{
"code": "",
"text": "Hey @Stephan welcome to the community!There is no one answer to schema design questions, as this is a very personal choice driven by how the data will be used. This is one of MongoDB’s strengths: you don’t design your schema according to how the data will be stored, but rather according to how the data will be used.This is easier said than done, when all our trainings with SQL dictate just the opposite However to help you along your journey, I would ask:I think the answers to both questions could provide an initial hint to the direction you need to go.Also if you’re just starting your MongoDB journey, I would recommend the following resources:Best regards\nKevin",
"username": "kevinadi"
}
]
| Advice for different structure on error "cannot index parallel arrays" | 2022-07-26T13:46:01.128Z | Advice for different structure on error “cannot index parallel arrays” | 2,169 |
null | [
"node-js",
"production",
"field-encryption"
]
| [
{
"code": "UpdateFilterFilter",
"text": "The MongoDB Node.js team is pleased to announce version 4.8.1 of the mongodb package!This patch comes with some bug fixes that are listed below, as well as, a quality of life improvement for nested keys in the UpdateFilter and Filter types. Thanks to gh: coyotte508 (#3328) for contributing this improvement!We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "neal"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Node.js Driver 4.8.1 Released | 2022-07-26T19:10:03.343Z | MongoDB Node.js Driver 4.8.1 Released | 2,195 |
null | [
"aggregation",
"java",
"atlas-cluster",
"serverless"
]
| [
{
"code": "Invalid urimongodb+srv://jmcmt87:<password>@twittermongoinstance.db1xm.mongodb.net/twitter_data.aggregated_data?retryWrites=true&w=majority\npackages = ','.join([\n 'org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1',\n 'com.amazonaws:aws-java-sdk:1.11.563',\n 'org.apache.hadoop:hadoop-aws:3.2.2',\n 'org.apache.hadoop:hadoop-client-api:3.2.2',\n 'org.apache.hadoop:hadoop-client-runtime:3.2.2',\n 'org.apache.hadoop:hadoop-yarn-server-web-proxy:3.2.2',\n 'com.johnsnowlabs.nlp:spark-nlp-spark32_2.12:3.4.2',\n 'org.mongodb.spark:mongo-spark-connector_2.12:3.0.1'\n])\n\nspark = SparkSession.builder.appName('twitter_app_nlp')\\\n .master(\"local[*]\")\\\n .config('spark.jars.packages', packages) \\\n .config('spark.streaming.stopGracefullyOnShutdown', 'true')\\\n .config('spark.hadoop.fs.s3a.aws.credentials.provider', \n 'org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider') \\\n .config('spark.hadoop.fs.s3a.access.key', ACCESS_KEY) \\\n .config('spark.hadoop.fs.s3a.secret.key', SECRET_ACCESS_KEY) \\\n .config(\"spark.hadoop.fs.s3a.impl\",\n \"org.apache.hadoop.fs.s3a.S3AFileSystem\") \\\n .config('spark.sql.shuffle.partitions', 3) \\\n .config(\"spark.driver.memory\",\"8G\")\\\n .config(\"spark.driver.maxResultSize\", \"0\") \\\n .config(\"spark.kryoserializer.buffer.max\", \"2000M\")\\\n .config(\"spark.mongodb.input.uri\", mongoDB) \\\n .config(\"spark.mongodb.output.uri\", mongoDB) \\\n .getOrCreate()\n",
"text": "Hi, I got my URI from Mongo Atlas and I just have to put my password and name of database and collection, but even with that, it keeps giving me this Invalid uri error. I tried changing my password but still I cannot write into my MongoDB collection.I have the serverless modality if that’s relevant.This is the uri I’m using (the one given to me by Atlas, password not included):And in case it’s relevant, this is my configuration:Where mongDB variable is the string of the uri aforementioned.What could be the problem?",
"username": "Jorge_Macos_Martos"
},
{
"code": "",
"text": "Can you connect by shell?\nIs your db name twitter_data.aggregations_data correct?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "My db name is twitter_data, but the collection where I want to write everything is aggregated_data so I thought it would be twitter_data.aggregated_dataUsing the shell I can connect to the db, but only to the db, not the collection. If I try to put the collection as in twitter_data.aggregated_data it says it doesn’t exist but I have it created in the twitter_data db in Atlas",
"username": "Jorge_Macos_Martos"
},
{
"code": "",
"text": "Just tried to write to the database alone (no collection added), as I can connect to it through the shell, but I still get the same error message",
"username": "Jorge_Macos_Martos"
},
{
"code": "",
"text": "You connect to your db not collection thru uri\nOnce you are connected to your db you can create/query your collection\nWhat operation you performed after connecting to your db with shell and what error you got?",
"username": "Ramachandra_Tummala"
},
{
"code": "mongosh \"mongodb+srv://twittermongoinstance.db1xm.mongodb.net/twitter_data\" --apiVersion 1 --username 'jmcmt87'agg_df.write.format(\"mongo\").mode(\"append\").option(\"uri\", mongoDB).save()\n'mongodb+srv://jmcmt87:<password>@twittermongoinstance.db1xm.mongodb.net/twitter_data?retryWrites=true&w=majority'\n",
"text": "In the shell I just tried connecting to it like this:mongosh \"mongodb+srv://twittermongoinstance.db1xm.mongodb.net/twitter_data\" --apiVersion 1 --username 'jmcmt87'But what I want to do is writing my data from PySpark to the MongoDB database in Atlas, this is the command I use:Where mongoDB variable is the following string:",
"username": "Jorge_Macos_Martos"
},
{
"code": "spark = SparkSession.builder \\\n .appName(appName) \\\n .config(\"spark.mongodb.input.uri\", \"mongodb+srv://user:[email protected]/databasename?retryWrites=true&w=majority\") \\\n .config(\"spark.mongodb.output.uri\", \"mongodb+srv://user:[email protected]/databasename?retryWrites=true&w=majority\") \\\n .getOrCreate()\n# Create dataframe named df\ndf.write.format(\"mongo\").option('spark.mongodb.output.collection', 'collection_name')\\\n .mode(\"append\") \\\n .save()\n# Read data from MongoDB\ndf = spark.read.format('mongo').option(\"spark.mongodb.input.collection\", \"collection_name\").load()\ndf.printSchema()\ndf.show()\n",
"text": "I used these settings to create the spark session:These for writing:And these for reading:Hope this help!!",
"username": "Omar_Cubano"
},
{
"code": "",
"text": "i’m having similar issues - i’ve created a separate post.\nAlso - here s the stackoverflow link",
"username": "Karan_Alang1"
}
]
| IllegalArgumentException: requirement failed: Invalid uri | 2022-04-06T16:10:47.195Z | IllegalArgumentException: requirement failed: Invalid uri | 5,666 |
null | [
"queries",
"swift"
]
| [
{
"code": " app.login(credentials: Credentials.anonymous) { results in\n\n switch results {\n case .failure(let error):\n print(\"Login failed: \\(error.localizedDescription)\")\n case .success(let user):\n print(\"Successfully logged in as user \\(user)\")\n\n let client = self.app.currentUser!.mongoClient(\"mongodb-atlas\")\n let database = client.database(named: \"Incidents\")\n let collection = database.collection(withName: \"Incidents\")\n\n let qf: Document = [\"incidentDetails.lastUpdated\":\"{$gt:ISODate('2022-07-23')}\"]\n\n collection.find(filter: qf) { result in\n switch result {\n case .failure(let error):\n print(\"Login failed: \\(error.localizedDescription)\")\n\n case .success(let documents):\n print(\"Great Success\")\n for doc in documents {\n print(\"Title: \\(doc)\")\n }\n\n }\n }\n }\n\n }\nlet qf: Document = [\"incidentDetails.lastUpdated\":\"{$gt:ISODate('2022-07-23')}\"]{\"incidentDetails.lastUpdated\": { $gt:ISODate('2022-07-23')}}",
"text": "I am trying to run an Atlas query via Realm using swift, code is as follows:This executes successfully, logs an anon. user in runs the query returning success, however I get no documents back. I assume at this point it is because of my filter documentlet qf: Document = [\"incidentDetails.lastUpdated\":\"{$gt:ISODate('2022-07-23')}\"]But I am at a loss as to how to write this in Swift. Essentially, I only want to return documents updated within the last few days. I can run this via Atlas using{\"incidentDetails.lastUpdated\": { $gt:ISODate('2022-07-23')}}Any help would be appreciated",
"username": "Gavin_Beard"
},
{
"code": "",
"text": "I should have, I’ve tried various versions of the filter document, with and without {} as well as placing them in different places",
"username": "Gavin_Beard"
},
{
"code": "let qf: Document = [\"some_property\": \"some_known_value\"]\ncollection.findOneDocument(filter: qf) { result in\nnew Date(\"<YYYY-mm-dd>\")ISODate(\"2022-07-23T 09:10:24.000Z\"",
"text": "Some basic troubleshooting may help clarify the issue.I assume the object you are querying for has other properties. Try to just read that object via another property with a known value and see if you get a result. If not, there’s a setup or config issue. If so, then the query in the question is malformed.Try something like thisAlso… isn’t the Date done line this new Date(\"<YYYY-mm-dd>\") which returns an ISODate or use the date format ISODate(\"2022-07-23T 09:10:24.000Z\"Report back your findings.",
"username": "Jay"
},
{
"code": "",
"text": "Hi Jay,This is the object:If I use let qf: Document = [“incidentDetails.incidentType”:“Small”] Then it returns 20 documents as expected.In terms of the right format for the date, I am not sure. Various forms work in Atlas:{“incidentDetails.incidentType”: {$gt:‘2022-07-26’}}\n{“incidentDetails.incidentType”: {$gt:Date(‘2022-07-26’)}}\n{“incidentDetails.incidentType”: {$gt:“2022-07-26”}}I just cannot get an equivalent working from RealmSwift",
"username": "Gavin_Beard"
},
{
"code": "let qf: Document = [\"incidentDetails.lastUpdated\": \"{$gt: new Date('2022-07-23')}\"]",
"text": "The code in the initial question looks pretty close - did you trylet qf: Document = [\"incidentDetails.lastUpdated\": \"{$gt: new Date('2022-07-23')}\"]",
"username": "Jay"
},
{
"code": "",
"text": "Thanks, I thought it looked close too, but no dice: using your code it finds zero documents, if I change to a non-date field 705:could it be that MongoDB via realm doesn’t support $gt ?",
"username": "Gavin_Beard"
},
{
"code": "",
"text": "Well, that’s frustrating. I am pretty sure $gt is supported as it’s defined in the manual.How is the actual date property defined on the object? Is it an ISODate()?",
"username": "Jay"
},
{
"code": "{\n \"name\": \"find\",\n \"arguments\": [\n {\n \"database\": \"Incidents\",\n \"collection\": \"Incidents\",\n \"query\": {\n \"incidentDetails.lastUpdated\": {\n \"$gt\": \"Date('2022-07-26')\"\n }\n }\n }\n ],\n \"service\": \"mongodb-atlas\"\n}\nlet qf: Document = [\"incidentDetails.lastUpdated\": [\"$gt\":\"Date('2022-07-23')\"]]",
"text": "It is indeed frustrating This is in the app logs of Realm:Annoyingly, if I take the query text and run it in Atlas, I get results, I’ve even tried nesting documents:let qf: Document = [\"incidentDetails.lastUpdated\": [\"$gt\":\"Date('2022-07-23')\"]]The field is a Date:\n\nScreenshot 2022-07-26 at 20.46.361980×220 38.9 KB\n",
"username": "Gavin_Beard"
},
{
"code": "let maxDate = Date(timeIntervalSinceNow: -3 * 24 * 3600)\nlet qf: Document = [\"incidentDetails.lastUpdated\":[\"$gt\": .datetime(maxDate)]]\n",
"text": "Resolved at last!!This returns the correct documents",
"username": "Gavin_Beard"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Querying MongoDB from Realm | 2022-07-26T15:49:49.272Z | Querying MongoDB from Realm | 2,277 |
null | [
"python",
"change-streams",
"storage"
]
| [
{
"code": "pymongo.errors.OperationFailure: Executor error during getMore :: caused by :: Committed view disappeared while running operation, full error: {'ok': 0.0, 'errmsg': 'Executor error during getMore :: caused by :: Committed view disappeared while running operation', 'code': 134, 'codeName': 'ReadConcernMajorityNotAvailableYet', '$clusterTime': {'clusterTime': Timestamp(1635457764, 1), 'signature': {'hash': b'\\xb9\\xedq\\xf5)\\xa2\\xd9s\\xe6\\xe8\\xbc\\x01\\xfd\\x1b\\xb3\\x1d[;#V', 'keyId': 7021065227265376259}}, 'operationTime': Timestamp(1635457764, 1)}\n",
"text": "We have a service that listen on one collection, we use change stream to watch for a collection. We use mongodb 5.0 version. I don’t know why but we encountered this error:About database server we use:I would like to get help on knowing what could be the cause of the error and how to prevent it ?Best,\nIgor",
"username": "Igor_Miazek"
},
{
"code": "",
"text": "+1 We are also encountering this issue when we add an arbiter to the three-node replicaset.1 PRIMARY and 2 SECONDARY",
"username": "Ben_Chiciudean"
},
{
"code": "",
"text": "I’ve reported this issue to the server team. Please follow https://jira.mongodb.org/browse/SERVER-68328 for updates.",
"username": "Shane"
}
]
| Committed view disappeared Read Concern Majority Not Available Yet code 134 | 2021-11-02T13:21:00.749Z | Committed view disappeared Read Concern Majority Not Available Yet code 134 | 3,965 |
null | []
| [
{
"code": "",
"text": "HelloMy name is George Botsaris , I’m from Greece and I start using Mongo DB because I’m using an application that use Mongo DB.\nI’m having development skills but using Sql Server and Oracle , So Mongo is something different to me.\nI was wondering if there is any course that I can watch just to understand who Mongo works ?\nFurthermore if anyone knows , is there any way to import and export data from Mongo.In any case\nThanks a lot for reading my message . ",
"username": "George_Botsaris"
},
{
"code": "mongoimportmongoexport",
"text": "Hello @George_Botsaris and welcome to the MongoDB community!MongoDB has several free courses that will help you learn to use the database.I would recommend the following as starters:As for importing and exporting data, you can look at the mongoimport and mongoexport commands.These two courses, and the documentation, should help you get started on your journey. As you work through things, the community is here to help you out and answer your questions.",
"username": "Doug_Duncan"
}
]
| Hello to everyone | 2022-07-26T10:31:32.510Z | Hello to everyone | 1,825 |
[
"flutter"
]
| [
{
"code": "authProvider.registerUser(email, password)app.logIn(emailCred);",
"text": "Hello,I’m trying by following the docs to register and then login with realm.\nUnfortunatly I got an error “Realm Exception : non-zero custom status code considered fatal” when trying to do :\nauthProvider.registerUser(email, password)\nor :\napp.logIn(emailCred);I created a repo to reproduce here :Contribute to geosebas/flutter_realm_auth development by creating an account on GitHub.You will just need a mongodb free tier account with an empty app and email/password auth enabled (with no confirm email and default reset function)I’m new to both flutter and realm so the solution may be very simple, please help me !Thanks",
"username": "Geoffrey_SEBASTIANELLI"
},
{
"code": "_handleSubmittedimport 'package:flutter/material.dart';\n\nimport 'package:realm/realm.dart';\n\nimport './password_field.dart';\nimport 'main.dart';\n\nclass LoginView extends StatefulWidget {\n const LoginView({super.key});\n\n @override\n LoginViewState createState() => LoginViewState();\n}\n\nclass LoginViewState extends State<LoginView> with RestorationMixin {\n var email = '';\n var password = '';\n\n late FocusNode _email, _password;\n\nimport 'package:flutter/material.dart';\n\nimport 'package:realm/realm.dart';\n\nimport 'main.dart';\nimport 'password_field.dart';\n\nclass RegisterView extends StatefulWidget {\n const RegisterView({super.key});\n\n @override\n RegisterViewState createState() => RegisterViewState();\n}\n\nclass RegisterViewState extends State<RegisterView> with RestorationMixin {\n var email = '';\n var password = '';\n\n late FocusNode _email, _password, _retypePassword;\n\n",
"text": "EDIT : You can look inside these 2 files and look for the _handleSubmitted function.Thanks",
"username": "Geoffrey_SEBASTIANELLI"
},
{
"code": "",
"text": "Hi,\nmost probably you are missing network entitlements in your app and it fails on Mac. Could you check you have added these or try running the app on another platform",
"username": "Lyubomir_Blagoev"
},
{
"code": "<uses-permission android:name=\"android.permission.INTERNET\" />",
"text": "Thanks for the answer !I already added this on the android manifest :<uses-permission android:name=\"android.permission.INTERNET\" />Did I miss anything else ?I thought about that so I tried to make a simple http request and it did work.Thanks for your help !",
"username": "Geoffrey_SEBASTIANELLI"
},
{
"code": "",
"text": "Just tried running on IOS and it didn’t work too ",
"username": "Geoffrey_SEBASTIANELLI"
},
{
"code": "import 'dart:convert';\nimport 'dart:io';\nimport 'package:flutter/services.dart' show rootBundle;\nimport 'package:flutter/material.dart';\nimport 'package:path_provider/path_provider.dart';\nimport 'package:realm/realm.dart';\nimport 'model.dart';\n\nvoid main() async {\n WidgetsFlutterBinding.ensureInitialized();\n final realmConfig = json.decode(await rootBundle.loadString('assets/atlas_app/realm_config.json'));\n String appId = realmConfig['app_id'];\n MyApp.allTasksRealm = await createRealm(appId, CollectionType.allTasks);\n MyApp.importantTasksRealm = await createRealm(appId, CollectionType.importantTasks);\n MyApp.normalTasksRealm = await createRealm(appId, CollectionType.normalTasks);\n\n runApp(const MyApp());\n}\n\nenum CollectionType { allTasks, importantTasks, normalTasks }\n",
"text": "Hi again !Managed to make it work by using another appId found in one of your samble here :Still not working with my own very simple near-empty realm app.\n[VERBOSE-2:ui_dart_state.cc(198)] Unhandled Exception: RealmException: non-zero custom status code considered fatalDo I need to do anything else beside creating an app (in mongodb) before starting to work ?",
"username": "Geoffrey_SEBASTIANELLI"
},
{
"code": "invalid username/password",
"text": "Hi,\nI wasn’t able to reproduce your issue with the sample app you provided. I created an empty app on the server and enabled the email auth provider there. Then when I try to login I get invalid username/password error which is understandable since I don’t have any users on the server.\nOn the server make sure you have “deployed” your app changes and not only “saved” them.Since you have been able to make this app work with different server app this seems an environment or configuration issue and not Realm issue itself.",
"username": "Lyubomir_Blagoev"
},
{
"code": "",
"text": "I just finally got the solution !By reading in details the doc, I saw that you cannot choose the GCP as a deployment region for app service, but my atlas cluster is in GCP, so by default when you create an app on app service, it will be on GCP and nothing (or at least user auth) will work.IMHO, it’s a very annoying bug as you cannot debug anything and the error you will get mean nothing.Have a nice day ",
"username": "Geoffrey_SEBASTIANELLI"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Flutter Realm auth not working | 2022-07-21T18:54:09.613Z | Flutter Realm auth not working | 4,156 |
|
null | [
"java"
]
| [
{
"code": "mongodb-driver-sync:4.4.1MongoClients.create()MongoClients.create()",
"text": "I am working on a demo Java application that uses MongoDB for basic retrievals of data and I am using mongodb-driver-sync:4.4.1 Java driver.In that, I am trying to analyze which operation of MongoDB is taking how much time, so that I can speed up my application by writing code efficiently. After experimenting, I noticed that MongoClients.create() function is taking about 170ms to execute. After only a few repeated continuous invocations, this time started decreasing to about 130ms. But I performed a brief context switch of applications in the OS (like spending time on browser and other applications). After coming back to the Java application, I noticed that the time taken by the above function increased to around 170ms, which again gradually decreased to about 130ms after repeated execution without any context switch.This behaviour is affecting the accuracy of my experiments and is causing the total time to vary by +/-40ms because of the MongoClients.create() function. Can anyone please explain why this behaviour is occurring? Is the OS trying to cache something? If yes, what is it trying to do?OS version: macOS Catalina, v10.15.7Any help is appreciated. Thanks.",
"username": "Shreetesh_M"
},
{
"code": "",
"text": "MongoClients.create() should not be part of your part of your performance benchmarks.You should be calling it only once, in the initialization code of your application, and reuse the result for your queries.",
"username": "steevej"
},
{
"code": "MongoClients.create()MongoClients.create()",
"text": "Thank you for replying.You are correct. I must not include that part for benchmarking. But I am not exactly benchmarking the application. I am executing the program only once, and I am trying to record the “wall clock” time difference a real user might informally experience while executing the whole program. This measurement must include the time for MongoClients.create() as it is also a part of the program execution and the end user faces the delay due to the above function as well. You can assume that this is not a server application that is continuously running.I only need to know why this particular function is behaving in such a manner. I have observed this behaviour in functions that contain I/O such as reading files. In such cases, the differences can be attributed to the OS caching the file. But what is causing MongoClients.create() to show a considerable difference of 40-45ms?",
"username": "Shreetesh_M"
},
{
"code": "",
"text": "A lot of things can explain.Where is mongod server running?Is it a dedicated server?Are the client and server running on a private network with isolated traffic? Both running on the same machine?Do you start and stop your java apps between each invocations? If you don’t then the first invocation is byte code interpretation, the others migth be the compiled version.There is probably more than 40ms overhead just to start your java applications.There is propably nothing inherently inside .create to justify that difference. You might even reach some TCP optimization when requesting some remote IP/port from same client.",
"username": "steevej"
},
{
"code": "",
"text": "Where is mongod server running?It is running locally, on localhost:27017.Are the client and server running on a private network with isolated traffic? Both running on the same machine?Both are running on the same machine. There is no external network involved and hence network delay cannot come into light here.Do you start and stop your java apps between each invocations?Yes, but only the client is restarted after each invocation. So, the JIT cannot come into play in this case.And yes, I went through the source code of the driver but not extensively. I too did not find anything that can explain this. Regarding TCP optimizations, what kind of optimizations are generally performed? Because are they capable of providing as high as 40ms difference in connection times?",
"username": "Shreetesh_M"
},
{
"code": "",
"text": "Is the MongoDB server trying to remember the client metadata when the same client connects?\nDoes the server skip a few steps if it detects the same client?Mongod version: 5.0.7",
"username": "Shreetesh_M"
},
{
"code": "",
"text": "repeated execution without any context switchI would be surprise if there is no context switch sinceBoth are running on the same machine.But context switches are not really an issue unless your machine needs paging, when browsing and does not need it when simply running your test.If I understand correctly, the user of your application will be running both the mongod server and your application on the same machine? If not, are your wall clock tests really pertinent, if your typical user will have a different architecture?I think you are worrying too much about this 130ms vs 170ms variation you use-case involve starting the whole application. That is what I assume withthis is not a server application that is continuously runningWhat is the average total wall clock of the use-case, from the start of the non-server application and the result a typical user will see? 130ms vs 170ms might be perceptible but 1130ms vs 1170ms might not be so much.Anyway, what ever optimizations TCP, Firewall or mongod is doing or not, I do not think you can do much.Things you can do if that 40ms is that important1 - try to connect direct with (not sure if it works in Windows)\nmongodb://%2Ftmp%2Fmongodb-27017.sock\n2 - play with pool size parameters, with a non server application, you might be able to reduce the number of connection to a very small value",
"username": "steevej"
},
{
"code": "",
"text": "If I understand correctly, the user of your application will be running both the mongod server and your application on the same machine?Yes, both need to be running on the same machine.Anyway, what ever optimizations TCP, Firewall or mongod is doing or not, I do not think you can do much.True, but at this point I only need to know the exact cause of this behaviour.The average time is around 500ms of the entire application, so the difference might be noticeable. Anyways, I will try to work on the last two points you have mentioned. Thanks.",
"username": "Shreetesh_M"
}
]
| Why is MongoClients.create() function speeding up after repeated function invocattions? | 2022-07-21T06:13:55.223Z | Why is MongoClients.create() function speeding up after repeated function invocattions? | 2,421 |
null | [
"compass",
"mongodb-shell"
]
| [
{
"code": "{ \"myLong\": 12345 }\"myLong\": { \"$numberLong\": \"12345\" }",
"text": "The following previous answer partially solved what I wanted to achieve.I’m pulling data out of a mongo instance running in a kubernettes cluster (and can’t connect using a GUI like Compass). Compass has a view of the data that converts the document to JSON. Using EJSON.stringify and the toArray() method almost gives the output I was hoping for. I’m totally new to mongo and have some follow up questions.Is there a way to output the EJSON.stringify format on queries once already running the mongosh? For my purposes the “pretty” output isn’t really useful as I (ideally) want to be able to copy and paste output.The output of the eval is mostly great but I’ve noticed that certain fields lose their data type descriptors. For example in my database we have some integers defined as Long. The current output of the EJSON.stringify for these fields is { \"myLong\": 12345 } whereas I was hoping for something more like \"myLong\": { \"$numberLong\": \"12345\" } which more closely matches the Compass output. Is there a way to retain this detail? The saved output will form part of a test that will inject this data back into an empty db so I’d like it as close to reality as possible.Thanks in advance, happy to be pointed to docs I may have missed. Still learning.",
"username": "David_Timms"
},
{
"code": "EJSON.stringify().toArray()EJSON.stringify(db.test.find().toArray(), null, 2)relaxed: falsetest> EJSON.stringify(db.longtest.find().toArray(), null, 2, { relaxed: false })\n[\n {\n \"_id\": {\n \"$oid\": \"60ca0404216deba577090bc4\"\n },\n \"l\": {\n \"$numberLong\": \"1\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"621767e082be3efa32ef43e0\"\n },\n \"l\": {\n \"$numberLong\": \"9007199254740993\"\n }\n }\n]\nnull, 2",
"text": "Is there a way to output the EJSON.stringify format on queries once already running the mongosh? ForYou can use EJSON.stringify() in the regular mongosh prompt as well on the result of .toArray(), e.g. EJSON.stringify(db.test.find().toArray(), null, 2) is totally fine.but I’ve noticed that certain fields lose their data type descriptorsYou might be looking for relaxed: false:(The null, 2 bits here are just for prettier, more human-readable output by adding 2 spaces indentation.)",
"username": "Anna_Henningsen"
},
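A minimal sketch of capturing that canonical output non-interactively so it can be pasted into a test fixture; the database and collection names here are placeholders, not from the thread:

```js
// Run inside mongosh, or pass the same expression via `mongosh --quiet --eval ...`
// and redirect stdout to a file. relaxed: false keeps type wrappers like $numberLong.
const docs = db.getSiblingDB('testing')   // placeholder database name
  .getCollection('events')                // placeholder collection name
  .find()
  .toArray();
print(EJSON.stringify(docs, null, 2, { relaxed: false }));
```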
{
"code": "",
"text": "Thank you for the fast response and taking the time to write an extremely clear answer! That works perfectly.I tried the EJSON within mongosh previously but must have messed it up.",
"username": "David_Timms"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| EJSON output from mongosh | 2022-07-25T14:43:45.220Z | EJSON output from mongosh | 2,501 |
null | []
| [
{
"code": "Microsoft Windows [Version 10.0.19042.928]\n(c) Microsoft Corporation. All rights reserved.\n\nC:\\Users\\cyber>cd C:\\Program Files\\MongoDB\\Server\\4.4\\bin\n\nC:\\Program Files\\MongoDB\\Server\\4.4\\bin>mongod\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.073-07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.457-07:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.458-07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.461-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":5172,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"Homebrewery\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.462-07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.462-07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.5\",\"gitVersion\":\"ff5cb77101b052fa02da43b8538093486cf9b3f7\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.463-07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19042)\"}}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.463-07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.471-07:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22271, \"ctx\":\"initandlisten\",\"msg\":\"Detected unclean shutdown - Lock file is not empty\",\"attr\":{\"lockFile\":\"C:\\\\data\\\\db\\\\mongod.lock\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.475-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"C:/data/db/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.476-07:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22302, \"ctx\":\"initandlisten\",\"msg\":\"Recovering data from the last clean checkpoint.\"}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.476-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=819M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.521-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger 
message\",\"attr\":{\"message\":\"[1618490834:520889][5172:140711416190288], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.576-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490834:575893][5172:140711416190288], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.639-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490834:639875][5172:140711416190288], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/322560 to 2/256\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.641-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490834:641875][5172:140711416190288], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.695-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490834:694892][5172:140711416190288], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.741-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490834:740875][5172:140711416190288], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.741-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490834:740875][5172:140711416190288], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.777-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490834:776875][5172:140711416190288], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 4, snapshot max: 4 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.788-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":312}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.789-07:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.791-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4366408, \"ctx\":\"initandlisten\",\"msg\":\"No table logging settings modifications are required for existing WiredTiger tables\",\"attr\":{\"loggingEnabled\":true}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.794-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.797-07:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. 
Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.797-07:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.806-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.947-07:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.951-07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:14.951-07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:47:15.013-07:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20631, \"ctx\":\"ftdc\",\"msg\":\"Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost\",\"attr\":{\"error\":{\"code\":0,\"codeName\":\"OK\"}}}\n{\"t\":{\"$date\":\"2021-04-15T05:48:14.795-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"WTCheckpointThread\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490894:794925][5172:140711416190288], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6, snapshot max: 6 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2021-04-15T05:49:14.804-07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"WTCheckpointThread\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1618490954:804137][5172:140711416190288],",
"text": "I am working on starting a localized version of MongoDB. The OS install is less than 12 hours old, fully updated with all the patches, and I installed version 4.4.5, went to the folder, ran mongod, and this is what I get. I can’t figure out why the server starts then shuts down…This is a virgin install, this should not be happening. Any help should be appreciated.",
"username": "Matthew_Ridge"
},
{
"code": "",
"text": "There’s no message in the log snippet you gave indicating it shutdown. Is that the complete log file?",
"username": "Daniel_Pasette"
},
{
"code": "",
"text": "Yup, that was everything.",
"username": "Matthew_Ridge"
},
{
"code": "",
"text": "did you check the event viewer? The logs show that the instance was up for a minute, began it’s normal storage engine checkpoint operation (which occurs every minute), and then was somehow killed abruptly. Everything looks normal up to that point.",
"username": "Daniel_Pasette"
},
{
"code": "",
"text": "3 posts were split to a new topic: MongoDB doesn’t start after unclean shutdown",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was merged into an existing topic: MongoDB doesn’t start after unclean shutdown",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
]
| Fresh Install of MongoDB - Not running out of the box? | 2021-04-15T13:02:53.859Z | Fresh Install of MongoDB - Not running out of the box? | 6,453 |
null | [
"node-js",
"data-modeling",
"mongoose-odm"
]
| [
{
"code": "const mongoose = require(\"mongoose\");\nconst paymentStatues = {\n 0: \"Normal\",\n 1: \"Internal Collection\",\n 2: \"External Collection\",\n 3: \"Legal Collection\",\n}\nconst requestSchema = mongoose.Schema({\n customer: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Customer\",\n },\n product: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Product\",\n },\n profit: {\n type: Number\n },\n\n downPayment: {\n type: Number,\n default: 0\n },\n planOfInstallment: {\n type: Number\n },\n moneyRequiredToPay: { // This will be the whole money which i need from the customer exclude the downpayment and add the profit\n type: Number,\n default: 0,\n },\n paymentschedule: [\n {\n monthName: String,\n dateToPay: {\n type: Date,\n },\n paid: {\n type: Boolean,\n default: false\n },\n payment: {\n type: Number\n },\n paymentRecieveDate: {\n type: Date,\n },\n }\n ],\n contractInitiated: {\n type:Boolean,\n default: false\n },\n contractStatus: {\n type: String,\n default: \"Normal\"\n },\n moneyRecieved: { // calculator\n type: Number,\n default: 0,\n },\n moneyLeft: {\n type: Number,\n },\n\n investor: [{\n investorDetail: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Investor\",\n },\n money: {\n type: Number,\n default: 0\n },\n date: {\n type: Date,\n default: Date.now()\n }\n }],\n \n documentContract: [\n {\n dept: {\n type: String,\n },\n recieveForm: {\n type: String\n },\n aggrement: {\n type: String\n }\n }\n ],\n createdDate: {\n type: Date,\n default: Date.now()\n }\n\n});\n\nrequestSchema.virtual(\"id\").get(function () {\n return this._id.toHexString();\n});\n\nrequestSchema.set(\"toJSON\", {\n virtuals: true,\n});\nexports.Request = mongoose.model(\"Request\", requestSchema);\n\n",
"text": "This is my schema and i want that in this schema some fields are like if on the field is updated from frontend so the other fields is updated automatically like subtracting or adding some money in field etc etc.",
"username": "arbabmuhammad_ramzan"
},
{
"code": "requestSchema.pre('updateOne', function(next) {\n // do stuff\n next();\n});\n",
"text": "Hello @arbabmuhammad_ramzan, Welcome to MongoDB Community Forum,I can see you are using mongoose NPM, So there is a middleware feature,You can set pre-middleware for your schema and do whatever your logic before query updates in the database,Types of middlewares you can create for your requirement,Ex:",
"username": "turivishal"
},
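As a rough illustration of that middleware idea against the request schema from the first post; the exact recalculation rule below is an assumption for demonstration, not the poster's final logic:

```js
// Keep the derived moneyLeft field consistent whenever moneyRecieved changes.
// In query middleware `this` is the Query, so we inspect and extend the update.
requestSchema.pre('updateOne', function (next) {
  const update = this.getUpdate() || {};
  const set = update.$set || update;
  if (set.moneyRecieved !== undefined) {
    // Assumes moneyRequiredToPay is part of the same update; defaults to 0 otherwise.
    const required = set.moneyRequiredToPay ?? 0;
    this.set('moneyLeft', required - set.moneyRecieved);
  }
  next();
});
```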
{
"code": "const mongoose = require(\"mongoose\");\nconst investorSchema = mongoose.Schema({\n username: {\n type: String,\n required: true,\n },\n email: {\n type: String,\n required: true,\n },\n phoneNumber: {\n type: String,\n required: true,\n },\n passwordHash: {\n type: String,\n required: true,\n },\n money: { type: [mongoose.Schema.ObjectId], ref: 'Money' },\n totalMoney: {\n type: Number,\n },\n verified: {\n type: Boolean,\n default: false\n },\n document: [\n {\n cpr: {\n type: String,\n },\n passport: {\n type: String,\n },\n salarySlip: {\n type: String,\n }\n }\n ],\n});\ninvestorSchema.pre(\"updateOne\", async function(done){\n if(this.isModified(\"money\")){\n const getAllMoney = await this.get(\"money\");\n console.log(\"money\",money)\n const getTotalMoney = getAllMoney?.map(item => item?.value);\n this.set(\"totalMoney\", getTotalMoney)\n }\n done()\n})\ninvestorSchema.virtual(\"id\").get(function () {\n return this._id.toHexString();\n});\n\ninvestorSchema.set(\"toJSON\", {\n virtuals: true,\n});\nexports.Investor = mongoose.model(\"Investor\", investorSchema);\n\n",
"text": "I have used this one but its not working can you please check that ?",
"username": "arbabmuhammad_ramzan"
},
{
"code": "totalMoney",
"text": "This is a different schema than your first post, I can see the money field has reference to another schema,I am not sure when that money schema will update, i think you should update totalMoney field when the money schema updated the amount.",
"username": "turivishal"
},
{
"code": "",
"text": "Hey,\nThanks its work",
"username": "arbabmuhammad_ramzan"
}
]
| I want that if one field value is updated so another field value is automatically updated by calculating automatically? | 2022-07-26T10:13:03.789Z | I want that if one field value is updated so another field value is automatically updated by calculating automatically? | 6,235 |
null | [
"node-js",
"replication",
"containers"
]
| [
{
"code": " const uri = 'mongodb://localhost';\n let client = new MongoClient(uri);\n return client.connect()\n .then(() => {\n db = client.db(dbName);\n })\n .catch(err => {\n logger.error('MONGODB', {\n msg: 'Failed to connect to MongoDB with NodeJs driver',\n method: 'connect',\n previousErr: err,\n });\n\n throw Error('Failed to connect to MongoDB');\n });\nversion: \"3.4\"\n\nservices:\n mongo:\n hostname: mongodb\n image: mongo:5.0.5\n volumes:\n - mongodb:/data/db\n ports:\n - 27017:27017\n healthcheck:\n test: test $(echo \"rs.initiate().ok || rs.status().ok\" | mongo --quiet) -eq 1\n interval: 10s\n start_period: 30s\n command: [\"/usr/bin/mongod\", \"--replSet\", \"rs0\", \"--bind_ip_all\"]\nservices:\n mongo1:\n container_name: mongo1\n image: mongo:5.0.5\n volumes:\n - ./scripts/rs-init.sh:/scripts/rs-init.sh\n networks:\n - mongo-network\n ports:\n - 27017:27017\n depends_on:\n - mongo2\n - mongo3\n links:\n - mongo2\n - mongo3\n restart: always\n entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n\n mongo2:\n container_name: mongo2\n image: mongo:5.0.5\n networks:\n - mongo-network\n ports:\n - 27018:27017\n restart: always\n entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n mongo3:\n container_name: mongo3\n image: mongo:5.0.5\n networks:\n - mongo-network\n ports:\n - 27019:27017\n restart: always\n entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n\nnetworks:\n mongo-network:\n driver: bridge\nvar config = {\n \"_id\": \"dbrs\",\n \"version\": 1,\n \"members\": [\n {\n \"_id\": 1,\n \"host\": \"mongo1:27017\",\n // \"priority\": 2\n },\n {\n \"_id\": 2,\n \"host\": \"mongo2:27017\",\n // \"priority\": 1\n },\n {\n \"_id\": 3,\n \"host\": \"mongo3:27017\",\n // \"priority\": 1\n arbiterOnly: true\n }\n ]\n};\nrs.initiate(config, { force: true });\n",
"text": "Hi,I have a problem since I have updated my Mongodb package to 4.7.0. I can’t connect to my local docker database for my e2e test. I’m using linuxHere my connection code:An error is throwed with a topology description type ‘ReplicaSetNoPrimary’I have tested with multiple connection string :\nconst uri = ‘mongodb://localhost:27017’;\nconst uri = ‘mongodb://localhost:27017/?replicaSet=rs0’;\nconst uri = ‘mongodb://localhost:27017/dbName/?replicaSet=rs0’;and try to replace localhost by 127.0.0.1Here my initial docker file:I have tested to create a real cluster with 3 mongoand init the replica set withAnd change my connection string to have 3 mogo:\nmongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/?replicaSet=rs0But I always have the same problem.I have search on Google and on your forum (Local replica set on docker-compose) but nothing help me. Each time I have the same error.Anyone can help me ?",
"username": "Mathieu_GOBERT"
},
{
"code": "etc/hosts127.0.0.1 mongodbversion: \"3.4\"\n\nservices:\n mongo:\n hostname: mongodb\n image: mongo:5.0.9\n volumes:\n - mongodb:/data/db\n expose:\n - 27017\n ports:\n - 27017:27017\n healthcheck:\n test: test $$(echo \"rs.initiate().ok || rs.status().ok\" | mongo --quiet) -eq 1\n interval: 10s\n start_period: 30s\n entrypoint: [\"/usr/bin/mongod\", \"--replSet\", \"rs0\", \"--bind_ip_all\"]\n const uri = 'mongodb://127.0.0.1:27017?replicaSet=rs0';\n const options = {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n };\n\n module.exports.connect = () => {\n if (client) {\n return Promise.resolve();\n }\n\n client = new MongoClient(uri, options);\n return client.connect()\n .then(() => {\n db = client.db(dbName);\n })\n .catch(err => {\n logger.error('MONGODB', {\n msg: 'Failed to connect to MongoDB with NodeJs driver',\n method: 'connect',\n previousErr: err,\n });\n\n throw Error('Failed to connect to MongoDB');\n });\n };\nMongoServerSelectionError: Server selection timed out after 30000 ms",
"text": "I find a fix, using my first docker-compose filejust add in etc/hosts this line 127.0.0.1 mongodbHere the final docker-compose:Here the connection code:But now I have some unexpected change stream error:\nMongoServerSelectionError: Server selection timed out after 30000 ms",
"username": "Mathieu_GOBERT"
},
{
"code": "?directConnection=true",
"text": "I finally found the solution, just add ?directConnection=true on the connection string on developpment and test environment which use local database",
"username": "Mathieu_GOBERT"
},
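For reference, a small Node.js sketch of the working local setup; the database name is a placeholder:

```js
const { MongoClient } = require('mongodb');

// directConnection=true skips replica-set member discovery, so the driver talks
// to the mapped localhost port directly; appropriate for local Docker dev/test only.
const uri = 'mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  console.log(await client.db('testdb').command({ ping: 1 })); // placeholder db name
  await client.close();
}
main().catch(console.error);
```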
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| ReplicaSetNoPrimary during connection with MongoDb 4.7.0 and docker | 2022-07-22T08:00:07.232Z | ReplicaSetNoPrimary during connection with MongoDb 4.7.0 and docker | 6,047 |
null | []
| [
{
"code": "",
"text": "I have an existing collection:\nData size : 5 TB (uncompressed)\nStorage Size: 1.8 TB (compressed)I am planning to shard this collection but I am unable to understand the Operation restriction in the collection size mentioned here - https://www.mongodb.com/docs/v5.0/reference/limits/#mongodb-limit-Sharding-Existing-Collection-Data-SizeCan someone please help me understand this ? Would I be able to shard a collection of this size if I increase the chunk size?",
"username": "Ishrat_Jahan"
},
{
"code": "",
"text": "Hello, I share my opinion.If you can calculate “Average Size of Shard Key Values”, you can check this table.\n\nimage1542×608 59.4 KB\nThe default chunk size in MongoDB 5.0 is 64 megabytes. If you use default chunk size and “Average Size of Shard Key Values” is 64 bytes, you can shard your collection (Max 8TB)",
"username": "noisia"
},
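For reference, a mongosh sketch of the commands involved; the namespace, shard key and 128 MB value are placeholders/examples only:

```js
// Shard the collection (placeholder namespace and shard key).
sh.enableSharding("mydb");
sh.shardCollection("mydb.mycoll", { externalId: 1 });

// Optionally raise the chunk size (value is in MB) if the collection is close
// to the maximum shardable size for the default 64 MB chunks.
db.getSiblingDB("config").settings.updateOne(
  { _id: "chunksize" },
  { $set: { value: 128 } },
  { upsert: true }
);
```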
{
"code": "",
"text": "I do not know for sure the size. But its an alphanumeric string on around 28 characters, so I think it should fit in 64 bytes",
"username": "Ishrat_Jahan"
}
]
| Sharding an existing collection | 2022-07-20T07:36:46.321Z | Sharding an existing collection | 2,246 |
null | []
| [
{
"code": "",
"text": "Hey, does anyone have experience using Realm to make offline apps (read and write capabilities)? How do you feel about the development model — do you think it’s a good option?",
"username": "P_G1"
},
{
"code": "",
"text": "Hi!Thanks for your first post and welcome to the MongoDB Developer Community! I know it’s going to sound definitely as biased as a MongoDB employee but I did use Realm as an offline database before I joined the company and it’s one of the main reasons that made me enjoy the technology so much and why I decided to join years later.I wanted to build a database integration for an app I was developing on iOS. Other options like Core Data had a big learning curve and were not for me. So Realm was the way to go and I didn’t regret it because the database functionality integration was a walk in the park.Realm being an offline-first mobile database solves a lot of problems such as local storage, which is quite simple and lightweight, and you can access objects using the native query language of the platform you’re using. Also considering that your objects will always reflect the latest data stored in the database you can subscribe to changes, which will make it easier to keep the UI to date.Another good point is that if you ever decide to integrate Sync between devices, it’s quite easy to do and Device Sync does a lot of heavy lifting behind the scenes, so you don’t have to worry regarding conflict-solving between devices.You can find more information in our Realm documentation.I hope this solves your question and don’t hesitate to let me know if you have additional questions or concerns regarding the details above.",
"username": "Mar_Cabrera"
},
{
"code": "",
"text": "Welcome to the forums.I feel great about the development model. However, I also fee great about other tools as well. If you want some additional feedback we should have a basic understanding of your use case.Not all tools are a good fit for all cases.Can you perhaps add some brief additional information about how you’d be using Realm in your app? What’s your coding platform? Single user? Multi-User? Multi-Tenant? Are you going to Sync in the future? Do you need Authentication? How about file storage?Jay",
"username": "Jay"
},
{
"code": "",
"text": "Thanks so much, Jay. I will dm you.",
"username": "P_G1"
}
]
| Offline-First Mobile Apps using Realm | 2022-07-22T06:49:14.624Z | Offline-First Mobile Apps using Realm | 3,465 |
null | []
| [
{
"code": "",
"text": "Hi,I created a trigger and attached with eventbridge\nit was working fine for the first time\nthen i disabled it for 15 daysafter that when i enabled it,now when i update any document, it is not getting triggered at alltrigger configsPlease provide suggestions",
"username": "Nirali_88988"
},
{
"code": "",
"text": "Hi,\ncan anyone help with this?\nit has blocked production.",
"username": "Nirali_88988"
},
{
"code": "",
"text": "What does your application code look like that is performing the update? Are you sure that you’re not performing a replace?",
"username": "Christian_Deiderich"
},
{
"code": "",
"text": "It wasn’t replace\nI set Full Document: off and was adding match expression in advanced option\nit is working after I set Full Document: on\nsorry for the trouble",
"username": "Nirali_88988"
}
]
| Trigger is not triggered after document updated | 2022-07-18T12:19:09.561Z | Trigger is not triggered after document updated | 2,108 |
null | []
| [
{
"code": "{\n \"fullDocument\":\n {\n \"event\":\"A specific Desired Event\"\n }\n}\n",
"text": "I have the following “match” expression in my trigger:Problem is, with this expression, the trigger never fires. If I remove the expression, it fires for every update, and insert. I’m assuming the format of this expression is incorrect, but I’m having a heck of a time finding what it should be. Can someone give me an assist?What I am ultimately hoping to accomplish is to only trigger when the “event” field in the “Full Document” matches a “specific event” text string.",
"username": "William_Stock"
},
{
"code": "{\n \"fullDocument.event\" : \"A specific Desired Event\"\n}\n\"a\"{\n \"fullDocument.a\" : 2\n}\n{\"fullDocument.a\":{\"$numberInt\":\"2\"}}\n",
"text": "Hi @William_Stock - Welcome to the community Could you try and test the following within the match expression code box to see if it works for your use case? :There is a similar example of this in step 10 of the Create a Trigger page.On my test environment, I had got the test trigger to run with a match expression noted below (for when the \"a\" field is updated to an int value of 2) :Note: Once I had saved the above format match expression, reloading the same trigger then resulted with the following match expression:Hope this helps but if not, could you provide:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Ok, I googled and googled and never came across that article. It is most helpful. Thank, I will try your suggestion…appreciate the feedback.",
"username": "William_Stock"
},
{
"code": "",
"text": "No problem - Let us know if that helps or works ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Help with a Trigger Match Expression | 2022-07-21T19:20:27.589Z | Help with a Trigger Match Expression | 2,540 |
null | []
| [
{
"code": "",
"text": "The onboarding instructions directed me to post a message here. ",
"username": "Mask_Hoski"
},
{
"code": "",
"text": "What do I need to do to post a new topic in a another forum? The hover icon on the New Topic button is not allowed. Do I just have to wait until some amount of time has passed?",
"username": "Mask_Hoski"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Mask_Hoski !You should be able to start a new topic in any public category as long as it is not a product release or announcement category.For some common starting points, please see What is on-topic for discussions and events?.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Hello from Arizona (US) | 2022-07-25T20:58:22.844Z | Hello from Arizona (US) | 3,213 |
[
"aggregation"
]
| [
{
"code": "",
"text": "Hi everyone,I am setting up Data Federations and want to move data from cluster to S3 following guidance from How to Automate Continuous Data Copying from MongoDB to S3. I have setup my cluster and s3 as Data Sources in Data Federations. Next, I have created Linked Data Sources to service name “kaison-data-federation” for Data Federations.Next, in my triggers, I wrote below functions:exports = function () {\nconst datalake = context.services.get(“kaison-data-federation”);\nconst db = datalake.db(“targeting”)\nconst events = db.collection(“subscriptions”);const pipeline = [\n{\n$match: {\n$and: [\n{ status: ‘active’ }\n]\n}\n},\n{\n“$out”: {\n“s3”: {\n“bucket”: “test-mongo-atlas-data-lake”,\n“region”: “ap-southeast-1”,\n“filename”: “events”,\n“format”: {\n“name”: “parquet”,\n“maxFileSize”: “10GB”,\n“maxRowGroupSize”: “100MB”\n}\n}\n}\n}\n];return events.aggregate(pipeline);\n};However, I am hitting error of “(Unauthorized) not authorized” with no exact error mentioning. I have refer to other posts like Unauth Error on Moving Data from DataLake to S3. But still not working. I have stucked here for 3 days. Can anyone provide some insight to me on this? Thank you very much!\nimage1257×769 34 KB\n",
"username": "Yi_Chun_Tan"
},
{
"code": "",
"text": "Attached with additional screenshot:\nimage1693×622 37.7 KB\n",
"username": "Yi_Chun_Tan"
},
{
"code": "",
"text": "Attached with additional screenshot:\nimage1310×742 60.1 KB\n",
"username": "Yi_Chun_Tan"
},
{
"code": "",
"text": "You most likely have the IAM role misconfigured on the AWS side",
"username": "Christian_Deiderich"
}
]
| (Unauthorized) not authorized error when using context.services.get for Data Federation to move data to S3 | 2022-06-20T08:56:26.804Z | (Unauthorized) not authorized error when using context.services.get for Data Federation to move data to S3 | 2,786 |
|
null | [
"data-modeling",
"polymorphic-pattern"
]
| [
{
"code": "name: String, role: String, a: String, b: String, c: SubDocumentname: \"A\", rol: \"admin\", c: \"{createdSomething: \"[a]\"}\"",
"text": "I have a question. Given my own schema where i have a maybe.\nname: String, role: String, a: String, b: String, c: SubDocument\ni am creating a new user which has a role, and by it’s role i want to pass different fields.\nlike name: \"A\", rol: \"admin\", c: \"{createdSomething: \"[a]\"}\", but i am stuck here just blowing my mind in if it is ok to just not pass the other fields and what is happening to them are they null?, are they just going to be occuping memory?, is it a good practice to do that?. also should i apply here the Polymorphic Pattern?",
"username": "Jurgen_Ruegenberg_Buezo"
},
{
"code": "",
"text": "MongoDB is not a relational database. Fields present in one document in a collection do not have to match in shape other documents in the same collection, nor do fields need to be present at all. You can programmatically associate validation rules with a collection, in which case documents will have to conform, but that is optional.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thank you so much, so it is not wrong to have different users (different users saying with different roles like seller, customer or admin in one collection right?). And one extra question assuming i apply the validation rules, am I using the polymorphic pattern?",
"username": "Jurgen_Ruegenberg_Buezo"
},
{
"code": "",
"text": "so it is not wrong to have different users (different users saying with different roles like seller, customer or admin) in one collection right?It’s not “wrong” in the sense that it’s not a violation of the document model of MongoDB. It may or may not be optimal … that’s a design decision.And one extra question assuming i apply the validation rules , am I using the polymorphic pattern ?Validation rules enforce your design decisions. Your design may or may not be polymorphic.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "It may or may not be optimal … that’s a design decision.And how should i make it optimal? i want to ensure to apply all the best practices sorry, should i divide my users in 3 different collections? like one for admins, sellers and customers? or how thank you so much for the answer",
"username": "Jurgen_Ruegenberg_Buezo"
},
{
"code": "",
"text": "And how should i make it optimal ? i want to ensure to apply all the best practicesIt depends on one’s use case. MongoDB is both more flexible than an RDBMS and at the same time less mature. The design patterns for an RDBMS are more established than currently is the case with MongoDB.My advice is that whatever you chose, impose validation. Validation can be a measure of sorts: if it is terribly complex to write the validation rules, then probably your design is sub-optimal.",
"username": "Jack_Woehr"
},
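A minimal sketch of what such validation could look like for a single collection holding all roles; the field names and role values are illustrative only, not a recommendation for this exact schema:

```js
// One users collection; every document needs name and role, while the
// role-specific sub-documents stay optional but typed when present.
db.createCollection("users", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["name", "role"],
      properties: {
        name: { bsonType: "string" },
        role: { enum: ["admin", "seller", "customer"] },
        adminData: { bsonType: "object" },   // only meaningful for admins
        sellerData: { bsonType: "object" }   // only meaningful for sellers
      }
    }
  },
  validationAction: "error"
});
```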
{
"code": "",
"text": "impose validation. Validation can be a measure of sorts: if it is terribly complex to write the validation rules, then probably your design is sub-optimal.Ok thank you so much, one last question by Validation we are refering to the Validation rules?",
"username": "Jurgen_Ruegenberg_Buezo"
},
{
"code": "",
"text": "Yes, validation rules are the mechanism for validation in MongoDB.",
"username": "Jack_Woehr"
}
]
| Why is it ok to leave fields blank and why they dont show when i create the object | 2022-07-23T14:39:13.556Z | Why is it ok to leave fields blank and why they dont show when i create the object | 2,530 |
null | [
"react-js"
]
| [
{
"code": "",
"text": "Hi ,I have a react.js pwa application.I want to leverage relam sync feature for offline/online capabilities.\nCan i use it in Web PWA?\nWould appreciate if someone shares the Web SDK of React.js (Javascript)",
"username": "Taranbir_Bajwa"
},
{
"code": "",
"text": "Howdy, unfortunately not at the moment. According to the docs Realm Web SDK “The Web SDK does not support creating a local database or using sync.”",
"username": "Rob_Elliott"
},
{
"code": "",
"text": "According to the documentationThe Web SDK does not currently support Realm Database or Atlas Device Sync directly. However, it is compatible with apps that use sync through the MongoDB and GraphQL APIs.Atlas Device Sync OverviewBut I cannot get this working.",
"username": "Egidio_Caleiro"
}
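Since the Web SDK cannot open a synced local Realm, the usual workaround is to query Atlas through the SDK's remote MongoDB data access (or GraphQL). A rough sketch; the App ID, data source, database and collection names are placeholders:

```js
import * as Realm from "realm-web";

const app = new Realm.App({ id: "myapp-abcde" }); // placeholder App ID

async function run() {
  // Any enabled auth provider works; anonymous is simplest for a demo.
  const user = await app.logIn(Realm.Credentials.anonymous());

  // Remote data access: reads/writes go straight to Atlas over HTTPS,
  // so there is no offline store or sync inside the browser.
  const tasks = user
    .mongoClient("mongodb-atlas")   // linked data source name (assumption)
    .db("todo")                     // placeholder database
    .collection("tasks");           // placeholder collection

  await tasks.insertOne({ text: "hello from the browser", done: false });
  console.log(await tasks.find({ done: false }));
}
run().catch(console.error);
```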
]
| How to use MongoDB Realm sync feature in react-js pwa application | 2022-04-12T10:46:06.889Z | How to use MongoDB Realm sync feature in react-js pwa application | 4,017 |
null | [
"mongoose-odm"
]
| [
{
"code": "_history audit_logsprofileaccountaudit_logs collection\n[\n {\n editedBy: new Binary(Buffer.from(\"a927c2893cc64963b67f560c81a5dffa\", \"hex\"), 4),\n operation: 'addNewUser',\n _id: new ObjectId(\"62dd9815b11c3e3e3d01a41c\"),\n createdAt: 2022-07-24T19:05:57.855Z,\n changes: [\n {\n // 'profile' fullDocument captured by event stream\n },\n {\n // 'account' fullDocument captured by event stream \n }\n ]\n }\n...\n]\nprofileaccount_historylookup",
"text": "I’m building a wiki with versioning system similar to Wikipedia and GitHub, based on Change streams.Depending on the user action, it may result of one or more changes to a single or multiple _history collections. (we’re implement something like Type 4 of the Slowly Changing Dimension pattern).I’m planning to create an audit_logs collection to keep track changes as a unit (analogous to a git merge request). In each document, there’s an array of one or more change event documents.In the example below, if someone adds a new user to the system, it results in a change in the profile and the account collection, both of which have different schemas.Is it a common practice to embed documents with different schemas in an array?Initially, I tracked changes to profile and account in 2 separate _history collections but joining 3 collections with lookup is getting a bit out of hand or if we want to rollback a change.",
"username": "V11"
},
{
"code": "",
"text": "Is it a common practice to embed documents with different schemas in an array?It all depends of your use-cases. I think that for your situation it is indeed a very good way to implement your requirements.",
"username": "steevej"
},
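A rough Node.js sketch of how such an audit document could be assembled from change stream events; the database name and the correlationId grouping key are assumptions for illustration, not part of the original design:

```js
const { MongoClient } = require('mongodb');

async function watchAndAudit(uri) {
  const client = await MongoClient.connect(uri);
  const db = client.db('app');                    // placeholder database name
  const auditLogs = db.collection('audit_logs');

  // Watch only the collections that participate in a logical change.
  const stream = db.watch(
    [{ $match: { 'ns.coll': { $in: ['profile', 'account'] } } }],
    { fullDocument: 'updateLookup' }
  );

  for await (const event of stream) {
    // Group related events into one audit document; keying by a shared
    // correlationId carried in the documents is an assumed convention.
    await auditLogs.updateOne(
      { operation: 'addNewUser', correlationId: event.fullDocument?.correlationId },
      {
        $setOnInsert: { createdAt: new Date() },
        $push: { changes: event.fullDocument }
      },
      { upsert: true }
    );
  }
}
```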
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Array of different subdocument schemas? | 2022-07-25T11:21:18.205Z | Array of different subdocument schemas? | 1,124 |
null | [
"replication",
"mongodb-shell"
]
| [
{
"code": "",
"text": "Hi,Currently we are trying to install mongodb on our k8s cluster using helm chart(bitnami/mongodb).\nMongo details ( repository: bitnami/mongodb, tag: 5.0.8-debian-10-r24)We are pretty much using the standard values with replicaset architecture and 3 replicaCount. However the setup is not working. Normally I would expect the cluster to work right away with these settings however when we go into the nodes and run rs.status() from mongosh we are getting back “MongoServerError: no replset config has been received” error in each of the nodes.Does anybody else having the same issue? Any help would be appreciated.Thanks.",
"username": "Can_Ertel"
},
{
"code": "",
"text": "Most likely rs.initiate() and rs.add() was not executed.",
"username": "steevej"
},
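For completeness, the one-off initialisation run from mongosh against a single pod; the host names follow the headless-service pattern that appears later in this thread:

```js
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-mongodb-0.mongo-mongodb-headless.svc.cluster.local:27017" },
    { _id: 1, host: "mongo-mongodb-1.mongo-mongodb-headless.svc.cluster.local:27017" },
    { _id: 2, host: "mongo-mongodb-2.mongo-mongodb-headless.svc.cluster.local:27017" }
  ]
});
rs.status(); // members should reach PRIMARY/SECONDARY once DNS/networking is correct
```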
{
"code": "",
"text": "Hey stevej.Thanks for the reply. You are right. I am making some more way with rs.initiate and rs.add (although still havent been able to get it working yet. )Normally I would expect the initial setup to just work when you specify the number of replicas in the helm chart values, however it only just starts up the pods it seems. After that setting up the replicaSet is still seems to be a manual operation. I am planning to do the replica setup with a post deployment script.I will update this post if I manage to get this to work.Cheers",
"username": "Can_Ertel"
},
{
"code": "",
"text": "still havent been able to get it working yetWhat are the new issues?",
"username": "steevej"
},
{
"code": "",
"text": "Hello stevej,So I managed to get it working. The issue I was having turned out to be related to networking so not mongo related.And to get it to work with the Helm chart I ended up doing the initialization manually in the initdbScripts. After adding this to the initdbScripts in the helm chart it is working now.initdbScripts:\nsetup_replicaset_script.js : |\nrs.add(“mongo-mongodb-0.mongo-mongodb-headless.svc.cluster.local:27017”)\nrs.add(“mongo-mongodb-1.mongo-mongodb-headless.svc.cluster.local:27017”)\nrs.add(“mongo-mongodb-2.mongo-mongodb-headless.svc.cluster.local:27017”)Thanks a lot for the help.Cheers",
"username": "Can_Ertel"
},
{
"code": "",
"text": "By the way I added this into the values.yml file that I am passing in for the install, not the helm chart of course.",
"username": "Can_Ertel"
},
{
"code": "",
"text": "@Can_ErtelHello, I have mostly the same issue\nI deployed bitnami mongo chart with a replica set architecture type. I will use mongo only inside the cluster\nCan you please tell me how can I connect to the DB inside the cluster?\nWhat modifications you did to the values file?\nThanks",
"username": "Mykola_Buhryk"
}
]
| Unable to install ReplicaSet onto k8s using Helm | 2022-05-31T04:32:10.711Z | Unable to install ReplicaSet onto k8s using Helm | 4,993 |
null | [
"aggregation",
"golang"
]
| [
{
"code": "db.tbl. update({}, [{$set: {'labels_bak': '$labels'}}], {\"multi\": true})\nupdate := bson.M{\n\t\"$set\": bson.M{\n\t\t\"labels_bak\": \"$labels\",\n\t},\n}\n",
"text": "I used Mongo version 4.2.3, go-driver version 1.9.1\nI succeeded when I executed the command in the Mongo shell, but failed consistently when I used go-driver\nHere are the commands I executed in the Mongo shell:Here’s The go code I’m writing:",
"username": "Hu_XiaoYu"
},
{
"code": "filter := bson.D{{}}\npipe := bson.D{{ \"$set\", bson.D{{ \"labels_bak\", \"$labels\"}} }}\nupdateResult, err := collection.UpdateMany(ctx, filter, mongo.Pipeline{ pipe })\nmongo.Pipeline",
"text": "Hello @Hu_XiaoYu, welcome to the MongoDB Community forum!You can try this code:Note the usage of the mongo.Pipeline when using the Updates with Aggregation Pipeline.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you for your answer. It was very helpful to me!",
"username": "Hu_XiaoYu"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to assign a value to a field using $set in a go-driver? | 2022-07-25T06:43:46.501Z | How to assign a value to a field using $set in a go-driver? | 2,017 |
null | [
"dot-net"
]
| [
{
"code": "RealmChangedPropertychangedSubscribeForNotificationsAn unhandled exception of type 'System.IO.FileLoadException' occurred in Realm.dll\n`Could not load file or assembly \"System.Runtime.CompilerServices.Unsafe`\n",
"text": "I am listening for Property changes in my Desktop Bridge .NET application like described here\nUsing the RealmChanged event works fine, however when adding a Propertychanged Event, or subscribing using the SubscribeForNotifications i get an exception thrown:The object and query i am calling the events from are fine, what is causing this? Havent found anything similar on the forums yet",
"username": "Johannes_Tscholl"
},
{
"code": "",
"text": "Hi,Is there a test project that you could send us to investigate what’s happening?",
"username": "Andrea_Catalini"
},
{
"code": "PartitionSyncConfigurationvar writeConfig = new PartitionSyncConfiguration(writePartition, RealmApp.CurrentUser);\n//remove encryptionkey to fix\nwriteConfig.EncryptionKey = key;\n",
"text": "I was able to track this problem down.\nWhen using an encryptionkey in a PartitionSyncConfiguration this exception is thrown, when adding an PropertyChanged event. I am going to implement my own encryption.",
"username": "Johannes_Tscholl"
},
{
"code": "",
"text": "Desktop Bridge .NETI’m glad to hear that you could figure your issue out. However, we’d still like to have a look at this specific use case. Would you mind sending a minimal repro case? We’d really appreciate.Andrea",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "Dropbox is a free service that lets you bring your photos, docs, and videos anywhere and share them easily. Never email yourself a file again!",
"username": "Johannes_Tscholl"
}
]
| PropertyChanged causes System.IO.FileLoadException | 2022-07-21T13:55:27.860Z | PropertyChanged causes System.IO.FileLoadException | 1,461 |
null | [
"replication"
]
| [
{
"code": "",
"text": "Hi Team,Sorry if it’s a duplicate question.We do have one primary and one secondary replication setup. I’m facing an issue that the secondary server is going into a recovery state frequently. Could you please help me to know the major and exact reason for the cause so that I can prevent the issue in the future?\nI’ve stopped the mongod service on secondary and deleted the data and again restarted the service. Now the secondary state show as “STARTUP2” until the sync gets completed. Not sure what I’m missing. Even the last time when I followed the same process the secondary went into a Recovery state from “STARTUP2” once the sync got completed. The primary and secondary servers are having the same config. Please let me know if you require any more details.\nBelow is the error I could find in the mongod.log.“ctx”:“ReplCoordExtern-0”,“msg”:“Recreating cursor for oplog fetcher due to error”,“attr”:{“lastOpTimeFetched”:{“ts”:{\"$timestamp\":{“t”:1657624021,“i”:126}},“t”:1},“attemptsRemaining”:1,“error”:“CappedPositionLost: Error while getting the next batch in the oplog fetcher :: caused by :: CollectionScan died due to position in capped collection being deleted. Last seen record id: RecordId(7119440959259017343)”}}",
"username": "Sravan_Chowdary_Bala"
},
{
"code": "",
"text": "Hi,There are 2 dimensions to be considered in the replication proccess: the system resources available (cpu, ram, disk…) and the oplog size available.\nI see you have the same config in your servers, but if the Secondary receives a lot of read operations, for example, this can impact the replication performance, as both can be I/O intensive.If I understood correctly, you’ve tried to resync the node, but as soon as the STARTUP2 finishes, it goes into RECOVERY state again.\nThis happens because the time spent during the resync is greater than the oplog window size in the secondary node. So, when it finishes the resync, it has to replay the oplog, but the first positions are already lost, replaced for newer ones.If you see no server overhead that can be tuned to fasten the replication proccess, I suggest you set a larger OPLOG SIZE in both primary and secondary nodes, so you can have a oplog window that is larger than the full replication time.",
"username": "Felipe_Esteves1"
},
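As a concrete follow-up to that advice, a short mongosh sketch; the 16000 MB figure is only an example value, pick one that makes the oplog window longer than a full initial sync:

```js
// On each member, check how much wall-clock time the current oplog covers:
rs.printReplicationInfo(); // "log length start to end" is the oplog window

// If that window is shorter than the initial sync takes, grow the oplog.
// Size is given in megabytes; 16000 (~16 GB) is just an example.
db.adminCommand({ replSetResizeOplog: 1, size: 16000 });
```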
{
"code": "",
"text": "Sorry, if it’s a dumb question. I’m very new to MongoDB.How to consider the OPLOG SIZE to be set?",
"username": "Sravan_Chowdary_Bala"
}
]
| What is the major reason for secondary server going into Recovery state | 2022-07-13T06:13:21.569Z | What is the major reason for secondary server going into Recovery state | 2,559 |
null | []
| [
{
"code": "",
"text": "Hi,\nwe started to shard a collection. How can i unstarstand distribution of data is in progres or finished and how much take time?\nThank you.",
"username": "baki_sahin1"
},
{
"code": "",
"text": "Hi @baki_sahin1You can check the status of the balancer by using the commands to Manage Sharded Cluster Balancer.how much take time?Do you mean how long before the balancer would be done balancing? Unfortunately I don’t think it’s known from the outset since if the cluster is also receiving writes, it’s possible that the balancer can go on for a while. It’s also depends on how many chunks it needs to balance, and (to some degree) depends on your MongoDB version since there are improvements in newer versions.If you’re new to administering MongoDB, you might want to checkout the free MongoDB University course M103 Basic Cluster Administration. It should get you up to speed on getting a cluster up and running.Best regards\nKevin",
"username": "kevinadi"
},
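The specific mongosh helpers being referred to, for quick reference; the namespace in the last line is a placeholder:

```js
// Run against a mongos:
sh.getBalancerState();    // true if the balancer is enabled
sh.isBalancerRunning();   // whether a balancing round is in progress right now
sh.status();              // per-shard chunk counts for every sharded collection

// Per-collection view of how evenly the data is spread (placeholder namespace):
db.getSiblingDB("mydb").mycoll.getShardDistribution();
```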
{
"code": "",
"text": "Thank you Kevin.\nRegards.",
"username": "baki_sahin1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sharding status of a collection | 2022-07-19T15:50:29.751Z | Sharding status of a collection | 1,638 |
[
"java",
"crud"
]
| [
{
"code": "mongoClient = MongoClients.create(uri);\ndb = mongoClient.getDatabase(\"testing\");\n\nMongoCollection<Document> hobby = db.getCollection(\"hobby\");\nhobby.drop();\nInsertOneResult gaming = hobby.insertOne(new Document(\"name\", \"gaming\"));\nInsertOneResult running = hobby.insertOne(new Document(\"name\", \"running\"));\n\nMongoCollection<Document> coll = db.getCollection(\"coll\");\ncoll.drop();\ncoll.insertMany(\n Arrays.asList(\n new Document(\"name\", \"Max\").append(\"age\", 24).append(\"hobby_id\", gaming.getInsertedId()),\n new Document(\"name\", \"Alex\").append(\"age\", 25).append(\"hobby_id\", gaming.getInsertedId()),\n new Document(\"name\", \"Claire\").append(\"age\", 19).append(\"hobby_id\", running.getInsertedId()))\n);\n",
"text": "I’m trying to use the java connector to create a link between two tablesAs you can see, it detects the hobby_id as an objectId, but doesn’t link the tables together.\nIs there some other way to do this without using POJO?",
"username": "Rafe"
},
{
"code": " Object getRef(InsertOneResult result) {\n MongoCollection<Document> hobby = db.getCollection(\"hobby\");\n return hobby.find(eq(\"_id\", result.getInsertedId())).first();\n }\n\n public MongoDbTesting(String uri) {\n\n mongoClient = MongoClients.create(uri);\n db = mongoClient.getDatabase(\"testing\");\n\n MongoCollection<Document> hobby = db.getCollection(\"hobby\");\n hobby.drop();\n InsertOneResult gaming = hobby.insertOne(new Document(\"name\", \"gaming\"));\n InsertOneResult running = hobby.insertOne(new Document(\"name\", \"running\"));\n\n MongoCollection<Document> person = db.getCollection(\"person\");\n person.drop();\n person.insertMany(\n Arrays.asList(\n new Document(\"name\", \"Max\").append(\"age\", 24).append(\"hobby\", getRef(gaming)),\n new Document(\"name\", \"Alex\").append(\"age\", 25).append(\"hobby\", getRef(gaming)),\n new Document(\"name\", \"Claire\").append(\"age\", 19).append(\"hobby\", getRef(running)))\n );\n }\n\n",
"text": "This is what I’ve come up with to fix this issue:",
"username": "Rafe"
}
]
| Trying to use ObjectRef with Java Connector to link two tables | 2022-07-22T08:48:03.388Z | Trying to use ObjectRef with Java Connector to link two tables | 1,559 |
|
null | [
"app-services-user-auth"
]
| [
{
"code": "",
"text": "Okay I honestly had a hard time to phrase the question.That’s why I will just describe what I want to achieve:I’m using the Realm Web SDK.\nWhen a user opens my app for the first time he should be automatically signed in to mongodb. Maybe through his device ID or some random ID that is generated through a function.\nThe user should be able to use basic functionality of the app from the go (but restricted of course) and then he can later decide if he wants to create an account.I saw this approach in free to play games which let you start to play their game without any (visible) registration process and your progress and data is still saved in the database. But if you want to do more specific things like joining a guild you are prompted to make an account.How would this be possible with Mongodb Realm and the Web SDK?PS: I know some intermediate Javascript but I’m not a super skilled backend binary code hacker.",
"username": "Nilom_N_A"
},
{
"code": "",
"text": "I think the best way to do this would be to use Anonymous authentication for new users. You can limit anonymous users in your rules while still letting them interact with the app. When it comes time for them to set up a proper account (e.g. email-password auth) you can use the built-in user account linking feature to associate all of their anonymous activity with their account.",
"username": "nlarew"
},
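A minimal sketch of what the anonymous-login-plus-linking flow above can look like with the Realm Web SDK (the app ID and credentials are placeholders, and the email/password identity must already be registered through the emailPasswordAuth API):

    import * as Realm from "realm-web";

    const app = new Realm.App({ id: "<your-app-id>" });

    // First launch: sign the visitor in anonymously so they can use the app right away
    const anonUser = await app.logIn(Realm.Credentials.anonymous());

    // Later, when they decide to create a real account, link it to the same user
    await anonUser.linkCredentials(
      Realm.Credentials.emailPassword("[email protected]", "a-strong-password")
    );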
{
"code": "",
"text": "Thank you very much for the response!I thought about this. Do I then just store their account data in their user custom data?Because I don’t know how I can distinguish anonym users.Maybe I’m paranoid but for safety I split the users data between 3 collections.One that only the server can read and write. For stored purchases, account status etc. .One for private user data that only the owner can read and write. Like friend lists.And one for public user data. Like for the public user profile.Is this bad? Should I just store everything in the custom user data and just modify the field rights?",
"username": "Nilom_N_A"
}
]
| How would you implement an automatic registration and login with device id? | 2022-07-24T01:35:30.357Z | How would you implement an automatic registration and login with device id? | 2,295 |
null | [
"node-js",
"crud"
]
| [
{
"code": " Schema.invoiceModel.updateOne({id: req.params.id},\n // mogelijks gebeurt toch automatisch een insert!!!\n {\n $set:\n {\n \"history.$[elem].status\": true\n }\n }\n , {\n arrayFilters: [\n {\"elem.name\": \"voorschot betaald op\"}\n ]\n }).then(result => {\n res.status(201).json(\n )\n }).catch(err => {\n console.log(err)\n res.status(500).json({\n error: err.errors\n })\n })\n",
"text": "In the database there is one document with an array history and inside that array 3 nested documents (objects) with a property “name” and a property “status”. 2 of these 3 documents (objects) have a value false for the status property.This is the NodeJS code that is executed to update the document:Why the document doesn’t update? The values of the status stays on false instead of becoming true…",
"username": "Paul_Holsters"
},
{
"code": "{ id: req.params.id }req.params.idmongosh// Sample document:\n{\n id: 1,\n history: [\n { name: \"alpha\", status: false },\n { name: \"beta\", status: true },\n { name: \"zeta\", status: false }\n ]\n}\n// Update method\ndb.test.updateOne(\n { id: 1 }, \n { $set: { \"history.$[elem].status\": true } },\n { arrayFilters: [ { \"elem.name\": \"zeta\" } ] }\n)\n",
"text": "Hello @Paul_Holsters,Why the document doesn’t update? The values of the status stays on false instead of becoming true…The syntax of the code you had posted looks correct to me. It is difficult to tell what is the problem.First, you can check if there is document in the collection with the query filter:\n{ id: req.params.id }Also, verify the value in the variable req.params.id.I had tried some code verifying the syntax using an example, in the mongosh, and it works well:",
"username": "Prasad_Saya"
}
]
| Property of object in a nested array doesn't update when working with arrayFilters | 2022-07-24T20:21:43.623Z | Property of object in a nested array doesn’t update when working with arrayFilters | 2,691 |
null | [
"replication"
]
| [
{
"code": "",
"text": "I ran a mongo instance as a stand alone on Linux environment and it has 5 Tb of data. After that I started replication and added two instances in replica set, data is syncing from initial stage. And I have seen the error in the member logs like below. So please suggest a solutions\nError: We are too stale to use 127.0.0.1:27017 as a sync source. Blacklisting this sync source because our last fetched timestamp is before their earliest timestamp for until : 2022-01-06T12:43:01.321+1530",
"username": "Srikanth_Bommasani"
},
{
"code": "",
"text": "Hi @Srikanth_Bommasani welcome to the community!Have you had success in creating the replica set yet?That message seems to imply that the secondary has fell off the oplog and cannot sync to it anymore (see Resync a Member of a Replica Set). The typical cause and fix was mentioned in the page:A replica set member becomes “stale” when its replication process falls so far behind that the primary overwrites oplog entries the member has not yet replicated. The member cannot catch up and becomes “stale.” When this occurs, you must completely resynchronize the member by removing its data and performing an initial sync.If the oplog is too small to cover the initial sync time, you might need to Change the Size of the Oplog. The output of rs.printReplicationInfo() should give you a hint on the time the oplog covers.Best regards\nKevin",
"username": "kevinadi"
},
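As a rough outline of the resync path described above (check the linked manual pages for the authoritative steps; the oplog size is an example value):

    // 1. See how much time the primary's oplog covers
    rs.printReplicationInfo()

    // 2. If that window is smaller than a full initial sync takes, grow it (size in MB)
    db.adminCommand({ replSetResizeOplog: 1, size: 51200 })

    // 3. To resync the stale member: stop its mongod, empty its dbPath,
    //    then start it again so it performs a fresh initial sync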
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| BlackListing Sync Source | 2022-07-18T07:47:08.465Z | BlackListing Sync Source | 4,016 |
[
"replication",
"performance"
]
| [
{
"code": "",
"text": "Hi,We’ve had instances in which our MongoDB cluster seems to be holding up for an extended period of time, not performing almost any action and consequently causing production job freezes/delays, despite having queries in its queue.Our setup :\nMongoDB v5.0.6 hosted on AWS EC2 (r6g.8xLarge), Primary-Secondary-Arbiter architecture.Some key aspects:Over the weekend we have a job which rebuilds most of our collections, and that one is moving ~100M documents.\nBy “rebuild” I mean creating a new collection, inserting data, renaming it and deleting the old collection.\nThe issue has never appeared on this weekend rebuild, which is much bigger than our daily jobs, where the issue is not consistently happening, but is common enough to heavily disrupt production.Quick note:\nWe’ve been running MongoDB v4.0 in production for a few years and this has never happened. Only when we upgraded to MongoDB v5 we started seeing this, and no change to our job infrastructure was done at all.Here’s some more details.\nWhen the issue comes up it looks like queries are just sitting in the MongoDB queue and are not being executed, then at some point something snaps and they all get executed very quickly.\n\nimage1452×2946 432 KB\nHowever we are not actively building indexes on populated collections, and we also don’t see any ongoing index builds, automatically checked every 2 seconds using what’s shown here: Index Builds on Populated Collections — MongoDB Manual and db.currentOp() — MongoDB Manual.For jobs where we run inserts, we usually create an empty collection, create the indexes whilst it’s empty, insert the data, and the rename the collection. So perhaps something is happening behind the hood.Any idea what the issue could be?Thanks!",
"username": "Marco_Bellini"
},
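One way to see what the queued operations are actually waiting on while such a stall is happening; a hedged mongosh sketch, not something from the original post:

    // Active operations running for more than 5 seconds, with their lock/flow-control wait info
    db.currentOp({ active: true, secs_running: { $gt: 5 } })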
{
"code": "",
"text": "Hi @Marco_Bellini welcome to the community!This is a very detailed report on a peculiar issue, which is to say that to be able to determine the root cause, it’s likely to involve a personalized, deep troubleshooting session, unfortunately However, I may be able to provide some pointers:Best regards\nKevin",
"username": "kevinadi"
}
]
| MongoDB randomly locking up | 2022-07-19T16:05:32.235Z | MongoDB randomly locking up | 2,038 |
|
null | [
"python",
"atlas-cluster"
]
| [
{
"code": "with open('fandata.json') as file:\n\n f = json.load(file)\n\n Collection.insert_many(f)\n\nfile.close()\n",
"text": "This is my code to write into database***************\nimport json\nfrom pymongo import *\ndns = MongoClient(“mongodb+srv://m001-student:[email protected]/?retryWrites=true&w=majority”)\ndef InsertIntoDb(database,coll):\ndb = dns[database]\nCollection = db[coll]InsertIntoDb(‘company’,‘customers’)\nThis is the error i am getting****************\nraise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: sandbox-shard-00-00.bihwd.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129),sandbox-shard-00-02.bihwd.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129),sandbox-shard-00-01.bihwd.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129), Timeout: 30s, Topology Description: <TopologyDescription id: 62d95e5bfd1449c82827d381, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (‘sandbox-shard-00-00.bihwd.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘sandbox-shard-00-00.bihwd.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)’)>, <ServerDescription (‘sandbox-shard-00-01.bihwd.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘sandbox-shard-00-01.bihwd.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)’)>, <ServerDescription (‘sandbox-shard-00-02.bihwd.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘sandbox-shard-00-02.bihwd.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)’)>]>i am trying to write a json file called fandata.json. where i am loading the data and using .insert_many() function to insert.\nplease do help me",
"username": "Sai_Gokul_K_P"
},
{
"code": "",
"text": "Hi @Sai_Gokul_K_PAre you still seeing this error?There’s a topic with a similar error message to what you’re seeing some time ago. Perhaps you can try the solution in that topic? https://www.mongodb.com/community/forums/t/ticket-connection-ssl-certificate-verify-failed/91943If it still doesn’t work out for you, could you post:Best regards\nKevin",
"username": "kevinadi"
}
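For other readers hitting the same certificate-expiry error: it is often resolved by pointing PyMongo at an up-to-date CA bundle. A hedged Python sketch (the connection string is a placeholder):

    # pip install --upgrade certifi pymongo
    import certifi
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority",
        tlsCAFile=certifi.where(),  # use certifi's current CA bundle instead of the system one
    )
    print(client.admin.command("ping"))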
]
| I am getting selection timeout error when i am trying to load the json file into the database | 2022-07-21T14:19:43.978Z | I am getting selection timeout error when i am trying to load the json file into the database | 1,602 |
null | [
"storage"
]
| [
{
"code": "",
"text": "Hi.\nFound a statement in docs-“WiredTiger cache uncompressed collections only, compressed collections cached by OS cache”.\nSo does that mean that if all the our collections compressed (with ZSTD) WiredTiger cache is useless and better to minimize its size to reuse memory for OS disk cache/etc?",
"username": "SeventhSon"
},
{
"code": "",
"text": "Hey @SeventhSon welcome to the community!The term “cache” for WiredTiger means more than cache, actually. It also serves as WiredTiger’s working memory, so it’s not as straightforward.If MongoDB requests data to WiredTiger, first it checks if the data exists in the cache or not. If not, it will fetch it from disk, where the OS typically will cache disk fetches in what is termed the filesystem cache. Note that WiredTiger & MongoDB have no control over this OS filesystem cache and it can’t really influence what’s being cached there.In basic terms, the more WiredTiger cache you have, the larger WiredTiger’s working memory is. And (ideally) if your working set is correctly sized for the amount of RAM you have, most of the data you need would either be in WiredTiger’s working memory, or the filesystem cache (last resort before hitting the disk). Disk is often the slowest part in the server, and you want to hit it as few times as possible.The default WiredTiger cache size of ~50% of RAM was selected as a compromise: you want as much working memory for WiredTiger as needed by your workload, but you also need to save some space in RAM for the filesystem cache, the OS’s requirements, and MongoDB’s requirements (RAM for connections, queries, aggregation, etc. are not part of WiredTiger cache).Generally, the default WiredTiger cache size works well for most workloads. Honestly I have yet to see a case where a server’s performance can be increased by changing this value. From my experience so far, if there are issues with server performance, usually there are slow (e.g. unindexed) queries, or the server is overwhelmed with work that it needs more hardware.I hope the explanation make sense Best regards\nKevin",
"username": "kevinadi"
},
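If it helps, here is a hedged sketch for inspecting and, only if measurements justify it, tuning the cache discussed above:

    // Inspect usage: compare "bytes currently in the cache" with "maximum bytes configured"
    db.serverStatus().wiredTiger.cache

    // mongod.conf (YAML) – override only with a measured reason:
    // storage:
    //   wiredTiger:
    //     engineConfig:
    //       cacheSizeGB: 4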
{
"code": "",
"text": "Hi Kevin,Thanks for detailed explanation. I mostly want to understand what is the difference between comressed and uncompressed collections from Mongo/WiredTiger engine side.\nIf collection compressed does Mongo just unzip it into WiredTiger cache and operate the same way as uncompressed? We see clearly high CPU load, obviously compressing/decompressing requires more CPU resources, just want to clarify how Mongo/WiredTiger work with compressed collections.",
"username": "SeventhSon"
},
{
"code": "",
"text": "If collection compressed does Mongo just unzip it into WiredTiger cache and operate the same way as uncompressed?Basically yes. WiredTiger would need to work with uncompressed data in its cache.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| WiredTiger cache and compressed collections | 2022-07-21T01:32:52.027Z | WiredTiger cache and compressed collections | 1,825 |
null | []
| [
{
"code": "exports = async function(payload,response) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const eventsdb = mongodb.db(\"CPaaS_Decision_Engine\");\n const eventscoll = eventsdb.collection(\"CPaaS_Vendors\");\n var body = JSON.parse(payload.body.text());\n var TransformX_Vendor_Name = body.TransformX_Vendor_Name;\n const result= await eventscoll.count({ \"TransformX_Vendor_Name\": TransformX_Vendor_Name });\n\n if(result.count > 0 ) {\n response.setBody( JSON.stringify({TransformX_Vendor_Name}) ); \n } else {\n const insert= await eventscoll.insertOne(payload.query);\n var id = result.insertedId.toString();\n}\n",
"text": "So, I’m using Realm to integrate my Google Sheet data with the MongoDB Database.I basically want to compare each row of Google Sheet with MongoDB Documents on the basis of ID and if there’s not a match then I want it to insert a document and if the id matches with one of the existing documents then I want it to check and update values.This is what I’ve done so far (I’ve just done it for TransformX_Vendor_Name for now but I want it to check for all the columns and not just one ):I’m new to MongoDB Realm. Any help would be really helpful, thank you!",
"username": "Munnazzah_Aslam"
},
{
"code": "exports = async function(payload) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const eventsdb = mongodb.db(\"my_db\");\n const eventscoll = eventsdb.collection(\"my_collection\");\n const result= await eventscoll.insertOne(payload.query);\n var id = result.insertedId.toString();\n if(result) {\n return JSON.stringify(id,false,false); \n }\n return { text: `Error saving` };\n}\nvar options = {\n 'method' : 'post',\n 'payload' : formData\n };\n\nvar insertID = UrlFetchApp.fetch('your_url', options);\neventIdCell.setValue(insertID); // Insert the new event ID\n",
"text": "I would suggest sending the id of the document as a response to your google sheet.And in your google sheet script, save the new ID. If your Id is in google sheet, it exists in the database :Also I tried for a few hours to do the verification in Mongo but without success.",
"username": "Timothee_Wright"
}
]
| MongoDB Integration with Google Sheets to Update documents | 2021-01-19T20:05:35.268Z | MongoDB Integration with Google Sheets to Update documents | 2,805 |
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "Hello people,\nI’m a beginner developer, creating a social media platform and I wanted to store posts as collections and comments as documents of each collection. Is it possible with mongoose to pass the collection’s name as a parameter from the client? <<function createCollection = (postTitle) => {\nmongoose.model(postTitle, postSchema);}\nStructure:\nallPosts(database) => singlePost(collection) => comment(document)\nIs there a better way to structure this?\nThank you!!",
"username": "Amitay_Cohen"
},
{
"code": "{\n_id : ObjectID(\"...\"),\npostId : 12234,\ntitle: \"Post_01\",\ncomments: [\n {\n commentId: 23213\n userName: \"Tom Cruise\",\n comment: \"Comment_01,\n dateofPosting: ISODate(\"2020-05-18T14:10:30.000Z\")\n ...\n },\n ]\n...\n}\n",
"text": "Hi @Amitay_Cohen,Welcome to the MongoDB Community forums I would rather suggest you two different approaches to design your schema for your social media platform: Approach 1: Make two separate collections one for posts and another for comments and use $lookup to join both collections while fetching the data. Approach 2: Design a schema in which make comments a embedded document within the post document, as shown below:For example, in your application, a post can have about 10 to 20 comments, with each comment having a few of text. This will help think about storing all the comment data within each post itself. Then the schema design and the queries will be simple as all the related data is stored together.The query will fetch post-related data like title, content, data, etc., and also all the comments associated with the post. You can use $projection to control the fields required on a particular page of the application. You can also, query for specific comments.If you have any doubts, please feel free to reach out to us.Best Regards,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
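For Approach 1 above, the $lookup join could look roughly like this (collection and field names here are assumptions for illustration only):

    db.posts.aggregate([
      { $match: { _id: postId } },
      { $lookup: {
          from: "comments",        // the separate comments collection
          localField: "_id",       // the post's _id
          foreignField: "postId",  // reference stored on each comment
          as: "comments"
      } },
      { $project: { title: 1, content: 1, comments: 1 } }
    ])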
{
"code": "",
"text": "Thank you very much @Kushagra_Kesav!\nMy application is supposed to store videos. A user can upload a video, and other users can reply to this video with their own videos. Similar to Twitter, but with videos instead of tweets, and with only 2 layers (you cant reply with a video to a replied video). I hope that makes sense.\nI think that for such an application, it might be a good fit to store prime videos and replied videos in separate collections?\nThanks a lot!",
"username": "Amitay_Cohen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Passing collection's name as a parameter from the client? | 2022-07-18T13:41:51.603Z | Passing collection’s name as a parameter from the client? | 2,846 |
null | [
"connecting"
]
| [
{
"code": "Traceback (most recent call last):\n File \"/usr/local/lib/python3.9/site-packages/pymongo/pool.py\", line 1278, in _get_socket\n sock_info = self.sockets.popleft()\nIndexError: pop from an empty deque\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"<console>\", line 1, in <module>\n File \"/usr/local/lib/python3.9/site-packages/pymongo/cursor.py\", line 647, in __getitem__\n for doc in clone:\n File \"/usr/local/lib/python3.9/site-packages/pymongo/cursor.py\", line 1207, in next\n if len(self.__data) or self._refresh():\n File \"/usr/local/lib/python3.9/site-packages/pymongo/cursor.py\", line 1124, in _refresh\n self.__send_message(q)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/cursor.py\", line 999, in __send_message\n response = client._run_operation_with_response(\n File \"/usr/local/lib/python3.9/site-packages/pymongo/mongo_client.py\", line 1368, in _run_operation_with_response\n return self._retryable_read(\n File \"/usr/local/lib/python3.9/site-packages/pymongo/mongo_client.py\", line 1464, in _retryable_read\n with self._slaveok_for_server(read_pref, server, session,\n File \"/usr/local/lib/python3.9/contextlib.py\", line 117, in __enter__\n return next(self.gen)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/mongo_client.py\", line 1309, in _slaveok_for_server\n with self._get_socket(server, session, exhaust=exhaust) as sock_info:\n File \"/usr/local/lib/python3.9/contextlib.py\", line 117, in __enter__\n return next(self.gen)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/mongo_client.py\", line 1246, in _get_socket\n with server.get_socket(\n File \"/usr/local/lib/python3.9/contextlib.py\", line 117, in __enter__\n return next(self.gen)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/pool.py\", line 1231, in get_socket\n sock_info = self._get_socket(all_credentials)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/pool.py\", line 1281, in _get_socket\n sock_info = self.connect(all_credentials)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/pool.py\", line 1197, in connect\n sock_info.check_auth(all_credentials)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/pool.py\", line 793, in check_auth\n self.authenticate(credentials)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/pool.py\", line 810, in authenticate\n auth.authenticate(credentials, self)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/auth.py\", line 673, in authenticate\n auth_func(credentials, sock_info)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/auth.py\", line 591, in _authenticate_default\n return _authenticate_scram(credentials, sock_info, 'SCRAM-SHA-1')\n File \"/usr/local/lib/python3.9/site-packages/pymongo/auth.py\", line 295, in _authenticate_scram\n res = sock_info.command(source, cmd)\n File \"/usr/local/lib/python3.9/site-packages/pymongo/pool.py\", line 683, in command\n return command(self, dbname, spec, slave_ok,\n File \"/usr/local/lib/python3.9/site-packages/pymongo/network.py\", line 159, in command\n helpers._check_command_response(\n File \"/usr/local/lib/python3.9/site-packages/pymongo/helpers.py\", line 164, in _check_command_response\n raise OperationFailure(errmsg, code, response, max_wire_version)\npymongo.errors.OperationFailure: Authentication failed., full error: {'ok': 0, 'errmsg': 'Authentication failed.', 'code': 8000, 'codeName': 'AtlasError'}\n",
"text": "Hey community,… realy tried hard to get it working but I need your help:\nI want to deploy following tech-stack on Kubernetes (Django, Celery, Redis, Flower, …) Problem: I can’t connect to my MongoDB-Atlas Cluster as this is the only one not deployed directly in Kubernetes.\nIt throws AtlasError 8000 when I want to open the connection:I found a solution that fixes the connection problem but opens new issues that I can’t resolve.\nConnection works when I change dnsPolicy of my django deployment from ClusterFirst to Default but this crashes the Kubernetes DNS which I need to connect to different Kubernetes Services (e.g. with SQL_HOST = postgres or CELERY_BACKEND = redis://redis-svc:6379/0). It seems like, that there are connection problems between Kubernetes-Cluster and Atlas. I had the feeling, that I need to open/allow internet traffic for my Deployment, but this should be alright as I am able to run requests to public webpages with Reponse 200.Does anyone of you have an idea how to fix it?",
"username": "Philipp_Wuerfel"
},
{
"code": "",
"text": "If you’re having DNS problems I recommend looking into whether the mongodb cluster connection string DNS SRV record might be the issue. You can get the legacy longer connection string that doesn’t use SRV in the connection modal under the legacy driver versions: this can be used with newer drivers",
"username": "Andrew_Davidson"
},
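For reference, the two connection string shapes mentioned above look roughly like this (hosts and options are placeholders; copy the real values from the Atlas connect dialog):

    # DNS SRV format (needs SRV/TXT DNS lookups to resolve)
    mongodb+srv://user:[email protected]/?retryWrites=true&w=majority

    # Standard ("legacy") format, listing every host explicitly
    mongodb://user:[email protected]:27017,cluster0-shard-00-01.abcde.mongodb.net:27017,cluster0-shard-00-02.abcde.mongodb.net:27017/?ssl=true&replicaSet=atlas-xxxxxx-shard-0&authSource=admin&retryWrites=true&w=majority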
{
"code": "",
"text": "If I ran it from Docker or tried a simple connection in a Python Script it worked.I manged to get it working in the Kubernetes Cluster by adding authSource=admin to the connection string.\nDB_URI=“mongodb+srv://username:[email protected]/mydatabase?retryWrites=true&w=majority&authSource=admin”The user itself does not have an admin role.\nIs this correct, that authSource=admin only refers to the authentication database which is by default “admin” on a MongoDB Cluster?If I got it right, following situation is the problem:Kubernetes Pods/Deployments change their IP, e.g. on recreation.My Pods run behind a service. The service acts as an access point to link traffic to the correct Pod. The service keeps track of the IP’s of the underlying Pods.My theory: The connection from PyMongo to Atlas worked but the response from Atlas back to PyMongo didn’t. Maybe because it takes an internal IP of the underlying Pod which is not accessible from outside.\nAdding “authSource=admin” to connection string seems to change the communication in a way that the response from Atlas can make it’s way back to PyMongo driver.Does someone have an idea why it works now?",
"username": "Philipp_Wuerfel"
},
{
"code": "authSourceauthSource=admin",
"text": "I ran into the same situation, but with a StatefulSet that doesn’t have any service backing it. It is strange because I had another StatefulSet application that had the exact same configuration for the mongo URI that worked fine without authSource and another that needed the authSource=admin param to make work. They were even running on the same K8s Node which rules out anything specific to the node proxy I think.Like in your case, when I ran it with the same config in a plain Docker container outside of K8s it worked fine.I’m using k3s so it might be something specific to that.",
"username": "Ben_Keith"
},
{
"code": "",
"text": "I’m facing the same issue and I’m using k3s.I tried adding authSource=admin to the URI but it didn’t work.I’ve posted it hereAny help is very much appreciated.",
"username": "Wilfred_Almeida"
}
]
| AtlasError 8000 when connecting from Kubernetes | 2021-05-10T14:06:33.042Z | AtlasError 8000 when connecting from Kubernetes | 8,137 |
[]
| [
{
"code": "",
"text": "I can’t figure out why there is unusually high data transfer (internet). In the 2 images attached, we have about 20GB/day of internet transfer, yet the network is showing spikes of 4mb/s for a short time. There is no way we have this much data in our database (3gb), so what could be causing MongoDB to say there is high transfer, and not showing up in their logs.Note: no change from last month in terms of product development / no new features, just spike in transfer.Screen Shot 2020-10-07 at 2.20.21 PM2260×206 20.7 KB Screen Shot 2020-10-07 at 2.19.56 PM776×432 38.1 KB",
"username": "Faheem_Gill"
},
{
"code": "",
"text": "Hi Faheem you may be better served asking for help via the in-UI chat support. I am not sure what this might be.-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hey Faheem. Did you figure this out? I am experiencing the same issue and cannot figure out why.",
"username": "Alex_Cyphus"
}
]
| Atlas AWS Data Transfer (Internet) has spiked for no reason | 2020-10-07T12:05:09.607Z | Atlas AWS Data Transfer (Internet) has spiked for no reason | 2,379 |
|
[
"atlas-device-sync",
"flexible-sync"
]
| [
{
"code": "$in$in",
"text": "In the Flexible Sync Permissions Guide, the Restricted News Feed use case uses the $in operator to allow permissions to any value in an array:But in what seems to me to be an analogous scenario in the Dynamic Collaboration use case, where I was expecting to also see the $in operator, it is not used:So I can better understand how this all works, can someone explain the distinction between the rule constructions of these two use cases? Thanks!",
"username": "Phil_Seeman"
},
{
"code": "",
"text": "@Ian_Ward perhaps? ",
"username": "Phil_Seeman"
},
{
"code": "subscribedTocollaboratorsowner_idcollaborators",
"text": "Hi Phil,The main difference between those two constructions is that in the Restricted News Feed example, the docs mention that the intention of this permission is to include documents whose authors have IDs in the user’s subscribedTo array. While the Dynamic Collaboration refers to a set of permissions where the user **may edit the document if the document’s collaborators array field contains their ID.In the Restricted News Feed section, the subject of the query is a string field (owner_id), checked against an array.In the Dynamic Collaboration, the subject is an array field (collaborators) checked against a string.I hope this helps.Regards,\nMar",
"username": "Mar_Cabrera"
},
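Put side by side, the two rule shapes being compared read roughly like this (a simplified sketch; the exact expansion syntax should be checked against the current Flexible Sync permissions guide):

    // Restricted News Feed: a string field checked against an array from custom user data
    { "owner_id": { "$in": "%%user.custom_data.subscribedTo" } }

    // Dynamic Collaboration: an array field checked against the user's id
    { "collaborators": "%%user.id" }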
{
"code": "",
"text": "Yes, this is very helpful - thanks, @Mar_Cabrera!",
"username": "Phil_Seeman"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Question about Flexible Sync operators | 2022-06-21T16:45:12.128Z | Question about Flexible Sync operators | 2,839 |
|
null | [
"installation",
"app-services-cli"
]
| [
{
"code": "npm install -g mongodb-realm-cli\n/usr/lib/node_modules",
"text": "Hi, I under Ubuntu 20.04.3 LTS with nodejs v16.10.0 and npm v6.14.13When I try to install the Realm CLI like this (with and without sudo) :I have this error :downloading “realm-cli” from “https://s3.amazonaws.com/realm-clis/realm_cli_rhel70_97239c6794575bad1486a178501366cea7e7d399_21_08_16_19_49_31/linux-amd64/realm-cli”\nfailed to download Realm CLI: Error: EACCES: permission denied, open ‘/usr/lib/node_modules/mongodb-realm-cli/realm-cli’\nat Object.openSync (node:fs:585:3)\nat /usr/lib/node_modules/mongodb-realm-cli/install.js:62:24\nat new Promise ()\nat requstBinary (/usr/lib/node_modules/mongodb-realm-cli/install.js:56:10)\nat Object. (/usr/lib/node_modules/mongodb-realm-cli/install.js:101:1)\nat Module._compile (node:internal/modules/cjs/loader:1101:14)\nat Object.Module._extensions…js (node:internal/modules/cjs/loader:1153:10)\nat Module.load (node:internal/modules/cjs/loader:981:32)\nat Function.Module._load (node:internal/modules/cjs/loader:822:12)\nat Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:79:12) {\nerrno: -13,\nsyscall: ‘open’,\ncode: ‘EACCES’,\npath: ‘/usr/lib/node_modules/mongodb-realm-cli/realm-cli’I checked the owner of /usr/lib/node_modules, it is “root:root”. I tried to change the owner, tried in sudo …Etc. Always the same error.Any idea?Thank you for your help",
"username": "Frederic_Meriot"
},
{
"code": "/usr/lib/node_modulesls -al /usr/lib/node_modules/mongodb-realm-cli/ls -al /usr/lib/node_modules/mongodb-realm-cli/realm-clisudochmod -R 777 /usr/lib/node_modules/mongodb-realm-cli/",
"text": "Hi @Frederic_Meriot,Seems like your /usr/lib/node_modules has the wrong permissions?Can you ls -al /usr/lib/node_modules/mongodb-realm-cli/ and ls -al /usr/lib/node_modules/mongodb-realm-cli/realm-cli and post here the listings to have a look?Running as sudo also fails? Then root doesn’t have enough permissions… Although it’s a security problem I’ll try to chmod -R 777 /usr/lib/node_modules/mongodb-realm-cli/ and try againLet us know and thanks for posting!Thanks!",
"username": "Diego_Freniche"
},
{
"code": "/usr/lib/node_module╰─ ls -al /usr/lib/node_modules/\ntotal 32\ndrwxrwxrwx 8 root root 4096 sept. 29 09:13 .\ndrwxr-xr-x 130 root root 4096 août 19 15:39 ..\ndrwxrwxrwx 3 root root 4096 avril 30 11:37 @angular\ndrwxrwxrwx 4 root root 4096 avril 28 09:43 aws-azure-login\ndrwxrwxrwx 4 root root 4096 sept. 28 15:29 corepack\ndrwxrwxrwx 3 root root 4096 juin 7 10:32 n\ndrwxrwxrwx 3 root root 4096 sept. 13 12:05 @nestjs\ndrwxrwxrwx 8 root root 4096 sept. 28 15:29 npm\n",
"text": "ls -al /usr/lib/node_modules/mongodb-realm-cli/The problem is that there is no mongodb-realm-cli under /usr/lib/node_module. NPM install does not succeed in creating the mongodb-realm-cli directory.And for this command :ls -al /usr/lib/node_modulesI have this output :As you can see I’ve also tried a chmod 777 before without success …",
"username": "Frederic_Meriot"
},
{
"code": "",
"text": "Forget about my problem. I figure it out.\nI realized that I was using “n” npm which allows you to switch from one version of nodejs to another, and I also installed nodejs with apt-get. So, big conflict. I removed the nodejs installed with apt-get, and everything works fine.Thank you.",
"username": "Frederic_Meriot"
},
{
"code": "sudo npm install -g mongodb-realm-cli --unsafe-perm=true --allow-root\n--unsafe-perm=true --allow-root",
"text": "I had the same problem on Ubuntu 20.04, node v14.18.3, npm 6.14.15, what actually worked for me is:adding --unsafe-perm=true --allow-root did the trick.",
"username": "Ravi_Misra"
},
{
"code": "sudo npm install -g mongodb-realm-cli --unsafe-perm=true --allow-root\nnpm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142\nnpm WARN deprecated [email protected]: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.\nnpm WARN deprecated [email protected]: this library is no longer supported\n[ ......] \\ extract:ajv: sill extract ajv@^6.12.3 extracted to /usr/lib/node_modules/.staging/ajv-2d511643 (328ms)\n",
"text": "I got this error while installing",
"username": "Protech_Code01"
},
{
"code": "",
"text": "I got the same issue. Could you solve it?",
"username": "Thomas_Anderl"
}
]
| Failed to download Realm CLI under Linux | 2021-09-28T13:42:50.183Z | Failed to download Realm CLI under Linux | 6,471 |
null | [
"upgrading"
]
| [
{
"code": "",
"text": "Hi,\nI upgrade mongod from 3.4 to 4.2 but the service is failing to start. below are the logs. Can anyone please help me how to fix the issue? Cheers● mongod.service - MongoDB Database Server\nLoaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\nActive: failed (Result: exit-code) since Sat 2022-07-16 21:43:42 UTC; 8s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 925479 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=100)\nProcess: 925477 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 925475 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 925473 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\nMain PID: 919663 (code=exited, status=0/SUCCESS)Jul 16 21:43:40 NMS systemd[1]: Starting MongoDB Database Server…\nJul 16 21:43:40 NMS mongod[925479]: about to fork child process, waiting until server is ready for connections.\nJul 16 21:43:40 NMS mongod[925479]: forked process: 925481\nJul 16 21:43:42 NMS mongod[925479]: ERROR: child process failed, exited with error number 100\nJul 16 21:43:42 NMS mongod[925479]: To see additional information in this output, start without the “–fork” option.\nJul 16 21:43:42 NMS systemd[1]: mongod.service: Control process exited, code=exited status=100\nJul 16 21:43:42 NMS systemd[1]: mongod.service: Failed with result ‘exit-code’.\nJul 16 21:43:42 NMS systemd[1]: Failed to start MongoDB Database Server.\n[root@NMS yum.repos.d]# mongod --config=/etc/mongod.conf\nabout to fork child process, waiting until server is ready for connections.\nforked process: 925613\nERROR: child process failed, exited with error number 100\nTo see additional information in this output, start without the “–fork” option.Thanks",
"username": "hassan_wahab"
},
{
"code": "mongod3.4 -> 3.6, 3.6 -> 4.0, 4.0 -> 4.2\n",
"text": "Hi,\nCan you explain what upgrade method you used? Did you upgrade the mongod binaries?\nIt will require upgrading from one major version to the next.you would have to upgrade in this order:",
"username": "Arkadiusz_Borucki"
},
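Sketching the stepwise path above: at each hop you upgrade the binaries and, once the deployment is healthy, bump the feature compatibility version before moving on (an outline, not a full runbook):

    // after upgrading binaries 3.4 -> 3.6
    db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })
    // after upgrading binaries 3.6 -> 4.0
    db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })
    // after upgrading binaries 4.0 -> 4.2
    db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })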
{
"code": "",
"text": "Thanks for your response. I followed the steps shown in below link.\nLayerStack Tutorials - LayerStack - How to upgrade MongoDB on Linux Cloud Servers.",
"username": "hassan_wahab"
},
{
"code": "mongod",
"text": "if you upgrade the mongod binaries you can not upgrade directly from 3.4 to 4.2. it is always better to follow official MongoDB documentation, also there was a similar topic - please read:",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Assuming upgrade went fine i see issue could be due to permissions\nError 100 means permissions issue on your datafiles\nCheck ownership of your dbpath/logpath dirs\nYou should start service with systemctl which calls mongod to start mongodb\nI see you are starting mongod as root user?You should not use root user",
"username": "Ramachandra_Tummala"
},
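A hedged shell sketch of the ownership checks suggested above (paths match the default RHEL package layout used in this thread; adjust them to your own config):

    ls -ld /var/lib/mongo /var/log/mongodb              # who owns dbPath and the log directory?
    sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
    sudo systemctl start mongod && systemctl status mongod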
{
"code": "",
"text": "Hi Ramachandra,\nPlease see the below permission for dbpath/logpath dirsDirectory rights\nstorage:\ndbPath: /var/lib/mongo\ndrwxr-xr-x. 4 mongod mongod 16K Jul 16 21:45 mongoSystem Log:\npath: /var/log/mongodb/mongod.log\ndrwxr-xr-x. 2 mongod mongod 24 Jun 15 20:01 mongodb\n-rw-r-----. 1 mongod mongod 29M Jul 20 03:40 mongod.logand which user should i use?Thanks",
"username": "hassan_wahab"
},
{
"code": "",
"text": "Use normal user and use sudo when you need root privileges\nYour mongod command output asks you to run the command without fork.Did you try this?\nAlso check mongod.log\nIt will give more details on why it is failing to startTry to spin up your own mongod using a different port,dbpath,logpath like belowmongod --port 29000 --dbpath your_home_dir --logpath your_home_dir --forkIf it is working you can check/troubleshoot why it is failing with default config file(/etc/mongod.conf)",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi, I did ran the command but still no success\n$ mongod\n2022-07-23T17:13:01.129+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’\n2022-07-23T17:13:01.132+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] MongoDB starting : pid=340778 port=27017 dbpath=/data/db 64-bit host=NMS\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] db version v4.2.21\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] git version: b0aeed9445ff41af07449fa757e1f231bce990b3\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1k FIPS 25 Mar 2021\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] allocator: tcmalloc\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] modules: none\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] build environment:\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] distmod: rhel80\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] distarch: x86_64\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] target_arch: x86_64\n2022-07-23T17:13:01.133+0000 I CONTROL [initandlisten] options: {}\n2022-07-23T17:13:01.133+0000 E NETWORK [initandlisten] Failed to unlink socket file /tmp/mongodb-27017.sock Operation not permitted\n2022-07-23T17:13:01.133+0000 F - [initandlisten] Fatal Assertion 40486 at src/mongo/transport/transport_layer_asio.cpp 684\n2022-07-23T17:13:01.133+0000 F - [initandlisten] \\n\\n***aborting after fassert() failure\\n\\n[Avall@NMS etc]$ mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --port 29000 --fork\nabout to fork child process, waiting until server is ready for connections.\nforked process: 340936\nERROR: child process failed, exited with error number 1\nTo see additional information in this output, start without the “–fork” option.\n[Avall@NMS etc]$ mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --port 29000\n2022-07-23T17:16:52.088+0000 F CONTROL [main] Failed global initialization: FileRenameFailed: Could not rename preexisting log file “/var/log/mongodb/mongod.log” to “/var/log/mongodb/mongod.log.2022-07-23T17-16-52”; run with --logappend or manually remove file: Permission denied\n[Avall@NMS etc]$ sudo mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --port 29000\n2022-07-23T17:17:44.695+0000 I CONTROL [main] log file “/var/log/mongodb/mongod.log” exists; moved to “/var/log/mongodb/mongod.log.2022-07-23T17-17-44”.\n[Avall@NMS etc]$ mongo --port 29000\nMongoDB shell version v4.2.21\nconnecting to: mongodb://127.0.0.1:29000/?compressors=disabled&gssapiServiceName=mongodb\n2022-07-23T17:18:15.038+0000 E QUERY [js] Error: couldn’t connect to server 127.0.0.1:29000, connection attempt failed: SocketException: Error connecting to 127.0.0.1:29000 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:353:17\n@(connect):2:6\n2022-07-23T17:18:15.041+0000 F - [main] exception: connect failed\n2022-07-23T17:18:15.041+0000 E - [main] exiting with code 1",
"username": "hassan_wahab"
},
{
"code": "",
"text": "Failed to unlink socket file /tmp/mongodb-27017.sock Operation not permittedCheck permissions/ownership of this file\nls -lrt /tmp/mongodb-27017.sock\nMost likely it is owned by root when you tried to start mongod as root\nYou may have to remove this file and try again.Before removing make sure no mongod is running on port 27017Most of the issues are due to permissions\nYour second command would have succeeded on port 29000 if you had given a different dirpath which i have mentioned clearly in my reply to use “your home dir” but you have given the same path that was used before.Was your third command successful?I dont see waiting for connections message.Check mongod.log\nYou can also check by ps -ef|grep mongod\nYou should not start mongod as sudo\nThis will create the files owned by root and when you try to start mongod as normal user it cannot read/write the files as they are owned by root\nShow the contents of /var/lib/mongo and /var/log/mongodb/\nBest thing is empty these dirs and start fresh",
"username": "Ramachandra_Tummala"
}
]
| Mongod service failing to start | 2022-07-16T21:57:55.631Z | Mongod service failing to start | 8,905 |
null | []
| [
{
"code": "",
"text": "I’ve read this nice article about how to set up MongoDB Atlas for Strapi (headless CMS): https://www.mongodb.com/developer/how-to/strapi-headless-cms-with-atlas/ by @Ado_Kukic.\nSince I’m already using Atlas for my existing app, I’m interested to set it up for Strapi as well.However, I’m a bit worried about longer term support. In the Strapi docs, Installing from CLI - Strapi Developer Docs , I find only SQL databases as the officially supported ones. Not a single word about MongoDB or even Atlas. So how safe and supported is this described setup of Strapi with MongoDB for a production use case?",
"username": "Michael_Scheible"
},
{
"code": "",
"text": "Should have googled a bit more…found this interesting post from Strapi: MongoDB support in Strapi: Past, Present & Future which answers part of my question…looks like official support for MongoDB by Strapi is still in flux…hope there will be a more positive answer soon.",
"username": "Michael_Scheible"
},
{
"code": "",
"text": "@Michael_Scheible,\nI locked for the same question… I knew this Strapi news on their Website.However, in this tutorial updated on 01/13/2022 (E-commerce website with Nuxt, GraphQL, Strapi and Stripe (1)), Pierre Burgy (one of the 3 fonders) of Strapi suggest it is possible to use Strapi with MongoDB: Read specefickly the Strapi part install.\nI don’t understand why and above all, how it is possible to link MongoDB to Strapi?If i have news,i will be back here…\nKinds regards, Emmanuel of France.",
"username": "Emmanuel_Tes"
},
{
"code": "",
"text": "This written tutorial is oudated and not follow by Pierre Burgy.\nSorry for disturb.",
"username": "Emmanuel_Tes"
},
{
"code": "",
"text": "If someone still looking for MongoDB Atlas connector for Strapi 3, they can try the following NPMMongoDB Atlas hook for the Strapi framework. Forked from the original Strapi mongoose hook to make it work with MongoDB Atlas. USE WITH CAUTION!. Latest version: 3.6.10, last published: 6 months ago. Start using strapi-connector-mongodb-atlas in your...",
"username": "Anil_Kumar_G"
}
]
| Setting up MongoDB Atlas for Strapi CMS | 2022-01-23T11:45:12.807Z | Setting up MongoDB Atlas for Strapi CMS | 4,979 |
null | []
| [
{
"code": "",
"text": "MongoDB Unofficial DiscordI have started a discord for MongoDB for quicker help. We stand at 644 members at the time of writing, and I’ve decided to put it here so it’s more accessible to those who wish to get help faster than they may do on the forums, but also an easier way of getting help with any MongoDB service.In short: Faster support, easier and active.To join, click here. If that doesn’t work for you click the link bellow:",
"username": "david_denwood"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Unofficial Discord Community | 2022-07-23T18:38:22.660Z | MongoDB Unofficial Discord Community | 2,542 |
null | [
"replication",
"storage"
]
| [
{
"code": "",
"text": "I created a new Replica Set on Rocky Linux 8.5\nI installed mongo v5.0.7\nI want to make 3 node replica set\nwhen I run the other members on same server my 3 node replica set running with no problem\nwhereas when I try to use different hosts in replica set it shows the other nodes are passive not secondaryI have 3 nodes\nmongo 192.168.56.25\nmongo2 192.168.56.26\nmongo3 192.168.56.27Here is the rs.isMaster() output\nMongoDB Enterprise mongo-cls:PRIMARY> rs.isMaster()\n{\n“topologyVersion” : {\n“processId” : ObjectId(“62da62dc92ae9bb1a1331415”),\n“counter” : NumberLong(8)\n},\n“hosts” : [\n“mongo.localdomain:27000”\n],\n“passives” : [\n“mongo2:27000”,\n“mongo3:27000”\n],\n“setName” : “mongo-cls”,\n“setVersion” : 3,\n“ismaster” : true,\n“secondary” : false,\n“primary” : “mongo.localdomain:27000”,\n“me” : “mongo.localdomain:27000”,my mongod configuration files are like :[root@mongo ~]# cat /etc/mongod.confsystemLog:\ndestination: file\nlogAppend: true\npath: /var/mongodb/log/mongod.logstorage:\ndbPath: /var/mongodb/db/dataprocessManagement:\nfork: true # fork and run in background\ntimeZoneInfo: /usr/share/zoneinfonet:\nport: 27000\nbindIp: 0.0.0.0security:\nauthorization: enabled\nkeyFile: /var/mongodb/pki/cls_key#operationProfiling:replication:\nreplSetName: mongo-cls\n#sharding:#auditLog:#snmp:[root@mongo ~]#on the other nodes configuration files are same",
"username": "Hakan_KILIC"
},
{
"code": "",
"text": "I am not sure but the cause might be that the one node that works is known as mongo.localdomain while the two that do not work mongo2 and mongo3 do not have .localdomain.I would try to rs.add() the 2 nodes as mongo2.localdomain and mongo3.localdomain.",
"username": "steevej"
}
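If that hypothesis is right, the fix sketched in mongosh would be along these lines (host names are the ones from this thread; double-check the ports and that the names resolve from every node before running):

    rs.remove("mongo2:27000")
    rs.add("mongo2.localdomain:27000")
    rs.remove("mongo3:27000")
    rs.add("mongo3.localdomain:27000")
    rs.conf()   // confirm the members list now uses the fully qualified names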
]
| Replica Set , primary shows the other members passive | 2022-07-22T11:10:40.006Z | Replica Set , primary shows the other members passive | 2,121 |
null | [
"queries",
"node-js",
"crud"
]
| [
{
"code": "{\n \"semester\": \"4\",\n \"subjectData\": {\n \"subjectName\": \"english\",\n \"questionBank\": {\n \"question\": \"who are you?\",\n \"answer\": {\n \"introduction\": \"Hello my name is BHoola\",\n \"features\": [\n \"tall\",\n \"handsome\"\n ],\n \"kinds\": [\n \"very\",\n \"kind\"\n ],\n \"conclusion\": \"Mighty man\",\n \n }\n }\n }\n}\nconst insertResult = await semesterData.updateMany( { $and: [ { semester: semesterRecieved, 'subjectData.subjectName': subject, }, ], }, { $set: { 'subjectData.questionBank.question': questionRecieved, 'subjectData.questionBank.answer': answerRecieved, }, }, { upsert: false }, ); \"semester\": \"4\",\n \"subjectData\": {\n \"subjectName\": \"english\",\n \"questionBank\": {\n \"question\": \"why are you so tall?\",\n \"answer\": {\n \"introduction\": \"Hello my name is 3333\",\n \"features\": [\n \"tall\",\n \"handsome\"\n ],\n \"kinds\": [\n \"very\",\n \"kind\"\n ],\n \"conclusion\": \"Mighty man\",\n \n },\n \n },\n {\n \"question\": \"who are you?\",\n \"answer\": {\n \"introduction\": \"Hello my name is 33333\",\n \"features\": [\n \"tall\",\n \"handsome\"\n ],\n \"kinds\": [\n \"very\",\n \"kind\"\n ],\n \"conclusion\": \"Mighty man\",\n \n },\n \n }\n }\n }```",
"text": "** I have this collection which I want to update without affectin any previous data **** Currently I am trying to do this with** const insertResult = await semesterData.updateMany( { $and: [ { semester: semesterRecieved, 'subjectData.subjectName': subject, }, ], }, { $set: { 'subjectData.questionBank.question': questionRecieved, 'subjectData.questionBank.answer': answerRecieved, }, }, { upsert: false }, );\n** to which I am getting acknowledge: false **** I want my collection to look like this **",
"username": "Izaan_Anwar"
},
{
"code": "",
"text": "Hello @Izaan_Anwar,** I want my collection to look like this **Your expected result is not valid JSON. Can you please provide valid JSON for what you are looking for?",
"username": "turivishal"
},
{
"code": "",
"text": "oh yeah I fixed it it was very trivila I just added an array in questionBank and I am keeping question and answers in an object and when I post new data it goes in that array making a new object.",
"username": "Izaan_Anwar"
},
{
"code": "",
"text": "So I had to make the questionBank object into an array containing onjects and I used collection.update with $addToSet and it worked!",
"username": "Izaan_Anwar"
},
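For readers hitting the same thing, the shape of the fix described above is roughly this (field and variable names follow the original post; treat it as a sketch):

    await semesterData.updateOne(
      { semester: semesterRecieved, 'subjectData.subjectName': subject },
      {
        $addToSet: {
          'subjectData.questionBank': {      // questionBank is now an array
            question: questionRecieved,
            answer: answerRecieved,
          },
        },
      }
    );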
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable to update my collection | 2022-07-23T07:32:16.930Z | Unable to update my collection | 1,231 |
null | [
"atlas-search",
"text-search"
]
| [
{
"code": "title: \"elephant is an elephant\"\ndescription: \"this elephant is an elephant\"\ntitle: \"duck\"\ndescription: \"duck is not an elephant\"\n",
"text": "Let’s say I have 2 documents:Document 1:Document 2:How can I make Atlas search give both these results the same search score for “elephant”? I want it to only look for a keyword once and not weight the result higher if the keyword appears more often.Note: Matches of different words should still rank higher than matching a single word.\nWhen the user searches for “duck elephant”, document 2 should be listed higher because it matches both words.",
"username": "Florian_Walther"
},
{
"code": "",
"text": "My current idea is to use a constant search score and add entries to the search aggregation step dynamically for each word in the search query (which I turn into an array beforehand).Is there a better way to handle this?",
"username": "Florian_Walther"
},
{
"code": "const shouldQueries = searchTerms.map(searchTerm: string) => ({\n wildcard: {\n query: searchTerm,\n path: ['title', 'description'],\n allowAnalyzedField: true,\n score: { constant: { value: 1 } }\n }\n}));\n\nlet aggregation = Resource.aggregate()\n .search({\n compound: {\n must: [...],\n should: [...shouldQueries]\n }\n })\n",
"text": "What do you think about this approach? It seems to work. Is there a better way to do it?",
"username": "Florian_Walther"
}
]
| How can I make Atlas search look for a keyword only once? | 2022-07-22T21:53:26.914Z | How can I make Atlas search look for a keyword only once? | 2,524 |
null | [
"queries",
"indexes"
]
| [
{
"code": "IXSCAN { blockchain: 1, tx.txid: 1 }, IXSCAN { blockchain: 1, tx.vout.scriptPubKey.addresses: 1 }",
"text": "Hi there, I’m unable to find a good index for my query. Does anybody have a good idea for this query?collection.find({\n_id: { $gt: index },\nblockchain: ‘test’,\nblockHeight: { $gte: 0, $lte: 19500000 },\n$or: [\n{ ‘tx.txid’: { $in: [strings…] } },\n{ ‘tx.vout.scriptPubKey.addresses’: { $in: [strings…] } },\n],\n})\n.sort({ _id: 1 })\n.limit(100)Currently mongo uses only these two indexes, but this takes currently 4s to finish.\nIXSCAN { blockchain: 1, tx.txid: 1 }, IXSCAN { blockchain: 1, tx.vout.scriptPubKey.addresses: 1 }These are my current indexes:\nblockchain_1_tx.txid_1\nblockchain_1_tx.vout.scriptPubKey.addresses_1\nblockchain_1_blockHeight_-1\nblockHeight_1__id_1\n_id_1_blockchain_1_bockHeight_1_tx.txid_1\n_id_1_blockchain_1_bockHeight_1_tx.vout.scriptPubKey.addresses_1\n_id_1_blockchain_1_bockHeight_1_tx.vout.scriptPubKey.addresses_1_tx.txid_1",
"username": "Philipp_Sambohl"
},
{
"code": "",
"text": "Equality, sort, range filters → blockchain, tx.txid, _id, blockHeight seems to work fine.",
"username": "Philipp_Sambohl"
},
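Spelled out, the Equality, Sort, Range ordering described above corresponds to an index like this (a sketch using the field names from the question):

    db.collection.createIndex({
      blockchain: 1,   // equality
      'tx.txid': 1,    // equality ($in)
      _id: 1,          // sort
      blockHeight: 1   // range
    })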
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable to find the right index | 2022-07-22T11:00:29.109Z | Unable to find the right index | 1,298 |
[
"graphql"
]
| [
{
"code": "",
"text": "Can’t understand why we get the permisson error?\nimage1696×491 23.6 KB\n\nimage1301×910 44.6 KB\n",
"username": "Viktor_Nilsson"
},
{
"code": "",
"text": "What does this mean? Is it some underlying validation problem, very hard to get an understanding of why this happens.",
"username": "Viktor_Nilsson"
},
{
"code": "",
"text": "In case anyone else lands here, I was able to get past this issue by changing the Validation Action from “Error” to “Warn”. I’m not sure if that’s the best solution but it worked for me \nScreen Shot 2022-07-22 at 7.44.17 PM2640×966 157 KB\n",
"username": "Noel_80361"
}
]
| Getting “FunctionError: read not permitted” | 2021-09-01T12:07:00.623Z | Getting “FunctionError: read not permitted” | 5,235 |
|
null | []
| [
{
"code": "",
"text": "My application endpoint is: https://us-east-1.aws.realm.mongodb.com/api/client/v2.0/app/mychariti-db-cecem/graphqlI am trying to query from postman using a simple query and I get the below error in the realm ui logs:ERROR: could not validate document ‘ObjectID(“61027debf15e5a81e049f952”)’ for read\nLogs:\n[\n“FunctionError: read not permitted”\n]",
"username": "Mahnoor_Malik"
},
{
"code": "",
"text": "Hi @Mahnoor_Malik I am getting a similar error. Were you able to resolve this?",
"username": "Noel_80361"
}
]
| Querying graphql realm from postman returns read permission error | 2021-08-25T19:01:29.936Z | Querying graphql realm from postman returns read permission error | 2,007 |
[
"atlas-data-lake"
]
| [
{
"code": "",
"text": "I’m trying to setup and play around with Data Lake as part of my initial evaluation of MongoDB. I’ve upgraded from Shared M0 to Shared M2 in order to get a tier that has backups enabled, but I still can’t setup a Data Lake.I’m getting the message: ‘The selected cluster doesn’t have backups enabled. Choose a different cluster to create your pipeline with or configure backup on “Cluster0”.’But as far as I can tell I do have backups enabled.\n\nimage1608×575 32.7 KB\nDo I need to upgrade to M10?",
"username": "Nicholas_Vandehey"
},
{
"code": "",
"text": "graded from Shared M0 to Shared M2 in order to get a tier that has backups enabled, but I still can’t setup aHere is the screenshot of the error message, which comes up after selecting Cluster0 (my only cluster):\nimage1181×508 17.1 KB\n",
"username": "Nicholas_Vandehey"
},
{
"code": "",
"text": "Hello @Nicholas_Vandehey, glad to see you trying this out!Looks like our error messages are not great here, we’ll get that fixed. You do need to use an M10 or above for this functionality. Shared tiers use a different mechanism for backup which unfortunately does not support Data Lake Storage.",
"username": "Benjamin_Flast"
}
]
| Can't setup data lake | 2022-07-22T18:11:49.253Z | Can’t setup data lake | 2,550 |
|
null | [
"installation"
]
| [
{
"code": "",
"text": "Hey everyone,I’m trying to install MongoDB Server on a Windows Server 2016 or 2022.\nOn both servers the installation went well, no error messages or something like that.\nBut the server is not installed at all.\nI’ve tried to install from a admin CMD, on another location, with an own user.\nEyerything fails in the same way without any message.How can I find out whats going wrong?Best greets,\nDaniel",
"username": "Daniel_Hebel"
},
{
"code": "",
"text": "It seems that the server is installed propper but that is not shown in the installer when running it again.\nThe installer shows that the server is not installed when running “change”.Compass was not installed so I’m unable to connect by gui but the service is present and running.\nSo I’m trying to install Compass manually with the PS script now.",
"username": "Daniel_Hebel"
},
{
"code": "",
"text": "Copy & Paste works well.\nNo idea whats wrong with the installation packages but manually CP worked for me.",
"username": "Daniel_Hebel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Installing MongoDB Server Community Edition on a Windows Server 2016 or 2022 fails | 2022-07-22T17:13:45.496Z | Installing MongoDB Server Community Edition on a Windows Server 2016 or 2022 fails | 2,452 |
null | []
| [
{
"code": "meta\"meta\": {\n \"key2\": \"value2\",\n \"key1\": \"value1\",\n \"two words\": \"value1 value2\",\n \"foo\": \"bar\"\n},\n",
"text": "I have a collect that has a meta field with can contain an array of key/value pairs. The key could be any string and the value could be any string.example:I’m wondering if I can do some advanced searches like:",
"username": "djedi"
},
{
"code": "query = { \"meta.title\" : { \"$exists\" : true } }\ncollection.find( query )\n",
"text": "Find documents where the meta key contains the word “title” (doesn’t matter what the value is).Find documents where the meta value contains the word “red” (doesn’t matter what the key is).I do not know but with Building with Patterns: The Attribute Pattern | MongoDB Blog it should be eazy.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks, but I need to use Atlas search because that is where the indexes are. Also, I want to search the key for a word. For example, if I search “title” it could find meta keys just as “Title”, “Movie Title”, etc.",
"username": "djedi"
},
{
"code": "",
"text": "What ever you use for search using the Attribute Pattern will simply your life.It will be hard to define an index if you have dynamic field names. The Attribute Pattern solves that by making your dynamic keys the values of a static field name.",
"username": "steevej"
},
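A minimal sketch (mongosh) of one way to materialize the Attribute Pattern from the existing documents without touching the application yet. The collection name "products" and the output collection "products_attr" are assumptions:

db.products.aggregate([
  // Turn the dynamic "meta" object into an array of { k, v } pairs,
  // e.g. [ { k: "two words", v: "value1 value2" }, { k: "foo", v: "bar" }, ... ]
  { $addFields: { metaAttributes: { $objectToArray: "$meta" } } },
  // Write the reshaped documents to a separate collection for indexing experiments
  { $out: "products_attr" }
])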
{
"code": "",
"text": "I agree there are better ways to store the data. But this DB pre-dates me. It is 10 years old and refactoring it will be quite a project. We have a solution now by syncing to elasticsearch, but we are looking to move to Atlas search. I’m hoping to find a way to index and query this data without a refactor, but I’m starting to feel that probably is not possible.",
"username": "djedi"
},
{
"code": "path: meta keyquery: meta value",
"text": "Could you use EmbeddedDocument? Where path: meta key, and query: meta value ?",
"username": "Elle_Shwer"
}
]
| Searching dynamic embedded documents | 2022-07-20T19:44:20.562Z | Searching dynamic embedded documents | 2,389 |
null | []
| [
{
"code": "",
"text": "Hi All expert,\nI am trying to use atlas API to get indexes of my collection. But it always return authorized.\nI created API key from organization level. Then assigned project owner run on project level.\nVia follow steps from https://www.mongodb.com/docs/atlas/reference/api/fts-indexes-get-all/\nIt should work but it always retrun 401. May u know any way to check why this is happening?",
"username": "yichao_wang"
},
{
"code": "",
"text": "I think you’ll get a 401 if they don’t include the API Key in the request or if they use an invalid one. You might also need to add their IP to the access list: https://www.mongodb.com/docs/atlas/configure-api-access/#edit-an-api-key-s-access-list",
"username": "Elle_Shwer"
}
]
| Atlas index api returns 401 but api public key and private has required authorization | 2022-07-11T08:54:07.128Z | Atlas index api returns 401 but api public key and private has required authorization | 1,587 |
null | []
| [
{
"code": "",
"text": "Hello! I’m Kim from Korea.\nI’m working as a MongoDB DBA (since 2018)I glad to join MongoDB Community.\nI’d like to get information about troubleshooting or new features here.",
"username": "noisia"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @noisia !Are there any specific types of troubleshooting tips you are looking for? There are some great past discussions if you browse or search the forums.For some of the latest MongoDB features, you may also be interested in the MongoDB 6.0 GA announcement and 7 Big Reasons to Upgrade to MongoDB 6.0 article from earlier this week.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Hello! I'm Kim from Korea | 2022-07-22T08:17:29.461Z | Hello! I’m Kim from Korea | 2,022 |
null | [
"swift"
]
| [
{
"code": "",
"text": "Hello,I would like to know if it is possible to “force” a client reset error after changing server permissions values.I have a use case where, at some point, an Atlas function is called and changes some values in user custom data. Those values are used by server read/write permissions. So, as it should be, after changing those user custom data values, any objects created on device will be rejected by the server due to changing the permissions.I tried realmUser.refreshCustomData() but nothing changed.My question: is there anything I can do to generate a client reset error (or other session update mechanism) where I can handle this permission change? I wouldn’t like to ask the user to restart the app to update the sync session.Thank you!",
"username": "horatiu_anghel"
},
{
"code": "",
"text": "Are you using flexible sync or partition-based sync? For flexible sync, it should happen automatically (with a short delay) and trigger a client reset.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Flexible sync. But it doesn’t trigger. Only after a restart.I will investigate further and come back if I can’t make it work.P.S. But even if the client reset is triggered, all realms instances must be nil in order to handle the reset process, right? I think I will have an issue here, also.",
"username": "horatiu_anghel"
},
{
"code": "",
"text": "Hi, sorry, following back up here. Let me know if this is still an issue? When you make a change to your permissions we will have all clients will get notified of that within 15 minutes when they refresh their permissions, but you are right that I think this should be more automatic and this is an easy change for us so I will file something now and hopefully it will get fixed soon.Thanks\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi Tyler,Great, hope that the change will fit my use case where an immediate client reset is required after the custom data is modified.I am working on an app where multiple companies can be managed. And I store in user’s custom data the UUID of the current working company.When the user selects the working company the custom data is updated and I would need to download only the data belonging to that company. So if a client reset is triggered right away, it will allow to refresh the realm data.I hope I was able to make myself clear.Horatiu",
"username": "horatiu_anghel"
}
]
| Force sync session client reset error | 2022-07-07T08:24:49.589Z | Force sync session client reset error | 2,072 |
null | [
"serverless"
]
| [
{
"code": "",
"text": "can we connect azure with mongo atlas (shared and serverless).",
"username": "Avish_Pratap_Singh"
},
{
"code": "",
"text": "Absolutely, you can always leverage public IP Access List management and for an M10+ cluster you can leverage Vnet peering or Private Endpoints.",
"username": "Andrew_Davidson"
}
]
| How to create connection between azure and mongo atlas | 2022-07-21T06:43:50.504Z | How to create connection between azure and mongo atlas | 1,681 |
null | [
"aggregation",
"queries",
"node-js",
"compass"
]
| [
{
"code": "",
"text": "In MongoDb i can get the aggregate results:\nQuery in MongoDB compass:db.products.aggregate([\n{\n$lookup: {\nfrom: “storeorders”,\nlocalField: “_id”,\nforeignField: “productid”,\nas: “StoreOrderDetails”,\n},\n},\n])Query Result MongoDB compass:\nStoreOrderDetails is populated with data{ _id: ObjectId(“62d9117456eb3251243b8466”),\nskuid: ‘2’,\nproduct: ‘apple2’,\norigin: ‘mexico2’,\nprice: 52,\nisActive: true,\ndatetime: 2019-05-29T21:19:15.187Z,\ncreatedAt: 2022-07-21T08:42:28.207Z,\nupdatedAt: 2022-07-21T08:42:28.207Z,\n__v: 0,\nStoreOrderDetails:\n[ { _id: ObjectId(“62d968e672d0ab397447c5c3”),\nproductid: ObjectId(“62d9117456eb3251243b8466”),\nskuid: ‘2’,\ncurrQty: 4,\nnewQty: 6,\nappQty: 0,\norderstatus: ‘submittedByStore’,\nsubBy: ‘arun’,\ndatetime: 2019-04-29T21:19:15.187Z,\nstoreName: ‘store1’,\nstoreAddress: ‘chicago,stree’,\ncityName: ‘chicago’,\ncreatedAt: 2022-07-21T14:55:34.112Z,\nupdatedAt: 2022-07-21T14:55:34.112Z,\n__v: 0 } ] }But in Node/express.js file , StoreOrderDetails is not populated.\nCode is below//aggregate-PRODUCTS\nrouter.route(“/”).get((req, res) => {Product.aggregate([\n{\n$lookup: {\nfrom: “Storeorder”,\nlocalField: “_id”,\nforeignField: “productid”,\nas: “StoreOrderDetails”,\n},\n},\n])\n.then((storeorder1) => {\nres.json(storeorder1);\n})\n.catch((err) => res.status(400).json(“Error” + err));\n});get results http://localhost:5000/storeorder:\nbelow the “StoreOrderDetails”: has an empty array[\n{\n“_id”: “62d9117456eb3251243b8466”,\n“skuid”: “2”,\n“product”: “apple2”,\n“origin”: “mexico2”,\n“price”: 52,\n“isActive”: true,\n“datetime”: “2019-05-29T21:19:15.187Z”,\n“createdAt”: “2022-07-21T08:42:28.207Z”,\n“updatedAt”: “2022-07-21T08:42:28.207Z”,\n“__v”: 0,\n“StoreOrderDetails”: \n},\n]",
"username": "Arun_Nair1"
},
{
"code": "",
"text": "Simplefrom: “storeorders”,is not the same asfrom: “Storeorder”,",
"username": "steevej"
},
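A sketch of the corrected Express route, applying only the fix pointed out above: the from value must be the actual collection name ("storeorders"), not the model name.

router.route("/").get((req, res) => {
  Product.aggregate([
    {
      $lookup: {
        from: "storeorders", // must match the collection name, not "Storeorder"
        localField: "_id",
        foreignField: "productid",
        as: "StoreOrderDetails",
      },
    },
  ])
    .then((storeorder1) => res.json(storeorder1))
    .catch((err) => res.status(400).json("Error" + err));
});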
{
"code": "",
"text": "",
"username": "SourabhBagrecha"
}
]
| Aggregation in nodejs | 2022-07-22T05:22:03.637Z | Aggregation in nodejs | 1,500 |
null | []
| [
{
"code": "User mismatch for client file identifier (IDENT)2022-07-15 05:17:05.055 Info: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false 2022-07-15 05:17:05.126 Info: Connected to endpoint '13.54.209.90:443' (from '127.0.0.1:59626') 2022-07-15 05:17:05.148 Info: Verifying server SSL certificate using 155 root certificates 2022-07-15 05:17:05.971 Info: Connection[1]: Session[1]: Received: ERROR \"User mismatch for client file identifier (IDENT)\" (error_code=223, try_again=true, recovery_disabled=false) 2022-07-15 05:17:05.984 Info: Connection[1]: Disconnected 2022-07-15 05:17:08.004 Info: Connected to endpoint '13.54.209.90:443' (from '127.0.0.1:59627') 2022-07-15 05:17:08.025 Info: Verifying server SSL certificate using 155 root certificates 2022-07-15 05:17:08.336 Info: Connection[1]: Session[1]: Received: ERROR \"User mismatch for client file identifier (IDENT)\" (error_code=223, try_again=true, recovery_disabled=false) 2022-07-15 05:17:08.336 Info: Connection[1]: Disconnected 2022-07-15 05:17:12.357 Info: Connected to endpoint '13.54.209.90:443' (from '127.0.0.1:59630') 2022-07-15 05:17:12.377 Info: Verifying server SSL certificate using 155 root certificates 2022-07-15 05:17:12.566 Info: Connection[1]: Session[1]: Received: ERROR \"User mismatch for client file identifier (IDENT)\" (error_code=223, try_again=true, recovery_disabled=false) 2022-07-15 05:17:12.567 Info: Connection[1]: Disconnected 2022-07-15 05:17:20.588 Info: Connected to endpoint '13.54.209.90:443' (from '127.0.0.1:59631') 2022-07-15 05:17:20.609 Info: Verifying server SSL certificate using 155 root certificates 2022-07-15 05:17:20.799 Info: Connection[1]: Session[1]: Received: ERROR \"User mismatch for client file identifier (IDENT)\" (error_code=223, try_again=true, recovery_disabled=false) 2022-07-15 05:17:20.800 Info: Connection[1]: Disconnected",
"text": "I get the following error when syncing my app;User mismatch for client file identifier (IDENT)I know why the error is being raised, my question is why isn’t the error returned to my code?The following shows the log entries raised by the SDK;2022-07-15 05:17:05.055 Info: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false 2022-07-15 05:17:05.126 Info: Connected to endpoint '13.54.209.90:443' (from '127.0.0.1:59626') 2022-07-15 05:17:05.148 Info: Verifying server SSL certificate using 155 root certificates 2022-07-15 05:17:05.971 Info: Connection[1]: Session[1]: Received: ERROR \"User mismatch for client file identifier (IDENT)\" (error_code=223, try_again=true, recovery_disabled=false) 2022-07-15 05:17:05.984 Info: Connection[1]: Disconnected 2022-07-15 05:17:08.004 Info: Connected to endpoint '13.54.209.90:443' (from '127.0.0.1:59627') 2022-07-15 05:17:08.025 Info: Verifying server SSL certificate using 155 root certificates 2022-07-15 05:17:08.336 Info: Connection[1]: Session[1]: Received: ERROR \"User mismatch for client file identifier (IDENT)\" (error_code=223, try_again=true, recovery_disabled=false) 2022-07-15 05:17:08.336 Info: Connection[1]: Disconnected 2022-07-15 05:17:12.357 Info: Connected to endpoint '13.54.209.90:443' (from '127.0.0.1:59630') 2022-07-15 05:17:12.377 Info: Verifying server SSL certificate using 155 root certificates 2022-07-15 05:17:12.566 Info: Connection[1]: Session[1]: Received: ERROR \"User mismatch for client file identifier (IDENT)\" (error_code=223, try_again=true, recovery_disabled=false) 2022-07-15 05:17:12.567 Info: Connection[1]: Disconnected 2022-07-15 05:17:20.588 Info: Connected to endpoint '13.54.209.90:443' (from '127.0.0.1:59631') 2022-07-15 05:17:20.609 Info: Verifying server SSL certificate using 155 root certificates 2022-07-15 05:17:20.799 Info: Connection[1]: Session[1]: Received: ERROR \"User mismatch for client file identifier (IDENT)\" (error_code=223, try_again=true, recovery_disabled=false) 2022-07-15 05:17:20.800 Info: Connection[1]: DisconnectedMy code uses a cancellation token to cancel the procedure if the realm isn’t opened within a suitable timeframe but it would be far more efficient if the SDK returned an error for my code to handle.",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "Often times, including the code you’re using can help clarify the question and can give us a bit more insight as to when any why the error is ocurring. Can you include the code you’re using?",
"username": "Jay"
},
{
"code": "public static async Task<(bool Synchronised, Realm Realm)> GetSyncedRealm(SyncConfiguration config, int timeout = 2)\n{\n\tRealm realm = null;\n\tvar cts = new CancellationTokenSource();\n\tcts.CancelAfter(TimeSpan.FromSeconds(timeout));\n\ttry\n\t{\n\t\trealm = await Realm.GetInstanceAsync(config, cts.Token);\n\t\treturn (true, realm);\n\t}\n\tcatch (OperationCanceledException)\n\t{\n\t\treturn (false, realm);\n\t}\n\tcatch (Exception ex)\n\t{\n\t\tTrace.WriteLine(ex.GetFirstMessage());\n\t\tthrow;\n\t}\n}\nUser mismatch for client file identifier (IDENT)",
"text": "Following is the code used;The error User mismatch for client file identifier (IDENT) is shown multiple times in the log until the cancellation token expires and control is returned to the code. This occurs for other errors raised by the SDK but are never thrown to the calling code to be handled. If the code did get these errors we could take steps to correct the problem.",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "Hi @Raymond_Brack,In order to handle session errors you need to set a callback in SyncConfigurationBase.OnSessionError. That in conjunction with SyncConfigurationBase.ClientResetHandler should cover any session related problem.\nYou can read more about client reset in our docs.Andrea",
"username": "Andrea_Catalini"
},
{
"code": "private async Task SetupRealmEnvironment()\n{\n\t_adminUser = await GetRealmUser(_realmAppId, _realmSettings.AdminUser, _realmSettings.PIN);\n\tif (_adminUser == null) throw new Exception($\"Failed to locate a Realm user for \\\"{_realmSettings.AdminUser}\\\".\");\n\n\tPartitionSyncConfiguration syncConfig = ConnectionService.GetSyncConfig(Enums.AppConfiguration.CrmRealmWebApi, RealmConstants.PUBLIC_PARTITION, _adminUser);\n\n\tsyncConfig.ClientResetHandler = new ManualRecoveryHandler(HandleResetError);\n\t//syncConfig.OnSessionError = new SyncConfigurationBase.SessionErrorCallback(HandleSessionError);\n\tsyncConfig.OnSessionError += HandleSessionError;\n\n\t(bool IsSynced, Realm realm) result;\n\tresult = await ConnectionService.GetSyncedRealm(syncConfig, 20);\n\n\tif (result.realm == null) throw new Exception(\"Failed to open a synchronised or local realm for the Admin user.\");\n\n\tif (!result.IsSynced)\n\t{\n\t\tawait result.realm.SyncSession.WaitForDownloadAsync();\n\t\tresult.IsSynced = true;\n\t}\n\t_realm = result.realm;\n\t_driverSvc = new SxRealm.Services.DriverService(_realm);\n}\n\nprivate static void HandleSessionError(Session session, SessionException error)\n{\n\tConsole.Write($\"{error}\");\n}\nConsole.Write($\"{error}\");2022-07-19 01:36:06.790 Info: Connection[1]: Session[1]: Received: ERROR \"Invalid schema change (UPLOAD): failed to add app schema for ns='mongodb-atlas.DriverApp.Driver' for new top-level schema \"Driver\": sync-incompatible app schema for the same collection already exists. This app schema is incompatible with error: \"property \\\"AppVersion\\\" has invalid type: type [string,null] is not supported, property \\\"DepotId\\\" has invalid type: type [uuid,null] is not supported, property \\\"Email\\\" has invalid type: type [string,null] is not supported, property \\\"LastPingDtTm\\\" has invalid type: type [date,null] is not supported, property \\\"LastProcessedDtTm\\\" has invalid type: type [date,null] is not supported, property \\\"Mobile\\\" has invalid type: type [string,null] is not supported\". To continue, delete the app schema in Atlas App Services, or update it to match the app schema defined on the device\" (error_code=225, try_again=false, recovery_disabled=false)syncConfig.OnSessionError = new SyncConfigurationBase.SessionErrorCallback(HandleSessionError)\nsyncConfig.OnSessionError += HandleSessionError\n",
"text": "Hi Andrea,Thanks for the tip. I tried adding the OnSessionError handler as you suggested;But the code never hit the Console.Write($\"{error}\"); line even though the following error was displayed in the session logs;2022-07-19 01:36:06.790 Info: Connection[1]: Session[1]: Received: ERROR \"Invalid schema change (UPLOAD): failed to add app schema for ns='mongodb-atlas.DriverApp.Driver' for new top-level schema \"Driver\": sync-incompatible app schema for the same collection already exists. This app schema is incompatible with error: \"property \\\"AppVersion\\\" has invalid type: type [string,null] is not supported, property \\\"DepotId\\\" has invalid type: type [uuid,null] is not supported, property \\\"Email\\\" has invalid type: type [string,null] is not supported, property \\\"LastPingDtTm\\\" has invalid type: type [date,null] is not supported, property \\\"LastProcessedDtTm\\\" has invalid type: type [date,null] is not supported, property \\\"Mobile\\\" has invalid type: type [string,null] is not supported\". To continue, delete the app schema in Atlas App Services, or update it to match the app schema defined on the device\" (error_code=225, try_again=false, recovery_disabled=false)As you can see from the code above I tried configuring the handler using;andWere these hooked up correctly?",
"username": "Raymond_Brack"
},
{
"code": "ManualRecoveryHandlerClientResetCallbackClientResetExceptionSessionSessionExceptionOnSessionError+=OnSessionError=PartitionSyncConfiguration syncConfig = ConnectionService.GetSyncConfig(Enums.AppConfiguration.CrmRealmWebApi, RealmConstants.PUBLIC_PARTITION, _adminUser);\n\nsyncConfig.OnSessionError = (sender, e) =>\n{\n Console.Write($\"Session error code {e.ErrorCode}: {e.Message}\");\n \n // handle the various session errors here\n switch(e.ErrorCode)\n {\n case ErrorCode.UserMismatch:\n // this should be the one you're experiencing\n break;\n case ErrorCode.*SomeCode*:\n break;\n case ErrorCode.*SomeCode*:\n break;\n // etc\n default:\n break;\n }\n};\n\nsyncConfig.ClientResetHandler = new DiscardLocalResetHandler\n{\n // All of the following callbacks are optional, I put them here just to make you aware.\n // You can skip them for now.\n OnBeforeReset = (beforeFrozen) => { /* your code */ },\n OnAfterReset = (beforeFrozen, after) => { /* your code */ },\n ManualResetFallback = (clientResetException) => { /* your code */ },\n};\n\nSyncConfigurationBase",
"text": "The code that you showed should not compile as\nManualRecoveryHandler takes the delegate ClientResetCallback which expects a ClientResetException as parameter. But you pass a delegate with 2 parameters, a Session and a SessionException.\nAre you sure you’re running the code that you are showing?As a side note, OnSessionError is not an event, so there’s no need to use the subscribe syntax (+=). OnSessionError is a property so just set (=) your callback.About client reset, we strongly encourage not to use the manual client strategy. Instead, users are encouraged to use the discard local strategy as that is generally much simpler to use (but be sure you understand that it discards unsynced changes in the event of a client reset).Your code should look likeWhen I said:you need to set a callback in SyncConfigurationBase.OnSessionErrorI simply meant any subclass of SyncConfigurationBase, not the base class itself.I hope this helps.",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "Hi Andrea,Thanks for your assistance however I’m still having problems with the OnSessionError and ClientResetHandler so I’ve raised a support ticket.",
"username": "Raymond_Brack"
},
{
"code": "OnSessionError",
"text": "What type of problems? Is your OnSessionError callback never reaching? Or what is the problem?And where did you raise a support ticket? We usually monitor a bunch of resources but none of those show your ticket.",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "Hi Andrea,This the ticket I raised: 00968351: Using OnSessionError and ClientResetHandler. I attached the test project and the SDK logs to show what was happening.",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "Hi,Apparently I don’t have access to that portal, that’s why I could not see the ticket. I hope they can help you figure out what the problem is.Andrea",
"username": "Andrea_Catalini"
}
]
| Sync Errors Not Returned | 2022-07-15T05:32:01.563Z | Sync Errors Not Returned | 2,985 |
null | [
"replication"
]
| [
{
"code": "",
"text": "I’m running a free tier Atlas 5.0.9 replica set cluster and I used to be able to query the collections using op_query with no filter. As of a few days ago, attempting this returns no records(I can see the empty results returned in the network trace). If I add a simple filter to the op_query request then rows are returned. Did something change in Atlas recently?\nI know op_query is deprecated and I am planning to switch to op_msg, but was there some change on the Atlas server which caused this change in behavior?",
"username": "Brian_Derwart"
},
{
"code": "op_queryop_queryop_msgop_query",
"text": "Hi @Brian_Derwart welcome to the community!Since op_query is deprecated and will be removed in future MongoDB version, I think this is an excellent time for you to update the code you have Did something change in Atlas recently?Yes Atlas is constantly evolving and changing, although what exactly changed in the op_query front, I’m not really sure. My suggestion is it’s best to move to op_msg sooner rather than later since this will be the supported one moving forward, and will avoid any future issues with using op_query.Best regards\nKevin",
"username": "kevinadi"
}
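A minimal sketch of issuing the same unfiltered query through the official Node.js driver, which speaks OP_MSG to modern servers, so no hand-built OP_QUERY messages are needed. The connection string, database, and collection names below are placeholders:

const { MongoClient } = require("mongodb");

async function run() {
  const client = new MongoClient("mongodb+srv://<user>:<password>@<cluster>/"); // placeholder URI
  try {
    await client.connect();
    // An empty filter returns all documents, the same intent as the old no-filter OP_QUERY
    const docs = await client.db("test").collection("coll").find({}).toArray();
    console.log(docs.length);
  } finally {
    await client.close();
  }
}
run();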
]
| Atlas no longer returns rows to op_query without a filter | 2022-07-21T17:49:52.690Z | Atlas no longer returns rows to op_query without a filter | 1,260 |
null | []
| [
{
"code": "",
"text": "Hello friends and MongoDB developers. I’m digging all the shards and enjoy learning about MongoDB. Have now completed a few courses (M001, M131, M201, M220N) and wanted to stop in and introduce myself. Happy to be a part of the community and enjoying my time at University. Also, I had a blast at Mongo World 2022 and may have met some of you in person! Loved the concert! Will Atwood",
"username": "Will_Atwood1"
},
{
"code": "",
"text": "Welcome @Will_Atwood1 to the community!Glad you enjoyed the events so far, and thanks for the kind words We are currently in the middle of rebooting the MongoDB User Groups (MUGs), and I thought you might be interested in meeting your local MongoDB communities. You can find the current list of MUGs in User Groups & Events - MongoDB Developer Community Forums so please join the one closest to you! Hope you’ll have fun.Best regards\nKevin",
"username": "kevinadi"
}
]
| New friends at MongoDB | 2022-07-22T02:31:19.881Z | New friends at MongoDB | 2,229 |
null | [
"next-js"
]
| [
{
"code": "",
"text": "I am facing similar issue as 159188 and 161897, but in v1, on a M0 Sandbox cluster, while connecting from an app deployed on Vercel in US region. It’s taking more than 4s to perform a simple Insert into a fresh collection. Due to which, my Edge function on Vercel is timing out.Which is the biggest bottleneck here?Have theinternal optimizationsmentioned in the previous threads already been implemented in v1?The lightweight nature of REST workflow is getting voided by the excessive latency with it.",
"username": "Harshith_Thota"
},
{
"code": "",
"text": "Hi Harshith -we now provide a workaround to use a local region for your URL, if you are running Azure or a region that is not supported in our global deployment, we now provide the ability to make this a local deployment option:Here’s how to do it:Go to App ServicesClick ’ Create new app ’In the form, choose ‘Advanced’ and choose a local settingGo to Rules , and create default read and write for your linked clusterGo to Authentication and enable ’ api key '. Save that config, then create an API key.Go to HTTPS endpoints and click on the Data API tab. Enable the Data API.This will create a new URL for your Data API, as we don’t yet support the ability to change the deployment model of an existing instance of the app/Data API. Your logs, etc. will have to be accessed via this new app.We will make this easier to do in the Atlas UI soon, but let me know if this helps with performance, as well as offer GCP soon.",
"username": "Sumedha_Mehta1"
}
]
| Data API is still slow in v1 | 2022-07-12T12:49:41.623Z | Data API is still slow in v1 | 3,016 |
null | []
| [
{
"code": "",
"text": "Hey guys!I am trying to connect to my database with Power BI\nI have setup the ODBC following the provided guides and i am getting a successful connection.\nProblem is, i can ONLY see the INFORMATION_SCHEMA and mysql databases and not my own ones.What to do`?",
"username": "Jake_Martin_Jacob_Madsen"
},
{
"code": "",
"text": "Any luck with this? Having the same problem, no idea why. It can see old ones, but not new ones.",
"username": "Sean_Brophy"
},
{
"code": "",
"text": "Got the same problem too… Any clue?",
"username": "Joseph_Boyede"
},
{
"code": "",
"text": "@Joseph_Boyede - My issue turned out to be the refresh rate of our BI connector. We set it to refresh daily (which I then promptly forgot about). Changing the refresh temporarily will show you the new collections, but don’t forget to set it back to what you need it.",
"username": "Sean_Brophy"
}
]
| Missing Databases in ODBC for power bi | 2021-12-17T00:31:19.750Z | Missing Databases in ODBC for power bi | 2,761 |
[]
| [
{
"code": "final class Idea: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: String = UUID().uuidString\n @Persisted var ownerID: String = \"\" // user._id\n @Persisted var title: String = \"\"\n @Persisted var created: Date = Date()\n @Persisted var body: String = \"\"\n}\n{\n \"rules\": {},\n \"defaultRoles\": [\n {\n \"name\": \"owner-write\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": {\n \"ownerID\": \"%%user.id\"\n }\n }\n ]\n}\nflexibleSyncConfigurationinitialSubscription if let user = app.currentUser {\n let config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"user_ideas\") != nil {\n return\n } else {\n subs.append(QuerySubscription<Idea>(name: \"user_ideas\") {\n $0.ownerID == user.id\n })\n }\n })\n ContentView()\n .environment(\\.realmConfiguration, config)\n }\nstruct ContentView: View {\n @ObservedResults(Idea.self) var ideas\n @State private var newIdea = \"\"\n \n var body: some View {\n VStack(alignment: .leading) {\n TextEditor(text: $newIdea)\n if newIdea != \"\" {\n Image(systemName: \"plus.circle.fill\")\n .onTapGesture {\n addIdea()\n }\n }\n }\n }\n \n private func addIdea() {\n let idea = Idea()\n idea.title = newIdea\n idea.ownerID = app.currentUser!.id\n $ideas.append(idea)\n newIdea = \"\"\n }\n}\nrealm.subscriptions.write",
"text": "I’m attempting my first pass at an app using Flexible Sync.The current error I’m getting is when writing to the Realm, this shows up on the server:\nScreenshot 2022-07-20 at 10.03.36 AM1874×288 35.5 KB\nFor search purposes: Client attempted a write that is outside of permissions or query filters; it has been reverted (ProtocolErrorCode = 231) cannot write to table before opening a subscription on itEverything I’m doing thus far has been from examples, but with my own objects.The Idea Object looks like this:And my sync permissions look like this:And I’m creating a flexibleSyncConfiguration with initialSubscription like this:Here is my simplified code in the ContentView(). Add some text to a state variable, show a button, tap the button to add, add the idea object and reset the state variable like this:I assumed this would be an extremely basic implementation, but the error doesn’t make any sense. There is a subscription (I did that while creating the flexible sync configuration) and that is passed in as the environment.I looked at @Andrew_Morgan’s article about RChat, and although there are lots of spots where subscriptions are added (realm.subscriptions.write), there are no other mentions of adding data, so I assumed that we do this as we always have.Am I missing something here?",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Also having this issue. Wish I could help",
"username": "Ethan_Lipnik"
},
{
"code": "",
"text": "Turns out the issue I had was dev mode was still on",
"username": "Ethan_Lipnik"
},
{
"code": "",
"text": "Wow! You’re right, that fixed it. So weird that it doesn’t say anywhere in the “Development Mode” warnings / prompts / etc anything about not being able to make writes while Development Mode is on.With Partition Sync I was able to leave development mode on while developing. It seems like I will have to just toggle Development Mode on when wanting to update the models and then toggle it off to test usage.So weird. If anyone at Mongo could explain why this is so different (and undocumented) for Flexible, that would be great! Thanks.",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "This is very weird, we do all of our tests with development mode activated and this is working. do you get any error when the view opens the realm, can you check if the subscription exist before adding the object?, It may give us a clue why this is happening.",
"username": "Diana_Maria_Perez_Af"
},
{
"code": "",
"text": "Hi, also looking into this on the server-side and not seeing any clear reason why this would be happening but working on reproducing it. Do you mind sending a link to your application in the cloud-ui?",
"username": "Tyler_Kaye"
},
{
"code": "membersowner._id == user.idownerId == user.idmembersmembersIds",
"text": "Dev mode seemed to have fix my issue at first but it returned. The solution was something also quite simple but a pain to figure out. I was subscribing to a table but the query didn’t give me any warnings or errors against checking a relationship collection.For example, if I’m subscribing for projects and need to check if the members contains the current user by id or if the owner is the current user owner._id == user.id it silently failed. Replacing it with ownerId == user.id fixed it. This isn’t an ideal solution since it could make the members and membersIds out of sync. Hopefully this gets resolved or at least better documented. Spent a few hours on this issue",
"username": "Ethan_Lipnik"
},
{
"code": "IdeaIdea@ObservedResults\"user_ideas\"flexibleSyncConfiigurationinitialSubscriptions",
"text": "There is no error on the device. The button writes the Idea object to the realm. The Idea shows in a list utilizing the @ObservedResults and then quickly disappears. That’s why I checked the server logs and saw this error, that the server is reverting the change.As far as checking if the subscription exists before adding the object, as noted above, I only have one subscription \"user_ideas\", and it is initialized in the flexibleSyncConfiiguration. If it doesn’t exist before adding the object, then I don’t understand what initialSubscriptions is supposed to do.",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Yeah, this one I did know from this part of the documentation:Flexible Sync only supports top-level primitive fields with a scalar type as queryable fieldsIt has to be top level. It’s very much the least Realmish thing about Flexible sync that I’ve found. Instead of just dealing with objects all the way down, we are now mixing MongoDB patterns that aren’t abstracted away. When linking an object, the mongodb document is just holding the _id of that object. We have to do that now in the top level with Flexible sync.As far as it being out of sync though, that shouldn’t happen because the _id cannot change. SO it should always point to the same object if you’re pointing to immutable fields like _id.Hope that makes sense.",
"username": "Kurt_Libby1"
},
{
"code": "membersIdsmembersmembersIds",
"text": "The issue I have with the pattern is I was never alerted that it was wrong so the subscription succeeded.Also for stuff being out of sync, I mean as I would need to have 2 variables membersIds and members (one for querying and one for actually getting the user objects). I could just use membersIds and query the individual members if I need but obviously thats not ideal when locally I could have just used the relationships before. I hope we get non-top level primitive querying soon",
"username": "Ethan_Lipnik"
},
{
"code": "",
"text": "I agree, that’s a major headache that I haven’t considered.Two top level fields:That’s definitely a mess I hadn’t thought of ",
"username": "Kurt_Libby1"
},
{
"code": "subscriptions.append(QuerySubscription<SKAUserObject>(name: subscription, query: {\n $0.projects._id == id\n }))\n",
"text": "With these limitations in mind, how would I accomplish something like this? Since I can’t access the relationships I don’t know any solution that would allow me to access the members of a project",
"username": "Ethan_Lipnik"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Writing to Realm with Flexible Sync | 2022-07-20T15:33:51.328Z | Writing to Realm with Flexible Sync | 3,211 |
|
null | [
"data-modeling",
"connector-for-bi"
]
| [
{
"code": "",
"text": "I am trying to install this driver on laptop with .dmg file but not able to install it Releases · mongodb/mongo-bi-connector-odbc-driver · GitHubIts says the installer encountered an error that caused the installation to fail. contact the software manufacturer for assistance",
"username": "Upasana_Mittal"
},
{
"code": "",
"text": "It could be due to security settings\nCheck this",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Its not MongoDB. Its mongo Bi connector ODBC driver to connect with tableau that is not installing",
"username": "Upasana_Mittal"
},
{
"code": "",
"text": "Yes i know that\nSince software is not from Mac/apple you have to check “trust software from other sources”",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "\nimage1236×882 89.5 KB\n\nNo. I am guessing it’s not permissions issue. This is probably incompatible with Macbook M1Pro. Have a look at the screenshot.",
"username": "Upasana_Mittal"
},
{
"code": "",
"text": "Check if more details in installer logs",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "how to check installer logs?",
"username": "Upasana_Mittal"
},
{
"code": "",
"text": "Check /var/log/install.log\nThere may be other methods like keyborad shortcuts\nCheck this linkZoom Support may need to get install logs from your Mac computer to further troubleshoot a particular issue. You may be required by Technical Support to submit the install log if you experienced an...",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I have the same issue on my m1 mac. My Mnago BI connector was working with no issues but ended up erroring out, so when I tried to delete and re install it , I get the same error as above.",
"username": "Harun_Gunasekaran"
},
{
"code": "",
"text": "Have you found any alternative or solution to this issue. Kindly share if know. Thanks",
"username": "Upasana_Mittal"
},
{
"code": "",
"text": "Install log for reference :\nMongoDB ODBC Driver 1.4.1 Installation Log2022-06-20 13:36:28-04 hgs-MacBook-Pro Installer[3004]: Opened from: /Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg2022-06-20 13:36:28-04 hgs-MacBook-Pro Installer[3004]: Failed to load specified background image2022-06-20 13:36:28-04 hgs-MacBook-Pro Installer[3004]: Product archive /Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg trustLevel=3502022-06-20 13:36:28-04 hgs-MacBook-Pro Installer[3004]: Could not load resource readme: (null)2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: ================================================================================2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: User picked Standard Install2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: Choices selected for installation:2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: Install: “MongoDB ODBC Driver 1.4.1”2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: Install: “ODBC Manager”2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#odbc-manager-component.pkg : odbc_manager : 02022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: Install: “MongoDB ODBC”2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#mongodb-odbc-component.pkg : mongodb_odbc : 02022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: ================================================================================2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: It took 0.00 seconds to summarize the package selections.2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: -[IFDInstallController(Private) _buildInstallPlanReturningError:]: location = file://localhost2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: -[IFDInstallController(Private) _buildInstallPlanReturningError:]: file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#odbc-manager-component.pkg2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: -[IFDInstallController(Private) _buildInstallPlanReturningError:]: file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#mongodb-odbc-component.pkg2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: Set authorization level to root for session2022-06-20 13:36:33-04 hgs-MacBook-Pro Installer[3004]: Authorization is being checked, waiting until authorization arrives.2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Administrator authorization granted.2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Packages have been authorized for installation.2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Will use PK session2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Using authorization level of root for IFPKInstallElement2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Install request is requesting Rosetta translation.2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Starting installation:2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Configuring volume “Macintosh HD”2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Preparing disk for local booted install.2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Free space on “Macintosh HD”: 367.96 GB (367964041216 bytes).2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Create temporary directory “/var/folders/f1/kn3r7dvn45b8gp0t81hm1frw0000gn/T//Install.3004OvHcor”2022-06-20 13:36:35-04 
hgs-MacBook-Pro Installer[3004]: IFPKInstallElement (2 packages)2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: Current Path: /System/Library/CoreServices/Installer.app/Contents/MacOS/Installer2022-06-20 13:36:35-04 hgs-MacBook-Pro installd[7782]: PackageKit: Adding client PKInstallDaemonClient pid=3004, uid=501 (/System/Library/CoreServices/Installer.app/Contents/MacOS/Installer)2022-06-20 13:36:35-04 hgs-MacBook-Pro Installer[3004]: PackageKit: Enqueuing install with framework-specified quality of service (utility)2022-06-20 13:36:35-04 hgs-MacBook-Pro installd[7782]: PackageKit: Set reponsibility for install to 30042022-06-20 13:36:35-04 hgs-MacBook-Pro installd[7782]: PackageKit: ----- Begin install -----2022-06-20 13:36:35-04 hgs-MacBook-Pro installd[7782]: PackageKit: request=PKInstallRequest <2 packages, destination=/>2022-06-20 13:36:35-04 hgs-MacBook-Pro installd[7782]: PackageKit: packages=(“PKLeopardPackage <id=ODBC Manager, version=0, url=file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#odbc-manager-component.pkg>”,“PKLeopardPackage <id=MongoDB ODBC, version=0, url=file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#mongodb-odbc-component.pkg>”)2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Extracting file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#odbc-manager-component.pkg (destination=/Library/InstallerSandboxes/.PKInstallSandboxManager/F815995D-60D8-426F-9D2D-78D881FFD014.activeSandbox/Root, uid=0)2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Extracting file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#mongodb-odbc-component.pkg (destination=/Library/InstallerSandboxes/.PKInstallSandboxManager/F815995D-60D8-426F-9D2D-78D881FFD014.activeSandbox/Root, uid=0)2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: prevent user idle system sleep2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: suspending backupd2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Using trashcan path /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/PKInstallSandboxTrash/F815995D-60D8-426F-9D2D-78D881FFD014.sandboxTrash for sandbox /Library/InstallerSandboxes/.PKInstallSandboxManager/F815995D-60D8-426F-9D2D-78D881FFD014.activeSandbox2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: PKInformSystemPolicyInstallOperation failed with error:An error occurred while registering installation with Gatekeeper.2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Shoving /Library/InstallerSandboxes/.PKInstallSandboxManager/F815995D-60D8-426F-9D2D-78D881FFD014.activeSandbox/Root (2 items) to /2022-06-20 13:36:36-04 hgs-MacBook-Pro install_monitor[3009]: Temporarily excluding: /Applications, /Library, /System, /bin, /private, /sbin, /usr2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit (package_script_service): Preparing to execute script “./postinstall” in /private/tmp/PKInstallSandbox.XL9hIu/Scripts/MongoDB ODBC.O0fBYz2022-06-20 13:36:36-04 hgs-MacBook-Pro package_script_service[7792]: PackageKit: Executing script “postinstall” in /tmp/PKInstallSandbox.XL9hIu/Scripts/MongoDB ODBC.O0fBYz2022-06-20 13:36:36-04 hgs-MacBook-Pro package_script_service[7792]: Set responsibility to pid: 3004, responsible_path: /System/Library/CoreServices/Installer.app/Contents/MacOS/Installer2022-06-20 13:36:36-04 hgs-MacBook-Pro package_script_service[7792]: 
Preparing to execute with Rosetta Intel Translation: ‘/tmp/PKInstallSandbox.XL9hIu/Scripts/MongoDB ODBC.O0fBYz/postinstall’2022-06-20 13:36:36-04 hgs-MacBook-Pro package_script_service[7792]: ./postinstall: arch: posix_spawnp: /tmp/PKInstallSandbox.XL9hIu/Scripts/MongoDB ODBC.O0fBYz/postinstall: No such file or directory2022-06-20 13:36:36-04 hgs-MacBook-Pro package_script_service[7792]: Responsibility set back to self.2022-06-20 13:36:36-04 hgs-MacBook-Pro install_monitor[3009]: Re-included: /Applications, /Library, /System, /bin, /private, /sbin, /usr2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: releasing backupd2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: allow user idle system sleep2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Install Failed: Error Domain=PKInstallErrorDomain Code=112 “An error occurred while running scripts from the package “mongodb-connector-odbc-1.4.1-macos-x86-64.pkg”.” UserInfo={NSFilePath=./postinstall, NSURL=file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#mongodb-odbc-component.pkg, PKInstallPackageIdentifier=MongoDB ODBC, NSLocalizedDescription=An error occurred while running scripts from the package “mongodb-connector-odbc-1.4.1-macos-x86-64.pkg”.} {NSFilePath = “./postinstall”;NSLocalizedDescription = “An error occurred while running scripts from the package \\U201cmongodb-connector-odbc-1.4.1-macos-x86-64.pkg\\U201d.”;NSURL = “file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#mongodb-odbc-component.pkg”;PKInstallPackageIdentifier = “MongoDB ODBC”;}2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Cleared responsibility for install from 3004.2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Cleared permissions on Installer.app2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Running idle tasks2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Done with sandbox removals2022-06-20 13:36:36-04 hgs-MacBook-Pro Installer[3004]: install:didFailWithError:Error Domain=PKInstallErrorDomain Code=112 “An error occurred while running scripts from the package “mongodb-connector-odbc-1.4.1-macos-x86-64.pkg”.” UserInfo={NSFilePath=./postinstall, NSURL=file://localhost/Volumes/mongodb-odbc/mongodb-connector-odbc-1.4.1-macos-x86-64.pkg#mongodb-odbc-component.pkg, PKInstallPackageIdentifier=MongoDB ODBC, NSLocalizedDescription=An error occurred while running scripts from the package “mongodb-connector-odbc-1.4.1-macos-x86-64.pkg”.}2022-06-20 13:36:36-04 hgs-MacBook-Pro installd[7782]: PackageKit: Removing client PKInstallDaemonClient pid=3004, uid=501 (/System/Library/CoreServices/Installer.app/Contents/MacOS/Installer)2022-06-20 13:36:37-04 hgs-MacBook-Pro Installer[3004]: Install failed: The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.2022-06-20 13:36:37-04 hgs-MacBook-Pro Installer[3004]: IFDInstallController 351F320 state = 82022-06-20 13:36:37-04 hgs-MacBook-Pro Installer[3004]: Displaying ‘Install Failed’ UI.2022-06-20 13:36:38-04 hgs-MacBook-Pro Installer[3004]: ‘Install Failed’ UI displayed message:‘The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.’.2022-06-20 13:36:40-04 hgs-MacBook-Pro Installer[3004]: Package Removal: Package is not writable. 
Not offering removal.2022-06-20 13:36:40-04 hgs-MacBook-Pro Installer[3004]: Package Removal: Package cannot be removed.",
"username": "Harun_Gunasekaran"
},
{
"code": "",
"text": "Mongo had a latest release of drivers. Releases · mongodb/mongo-bi-connector-odbc-driver · GitHubThis worked for me. should work for you too",
"username": "Upasana_Mittal"
},
{
"code": "",
"text": "I was able to install it but wasnt able to start the ODBC manager , which still states the developers need to update the application to run on this OS version. I am running 12.4 . How about you ?",
"username": "Harun_Gunasekaran"
},
{
"code": "",
"text": "I have downloaded iODBC from somewhere else separately. iODBC Driver Manager: iODBC Downloadsand this is working for me to create system DSN for mongo",
"username": "Upasana_Mittal"
},
{
"code": "",
"text": "I tried that ! What keywords did you use to set up the DNS ? I am not sure but iODBC just crashes for me when I set the following:host\nport\ndatabase\nusername\npassword",
"username": "Harun_Gunasekaran"
}
]
| Not able to install Mongo BI connector ODBC Driver on MAC OS | 2022-06-03T20:41:38.682Z | Not able to install Mongo BI connector ODBC Driver on MAC OS | 5,479 |
[
"node-js"
]
| [
{
"code": "",
"text": "Ok so below is an image showing the “new” and “old” object ID and timestamp format mongodb uses. Old is referring to the imported json data and new is referring to data i have created.\n\nNow the main issue i have with this difference is that i cannot ammend the imported data but i can created new data and read the imported - And i believe the reason is because the imported data for some reason doesn’t have the ObjectId() or ISODate() functions just the string.So i want to find a way to add those or import it such that it imports usable - and yes, I have already tried manually adding ISODate() and ObjectId then reimporting it to match the new data, that just throws JSON errors and doesnt properly import.Thanks!",
"username": "Magestic_Chicen"
},
{
"code": "",
"text": "For added clarification*I imported a json file that contained my mongodb database using a mongoimport --collection jsons --jsonArray -d main -v databasemain.json command.\nThe old is the imported data new is created data\nI cannot edit or add to the imported data",
"username": "Magestic_Chicen"
},
{
"code": "_idupdatedAt",
"text": "it would be better if you included those JSON error messages so to tell what is wrong._id field is not required to be a proper ObjectId to insert the data, as your updatedAt field not being a Date. it will just be a bit different to index that document. You may try importing your data to a temporary collection, then use other query methods to change these strings into proper ObjectId and Date formats to insert into a new collection, then drop the old one.",
"username": "Yilmaz_Durmaz"
},
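A hedged sketch (mongosh) of the conversion step suggested above. It assumes the strings were imported into a temporary collection named "jsons_import" and should end up in "jsons" with native types; "updatedAt" follows the screenshots, so add any other date fields the documents actually contain:

db.jsons_import.aggregate([
  { $addFields: {
      _id: { $toObjectId: "$_id" },         // string -> ObjectId
      updatedAt: { $toDate: "$updatedAt" }  // string -> Date
  } },
  { $merge: { into: "jsons" } } // write the converted documents into the target collection
])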
{
"code": "",
"text": "I fixed it by removing and reinserting all of the data",
"username": "Magestic_Chicen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error importing JSON data involving timestamps and objectIDs | 2022-07-16T21:27:51.381Z | Error importing JSON data involving timestamps and objectIDs | 3,419 |