image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"app-services-cli"
]
| [
{
"code": "",
"text": "We are building a web app, and realized through some testing that users could easily delete their own user account by being logged in, in the browser, and running the command 'await app.deleteUser(app.curentuser); ’This immediately deletes the current user from app users in Realm. how can I block or prevent this? Our app is membership based, and ties into recurring billing, and we do not want the users to be able to self-delete the user record if at all possible.Thanks in advance",
"username": "Minnesotaa_MN"
},
{
"code": "",
"text": "I don’t have an answer but you have a very good question; a user being able to call that function to delete themselves seems like an oversight.While it doesn’t appear to be a security risk, the issue is it deletes everything that user is connected to in Atlas - sure they can sign up again but that will be a new user account with a different uid.It’s documented (node.js) here Delete User as well as in other SDK’s. Perhaps one of the MongoDB folks can chime in on how to prevent that from happening - maybe a rule?",
"username": "Jay"
}
]
| How to prevent users from deleting themselves | 2022-10-18T04:39:40.782Z | How to prevent users from deleting themselves | 2,259 |
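One hedged mitigation for the thread above, not confirmed by MongoDB in the replies: App Services can fire an authentication trigger on user DELETE events, so a trigger function can at least flag the billing record tied to a self-deleted account instead of losing track of it. The database name "app", the collection "memberships", the "ownerId" field, and the "mongodb-atlas" data source name below are all assumed placeholders.

    // Hedged sketch of an App Services authentication trigger (operation type: DELETE).
    exports = async function (authEvent) {
      const deletedUser = authEvent.user; // metadata of the account that was just removed
      const memberships = context.services
        .get("mongodb-atlas")          // assumed linked data source name
        .db("app")                     // placeholder database
        .collection("memberships");    // placeholder collection

      // Keep the membership/billing document, but mark it so recurring billing
      // can be reconciled even though the Realm user id is now gone.
      await memberships.updateOne(
        { ownerId: deletedUser.id },
        { $set: { accountDeletedAt: new Date(), needsBillingReview: true } }
      );
    };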
null | [
"queries",
"graphql",
"realm-web"
]
| [
{
"code": "{\n \"businesses\": {\n \"ref\": \"#/relationship/occasionally-business-db/Business/Businesses\",\n \"source_key\": \"businesses\",\n \"foreign_key\": \"_id\",\n \"is_list\": true\n }\n}\n{\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"userId\": {\n \"bsonType\": \"objectId\"\n },\n \"businesses\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"objectId\"\n }\n },\n \"createdAt\": {\n \"bsonType\": \"date\"\n },\n \"updatedAt\": {\n \"bsonType\": \"date\"\n }\n },\n \"title\": \"AdminAccess\"\n}\nquery {\n adminAccess {\n _id\n\t\tcreatedAt\n\t\tupdatedAt\n\t\tuserId\n businesses {\n name\n }\n }\n}\n",
"text": "Okay so I want one collection to have a field with an array of objectIds that point to another collection. The objectids are the _id field on this other collection.The problem is it doesn’t work.this is the relationship.json file on the one in the one-to-many relationshipAnd then here is the schema on that collection:Interestingly, if I use a string field on the business collection, which is the many in the one-to-many relationship, it does work. Seems like it should work in this case. What am I missing?And when I say it doesn’t work, I mean it doesn’t work for graphql.I am trying to query like this:but businesses returns an empty array even though it shouldn’t. Has anyone had this problem?Thanks!",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "I don’t know if this related but I noticed that when I had a relationship and tried to use a custom resolver the GraphiQL tool didn’t give me results but when I used a built in resolver the results were available.",
"username": "thecyrusj13_N_A"
}
]
| Realm one-to-many relationship with _id foreign_key and array of objectIds not working for graphql | 2022-02-25T00:08:01.433Z | Realm one-to-many relationship with _id foreign_key and array of objectIds not working for graphql | 3,988 |
null | [
"queries",
"python"
]
| [
{
"code": " File \"dot_find_test.py\", line 154\n data = my_collection.find({ ID : { $lt: BPACO-00001 } })\n ^\nSyntaxError: invalid syntax\n",
"text": "I can’t find anyone else that has this problem, do I need to import something? The error is specifically pointing to the ‘$’",
"username": "Jake_Cordes"
},
{
"code": "",
"text": "You have to put literals within quotes.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Syntax error with "$lt" and "$gt" | 2022-10-21T15:12:23.415Z | Syntax error with “$lt” and “$gt” | 1,720 |
null | [
"node-js",
"crud",
"mongoose-odm"
]
| [
{
"code": "{\n id:1\n name: 'John',\n phone: {cell: 1234, work: 15345}, \n grades: [\n { id: 'C1', grade: 5, q:1},\n { id: 'C2', grade: 6, q:1},\n { id: 'C3', grade: 3, q:2}\n ]\n}\ninput = { \n id:1, \n name='Samanta', \n phone: { cell: 2346},\n grades: { id:'C2', grade: 7}\n}\n{\n id:1\n name: 'Samanta',\n phone: {cell: 2346, work: 15345}, \n grades: [\n { id: 'C1', grade: 5, q:1},\n { id: 'C2', grade: 7, q:1},\n { id: 'C3', grade: 3, q:2}\n ]\n}\n",
"text": "Hi, I would like to update some fields by MongoDB.sample of schema:Change Input (find id=1 and change the name and grade with id=C2 and change only phone cell)Expected Result:Actually, I am looking to find a solution to merge the input and document by Ids (document and subdocument)\nI appreciate any advice (it can be MongoDB or mongoose).",
"username": "Mehran_Ishanian1"
},
{
"code": "$set : { \"phone.work\" : 15435 }\n",
"text": "For setting the work phone something likeshould work.For updating, one element of an array, I am pretty sure that you will need something like $map and $mergeObjects.",
"username": "steevej"
},
{
"code": " db.mongoose.updateOne({ \"id\": 1, \"grades.id\": \"C2\" }, { $set: { \"grades.$.grade\": 7, \"phone.cell\": 4566 , \"name\": \"Samanta\"} })\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\nmongoose> db.mongoose.find()\n[\n {\n _id: ObjectId(\"6350c1e16424749782987f72\"),\n id: 1,\n name: 'Samanta',\n phone: { cell: 4566, work: 15345 },\n grades: [\n { id: 'C1', grade: 5, q: 1 },\n { id: 'C2', grade: 7, q: 1 },\n { id: 'C3', grade: 3, q: 2 }\n ]\n }\n]\nmongosh",
"text": "Hi @Mehran_Ishanian1 and welcome to the community forum!!Further to @steevej’s suggestion, I experimented a little with your example document and arrive at this query:However, please note that the above command is based on the sample data provided above and has been tested on the latest mongoDB version 6.0.2 on mongosh\nPlease test the above on your environment based on the version and sample data in the collection.Also, please note that, if there lies a possibility to alter the data at the application level and not on the database level.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thanks, steevej for your advice.",
"username": "Mehran_Ishanian1"
},
{
"code": "",
"text": "Dear @Aasawari Thanks for your warm welcome.\nThank you, it is working fine.",
"username": "Mehran_Ishanian1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB updateOne multiple fields | 2022-10-19T15:02:31.574Z | MongoDB updateOne multiple fields | 4,509 |
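Since the thread is tagged node-js and mongoose-odm, here is a hedged translation of the accepted mongosh command to the Node.js driver; the collection and field names are taken from the sample documents above, and db is assumed to be a connected Db instance.

    // Hedged Node.js driver equivalent of the mongosh updateOne shown above.
    // The positional operator "$" targets the matched element of the "grades" array.
    await db.collection("mongoose").updateOne(
      { id: 1, "grades.id": "C2" },
      { $set: { "grades.$.grade": 7, "phone.cell": 4566, name: "Samanta" } }
    );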
null | []
| [
{
"code": "eu-west-1eu-west-1",
"text": "I am trying to deploy and manage MongoDB cluster & databases using CDK as mention in mongodb blog. As a first step, following AWS registry extension in Cloudformation:",
"username": "Krishnapriya_Sreekumar"
},
{
"code": "",
"text": "Thanks Krishnapriya. We are working to expand both the number of MongoDB Atlas resources for AWS CloudFormation as well as expanding into more AWS regions. You can expect these updates to be available in Q1 2023. Unfortunately, since AWS CDK deploys via AWS CloudFormation the same region availability restrictions will apply to AWS CDK as well.In the interim, suggest explore the Terraform CDK which deploys via Terraform instead. Here you will be able to deploy MongoDB Atlas clusters into AWS eu-west-1 region. To learn more see here: CDK for Terraform | Terraform | HashiCorp Developer",
"username": "Zuhair_Ahmed"
},
{
"code": "",
"text": "Thank you Zuhair for the recommendation. Another approach suggested to me was to upload the registry in github manually to the region where this is not present and then use it",
"username": "Krishnapriya_Sreekumar"
},
{
"code": "us-east-1eu-west-1",
"text": "Would that manually approach also continue to stay updated with latest commits? Our team as well as community members are regularly contributing to the MongoDB Atlas Resources for AWS CDK repo on GitHub. If it is not possible to leverage us-east-1 on AWS CDK for a months while we work on region expansion then suggest exploring if your workload can be deployed via Terraform CDK which also has a free community edition and will allow you to deploy MongoDB Atlas resources in AWS eu-west-1 today.",
"username": "Zuhair_Ahmed"
}
]
| Using CDK to deploy and manage MongoDb | 2022-10-12T10:17:41.235Z | Using CDK to deploy and manage MongoDb | 2,694 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "db.dataGroupSchema.aggregate([\n {\n \"$project\":{\n \"_id\":1,\n \"dataSourceId\":\"$dataSourceIds.value\"\n }\n },\n {\n \"$unwind\":\"$dataSourceId\"\n },\n {\n \"$group\":{\n \"_id\":\"$dataSourceId\",\n \"count\":{\n \"$sum\":1\n }\n }\n },\n {\n \"$match\":{\n \"count\":{\n \"$gt\":0\n }\n }\n },\n {\n \"$group\":{\n \"_id\":\"\",\n \"duplicatedDataSourceIds\":{\n \"$push\":\"$_id\"\n }\n }\n },\n {\n \"$project\":{\n \"duplicatedDataSourceIds\":1,\n \"_id\":0\n }\n }\n]);\ndb.dataGroupSchema.distinct(\"dataSourceIds.value\")",
"text": "Hi guys,I have an aggregation pipeline query which outputs a BSON object with a key-value pair, value being an array of strings:output:{\n“duplicatedDataSourceIds” : [\n“a75de0d0-df16-4618-9d3e-73c59b5137a5”,\n“3a83da20-3524-4fb7-aeda-26e1daadc214”,\n“450fdceb-13f1-4e4d-9ac7-4890b35268f1”,\n“409c68ea-d600-42f5-ab3f-d29aac86fc68”\n]\n}I want to reuse this array as a variable so I want just an array of strings without a key and not in an object. Similar to the result of the distinct():db.dataGroupSchema.distinct(\"dataSourceIds.value\")[\n“3a83da20-3524-4fb7-aeda-26e1daadc214”,\n“409c68ea-d600-42f5-ab3f-d29aac86fc68”,\n“450fdceb-13f1-4e4d-9ac7-4890b35268f1”,\n“a75de0d0-df16-4618-9d3e-73c59b5137a5”\n]Is there an option to do it in plain vanilla mongo script?",
"username": "Anton_Volov"
},
{
"code": "mongosh> document = {\n \"duplicatedDataSourceIds\" : [\n \"a75de0d0-df16-4618-9d3e-73c59b5137a5\",\n \"3a83da20-3524-4fb7-aeda-26e1daadc214\", \n \"450fdceb-13f1-4e4d-9ac7-4890b35268f1\",\n \"409c68ea-d600-42f5-ab3f-d29aac86fc68\"\n]\n}\nmongosh> array = document.duplicatedDataSourcesIds\n/* output is */\n[\n 'a75de0d0-df16-4618-9d3e-73c59b5137a5',\n '3a83da20-3524-4fb7-aeda-26e1daadc214',\n '450fdceb-13f1-4e4d-9ac7-4890b35268f1',\n '409c68ea-d600-42f5-ab3f-d29aac86fc68'\n]\n",
"text": "Mongo shell is javascript. In JS, to get a given field, you simply access the field, just like you do for any JS object. For example,",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for your reply @steevej. I am not sure how to implement it:\nScreenshot 2022-10-20 at 00.22.352740×664 35.2 KB\n",
"username": "Anton_Volov"
},
{
"code": "/* Store all the documents from the results cursor */\nall_results = results.toArray()\n/* Get the first document */\nfirst_result = all_results[0]\n/* Get the array of strings */\narray = first_result.duplicatedDataSourceIds\n",
"text": "The function aggregate return a cursor, not an object.Something like the following should work:Please read Formatting code and log snippets in posts before next post so that we can cut-n-paste your code.I hope you are safe.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you, @steevej!",
"username": "Anton_Volov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to present the results of the aggregation as an array of strings (values) and not as a BSON object with a key and and a value? | 2022-10-17T18:14:45.269Z | How to present the results of the aggregation as an array of strings (values) and not as a BSON object with a key and and a value? | 5,060 |
null | []
| [
{
"code": "",
"text": "Hi! Well, I think the title of the topic is pretty clear. I couldn’t find anything related to using synonyms with autocomplete operator, so I guess it is not possible. But I just wanted to make sure. I hope I’m wrong!",
"username": "German_Medaglia"
},
{
"code": "",
"text": "Hi @German_Medaglia , welcome to the MongoDB Community!You are correct, we do not support using synonyms with the autocomplete operator today. Can you share more about your use case? I’d also recommend adding this as a feedback item here so that others who are looking for something similar can vote on it too. ",
"username": "amyjian"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is there any way to use synonyms with autocomplete operator | 2022-10-17T22:58:39.029Z | Is there any way to use synonyms with autocomplete operator | 1,317 |
[
"node-js",
"compass",
"atlas-cluster"
]
| [
{
"code": "mongodmongomongodbind_ip0.0.0.0/etc/mongodb.confmongodb://<IP address of ubuntu server>:27017/?tls=true\nmongodb+srv://<username>:<password>@<some address>.mongodb.net/?retryWrites=true&w=majorityAuthentication Method",
"text": "Hello. Since I’m very new to this topic of working with MongoDB on a remote Virtual Private Server, things are a bit more complicated than that on my side.My Node.JS app and MongoDB Community Edition are both on a remote Ubuntu server.\nI remember when working with MongoDB on my local Windows 10 machine, I had to activate it with mongod , first. Only then I could enter the Mongo shell in another PowerShell instance with mongo .A. What if my website’s end users want to add or remove data to and from my in-server database? Will the server keep mongod command active even if I don’t do that? Is such thing even necessary in this scenario?This article says I have to use the IP address of my remote server hosting MongoDB when trying to connect to it (from for example a Win10 machine) when using Compass:If you want to try MongoDB, here's a GUI to make it much easier. Jack Wallen shows you how to install it.\nEst. reading time: 4 minutes\n\nI’ve changed the bind_ip to 0.0.0.0 in /etc/mongodb.confB. Is this URI string correct for connecting Compass to the remote server db:I used this URI string in my app when using the MongoDB cluster:\nmongodb+srv://<username>:<password>@<some address>.mongodb.net/?retryWrites=true&w=majorityC. Where is the username and password when MongoDB is installed on a remote server? Do I have to create them in Mongo shell on the remote server? If yes, then how?D. Does creating a user on in-server database change the URI string for Compass? Does it have something to do with Authentication Method section in Compass?E. In absence of such user, can Compass access the in-server db directly?I know that’s a lot. I appreciate your help.",
"username": "mj69"
},
{
"code": "mongodmongodb://<IP address of ubuntu server>:27017/?tls=true\nmongodb+srv://<username>:<password>@<some address>.mongodb.net/?retryWrites=true&w=majorityAuthentication Method",
"text": "Hi @mj69,Welcome to the MongoDB Community forums A. What if my website’s end users want to add or remove data to and from my in-server database? Will the server keep the mongod command active even if I don’t do that? Is such a thing even necessary in this scenario?If you start MongoDB as a service on Ubuntu, it will be running in the background and your web server will have access to the database until you shut it down manually - ReferenceB. Is this URI string correct for connecting Compass to the remote server DB:I used this URI string in my app when using the MongoDB cluster:\nmongodb+srv://<username>:<password>@<some address>.mongodb.net/?retryWrites=true&w=majorityIf you are open to using a managed service, I recommend that you consider using MongoDB Atlas before deploying a database manually, as an improperly configured database may pose security risks.C. Where is the username and password when MongoDB is installed on a remote server? Do I have to create them in Mongo shell on the remote server? If yes, then how?You need to configure security and access control - Please refer to the MongoDB Security Checklist. You should enable authentication, configure a Role-Based Access Control, and configure network encryption with TLS.D. Does creating a user on the in-server database change the URI string for Compass? Does it have something to do with the Authentication Method section in Compass?This depends on whether you wish to use Compass as a different user. In some cases, you might not want to run Compass as a super user on a regular basis. For this purpose, it would be necessary to create a user with limited permissions and use that user as part of the Compass connection string.E. In absence of the such a user, can Compass access the in-server DB directly?MongoDB Compass can connect to local or remote deployments.To gain a deeper understanding of MongoDB, I would recommend you take these courses from MongoDB University.I hope it helps!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank You. How can I connect to MongoDB Community Edition installed on my server without SSH, remotely?",
"username": "mj69"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Connecting a remote server to Compass? | 2022-10-09T02:03:52.782Z | Connecting a remote server to Compass? | 9,439 |
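For question C above (creating a username and password from the shell), a hedged example of creating a database user in mongosh before connecting from Compass; the user name, password, and database names are placeholders, and authorization must be enabled in the mongod configuration for it to take effect.

    // Run in mongosh on the server.
    use admin
    db.createUser({
      user: "appUser",                                   // placeholder user name
      pwd: passwordPrompt(),                             // prompts instead of leaving the password in shell history
      roles: [ { role: "readWrite", db: "myAppDb" } ]    // placeholder database
    })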
null | [
"spark-connector"
]
| [
{
"code": "",
"text": "Based on the documentation - https://www.mongodb.com/docs/spark-connector/current/configuration/write/, insert/update/replace operations are supported. Are there any plans to support delete operation?",
"username": "sp667"
},
{
"code": "",
"text": "Hello,Checking the JIRA link below, it explains the reason there is no support for delete operation, because there are no delete functions in the Spark API, and so there is no native support. the alternative mentioned is by using withCollection methods that can loan access to a collection and its API, but that is only available in the Scala and Java API.",
"username": "Mohamed_Elshafey"
}
]
| Is there a plan to support delete write operation? | 2022-10-21T00:42:32.113Z | Is there a plan to support delete write operation? | 2,014 |
null | [
"mongodb-shell",
"database-tools",
"backup"
]
| [
{
"code": "",
"text": "Hi everyone,I am a mongodb novice. I was tasked to migrate existing mongodb database to a remote server. The issue starts with me changing the dB path (mongodump and mongorestore work fine with the default dB path at /var/lib/mo go) at the new server.I tried to change to another directory and met with a few issues, mostly due to permissions. I then switch back to the default directory and upon executing ‘mongosh’, it is somehow stuck at the new directory, although I have revert the changes at /etc/mongo.conf for the dB path.I then attempted to uninstall the entire mongodb to try everything from scratch again but somehow mongodb shell is not uninstalled after performing sudo yum remove. I can still see it at /usr/bin.Can I just delete the mongosh folder at /usr/bin? Or is there a better way to uninstall the mongodb shell?Thanks,\nKevin",
"username": "Kevin_Choo"
},
{
"code": "sudo apt remove mongodb-mongoshsudo yum remove mongodb-mongoshmongodumpmongorestoremongoshmongod",
"text": "Hi @Kevin_ChooMongosh is seperate from the mongodb-tools and the mongodb server packages.Depending on the OS and installation method;\nubuntu, debian: sudo apt remove mongodb-mongosh\nredhat,centos: sudo yum remove mongodb-mongoshI am a mongodb novice. I was tasked to migrate existing mongodb database to a remote server. The issue starts with me changing the dB path (mongodump and mongorestore work fine with the default dB path at /var/lib/mo go) at the new server.mongodump and mongorestore don’t have a direct relationship with the data directory unless you are reading or writing dump files there. Nor does mongosh. The problems you are having are more likely related to the server (mongod) configuration.https://university.mongodb.com has some great training on the tasks you are attempting:\nMongoDB Courses and Trainings | MongoDB University\nMongoDB Courses and Trainings | MongoDB University",
"username": "chris"
},
{
"code": "",
"text": "Hi @chris ,Thanks for the prompt response. I really appreciate it. I managed to uninstall mongosh now and did a reinstall. I am using cent os by the way.I would like to actually use the non default directory(desired directory: /home/db) for the db directory. My current permission for home/db is\ndrwxr-xr-x. 3 db mongod 115 Oct 20 22:20 /home/db/However, it seems like mongod is not able to be started successfully. This is the error message I see in the terminal:\nProcess: 24751 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=100)Upon checking /var/log/mongodb/mongod.log; I see the following error:\n{“t”:{\"$date\":“2022-10-20T22:19:38.646+07:00”},“s”:“E”, “c”:“CONTROL”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“Location28596: Unable to determine status of lock file in the data directory /home/db: boost::filesystem::status: Permission denied: “/home/db/mongod.lock””}}I have googled for several hours but did not find any workaround that allow me to bypass this error. Appreciate it if you can point out any incorrect setup from my end that result in this issue.P.S: Please let me know if I need to open a new thread for this.Thanks,\nKevin",
"username": "Kevin_Choo"
},
{
"code": "getenforce/home/dbdb/home/db",
"text": "If you have SELinux enabled you are going to get a lot of issues very quickly if you change any of the default locations or users. It seems this would be a likely cause. Use getenforce to get SELinux status to see if this is the cause./home/db is owned by db mongod may need to be configured to run as this user.\nand/or mongod does not appear to have write permission on /home/db .Deviating from a standard setup will expose you to many configuration items that are configured by default.",
"username": "chris"
},
{
"code": "",
"text": "Hi @chris,Thanks for the reply. Yes I also managed to find out that it was due to SELinux issue. I set it to permissive and managed to get mongodb working.I have selected your response as the solution. Thanks again.Best regards,\nKevin",
"username": "Kevin_Choo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongodb shell is not uninstalled | 2022-10-20T12:31:19.309Z | Mongodb shell is not uninstalled | 2,519 |
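Rather than leaving SELinux in permissive mode, the usual fix for a non-default dbPath is to give the new directory the right ownership and SELinux label. A hedged sketch for the /home/db path from this thread, assuming the mongod service user is mongod and that storage.dbPath has been set to /home/db in /etc/mongod.conf:

    # Ownership for the mongod service user
    sudo chown -R mongod:mongod /home/db
    # Label the custom data directory so SELinux allows mongod to use it
    sudo semanage fcontext -a -t mongod_var_lib_t '/home/db(/.*)?'
    sudo restorecon -R -v /home/db
    sudo systemctl restart mongod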
null | [
"aggregation",
"crud",
"sharding"
]
| [
{
"code": "",
"text": "Hello,i´ve had an issue with using updateMany as update aggregation on collection to update structure of our entity including embedded documents.We are using a sharded environment. As i tried to update all documents, there occured an exception to include full shard key inside filter criteria if using updateMany with upsert: true. But normally upsert flag should be false as default like specified in official documentation. Nevertheless i had to include upsert: false as explicit parameter to deactivate upsert functionality.Is there something i have missed or missunderstood?Thank you,Best regardsPhilipp",
"username": "Philipp_Allstadt"
},
{
"code": "",
"text": "Hi @Philipp_Allstadt ,That does sound wierd as upsert is by default false.What driver and version you use against what MongoDB cluster?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny ,i used mongosh to run updateMany operation.Our cluster runs on 5.0.13 and mongosh vesion is 1.3.1.",
"username": "Philipp_Allstadt"
}
]
| Default upsert flag when using updateMany operation | 2022-10-20T11:58:16.972Z | Default upsert flag when using updateMany operation | 1,373 |
[
"performance"
]
| [
{
"code": "",
"text": "When executing performance tests, we are provisioning M30 cluster. During the test, I’ve noticed something very strange. There’s a dip in opcouters every 15 mins.\nimage1583×348 56.9 KB\nWhenever there’s a dip in opcounters, I have noticed corresponding spike in avg response time. Our application response times were best between the time ranges 12:00-12:30 & 1:00-1:30 where opcouters is steadily greater than 200/sThis is not a random occurance, we are noticing this pattern pretty much every time we run a load test.What could be the possible explanation for this anamoly?",
"username": "Sai"
},
{
"code": "",
"text": "Hi @Sai welcome to the community!There’s really not enough information to tell what’s going on based on only three graphs, but since this is only occuring during a load test (correct me if I’m wrong), then it’s very likely have something to do with the test. The 15 minutes cadence could be entirely accidental, e.g. you’re putting enough load to overwhelm the server so it needs to stop processing incoming work after ~15 mins to catch up with the queued work, then it catches up, then the cycle begins again.You might want to vary the load test with more (or less) work, and observe if the 15 minutes cadence is repeated, or they have different timings depending on the load you’re putting on them.The server logs during the test might be more enlightening on what exactly happened.Also, if this is of concern to you, have you contacted Atlas support for help with this?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "So, we have an M30 atlas cluster with 3 replicas. We’re trying to benchmark our application by running some performance tests. During the run, we observed an oddity with Atlas cluster. We noticed periodic spikes in System CPU usage across all 3 nodes of the replica set.\nimage1581×338 58.5 KB\nWhat is causing the spike every 15 mins?",
"username": "Sai"
},
{
"code": "",
"text": "Thank you @kevinadiApologies for the delay and confusion.There was no issue with opcouters, and the cluster size was not M30 from the beginning.We’ve auto-scaling enabled for our perf Mongo atlas cluster with initial instance size as M10. Major time of our perf test was spent on M10 and M20 instances (as you can tell, the red vertical lines denote the jump from one cluster size to another). As documented atlas provisioned burstable instances which resulted in extreme System CPU Steal % (~130-140%) for large periods of our perf test, which ultimately resulted in poor performanceAfter that for long-running perf tests we started using M30 instancess and both Process CPU usage and Application response times were very smooth and predictable. But nonetheless as noted here, there were periodic spikes in system CPU usage, although it did not have a drastic effect on the response time I am just curious to know what’s the reason behind periodic spikes in system CPU usage.",
"username": "Sai"
}
]
| Dip in opcounters every 15 mins | 2022-10-18T18:40:21.544Z | Dip in opcounters every 15 mins | 2,515 |
null | [
"swift",
"app-services-user-auth"
]
| [
{
"code": "",
"text": "Hello! I am struggling a bit with my authentication flow. I have been able to get Google Auth, Apple Auth and Email/password auth working but now I have ran into the following problem:When I try to create a new account with an email that already has an existing account on my app it still redirects it to the create account view while it should just log the user in. Same thing for when a user logs in without an active account on my app it should be redirected to the create account view instead of being logged in.I was thinking about checking whether the email address is already present in the database and use that in a simple if else statement to decide if the create account view should be shown or not. But since this is happening before a user is signed in I don’t have ‘access’ to the realm. So what would be the best way to check if a user already has an account or not?Thanks in advance!",
"username": "Jesse_van_der_Voorn"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Check for existing users RealmSwift | 2022-10-17T11:54:09.162Z | Check for existing users RealmSwift | 1,644 |
null | []
| [
{
"code": "",
"text": "In case I have a collection A and a collection B linked together (not embedded), is it possible to automatically cascade the deletion of a document b1 when deleting a document a1 belonging to A linked to b1 ?",
"username": "Khaled_Ben_Ahmed"
},
{
"code": "",
"text": "Hi @Khaled_Ben_Ahmed ,I’ve written an article using Atlas triggers and preimage to do exactly that:Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny ,\nThat’s exactly what I was looking for.Thanks",
"username": "Khaled_Ben_Ahmed"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Cascade delete between linked documents belonging to different collections | 2022-10-20T08:17:05.823Z | Cascade delete between linked documents belonging to different collections | 2,053 |
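A hedged, minimal version of the trigger-based approach from the linked article, applied to the A → B example in the question: a database trigger on collection A's delete events (with document preimages enabled) removes the linked documents in B. The database name, collection names, and the linking field b_ids are assumptions, not from the thread.

    // Hedged App Services database trigger on DELETE events for collection "A".
    // Requires the "Document Preimage" option so the deleted document is still available.
    exports = async function (changeEvent) {
      const deletedA = changeEvent.fullDocumentBeforeChange; // preimage of the deleted a1
      if (!deletedA || !deletedA.b_ids) return;

      const B = context.services
        .get("mongodb-atlas")   // assumed linked data source name
        .db("mydb")             // placeholder database
        .collection("B");

      // Cascade: delete every B document that a1 pointed to.
      await B.deleteMany({ _id: { $in: deletedA.b_ids } });
    };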
null | [
"atlas-functions"
]
| [
{
"code": "",
"text": "Hi community!What does “per request” mean here? Does it mean that only requests inside of functions must be executed in 150 seconds, or does it meant that function itself should executed in less than 150 seconds?I’m planning to write a function which traverse big database and made some changes. It may take hours, but every single request to DB is pretty fast. Can I do it?",
"username": "Trdat_Mkrtchyan"
},
{
"code": "",
"text": "Hi Trdat,It means the entire function needs to complete in 150 seconds as well as any other functions that are called from it.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Maybe you have some clues, how can I organize my task in Atlas ecosystem. I don’t wanna setup dedicated machine to run jobs on mongodb when something changes on DB, but from the other side jobs will take hours, so I’m not able to use functions.",
"username": "Trdat_Mkrtchyan"
},
{
"code": "",
"text": "Hi @Trdat_Mkrtchyan,There are two questions that arise from your description:If the tasks take hours because they run, say, once per day or even larger intervals, then functions can still be used from scheduled triggers that run more frequently (for example, 10-15 mins), limiting the job that each run has to do to a defined chunk, and staying under the 150 secs.If however the tasks take hours and need to run on the whole DB all the time, then having a dedicated machine is the least of your problems: an architecture that requires such a continuous maintenance/processing would have costed also in computing hours, and probably require a higher cluster tier just to ensure that these tasks don’t affect the overall performance…",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Ideally I’d like to run function when collection changes. But there are some restriction:I have a very big collection (millions) which sometimes totally updates from third party, but updates come randomly, they not separated on some logical chunks. Only way I can distinguish that third party finished updates is detect that updates started and for a period of time there’s no update occurred in collection. So I’d like to start trigger not on any insert but say after an hour since last update.Function itself can work chunk by chunk, I have fields in collection which allow to group jobs by some sign. But I can’t figure out how to organize whole task:Third party updates collection → I’m assured that updates are finished → Iterate functions chunk by chunk",
"username": "Trdat_Mkrtchyan"
},
{
"code": "",
"text": "And I’m ok to start job manually if it’s possible to iterate though DB",
"username": "Trdat_Mkrtchyan"
},
{
"code": "$clusterTime$clusterTime",
"text": "Hi @Trdat_Mkrtchyan,There are a number of possible solutions to that, one possibility, for example:Does the above make sense?",
"username": "Paolo_Manna"
},
{
"code": "operationsoperationscurrent = fromoperationscurrent = from + 1current != to",
"text": "Hi @Paolo_MannaThanks for response. Scheduling itself is not essential, and moreover I feel observing collection is not quite good idea cuz functions will change collection and fall into infinite loop. The thing that I can’t understand, how to run function on chunks. Ideally I’d like to run “something”, and it will traverse and collection DB. Only thing that comes to mind is:",
"username": "Trdat_Mkrtchyan"
},
{
"code": "",
"text": "Hi @Trdat_Mkrtchyan ,You’re of course aware of the tasks, so, as I wrote, mine was just one possibility, you may well find a different one that suits better. One thing however I wanted to clarifyfunctions will change collection and fall into infinite loop.That’s a common point that triggers have to face, and there are standard procedures to avoid that (for example, that’s what match expressions are for): as long as you can identify which kind of changes you want to react to (or not), you’ll be fine.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| What does mean "runtime per request" in Atlas Functions limits | 2022-10-19T14:41:14.600Z | What does mean “runtime per request” in Atlas Functions limits | 2,494 |
null | [
"queries",
"dot-net"
]
| [
{
"code": "",
"text": "is LINQ integrated and support by MongoDB Driver ?\nplease clear this doubt this is so confusing.",
"username": "Pankaj_Shah1"
},
{
"code": "",
"text": "if it is supported and integrated then find method is MongoDB Query API or LINQ Method?",
"username": "Pankaj_Shah1"
},
{
"code": "",
"text": "Hi @Pankaj_Shah1, welcome to the community.is LINQ integrated and support by MongoDB Driver ?Yes, LINQ queries are supported. Check out our recently published .Net Core Application tutorial using LINQ to query MongoDB:Learn how to use LINQ to interact with MongoDB in a .NET Core application.if it is supported and integrated then find method is MongoDB Query API or LINQ Method?Yes, both APIs are available for:If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is find method is LINQ method? | 2022-10-20T06:15:47.749Z | Is find method is LINQ method? | 1,330 |
null | [
"python",
"crud"
]
| [
{
"code": "is it fuccessfully functioningmodified contentsmodified data count.inserted_count.upserted_countx_new = json.dumps(x)import pymongo\nimport datetime\nimport json\n\ndef init_db(ip, db, coll):\n try:\n myclient = pymongo.MongoClient('mongodb://' + ip + '/')\n mydb = myclient[db]\n mycol = mydb[coll]\n success_condition = \"success on initializ operation\"\n\n except Exception as e: \n success_condition = \"failed on initializ operation\"\n \n return mydb, mycol, success_condition\n\n # ins_data = insert_db_data\ndef ins_data(one_or_many_bool, insert_values_json):\n try: \n if one_or_many_bool == True:\n x = mycol.insert_many(insert_values_json)\n else:\n x = mycol.insert_one(insert_values_json)\n\n success_condition_insert = \"success on ins_data operation\"\n\n except Exception as e: \n success_condition_insert = \"failed on ins_data operation\"\n\n return x , success_condition_insert\n\nip_input = input(\"Enter the ip: \")\nexist_DB_name = input(\"Enter exist DB name: \")\nexist_coll_name = input(\"Enter exist collection name: \")\nmydb, mycol, success_condition = init_db(ip_input, exist_DB_name, exist_coll_name)\nprint(success_condition)\n\ninsert_one_or_many = input(\"U are update one or many values? ( 1 for many, 0 for one ): \")\nnewvalues_str = input(\"Enter new values: \")\n\none_or_many_bool = bool(int(insert_one_or_many))\ninsert_values_json =json.loads(newvalues_str)\n\nx , success_condition_insert = ins_data(one_or_many_bool, insert_values_json)\nprint(success_condition_insert)\nx_new = json.dumps(x)\nprint(x_new)\nprint(type(x_new))\nTraceback (most recent call last):\n File \"C:\\Users\\chuan\\OneDrive\\Desktop\\10.17_connect_mongoD_練習\\test.py\", line 56, in <module>\n x_new = json.dumps(x)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\json\\__init__.py\", line 231, in dumps\n return _default_encoder.encode(obj)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\json\\encoder.py\", line 199, in encode\n chunks = self.iterencode(o, _one_shot=True)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\json\\encoder.py\", line 257, in iterencode\n return _iterencode(o, 0)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\json\\encoder.py\", line 179, in default\n raise TypeError(f'Object of type {o.__class__.__name__} '\nTypeError: Object of type InsertManyResult is not JSON serializable\n",
"text": "I want to return Json result like below,\ncontains is it fuccessfully functioning (use 0/1) , modified contents and modified data countJson result:{“ok” : 1, “msg” : [{ “name” : “Moma”, “Age” : 33} , { “name” :\n“Kara”, “Age” : 44} ], “count”: 2 }The tried code below:\nI tried to use .inserted_count and .upserted_count to count the number modified\nand x_new = json.dumps(x) should transfer data into Jsonseems I kind of know most logic, but not sure how to make logic workI want function can output in Json like this{“ok” : 1, “msg” : [{ “name” : “Moma”, “Age” : 33} , { “name” :\n“Kara”, “Age” : 44} ], “count”: 2 }",
"username": "j_ton"
},
{
"code": "json_object = json.dumps(dict(mycol.find_one({\"_id\": x.inserted_id}, { \"_id\": 0, “name” : 1, “Age” : 1 }))) \nprint(json_object)\n\n{ “name” : “Moma”, “Age” : 33} \n",
"text": "Hi @j_ton and welcome to the MongoDB community forum!!The insert_one and insert_many functions return instance of InsertOneResult and\nInsertManyResults respectively.\nHence the objects are not JSON serialisable.{“ok” : 1, “msg” : [{ “name” : “Moma”, “Age” : 33} , { “name” :\n“Kara”, “Age” : 44} ], “count”: 2 }However since currently the InsertOneResult object do not contain the full documents but rather only the inserted _id, there may not be a method to show exactly the information in your desired example.The following code below is an example to do the following:Let us know if you have further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Counting number of documents inserted or upserted via PyMongo | 2022-10-19T01:20:14.665Z | Counting number of documents inserted or upserted via PyMongo | 2,000 |
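Building on the reply above, a hedged sketch of assembling the desired {"ok", "msg", "count"} response after insert_many, using the inserted_ids that the result object does expose; error handling and the exact projected fields are assumptions.

    # Hedged sketch: insert, then re-read the inserted documents to build a JSON reply.
    result = mycol.insert_many(insert_values_json)
    docs = list(mycol.find({"_id": {"$in": result.inserted_ids}}, {"_id": 0}))
    response = json.dumps({"ok": 1, "msg": docs, "count": len(result.inserted_ids)})
    print(response)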
null | []
| [
{
"code": "db.getCollection('test_msg').find({},{'65534':1,'65533':1}).sort({'65534':1,'65533':1})\n\n/* 1 */\n{\n \"_id\" : ObjectId(\"5e9fa79a7b6a0000a5005962\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:34.628Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 2 */\n{\n \"_id\" : ObjectId(\"5e9fa79a7b6a0000a5005964\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:34.907Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 3 */\n{\n \"_id\" : ObjectId(\"5e9fa79b7b6a0000a5005967\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:35.177Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 4 */\n{\n \"_id\" : ObjectId(\"5e9fa79b7b6a0000a500596c\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:35.452Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 5 */\n{\n \"_id\" : ObjectId(\"5e9fa79b7b6a0000a500596e\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:35.456Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 6 */\n{\n \"_id\" : ObjectId(\"5e9fa79b7b6a0000a5005971\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:35.459Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 7 */\n{\n \"_id\" : ObjectId(\"5e9fa79b7b6a0000a5005975\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:35.733Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 8 */\n{\n \"_id\" : ObjectId(\"5e9fa79c7b6a0000a5005979\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:36.576Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 9 */\n{\n \"_id\" : ObjectId(\"5e9fa79c7b6a0000a5005980\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:36.857Z\"),\n \"65533\" : NumberLong(0)\n}\n\n/* 10 */\n**{**\n** \"_id\" : ObjectId(\"5e9fa79b7b6a0000a5005969\"),**\n** \"65534\" : ISODate(\"2020-04-22T02:10:35.181Z\"),**\n** \"65533\" : NumberLong(1)**\n**}**\n\n/* 11 */\n{\n \"_id\" : ObjectId(\"5e9fa79c7b6a0000a500597d\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:36.580Z\"),\n \"65533\" : NumberLong(1)\n}\n\n/* 12 */\n{\n \"_id\" : ObjectId(\"5e9fa79c7b6a0000a5005984\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:36.861Z\"),\n \"65533\" : NumberLong(1)\n}\n\n/* 13 */\n{\n \"_id\" : ObjectId(\"5e9fa79c7b6a0000a5005986\"),\n \"65534\" : ISODate(\"2020-04-22T02:10:36.862Z\"),\n \"65533\" : NumberLong(2)\n}\n",
"text": "Hello,I am new to mongoDB. From documentation, it says the query result can be sorted in multiple fields. I tried to sort by the first field (‘65534’) in date format, then the second field (‘65533’) in number format. Here is the query I issued and the result produced:You can spot well that the returned item #10 is not in the order as the sorting criteria: its date field value is earlier than the former item #9. Well, I found the output is like sorting only using the numeric field ‘65533’.I tried creating an index {‘65534’:1,‘65533’:1} but it did not help. [Well, I don’t think it helps other than performance matter; am I right?]I tested on Community edition 4.2.6, 4.2.5, 4.0.18 but all come to the results not in my expectation.Could any expert tell whether I have a wrong understanding of the documentation or mongodb behavior? What is the correct way to produce a my expected sorting result of multiple fields in my case?Thank you first for your advice.Amon",
"username": "Amon_Tse"
},
{
"code": "{ \"65533\": 1, \"65534\": 1 }{ \"65534\": 1, \"65533\": 1 }{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:34.628Z\") }\n{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:34.907Z\") }\n{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:35.177Z\") }\n{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:35.452Z\") }\n{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:35.456Z\") }\n{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:35.459Z\") }\n{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:35.733Z\") }\n{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:36.576Z\") }\n{ \"65533\" : NumberLong(0), \"65534\" : ISODate(\"2020-04-22T02:10:36.857Z\") }\n\n{ \"65533\" : NumberLong(1), \"65534\" : ISODate(\"2020-04-22T02:10:35.181Z\") }\n{ \"65533\" : NumberLong(1), \"65534\" : ISODate(\"2020-04-22T02:10:36.580Z\") }\n{ \"65533\" : NumberLong(1), \"65534\" : ISODate(\"2020-04-22T02:10:36.861Z\") }\n\n{ \"65533\" : NumberLong(2), \"65534\" : ISODate(\"2020-04-22T02:10:36.862Z\") }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:34.628Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:34.907Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:35.177Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:35.181Z\"), \"num\" : NumberLong(1) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:35.452Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:35.456Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:35.459Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:35.733Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:36.576Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:36.580Z\"), \"num\" : NumberLong(1) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:36.857Z\"), \"num\" : NumberLong(0) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:36.861Z\"), \"num\" : NumberLong(1) }\n{ \"dt\" : ISODate(\"2020-04-22T02:10:36.862Z\"), \"num\" : NumberLong(2) }\n",
"text": "Yes, what you observed is correct.The result actually looks like as if the sort is done with { \"65533\": 1, \"65534\": 1 } and not as actually performed with { \"65534\": 1, \"65533\": 1 }.Also, the result is same with both sort patterns.[ EDIT ADD ]I renamed the field names from “65533” and “65534” to “num” and “dt” respectively, and found the sorting happens correctly:References:",
"username": "Prasad_Saya"
},
{
"code": "mongo> var sort = {'65534':1,'65533':1}\n> sort\n{ \"65533\" : 1, \"65534\" : 1 }\nMapmongo",
"text": "Welcome to the community @Amon_Tse!Based on your output, I assume you are using Robo3T (although similar behaviour can be reproduced in the mongo shell).The issue you are seeing is because you are using key values that look like numbers, and JavaScript is quirky when it comes to the order of keys in an object. Keys that look like numbers will end up sorted first, so the server ends up being sent a different sort order than you intended.For example:In JavaScript you would either want to use alphanumeric key names (which don’t get sorted) or an order-preserving data structure like Map (which is how the Node.js driver implemented ordered options: NODE-578). All official drivers or languages include support for creating ordered objects where order is significant.The mongo shell embeds a JavaScript interpreter, but unfortunately does not currently have a workaround for this edge case outside of avoiding numeric-like field names. Some relevant issues to upvote & watch are SERVER-11358 and SERVER-28569.If you are using a MongoDB admin UI which doesn’t have a solution for this, I would report the behaviour as a bug.I noticed that this bug also exists in Compass, so created COMPASS-4258.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I found that bug had been logged https://jira.mongodb.org/browse/SERVER-11358 since 2013 and it is still opened!",
"username": "Amon_Tse"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
},
{
"code": "",
"text": "Hi,Just noting that SERVER-11358 was closed as “Works as Designed” because this behaviour is part of the JavaScript spec:This is inherent to javascript and spec’d behavior, there’s nothing that mongosh or any other JS-based shell can do about this: ECMAScript 2015 Language Specification – ECMA-262 6th EditionRegards,\nStennie",
"username": "Stennie_X"
}
]
| Sorting multiple fields produces wrong order | 2020-04-22T03:25:30.081Z | Sorting multiple fields produces wrong order | 4,689 |
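For driver users hitting the same quirk, a hedged Node.js example: passing the sort specification as an array of [field, direction] pairs, rather than as a plain object, keeps numeric-looking keys in the intended order.

    // Hedged sketch for the Node.js driver: an array-of-pairs sort spec preserves order,
    // whereas { '65534': 1, '65533': 1 } would be reordered by JavaScript itself.
    const cursor = db.collection("test_msg")
      .find({}, { projection: { "65534": 1, "65533": 1 } })
      .sort([["65534", 1], ["65533", 1]]);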
null | [
"replication",
"java",
"kafka-connector"
]
| [
{
"code": "_idname=mongodb-local-sink-4\nconnector.class=com.mongodb.kafka.connect.MongoSinkConnector\nconnection.uri=mongodb://localhost/?replicaSet=rs0\ntasks.max=1\ntopics=some_topic\ndatabase=sink_test\ncollection=sink_test\ndocument.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.FullKeyStrategy\n#document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.UuidProvidedInKeyStrategy\nkey.converter=org.apache.kafka.connect.storage.StringConverter\nkey.converter.schemas.enable=false\nvalue.converter=org.apache.kafka.connect.json.JsonConverter\n[2022-10-17 12:01:16,001] ERROR [mongodb-local-sink|task-0] WorkerSinkTask{id=mongodb-local-sink-4-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:195)\norg.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:611)\n[...]\nCaused by: org.apache.kafka.connect.errors.DataException: org.apache.kafka.connect.errors.DataException: Could not convert key `00a7e296-a4b5-4404-836d-b15fc54122a7` into a BsonDocument.\n\tat com.mongodb.kafka.connect.sink.StartedMongoSinkTask.handleTolerableWriteException(StartedMongoSinkTask.java:228)\n[...]\nCaused by: org.apache.kafka.connect.errors.DataException: Could not convert key `00a7e296-a4b5-4404-836d-b15fc54122a7` into a BsonDocument.\n\tat com.mongodb.kafka.connect.sink.converter.LazyBsonDocument.getUnwrapped(LazyBsonDocument.java:161)\n[...]\nCaused by: org.bson.json.JsonParseException: Invalid JSON number\n\tat org.bson.json.JsonScanner.scanNumber(JsonScanner.java:444)\n00a7e296-a4b5-4404-836d-b15fc54122a7",
"text": "I’m trying to read messages from our Kafka cluster into my local MongoDB database.\nFor testing I’m using the MongoDB Kafka Sink connector 1.8.0. I’m currently running macOS 12.6, but the eventual target environment will be some linux.I’m able to dump the Kafka messages content, however, when I’m trying to utilize the message key as the _id I’m running into issues:My test configuration is:My error message is:For some reason the connector attempts to convert our message key 00a7e296-a4b5-4404-836d-b15fc54122a7 into a number and fails as there are some non-numeric characters. For my processing I’ll need the full key.Checking the documentation, I’m not sure on how I can tell the connector to directly use the external key instead of trying to generate an ObjectID.",
"username": "Udo_Held"
},
{
"code": "StringRecordConverter",
"text": "I just started with the sink connector and came here for this exact reason. The root cause seems to be that there is an implicit assumption throughout the Sink connector code that the Kafka key, if any, is parseable as a Document. It looks like you are using scalar strings as keys, which is what we are doing also. If you look in StringRecordConverter you’ll see that these are parsed as BsonDocuments, not BsonStrings.As someone that has a lot of experience with Kafka and stream processing, I can say for sure that using complex types of any sort as keys in Kafka is a Bad Idea. The fact that the sink connector seems to require this is even worse, since it seems likely to encourage people to do a bad thing. The reason you don’t want to do this is because the semantics of these complex types don’t match Kafka’s own semantics for determining key equality. Things like the order of keys in an object, pretty-printing/extra whitespace, etc that are insignificant to JSON are very significant to Kafka and can cause issues with partitioning, log compaction, etc.We are trying to figure out how to work around this now but at this moment this behavior feels like a deal breaker if you’re unable or unwilling to use complex keys.",
"username": "Thomas_Becker"
},
{
"code": "",
"text": "FYI I found this post which details using simple connect transforms to force the key into a document shape before processing by the sink connector, which seems to work: Kafka sink connector : How to get Kafka message key into document - #3 by hpgrahsl",
"username": "Thomas_Becker"
},
{
"code": "name=mongodb-local-sink-4\nconnector.class=com.mongodb.kafka.connect.MongoSinkConnector\nconnection.uri=mongodb://localhost/?replicaSet=rs0\ntasks.max=1\ntopics=some_topic\ndatabase=sink_test\ncollection=sink_test\ndocument.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.ProvidedInKeyStrategy\ntransforms=hk\ntransforms.hk.type=org.apache.kafka.connect.transforms.HoistField$Key\ntransforms.hk.field=_id\nkey.converter=org.apache.kafka.connect.storage.StringConverter\nkey.converter.schemas.enable=false\nvalue.converter=org.apache.kafka.connect.json.JsonConverter\n",
"text": "Thanks, with some minor modifications that worked for me. I wanted to keep the other fields.",
"username": "Udo_Held"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Key conversion Error with Kafka Sink connector | 2022-10-18T03:30:39.922Z | Key conversion Error with Kafka Sink connector | 3,773 |
[
"queries"
]
| [
{
"code": "",
"text": "I am trying to migrate data from the AWS instance to the Atlas cluster using the live migration feature. But after validating the setting atlas throws an error “live Migration encountered an error: could not initialize source connection: could not connect to server: server selection error: server selection timeout, current topology:”. I have already created a replica set and whitelist the IPs in the security group. Can anyone please help regarding the issue?\n\nScreenshot (13)1920×1080 157 KB\n",
"username": "Michael_Trueman"
},
{
"code": "",
"text": "I’m having the exact same issue. Validate connection works fine. I click start migration and it stays in an “initializing” state for several minutes. and then gives this error:Live Migration encountered an error: could not initialize source connection: could not connect to server: server selection error: server selection timeout, current topology:Were you ever able to get this to work? Did anyone follow up?",
"username": "Joe_Banks"
},
{
"code": "",
"text": "I have the exact same issue in October 2022!! I can’t believe this thread has no official answer or suggestion!",
"username": "Didac_Royo"
},
{
"code": "",
"text": "I would recommended to contact the Atlas support team via the in-app chat to have the error noted investigated further. You can additionally raise a support case if you have a support subscription. The community forums are for public discussion and we cannot help with service or account / billing enquiries.The Atlas support team will have further insight into what may be possibly causing the error.You may also wish to refer to the Troubleshoot Live Migration documentation for pre and post validation issues.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Problem while doing live migration | 2022-01-13T11:59:15.474Z | Problem while doing live migration | 2,357 |
null | [
"queries",
"crud"
]
| [
{
"code": "const checkIfExist = db.collection.find({field1: ''x\"},{field2: ''x\"},{field3: ''x\"},{field4: ''x\"});\n\nif(!checkIfExist ) {\n db.collection.create({field1: ''x\"},{field2: ''x\"},{field3: ''x\"},{field4: ''x\"})\n}\nupdateOne",
"text": "Hey guys, can you guys help me with an issue using mongo?I have a database, that has 4 fields, and I register more than 100k new registers in a day, and I was having a lot of performance issues checking if the register it’s already created(to prevent duplicate registers).For example:I did theAlso, I tried to use updateOne but I also get a performance issuePS: I am using AWS DocumentDB r52x.large, so isn’t problem of the cloud machine.If I create some index unique, that check if these 4 fields it’s unique, it’s good for performance?Have any other solution to improve the performance to create registers, without duplicated registers?",
"username": "Matheus_Lopes"
},
{
"code": "findinsertupsert",
"text": "Welcome to the MongoDB Community @Matheus_Lopes !I am using AWS DocumentDB r52x.large, so isn’t problem of the cloud machine.Amazon DocumentDB is an independent emulation of a subset of features for the associated MongoDB server version they claim compatibility with.The server implementations do not have any code in common, so behaviour like indexing may differ.If you are trying to understand performance issues for DocumentDB, I recommend asking on Stack Overflow or an AWS product community: Newest ‘aws-documentdb’ Questions - Stack Overflow.If I create some index unique, that check if these 4 fields it’s unique, it’s good for performance?If your use case requires unique indexes, the main consideration is correctness rather than performance.Unnecessary indexes will be unhelpful for performance as they take up RAM and add a bit of write I/O. Useful indexes will support common queries.Have any other solution to improve the performance to create registers, without duplicated registers?The general approach you are taking with two separate commands (find followed an insert) is subject to race conditions. The recommended pattern would be to Insert or Update in a Single Operation using an upsert.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Help with performance - DocumentDB | 2022-10-20T18:42:21.211Z | Help with performance - DocumentDB | 1,581 |
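A hedged illustration of the single-operation upsert pattern recommended above, replacing the separate find + create calls; the field names follow the question, the collection name "registers" is a placeholder, and a unique compound index on the four fields would still be what enforces correctness.

    // Hedged sketch: one round trip, no check-then-insert race.
    await db.collection("registers").updateOne(
      { field1: "x", field2: "x", field3: "x", field4: "x" },
      { $setOnInsert: { createdAt: new Date() } },   // only applied when a new document is created
      { upsert: true }
    );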
null | [
"atlas-cluster"
]
| [
{
"code": "",
"text": "Hi all,how can If my system is taking advantage the storage-memory-cpu profile of my current cluster (M30)?\nwhich metrics can I look for and get into this type of conclusions? (e.g: System CPU, System memory)Thank you.",
"username": "Shay_I"
},
{
"code": "",
"text": "Hi @Shay_I,Perhaps the How to Monitor MongoDB page may be a good place to start it contains some details in regards to specific Atlas metrics to monitor as well.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Atlas cluster utilization | 2022-10-18T11:05:56.530Z | Atlas cluster utilization | 1,367 |
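As a small, hedged companion to the thread above, a mongosh sketch that reads a few server-side numbers which complement the Atlas Metrics tab (the field names are as exposed by serverStatus on recent MongoDB versions; run with a user that has the clusterMonitor role):

```js
const s = db.serverStatus();
printjson({
  currentConnections: s.connections.current,
  availableConnections: s.connections.available,
  residentMemoryMB: s.mem.resident, // resident memory of the mongod process
  cacheUsedBytes: s.wiredTiger.cache['bytes currently in the cache'],
  cacheConfiguredBytes: s.wiredTiger.cache['maximum bytes configured']
});
```

Sustained connection counts near the limit, or cache usage pinned at the configured maximum, are typical signs that an M30 is being pushed; low, flat numbers suggest headroom.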
null | [
"production",
"php",
"field-encryption"
]
| [
{
"code": "pecl install mongodb-1.14.1\npecl upgrade mongodb-1.14.1\n",
"text": "The PHP team is happy to announce that version 1.14.1 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release upgrades our libbson and libmongoc dependencies to 1.22.1. The libmongocrypt dependency has been upgraded to 1.5.2.A complete list of resolved issues in this release may be found in JIRA.DocumentationDocumentation is available on PHP.net.InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL.",
"username": "jmikola"
},
{
"code": "",
"text": "On Ubuntu 22.04 with PHP 8.1.2 I’m experiencing this issue … is a new release coming soon?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "See PHPC-1706: Don't try linking against libresolv on AIX (#1172) · mongodb/mongo-php-driver@b581f2a · GitHub … apparently broken on Ubuntu 22.04 as well.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "1.14.2 has been released and includes the fix for PHPC-2152.",
"username": "jmikola"
},
{
"code": "",
"text": "Thanks, Jeremy. Calvin sez hi ",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB PHP Extension 1.14.1 Released | 2022-09-09T23:29:30.556Z | MongoDB PHP Extension 1.14.1 Released | 3,213 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[ { from: '09:00', till: '10:00' }, { from: '11:00', till: '12:00' } ]\n[ { room: 'A', from: '09:00', till: '10:00' }, { room: 'B': from: '09:15', till: '10:15' }, { room: 'C', from: '11:30', till: '12:30' } ]\n[ { from: '09:00', till: '10:00', booked: 2 }, { from: '11:00', till: '12:00', booked: 1} ]\nfromtill$group$bucket",
"text": "Lets say I want to have on a given day slots like:There are existing appointments in DB, some may not be aligned to the official slots, but overlapping:Is there a way to get aggregated data for the desired slots like this?Note: we use from and till with real dates, just wanted to make it easier to read.Is there a way with $group or $bucket to realize this with overlapping timestamps?",
"username": "blue_puma"
},
{
"code": "{ room: 'B': from: '09:15', till: '10:15' }{ from: '09:00', till: '10:00' } and { from: '11:00', till: '12:00' }slots : [\n { time: 09:00 , booked : null }\n { time : 09:15 , booked : null }\n { time : 09:30 , booked : null }\n { time : 09:45 , booked : null }\n { time : 11:00 , booked : null }\n { time : 11:15 , booked : null }\n { time : 11:30 , booked : null }\n { time : 11:45 , booked : null }\n]\nslots : [\n { time : 09:00 , booked : null }\n { time : 09:15 , booked : booking_id_369 }\n { time : 09:30 , booked : booking_id_369 }\n { time : 09:45 , booked : booking_id_369 }\n { time : 11:00 , booked : null }\n { time : 11:15 , booked : null }\n { time : 11:30 , booked : null }\n { time : 11:45 , booked : null }\n]\n",
"text": "I have linked your other post here since I feel they are somewhat related.The difficulty for your use-case is the way you store your availability and reservation schedule. Yes it is very nice to define time schedules with from/till but storing it likewise make it hard. In those circumstances, I do not use this time of model. I first determine the granularity of the available resources. In your case it looks like it is time slots of 15 minutes as seen here:{ room: 'B': from: '09:15', till: '10:15' }I give the UI the possibility to define availability like you do,{ from: '09:00', till: '10:00' } and { from: '11:00', till: '12:00' }but what I store is an array of 8 entries, one for each 15 minutes time slots like:When a booking comes I simply update the booked field with the appropriate booking_id. It is now trivial to find what is booked and what is not booked. For example a reservation for the given resource from 9:15 to 9:45 would result in:Note that in reality, I do not stored booked:null, I prefer to leave the field missing for space efficiency.I know this does not answer your question, but it gives some ideas and revive your 4 days old posts.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej Thanks for your feedback! In general it seems like a good approach. My challenge is that the time slots are configurable per tenant, so I can’t guarantee the granularity and it may be changed later on.I was really looking for some aggregation tricks (or should I say magic?) to count appointments that fall within the pre-defined time slots.",
"username": "blue_puma"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to count overlapping time slot usage ("10:00-11:00")? With $group or $bucket? | 2022-10-13T20:49:20.167Z | How to count overlapping time slot usage (“10:00-11:00”)? With $group or $bucket? | 1,896 |
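Following up on the question above, here is one minimal mongosh sketch (the collection and field names are assumptions) that counts appointments overlapping each pre-defined slot. Two intervals overlap when one starts before the other ends and ends after it starts:

```js
// Works the same with real Date values; strings are used here for readability.
const slots = [
  { from: '09:00', till: '10:00' },
  { from: '11:00', till: '12:00' }
];

const result = slots.map(slot => ({
  ...slot,
  booked: db.appointments.countDocuments({
    from: { $lt: slot.till },  // appointment starts before the slot ends
    till: { $gt: slot.from }   // appointment ends after the slot starts
  })
}));

printjson(result);
```

This issues one count per slot rather than a single $group/$bucket pass, but it handles overlapping, non-aligned appointments and arbitrary per-tenant slot definitions without assuming any fixed granularity.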
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Cans anybody to provide the ‘real world’ simple example of using aggregation function and join 2 collections from 2 DBs (placed as mention in this title), something like here: https://www.mongodb.com/docs/atlas/data-federation/supported-unsupported/pipeline/lookup-stage/ . Generally, Is it possible?Thanks in advance.\nAndrei",
"username": "Andrei"
},
{
"code": "",
"text": "Hi @AndreiI don’t think MongoDB can join two databases where one is on-prem, but I believe you can join multiple MongoDB Atlas based databases in a single query using Atlas Data Federation. Using Data Federation you can also join Atlas Data Lake and data in AWS S3 buckets.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Nothing stops you from implementing your own federation.You just connect your application to the 2 servers, reading data from both and storing the result where ever you wish.You can even do that by simply doing 2 mongodump, one from each servers. The mongorestore to 2 different databases to the target mongod, then use aggregation with $merge in the order you want.",
"username": "steevej"
},
{
"code": "",
"text": "Salut Steeve! Yes, we have the middleware in our project and I’m able to call the api routes to get data from the desired collections. Actually I need to compare metadata in related collections and update data in the local collection by the user call by the running a script (Node + js). This script (Node + js) does not work properly and we do not use refs in our mongoose schema . I’d like ro rewrite it with using aggregation func. Could you, pls, write for me the short example of code how to create/merge those collections to temporary/ virtual DB, then put make to aggregation func.?",
"username": "Andrei"
},
{
"code": "",
"text": "Thanks Kevin for your answer! Can you get me the simple example how can I make it with using JavaScript code? Or link for tutorial?\nAndrei",
"username": "Andrei"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Aggregate multiple databases (one DB is placed locally on PC, second - on host in net) | 2022-10-19T20:36:25.344Z | Aggregate multiple databases (one DB is placed locally on PC, second - on host in net) | 3,553 |
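As a follow-up to the thread above, a minimal Node.js sketch of the "implement your own federation" idea: copy the remote collection into a staging collection next to the local one, then join with $lookup and persist the result with $merge. The connection strings, database, collection and field names are all placeholders:

```js
const { MongoClient } = require('mongodb');

async function run() {
  const local = new MongoClient('mongodb://localhost:27017');
  const remote = new MongoClient('mongodb+srv://user:pass@remote-cluster.example.net');
  await local.connect();
  await remote.connect();

  // 1. Copy the remote collection into a local staging collection.
  const staging = local.db('localdb').collection('remote_copy');
  await staging.deleteMany({});
  const docs = await remote.db('remotedb').collection('items').find().toArray();
  if (docs.length) await staging.insertMany(docs);

  // 2. Join locally and write the merged result.
  await local.db('localdb').collection('orders').aggregate([
    { $lookup: { from: 'remote_copy', localField: 'itemId', foreignField: '_id', as: 'item' } },
    { $merge: { into: 'orders_joined', whenMatched: 'replace', whenNotMatched: 'insert' } }
  ]).toArray();

  await local.close();
  await remote.close();
}

run().catch(console.error);
```

For large remote collections you would stream with a cursor instead of toArray(), but the shape of the solution (copy first, then $lookup/$merge on one server) is the same as the mongodump/mongorestore route described above.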
null | [
"compass"
]
| [
{
"code": "",
"text": "when am trying to view collection data throw compass,it is taking few millisec but while trying to view data by connecting SAP system to mongodb using adopter,its taking 90 sec.Can anyone help to check this performance issue.",
"username": "Rojalin_Das1"
},
{
"code": "",
"text": "Hi @Rojalin_Das1If Compass can do it very quickly but not SAP, it seems to me that the problem does not lie in MongoDB. Some things I would check:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "One thing not to forget is that Compass is paging the result, downloading and showing only the first few documents despite matching many more. It is possible that your SAP system needs to download all the matching documents in order to do its tasks.",
"username": "steevej"
},
{
"code": "",
"text": "Thank You Kevin and Steeve for your suggestions. I am still working on this performance issue and keeping all these points in mind.Actually the scenario is ,\nUsing the SAP interface and the extra connector (odbc driver) trying to connect mongodb and extracting data from mongodb to the oracle database and it perform very slow process.Analysis:\nChecked the mongodb log. I found the opened connections but it is not ended and while loading data to the oracle data base,the job getting stuck in middle ,which is running in SAP system.",
"username": "Rojalin_Das1"
},
{
"code": "",
"text": "HI Steeve,\nThe problem is job getting stop after running for few mins. If it is running slow and giving output in late or delay, then i can think of performance issue. But the job is getting hung.Can you please suggest me, if is there any constraint on row limit or any file size limit while loading data from our mongo side.",
"username": "Rojalin_Das1"
},
{
"code": "",
"text": "If your use-case is slow because you are using SAP and SAP requires a huge number of documents to perform your use case, then the only optimization I can see is that you stop using SAP and implement your use case with the aggregation framework which can process the huge number of documents directly on the server.You did not answerIn addition to know where the MongoDB deployment is located, it would be nice to have the system specification of this deployment. Memory, disk, CPU, size of collections, …Since you mentionedto the oracle databaseWhere is the deployment and specifications of this oracle database? May be it is the one slowing everything. It is much slower to write data and update indexes that reading.If you only use SAP via ODBC to copy over some data into SQL, why don’t you simple mongoexport, use any tool that maps JSON to SQL, then import the result.",
"username": "steevej"
}
]
| Mongodb performance issue | 2022-10-12T13:24:39.681Z | Mongodb performance issue | 1,720 |
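One small diagnostic sketch related to the stalled job discussed above (mongosh, run by a user with sufficient privileges): it lists operations that have been running for a while, which can help show whether the job is stuck on the MongoDB side or further downstream towards Oracle. The 30-second threshold is an arbitrary choice:

```js
// Show active operations that have been running for more than 30 seconds.
const longOps = db.currentOp({ active: true, secs_running: { $gte: 30 } });
printjson(longOps.inprog.map(op => ({
  opid: op.opid,
  secs_running: op.secs_running,
  ns: op.ns,          // namespace being read
  client: op.client,  // where the connection comes from (e.g. the SAP host)
  desc: op.desc
})));
```

If nothing long-running shows up on the MongoDB side while the SAP job appears hung, the stall is more likely in the ODBC connector or in the Oracle load step.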
null | [
"mongoose-odm"
]
| [
{
"code": "",
"text": "Please, I need help on how to relate record using mongoose",
"username": "Simeon_Akindele"
},
{
"code": "relate record using mongoose",
"text": "Hello @Simeon_Akindele ,Welcome to The MongoDB Community Forums! Could you please help me with below details to understand your use case better?Regards,\nTarun",
"username": "Tarun_Gaur"
}
]
| How to relate record using Mongoose? | 2022-10-15T19:13:31.603Z | How to relate record using Mongoose? | 1,520 |
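Since the question above is open-ended, here is a minimal sketch of one common way to relate records in Mongoose: store an ObjectId reference and resolve it with populate(). The Author/Book models and the connection string are hypothetical:

```js
const mongoose = require('mongoose');

const authorSchema = new mongoose.Schema({ name: String });
const bookSchema = new mongoose.Schema({
  title: String,
  author: { type: mongoose.Schema.Types.ObjectId, ref: 'Author' } // the relation
});

const Author = mongoose.model('Author', authorSchema);
const Book = mongoose.model('Book', bookSchema);

async function demo() {
  await mongoose.connect('mongodb://localhost:27017/test');
  const ada = await Author.create({ name: 'Ada' });
  await Book.create({ title: 'Notes', author: ada._id });

  const book = await Book.findOne({ title: 'Notes' }).populate('author');
  console.log(book.author.name); // "Ada"

  await mongoose.disconnect();
}

demo().catch(console.error);
```

Embedding sub-documents is the other common option; which one fits depends on the details Tarun asked about, such as how the data is read and how large the related sets grow.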
null | [
"installation",
"php"
]
| [
{
"code": "",
"text": "I’ve tried way too many ways now to install and make the driver work.\nIssue I am facing is: /usr/sbin/apache2: symbol lookup error: /usr/lib/php/20210902/mongodb.so: undefined symbol: ns_initparseI have latest ubuntu, php8.1, apache2.\nphpinfo shows the mongodb present and php.ini has extension=mongodb.soI’m really struggling now and really need help.",
"username": "Amelie_Levray"
},
{
"code": "-lresolv-lresolve./configure",
"text": "Most of the existing Google results for this error refer back to Error with PHP 8.1 MongoDB driver on Ubuntu 22.04 - Stack Overflow from several months ago, which was never resolved. yugabyte/yugabyte-db#12738 suggests it’s related to a missing -lresolv build flag, so there is likely an incompatibility with Ubuntu 22.04 in our CheckResolv.m4 build script if it’s not detecting that -lresolve is necessary. I’ve opened PHPC-2152 to investigate this further.In the meantime, it’d be helpful if you could attempt compiling the extension from source and share the full output of the ./configure command. This will provide some insight into what is being detected in our environment and the final linker flags used for compilation.",
"username": "jmikola"
},
{
"code": "checking for grep that handles long lines and -e... /usr/bin/grep\nchecking for egrep... /usr/bin/grep -E\nchecking for a sed that does not truncate output... /usr/bin/sed\nchecking for pkg-config... /usr/bin/pkg-config\nchecking pkg-config is at least version 0.9.0... yes\nchecking for cc... cc\nchecking whether the C compiler works... yes\nchecking for C compiler default output file name... a.out\nchecking for suffix of executables... \nchecking whether we are cross compiling... no\nchecking for suffix of object files... o\nchecking whether the compiler supports GNU C... yes\nchecking whether cc accepts -g... yes\nchecking for cc option to enable C11 features... none needed\nchecking how to run the C preprocessor... cc -E\nchecking for icc... no\nchecking for suncc... no\nchecking for system library directory... lib\nchecking if compiler supports -Wl,-rpath,... yes\nchecking build system type... x86_64-pc-linux-gnu\nchecking host system type... x86_64-pc-linux-gnu\nchecking target system type... x86_64-pc-linux-gnu\nchecking for PHP prefix... /usr\nchecking for PHP includes... -I/usr/include/php/20210902 -I/usr/include/php/20210902/main -I/usr/include/php/20210902/TSRM -I/usr/include/php/20210902/Zend -I/usr/include/php/20210902/ext -I/usr/include/php/20210902/ext/date/lib\nchecking for PHP extension directory... /usr/lib/php/20210902\nchecking for PHP installed headers prefix... /usr/include/php/20210902\nchecking if debug is enabled... no\nchecking if zts is enabled... no\nchecking for gawk... gawk\nchecking whether to enable MongoDB support... yes, shared\nchecking PHP version... 8.1.2\nchecking whether to enable developer build flags... no\nchecking whether to enable code coverage... no\nchecking whether to compile against system libraries instead of bundled... no\nchecking whether to use system libbson... no\nchecking whether to use system libmongoc... no\nchecking whether to enable client-side encryption... auto\nchecking for gcc... (cached) cc\nchecking whether the compiler supports GNU C... (cached) yes\nchecking whether cc accepts -g... (cached) yes\nchecking for cc option to enable C11 features... (cached) none needed\nchecking for g++... g++\nchecking whether the compiler supports GNU C++... yes\nchecking whether g++ accepts -g... yes\nchecking for g++ option to enable C++11 features... none needed\nchecking accept ARG2 => struct sockaddr ARG3 => socklen_t ... ok\nchecking for an ANSI C-conforming const... yes\nchecking for inline... inline\nchecking for typeof syntax and keyword spelling... typeof\nchecking for __sync_add_and_fetch_4... yes\nchecking for __sync_add_and_fetch_8... yes\nchecking for stdio.h... yes\nchecking for stdlib.h... yes\nchecking for string.h... yes\nchecking for inttypes.h... yes\nchecking for stdint.h... yes\nchecking for strings.h... yes\nchecking for sys/stat.h... yes\nchecking for sys/types.h... yes\nchecking for unistd.h... yes\nchecking for _Bool... yes\nchecking for stdbool.h that conforms to C99... yes\nchecking for strings.h... (cached) yes\nchecking whether byte ordering is bigendian... no\nchecking for strnlen... yes\nchecking for reallocf... no\nchecking for syscall... yes\nchecking for SYS_gettid... yes\nchecking for snprintf... yes\nchecking for strlcpy... no\nchecking for struct timespec... yes\nchecking for library containing clock_gettime... none required\nchecking for library containing floor... -lm\nchecking for gmtime_r... yes\nchecking for rand_r... yes\nchecking for arc4random_buf... 
no\nchecking if compiler needs -Werror to reject unknown flags... no\nchecking for the pthreads library -lpthreads... no\nchecking whether pthreads work without any flags... yes\nchecking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE\nchecking if more special flags are required for pthreads... no\nchecking for PTHREAD_PRIO_INHERIT... yes\nchecking whether PTHREAD_ONCE_INIT needs braces... no\nchecking for PHP_MONGODB_SNAPPY... no\nchecking for snappy_uncompress in -lsnappy... no\nchecking for snappy-c.h... no\nchecking for PHP_MONGODB_ZLIB... yes\nchecking for PHP_MONGODB_ZSTD... no\nchecking for ZSTD_compress in -lzstd... no\nchecking for zstd.h... no\nchecking for res_nsearch... yes\nchecking for res_ndestroy... no\nchecking for res_nclose... yes\nchecking whether to enable SASL for Kerberos authentication... auto\nchecking for PHP_MONGODB_SASL... no\nchecking for sasl_client_init in -lsasl2... no\nchecking for sasl/sasl.h... no\nchecking which SASL library to use... no\nchecking whether to enable crypto and TLS... auto\nchecking deprecated option for OpenSSL library path... auto\nchecking for cc options needed to detect all undeclared functions... none needed\nconfigure: checking whether OpenSSL is available\nchecking for PHP_MONGODB_SSL... yes\nchecking whether ASN1_STRING_get0_data is declared... yes\nchecking which TLS library to use... openssl\nchecking whether to use system crypto profile... no\nchecking deprecated option for whether to use system crypto profile... no\nchecking whether to enable ICU for SASLPrep with SCRAM-SHA-256 authentication... auto\nchecking for PHP_MONGODB_ICU... no\nchecking for shm_open... yes\nchecking for sched_getcpu... yes\nchecking for socklen_t... yes\nchecking for struct sockaddr_storage.ss_family... yes\nchecking if compiler needs -Werror to reject unknown flags... no\nchecking for the pthreads library -lpthreads... no\nchecking whether pthreads work without any flags... yes\nchecking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE\nchecking if more special flags are required for pthreads... no\nchecking for PTHREAD_PRIO_INHERIT... (cached) yes\nchecking if weak symbols are supported... yes\nchecking which crypto library to use for libmongocrypt... openssl\nchecking whether byte ordering is bigendian... (cached) no\nchecking how to print strings... printf\nchecking for a sed that does not truncate output... (cached) /usr/bin/sed\nchecking for fgrep... /usr/bin/grep -F\nchecking for ld used by cc... /usr/bin/ld\nchecking if the linker (/usr/bin/ld) is GNU ld... yes\nchecking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B\nchecking the name lister (/usr/bin/nm -B) interface... BSD nm\nchecking whether ln -s works... yes\nchecking the maximum length of command line arguments... 1572864\nchecking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop\nchecking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop\nchecking for /usr/bin/ld option to reload object files... -r\nchecking for objdump... objdump\nchecking how to recognize dependent libraries... pass_all\nchecking for dlltool... no\nchecking how to associate runtime and link libraries... printf %s\\n\nchecking for ar... ar\nchecking for archiver @FILE support... @\nchecking for strip... strip\nchecking for ranlib... ranlib\nchecking for gawk... (cached) gawk\nchecking command to parse /usr/bin/nm -B output from cc object... ok\nchecking for sysroot... 
no\nchecking for a working dd... /usr/bin/dd\nchecking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1\nchecking for mt... mt\nchecking if mt is a manifest tool... no\nchecking for dlfcn.h... yes\nchecking for objdir... .libs\nchecking if cc supports -fno-rtti -fno-exceptions... no\nchecking for cc option to produce PIC... -fPIC -DPIC\nchecking if cc PIC flag -fPIC -DPIC works... yes\nchecking if cc static flag -static works... yes\nchecking if cc supports -c -o file.o... yes\nchecking if cc supports -c -o file.o... (cached) yes\nchecking whether the cc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes\nchecking whether -lc should be explicitly linked in... no\nchecking dynamic linker characteristics... GNU/Linux ld.so\nchecking how to hardcode library paths into programs... immediate\nchecking whether stripping libraries is possible... yes\nchecking if libtool supports shared libraries... yes\nchecking whether to build shared libraries... yes\nchecking whether to build static libraries... no\nchecking for ld used by g++... /usr/bin/ld -m elf_x86_64\nchecking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes\nchecking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes\nchecking for g++ option to produce PIC... -fPIC -DPIC\nchecking if g++ PIC flag -fPIC -DPIC works... yes\nchecking if g++ static flag -static works... yes\nchecking if g++ supports -c -o file.o... yes\nchecking if g++ supports -c -o file.o... (cached) yes\nchecking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes\nchecking dynamic linker characteristics... (cached) GNU/Linux ld.so\nchecking how to hardcode library paths into programs... immediate\nconfigure: patching config.h.in\nconfigure: creating ./config.status\n\nmongodb was configured with the following options:\n\nBuild configuration:\n CFLAGS : -g -O2\n Extra CFLAGS : \n Developers flags (slow) : \n Code Coverage flags (extra slow) : \n libmongoc : Bundled (1.11.1-20220829+git623d659f00)\n libbson : Bundled (1.11.1-20220829+git623d659f00)\n libmongocrypt : Bundled (1.5.2)\n LDFLAGS : \n EXTRA_LDFLAGS : \n MONGODB_SHARED_LIBADD : -lz -lssl -lcrypto\n\nPlease submit bugreports at:\n https://jira.mongodb.org/browse/PHPC\n\n\nconfig.status: creating /home/calnex_admin/mongo-php-driver/src/libmongoc/src/common/common-config.h\nconfig.status: creating /home/calnex_admin/mongo-php-driver/src/libmongoc/src/libbson/src/bson/bson-config.h\nconfig.status: creating /home/calnex_admin/mongo-php-driver/src/libmongoc/src/libbson/src/bson/bson-version.h\nconfig.status: creating /home/calnex_admin/mongo-php-driver/src/libmongoc/src/libmongoc/src/mongoc/mongoc-config.h\nconfig.status: creating /home/calnex_admin/mongo-php-driver/src/libmongoc/src/libmongoc/src/mongoc/mongoc-version.h\nconfig.status: creating /home/calnex_admin/mongo-php-driver/src/libmongocrypt/src/mongocrypt-config.h\nconfig.status: creating config.h\nconfig.status: executing libtool commands\n\n",
"text": "Hi Jeremy,Thanks for looking into this.\nHere is the output of ./configure",
"username": "Amelie_Levray"
},
{
"code": "checking for res_nsearch... yes\nchecking for res_ndestroy... no\nchecking for res_nclose... yes\n",
"text": "Based on this, the fix in PHPC-2152 should resolve the issue for you. This will be released in 1.14.2, which should be published within the coming week.",
"username": "jmikola"
},
{
"code": "",
"text": "1.14.2 has been released and includes the necessary fix for this issue.",
"username": "jmikola"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Installing mongodb php driver | 2022-10-14T13:18:08.504Z | Installing mongodb php driver | 3,383 |
null | [
"swift"
]
| [
{
"code": "import SwiftUI\nimport RealmSwift\n\nstruct InspItemStatusView: View {\n @ObservedObject var rlmMgr = RealmManager()\n @ObservedRealmObject var item:InspItem\n \n \n \n var body: some View {\n VStack(alignment:.leading) {\n var newStatus:ItemStatusEnum = .na\n Button {\n \n if item.itemStatus.index < ItemStatusEnum.allCases.count - 1 {\n //item.itemStatus =\n newStatus = ItemStatusEnum.allCases[item.itemStatus.index + 1]\n \n } else {\n //item.itemStatus =\n newStatus = ItemStatusEnum.allCases[0]\n }\n\n updateItemStatus(newValue: newStatus.rawValue)\n } label: {\n Text(item.thaw()!.itemStatus.rawValue)\n }\n .minimumScaleFactor(0.5)\n .buttonStyle(.plain)\n .fontWeight(.semibold)\n \n .foregroundColor(\n item.itemStatus.rawValue == \"N/A\" ? .secondary :item.itemStatus.rawValue == \"Rec\" ? .orange : item.itemStatus.rawValue == \"IP\" ?.red : .green)\n }//Vstack\n //}\n }\n \n func updateItemStatus(newValue:String){\n do {\n try Realm().write() {\n guard let thawedItem = item.thaw() else {\n print(\"Unable to thaw item\")\n return\n }\n \n print(\"ItemStatus Update hit\")\n // thawedItem.itemStatus = newValue) ?? ItemStatusEnum.na\n }\n } catch {\n print(\"Failed to save Item: \\(error.localizedDescription)\")\n }\n }\n}\n\nimport Foundation\nimport RealmSwift\n\npublic enum ItemStatusEnum: String, PersistableEnum, CaseIterable{\n \n case na = \"N/A\"\n case recorded = \"Rec\"\n case ip = \"IP\"\n case closed = \"Closed\"\n \n \n var index: Int {\n ItemStatusEnum.allCases.firstIndex(where: {$0 == self}) ?? 1\n }\n}\n\npublic enum ItemTypeEnum: String, PersistableEnum, CaseIterable {\n case O = \"Observation\"\n case D = \"Deficiency\"\n case R = \"Recommendation\"\n}\n\nfinal class InspItem : Object, ObjectKeyIdentifiable {\n \n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted(indexed: true) var inspId:Int = 0\n @Persisted(indexed: true) var itemId:Int = 0\n \n @Persisted var itemType : String = \"\" //O,D,R\n // @Persisted var itemStatus : String = \"Recorded\" //\"IP\", \"Closed\" //<==First Try\n @Persisted var itemStatus : ItemStatusEnum = ItemStatusEnum.recorded //\"IP\", \"Closed\"\n @Persisted var itemDescript : String = \"\"\n @Persisted var photos = RealmSwift.List<Photo>()\n \n @Persisted(originProperty: \"items\") var inspection:LinkingObjects<Inspection> //backlink\n \n //convenience init(inspId:Int ,itemId:Int, iType:String, iStat:String, iDescript:String) {\n convenience init(inspId:Int ,itemId:Int, iType:String, iStat:ItemStatusEnum, iDescript:String) {\n self.init()\n self.inspId = inspId\n self.itemId = itemId\n self.itemType = iType\n self.itemStatus = iStat\n self.itemDescript = iDescript\n \n \n }\n}\n",
"text": "I read the answers to the question asked by TomF about … How to update an @ObservedRealmObject in a SwiftUI view. My question is quite similar except for the fact that the field value is drawn from an Enum. The object of the exercise is to change the value in the field(itemStatus) by clicking on the field.\nI have run this basic code as a standalone view and it works without problems leading me to believe the problem is caused by my use of the ObservedRealmObject.\nI am hoping that someone will be able to show me how to update the value of itemStatus in the View and in the DB(.realm).Here is the View…Based on the previous example (TomF), I applied the thaw to the item and it worked as evidenced by the execution of the print statement; “ItemStatus Update hit”The InspItem object and Enum…At this point, I am not completely sure on how to update the value of itemStatus. I presume that the code that I have will save and display the updated value, but if not, please show me what to do.Thanks\nKenT",
"username": "Ken_Turnbull"
},
{
"code": "InspItemStatusView",
"text": "Hi Ken, I don’t see the issue her. I tested your code and it is working, when I change the status of the Enum, the view changes. Can you please how are you initialising the InspItemStatusView, maybe it will give a hint why this is happening.",
"username": "Diana_Maria_Perez_Af"
},
{
"code": "",
"text": "Sorry, but I’m not quite sure what you are asking for. Are you asking about the view that calls this view?",
"username": "Ken_Turnbull"
},
{
"code": "// thawedItem.itemStatus = newValue) ?? ItemStatusEnum.naprint(\"thawed before = \\(thawedItem.itemStatus) and newValue = \\(newValue)\")\nthawedItem.itemStatus = newValue) ?? ItemStatusEnum.na\nprint(\"thawed after = \\(thawedItem.itemStatus)\")\n",
"text": "// thawedItem.itemStatus = newValue) ?? ItemStatusEnum.naCan we investigate that line? What does the console output look like if it’s changed toIt may not reveal anything but need to establish that vars are what you expect them to be",
"username": "Jay"
},
{
"code": "public enum ItemStatusEnum: String, PersistableEnum, CaseIterable{\n \n case na = \"N/A\"\n case recorded = \"Rec\"\n case ip = \"IP\"\n case closed = \"Closed\"\n \n var asString: String {\n self.rawValue\n }\n \n var index: Int {\n ItemStatusEnum.allCases.firstIndex(where: {$0 == self}) ?? 1\n }\n}\nimport SwiftUI\nimport RealmSwift\n\nfinal class InspItem : Object, ObjectKeyIdentifiable {\n \n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted(indexed: true) var inspId:Int\n @Persisted(indexed: true) var itemId:Int\n \n @Persisted var itemType : String //O,D,R\n @Persisted var iStatus : ItemStatus = .recorded //\"IP\", \"Closed\"\n @Persisted var itemDescript : String = \"\"\n @Persisted var photos = RealmSwift.List<Photo>()\n \n \n enum ItemStatus: Int, PersistableEnum, CaseIterable{\n case na,recorded,ip,closed\n \n var text:String{\n switch self{\n case .na:\n return \"N/A\"\n case .recorded:\n return \"Recorded\"\n case .ip:\n return \"IP\"\n case .closed:\n return \"Closed\"\n }\n }\n \n var color: Color {\n switch self{\n case .na:\n return .secondary\n case .recorded:\n return .orange\n case .ip:\n return .red\n case .closed:\n return .green\n }\n }\n \n \n// var asString: String {\n// self.rawValue\n// }\n \n// var index: Int {\n// ItemStatusEnum.allCases.firstIndex(where: {$0 == self}) ?? 1\n// }\n }\n \n func increment() -> ItemStatus {\n switch iStatus{\n case .na:\n return .recorded\n case .recorded:\n return .ip\n case .ip:\n return .closed\n case .closed:\n return .na\n }\n \n }\n \n convenience init(inspId:Int ,itemId:Int, iType:String = \"0\", iDescript:String) {\n self.init()\n self.inspId = inspId\n self.itemId = itemId\n self.itemType = iType\n //self.iStatus = iStat\n self.itemDescript = iDescript\n \n }\n}\n\n",
"text": "I have been trying different tacks to solve this since I originally submitted it. At this point I am getting an Initializer error\nScreen Shot 2022-10-19 at 2.54.22 PM580×541 128 KB\nThis is the enum that I was using at that time…I have been building standalone app to test the view in isolation and have developed an Enum that is working better but still has problems.I will try and get your suggested code to work but my memory said that I did something similar and did get the values that I want to save…possible exception of iStatus. Saving was an issue.With the new code I am able to Save but without the proper iStatus",
"username": "Ken_Turnbull"
},
{
"code": "import SwiftUI\nimport RealmSwift\n\nenum ItemStatus: Int, PersistableEnum, CaseIterable {\n case na,recorded,ip,closed\n\n var text:String{\n switch self{\n case .na:\n return \"N/A\"\n case .recorded:\n return \"Recorded\"\n case .ip:\n return \"IP\"\n case .closed:\n return \"Closed\"\n }\n }\n}\n\nfinal class InspItem : Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n",
"text": "Oh - try this. Move the Realm PersisstableEnum outside of the class and move it to the top level of your hierarchy, like the rest of your Realm Models:",
"username": "Jay"
},
{
"code": "enum ItemStatus: Int, PersistableEnum, CaseIterable{\n case na,recorded,ip,closed\n \n var text:String{\n switch self{\n case .na:\n return \"N/A\"\n case .recorded:\n return \"Recorded\"\n case .ip:\n return \"IP\"\n case .closed:\n return \"Closed\"\n }\n }\n \n var color: Color {\n switch self{\n case .na:\n return .secondary\n case .recorded:\n return .orange\n case .ip:\n return .red\n case .closed:\n return .green\n }\n }\n \n \n }\nfunc increment() -> ItemStatus {\n switch iStatus{\n case .na:\n return .recorded\n case .recorded:\n return .ip\n case .ip:\n return .closed\n case .closed:\n return .na\n }\n \n }\n",
"text": "Thanks for the reply Jay. There are three related parts to the enum and I am assuming that all of them should be moved out of the class.As well as a function to allow the user to update the status.The code in the previous reply that had been commented out was for a previous iteration that converted the value to text and allowed incrementing based in index.\nThe new code works very well but does save the value in Realm as an int",
"username": "Ken_Turnbull"
},
{
"code": "",
"text": "Often times when you’re finding a managed property not saving, it’s because Realm can’t resolve it so it doesn’t fail, but it’s also not set.Yes, try moving PersistableEnums to the top level of the app and report back.",
"username": "Jay"
}
]
| Trying to update a field in ObservedRealmObject that depends on an Enum | 2022-10-11T17:21:56.567Z | Trying to update a field in ObservedRealmObject that depends on an Enum | 3,252 |
null | [
"production",
"php",
"atlas-data-lake"
]
| [
{
"code": "pecl install mongodb-1.14.2\npecl upgrade mongodb-1.14.2\n",
"text": "The PHP team is happy to announce that version 1.14.2 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release fixes a build issue where libresolv was not correctly linked on some platforms (e.g. Ubuntu 22.04).This release upgrades our libbson and libmongoc dependencies to 1.22.2. This notably fixes a build issue on Alpine Linux and a separate bug that prevented the driver from connecting to Atlas Data Lake, both of which were introduced in libmongoc 1.22.0 (i.e. ext-mongodb 1.14.0).A complete list of resolved issues in this release may be found in JIRA.DocumentationDocumentation is available on PHP.net.InstallationYou can either download and install the source manually, or you can install the extension with:or update with:",
"username": "jmikola"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB PHP Extension 1.14.2 Released | 2022-10-20T15:29:18.365Z | MongoDB PHP Extension 1.14.2 Released | 2,155 |
[
"node-js",
"connecting",
"app-services-user-auth"
]
| [
{
"code": "",
"text": "I have created an App service tried to connect to it using the electron example but while doing await realmApp.logIn(Realm.Credentials.anonymous()) getting an error.\n\nScreenshot 2022-10-20 at 2.23.36 AM1870×408 33.8 KB\n\nI have enabled anonymous login have added user roles and also configured my local IP",
"username": "Ujjwal_Madan"
},
{
"code": "",
"text": "Hi @Ujjwal_Madan,That application doesn’t seem to exist, are you sure there isn’t a typo in your code (more in detail, the App ID)?",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Well, I have found it, but the Cluster it was supposed to work with is gone now, did you delete it?",
"username": "Paolo_Manna"
}
]
| Not able to login from node Realm SDK | 2022-10-19T20:54:22.670Z | Not able to login from node Realm SDK | 1,616 |
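A minimal Node.js sketch of the login flow discussed above. The App ID is a placeholder; as Paolo points out, it must match the ID shown in the App Services UI exactly, and the linked cluster must still exist:

```js
const Realm = require('realm');

// The App ID below is a placeholder; copy the exact value from the App Services UI.
const app = new Realm.App({ id: 'application-0-abcde' });

async function login() {
  const user = await app.logIn(Realm.Credentials.anonymous());
  console.log('Logged in as', user.id);
}

login().catch(err => console.error('Login failed:', err));
```

Anonymous authentication must also be enabled for the app, which the poster above had already done; with a valid App ID and backing cluster, this call should return a user object.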
|
[]
| [
{
"code": "sudo chown -R mongodb:mongodb /var/lib/mongodb\nsudo chown mongodb:mongodb /tmp/mongodb-27017.sock\n",
"text": "\nimage961×216 14 KB\nI have tried doingdoesn’t seem to work",
"username": "Pragyan_Yadav"
},
{
"code": "",
"text": "Please share what is in the logs.",
"username": "steevej"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Pragyan_Yadav !As @steevej suggested, please check the MongoDB logs for more context on why the process is shutting down.Please also confirm:version of MongoDB server you are installingO/S versionmethod you used to install MongoDBIf you installed MongoDB using one of the Installation Tutorials and used the official packages, all of the file and directory permissions should be correct as long as you are starting and stopping MongoDB using the service definition.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "version of MongoDB = v6.0.2\nOS = Ubuntu 20.4\nmethod of installation = I have tried these 2.\nmongod --versiondb version v6.0.2\nBuild Info: {\n \"version\": \"6.0.2\",\n \"gitVersion\": \"94fb7dfc8b974f1f5343e7ea394d0d9deedba50e\",\n \"openSSLVersion\": \"OpenSSL 1.1.1f 31 Mar 2020\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu2004\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.129+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.130+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.132+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.132+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.751+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.751+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.751+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.751+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.752+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":834593,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"fcs01\"}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.752+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.13\",\"gitVersion\":\"cfb7690563a3144d3d1175b3a20c2ec81b662a8f\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.752+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.752+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.765+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.765+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.765+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=479M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:56.541+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2022-10-10T07:51:56.541+00:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"95: Operation not supported\"}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:56.541+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":687}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:56.541+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n",
"text": "Hey sorry for late response\n@Stennie_X @steevejThese are the infoOutput of mongod --versionInstallation LinksThese are my logs",
"username": "Pragyan_Yadav"
},
{
"code": "{\"t\":{\"$date\":\"2022-10-10T07:51:55.765+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-10-10T07:51:55.765+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=479M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2022-10-10T07:51:56.541+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n",
"text": "Your thread title installing mongodb for first time seems to be in contradiction (because of the first time part) with the fatal error you get.The error messages tell us that mongod found existing data file in the specified directory and that the data files are incompatible with the version you just installed.May be it is true that you are installing the first time on this machine because you imported the data files from another installation.The files you imported from another machine or the data files created by a previous install on this machine are either important to you or not. If they are not important, you may just delete the content of the directory and start over. However, if the files are important to you, you must install the version that was running when the data files were updated, then follow the documented migration path.",
"username": "steevej"
}
]
| I am getting this error after installing mongodb for first time | 2022-10-10T08:26:59.851Z | I am getting this error after installing mongodb for first time | 2,374 |
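Related to the upgrade-path advice above: if a binary that matches the data files can be started on the same dbPath, a quick way to confirm what the files expect is to check the feature compatibility version from mongosh. This is only a sketch and requires a compatible server version to be running:

```js
// Prints something like { featureCompatibilityVersion: { version: "5.0" }, ok: 1 }
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });
```

A version mismatch in either direction (for example, a 5.0 binary opening data files written by 6.0, which appears to match the log above where a 5.0.13 service starts while mongod --version reports 6.0.2) fails with exactly the WiredTiger compatibility error shown.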
|
null | [
"queries",
"python",
"crud",
"mongodb-shell"
]
| [
{
"code": ".insert_many.insert_one.inserted_countif one_or_many_bool == True:\n x = mycol.insert_many(insert_values_json)\nelse:\n x = mycol.insert_one(insert_values_json)\nreturn x\n\nprint(x)\nprint(x.inserted_count, \"documents insert.\")\n<pymongo.results.InsertManyResult object at 0x0000017D1D256950>\nTraceback (most recent call last):\n File \"C:\\Users\\chuan\\OneDrive\\Desktop\\10.17_connect_mongoD_練習\\fake02.py\", line 54, in <module>\n print(x.inserted_count, \"documents inserted.\")\nAttributeError: 'InsertManyResult' object has no attribute 'inserted_count'\nimport pymongo\nimport datetime\nimport json\nfrom bson.objectid import ObjectId\nfrom bson import json_util\n\ndef init_db(ip, db, coll):\n try:\n myclient = pymongo.MongoClient('mongodb://' + ip + '/')\n mydb = myclient[db]\n mycol = mydb[coll]\n except Exception as e:\n msg_fail_reason = \"error in init_db function\"\n return msg_fail_reason\n\n return mydb, mycol\n\n# ins_data = insert_db_data\n# one_or_many_bool: input 1 means True; input 0 is False\n\ndef ins_data(one_or_many_bool, insert_values_json ):\n try: \n if one_or_many_bool:\n x = mycol.insert_many(insert_values_json)\n else:\n x = mycol.insert_one(insert_values_json)\n return x\n except Exception as e:\n msg_fail_reason = \"error in ins_data function\"\n return msg_fail_reason\n\nmsg_fail_reason = \"no error occur\"\n\nip_input = input(\"Enter the ip: \")\nexist_DB_name = input(\"Enter exist DB name: \")\nexist_coll_name = input(\"Enter exist collection name: \")\nmydb, mycol = init_db(ip_input, exist_DB_name, exist_coll_name)\n\n\nupdate_one_or_many = input(\"U are update one or many values? (ex:1 for many , 0 for one): \")\nnewvalues_str = input(\"Enter new values: \")\n\none_or_many_bool = bool(int(update_one_or_many))\n\ninsert_values_json =json.loads(newvalues_str)\nx = ins_data(one_or_many_bool, insert_values_json )\n\nprint(x)\nprint(x.inserted_count, \"documents insert.\")\n\nnumber_of_insert_data = int(x.inserted_count)\n\nmodified_data_list = []\nfor modified_data in mycol.find().sort(\"_id\", -1).limit(number_of_insert_data):\n# print(modified_data)\n modified_data_list.append(modified_data)\n\n\ndef parse_json(data):\n return json.loads(json_util.dumps(data))\n\n# if someone want data in json \nmodified_data_json = parse_json(modified_data_list)\n\n\n# 1 means success \nreturn_status_str = { \"ok\" : 1 , \"msg\" : msg_fail_reason , \"count\" : number_of_insert_data}\nprint(return_status_str)\nprint(type(return_status_str))\n",
"text": "I follow the manual https://api.mongodb.com/python/3.4.0/api/pymongo/results.html\nand similar problem python - AttributeError: 'dict' object has no attribute 'is_active' (PyMongo And Flask) - Stack Overflow (not fit mine issue)\nafter successfully .insert_many or .insert_one,\nthe .inserted_count not working",
"username": "j_ton"
},
{
"code": "acknowledgedFalseWriteConcern(w=0)TrueAttributeError: 'InsertManyResult' object has no attribute 'inserted_count'",
"text": "If you read correctlythe manual results – Result class definitions — PyMongo 3.4.0 documentation you will see that the return type of insert_many is of type pymongo.results.InsertManyResult which has the attributes:Is this the result of an acknowledged write operation?The acknowledged attribute will be False when using WriteConcern(w=0) , otherwise True .andA list of _ids of the inserted documents, in the order provided.There is no attribute named inserted_count, hence the errorAttributeError: 'InsertManyResult' object has no attribute 'inserted_count'The attribute inserted_count is only present in pymongo.results.BulkWriteResult.",
"username": "steevej"
},
{
"code": "InsertManyResultinserted_countlen(result.inserted_ids)import pymongo\nimport datetime\nimport json\nfrom bson.objectid import ObjectId\nfrom bson import json_util\n\ndef init_db(ip, db, coll):\n try:\n myclient = pymongo.MongoClient('mongodb://' + ip + '/')\n mydb = myclient[db]\n mycol = mydb[coll]\n except Exception as e:\n msg_fail_reason = \"error in init_db function\"\n return msg_fail_reason\n\n return mydb, mycol\n\n# ins_data = insert_db_data\n# one_or_many_bool: input 1 means True; input 0 is False\n\ndef ins_data(one_or_many_bool, insert_values_json ):\n try: \n if one_or_many_bool:\n x = mycol.insert_many(insert_values_json)\n else:\n x = mycol.insert_one(insert_values_json)\n return x\n except Exception as e:\n msg_fail_reason = \"error in ins_data function\"\n return msg_fail_reason\n\nmsg_fail_reason = \"no error occur\"\n\nip_input = input(\"Enter the ip: \")\nexist_DB_name = input(\"Enter exist DB name: \")\nexist_coll_name = input(\"Enter exist collection name: \")\nmydb, mycol = init_db(ip_input, exist_DB_name, exist_coll_name)\n\n\ninsert_one_or_many = input(\"U are Insert one or many values? (ex:1 for many , 0 for one): \")\nnewvalues_str = input(\"Enter new values: \")\n\none_or_many_bool = bool(int(insert_one_or_many))\n\ninsert_values_json =json.loads(newvalues_str)\nx = ins_data(one_or_many_bool, insert_values_json )\n\nprint(x)\nprint(x.inserted_ids)\nprint(len(x.inserted_ids), \"documents insert.\")\n\nnumber_of_insert_data = int(len(x.inserted_ids))\n\nmodified_data_list = []\nfor modified_data in mycol.find().sort(\"_id\", -1).limit(number_of_insert_data):\n# print(modified_data)\n modified_data_list.append(modified_data)\n\n\ndef parse_json(data):\n return json.loads(json_util.dumps(data))\n\n# if someone want data in json \nmodified_data_json = parse_json(modified_data_list)\n\n\n# 1 means success \nreturn_status_str = { \"ok\" : 1 , \"msg\" : msg_fail_reason , \"count\" : number_of_insert_data}\nprint(return_status_str)\nprint(type(return_status_str))\n",
"text": "I can use for InsertManyResult , instead of inserted_count that len(result.inserted_ids) is proper, which I learn from expert.so the overall corrected code look like this:",
"username": "j_ton"
},
{
"code": "len(result.inserted_ids)x.modified_count",
"text": "thanks @steevej, ur suggestion looks really greate, and one more question , is there a manul, can list up all the calls ? That I’m working on Pymongo to MongoDB’s (Insert , Update, Query, Delete), so far I have known for “counting resluts” for Insert = len(result.inserted_ids) ; Update = x.modified_count , I might need a manul to know the rest , I wanna know how to count Query, Delete",
"username": "j_ton"
},
{
"code": "",
"text": "is there a manul, can list up all the calls ?Yes, a manual is always useful.But it is really funny that you ask, because the manual is exactly the link you shared in your first post:I follow the manual results – Result class definitions — PyMongo 3.4.0 documentation",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| (mongoDB and Python) AttributeError: 'InsertManyResult' object has no attribute 'inserted_count' | 2022-10-19T09:53:11.310Z | (mongoDB and Python) AttributeError: ‘InsertManyResult’ object has no attribute ‘inserted_count’ | 3,854 |
[]
| [
{
"code": "",
"text": "Can any one have any Idea how can we install MongoDB 5.0 or MongoDB 6.0 on Red hat Linux 9, is it in support or is it got removed from support.\n\nMicrosoftTeams-image1110×312 15.7 KB\nWhen I tried install it in the MongoDB 6.0 it got installed and and when i tried to run it it shows the following error.",
"username": "Ch_Sadvik"
},
{
"code": "grep -qm1 '^flags.*avx' /proc/cpuinfo && echo OK || echo NOT OK",
"text": "Hi @Ch_SadvikAs of MongoDB 5.0 CPUs that don’t support the AVX set are not supported.A core-dump with ILL is a signature that your CPU is not supporting this. If you are running in a virtual machine and the underlying host supports AVX then it is likely the hypervisor or vm require further configuration.You can run this to check if the cpu\ngrep -qm1 '^flags.*avx' /proc/cpuinfo && echo OK || echo NOT OK",
"username": "chris"
}
]
| How to install MongoDB 5.0 or MongoDB 6.0 on Red hat Linux9 | 2022-10-20T09:25:55.850Z | How to install MongoDB 5.0 or MongoDB 6.0 on Red hat Linux9 | 1,827 |
|
null | [
"cxx",
"field-encryption",
"c-driver"
]
| [
{
"code": "[ 77%] Building C object src/libmongoc/CMakeFiles/test-libmongoc.dir/tests/test-mongoc-client.c.o\n/home/xxx/Mongodb/mongo-c-driver-1.23.0/src/libmongoc/tests/test-mongoc-client.c: In function ‘_test_client_sends_handshake’:\n/home/xxx/Mongodb/mongo-c-driver-1.23.0/src/libmongoc/tests/test-mongoc-client.c:3377:7: error: ‘pool’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n mongoc_client_pool_destroy (pool);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/home/xxx/Mongodb/mongo-c-driver-1.23.0/src/libmongoc/tests/test-mongoc-client.c:3354:7: error: ‘future’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n future_destroy (future);\n ^~~~~~~~~~~~~~~~~~~~~~~\ncc1: some warnings being treated as errors\nsrc/libmongoc/CMakeFiles/test-libmongoc.dir/build.make:1238: recipe for target 'src/libmongoc/CMakeFiles/test-libmongoc.dir/tests/test-mongoc-client.c.o' failed\nmake[2]: *** [src/libmongoc/CMakeFiles/test-libmongoc.dir/tests/test-mongoc-client.c.o] Error 1\nCMakeFiles/Makefile2:2506: recipe for target 'src/libmongoc/CMakeFiles/test-libmongoc.dir/all' failed\nmake[1]: *** [src/libmongoc/CMakeFiles/test-libmongoc.dir/all] Error 2\nMakefile:162: recipe for target 'all' failed\nmake: *** [all] Error 2\n\nmymachine:~/Downloads/Mongodb/mongo-c-driver-1.23.0/cmake-build$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF -DCMAKE_BUILD_TYPE=Release ..\n-- The C compiler identification is GNU 7.5.0\n-- Check for working C compiler: /usr/bin/cc\n-- Check for working C compiler: /usr/bin/cc -- works\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Detecting C compile features\n-- Detecting C compile features - done\n-- Looking for a CXX compiler\n-- Looking for a CXX compiler - /usr/bin/c++\n-- The CXX compiler identification is GNU 7.5.0\n-- Check for working CXX compiler: /usr/bin/c++\n-- Check for working CXX compiler: /usr/bin/c++ -- works\n-- Detecting CXX compiler ABI info\n-- Detecting CXX compiler ABI info - done\n-- Detecting CXX compile features\n-- Detecting CXX compile features - done\nfile VERSION_CURRENT contained BUILD_VERSION 1.23.0\n-- Build and install static libraries\n -- Using bundled libbson\nlibbson version (from VERSION_CURRENT file): 1.23.0\n-- Check if the system is big endian\n-- Searching 16 bit integer\n-- Looking for sys/types.h\n-- Looking for sys/types.h - found\n-- Looking for stdint.h\n-- Looking for stdint.h - found\n-- Looking for stddef.h\n-- Looking for stddef.h - found\n-- Check size of unsigned short\n-- Check size of unsigned short - done\n-- Using unsigned short\n-- Check if the system is big endian - little endian\n-- Looking for snprintf\n-- Looking for snprintf - found\n-- Performing Test BSON_HAVE_TIMESPEC\n-- Performing Test BSON_HAVE_TIMESPEC - Success\n-- struct timespec found\n-- Looking for gmtime_r\n-- Looking for gmtime_r - found\n-- Looking for rand_r\n-- Looking for rand_r - found\n-- Looking for strings.h\n-- Looking for strings.h - found\n-- Looking for strlcpy\n-- Looking for strlcpy - not found\n-- Looking for stdbool.h\n-- Looking for stdbool.h - found\n-- Looking for clock_gettime\n-- Looking for clock_gettime - found\n-- Looking for strnlen\n-- Looking for strnlen - found\n-- Looking for pthread.h\n-- Looking for pthread.h - found\n-- Looking for pthread_create\n-- Looking for pthread_create - not found\n-- Check if compiler accepts -pthread\n-- Check if compiler accepts -pthread - yes\n-- Found Threads: TRUE \nAdding -fPIC to compilation of bson_static 
components\nlibmongoc version (from VERSION_CURRENT file): 1.23.0\n-- Searching for zlib CMake packages\n-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version \"1.2.11\") \n-- zlib found version \"1.2.11\"\n-- zlib include path \"/usr/include\"\n-- zlib libraries \"/usr/lib/x86_64-linux-gnu/libz.so\"\n-- Looking for include file unistd.h\n-- Looking for include file unistd.h - found\n-- Looking for include file stdarg.h\n-- Looking for include file stdarg.h - found\n-- Searching for compression library zstd\n-- Found PkgConfig: /usr/bin/pkg-config (found version \"0.29.1\") \n-- Checking for module 'libzstd'\n-- No package 'libzstd' found\n-- Not found\n-- Found OpenSSL: /usr/lib/x86_64-linux-gnu/libcrypto.so (found version \"1.1.1\") \n-- Looking for ASN1_STRING_get0_data in /usr/lib/x86_64-linux-gnu/libcrypto.so\n-- Looking for ASN1_STRING_get0_data in /usr/lib/x86_64-linux-gnu/libcrypto.so - found\n-- Searching for sasl/sasl.h\n-- Found in /usr/include\n-- Searching for libsasl2\n-- Found /usr/lib/x86_64-linux-gnu/libsasl2.so\n-- Looking for sasl_client_done\n-- Looking for sasl_client_done - found\n-- Check size of socklen_t\n-- Check size of socklen_t - done\n-- Looking for res_nsearch\n-- Looking for res_nsearch - found\n-- Looking for res_ndestroy\n-- Looking for res_ndestroy - not found\n-- Looking for res_nclose\n-- Looking for res_nclose - found\n-- Looking for sched_getcpu\n-- Looking for sched_getcpu - not found\n-- Detected parameters: accept (int, struct sockaddr *, socklen_t *)\n-- Searching for compression library header snappy-c.h\n-- Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/snappy/include for Snappy compression)\nSearching for libmongocrypt\n-- libmongocrypt not found. Configuring without Client-Side Field Level Encryption support.\n-- Performing Test MONGOC_HAVE_SS_FAMILY\n-- Performing Test MONGOC_HAVE_SS_FAMILY - Success\n-- Compiling against OpenSSL\n-- Compiling against Cyrus SASL\nAdding -fPIC to compilation of mongoc_static components\n-- Building with MONGODB-AWS auth support\n-- Build files generated for:\n-- \tbuild system: Unix Makefiles\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/xxx/Mongodb/mongo-c-driver-1.23.0/cmake-build\n\n",
"text": "I’ve encountered a build error following the instructions at Installing the MongoDB C Driver (libmongoc) and BSON library (libbson) — libmongoc 1.23.2\nfor version 1.23.0. The flag -Werror is set and as a result the build fails here:I can get by this by editing the CMake files and removing the -Werror but presumably that is there by design.CMake output follows:",
"username": "Otto_Is_Bob"
},
{
"code": "",
"text": "You can file an issue here",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thank you. I’ll try that.",
"username": "Otto_Is_Bob"
}
]
| Mongo-c-driver-1.23.0 build error on Ubuntu 18.04 | 2022-10-19T21:08:49.589Z | Mongo-c-driver-1.23.0 build error on Ubuntu 18.04 | 2,471 |
null | [
"aggregation",
"compass"
]
| [
{
"code": "[\n {\n '$match': {\n 'Id': 321 // Note this is not a Mongo _id but a distinct ID from a legacy system.\n }\n }, {\n '$unwind': {\n 'path': '$Vehicles'\n }\n }, {\n '$facet': {\n 'test': []\n }\n }\n] \n",
"text": "Hi all,\nI’ve run into an issue and I’m not sure how to best approach finding a solution.\nI’m using MongoDB 5.3.1 Community and MongoDB Compass. I have a database that contains a collection of “Dealers”. Each dealer record contains information about a customer and a list of vehicles they have in stock. In turn, each Vehicle record contains information about the vehicle and 2 arrays, one of associated images and the other about the vehicles specifications.We are not talking large amounts of data. The Dealer collection only contains 57 records, and the largest number of vehicles is 113 records.I’ve created an aggregate that:PlanExecutor error during aggregation :: caused by :: document constructed by $facet is 104910678 bytes, which exceeds the limit of 104857600 bytesThe aggregate is literally this:I don’t do anything in the facet stage and it still bombs.I’ve not played with the allowDiskUse, because I wasn’t sure it should be throwing this sort of error, with this amount of data. I could be wrong.If I turn off the facet stage and export the aggregate result, the output Json file is only 3.12MB.Any advice would be greatly received. Tips on how to trouble shoot, possible reasons, what to do next.Thanks in advance.",
"username": "Andy_Bryan"
},
{
"code": "",
"text": "Hi @Andy_Bryan ,Try to add the allowDiskUse: true to your query:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
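A minimal sketch of the syntax for that suggestion (the collection name and pipeline are taken from the question above; note that, as the rest of the thread shows, this option alone may not be enough for the $facet document-size limit):

```js
// Pass allowDiskUse as an aggregate() option so stages can spill to disk
db.Dealers.aggregate(
  [
    { $match: { Id: 321 } },
    { $unwind: { path: "$Vehicles" } },
    { $facet: { test: [] } }
  ],
  { allowDiskUse: true }
)
```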
{
"code": "",
"text": "Just to double check. Should it be throwing this sort of error at the level of data I’m using?",
"username": "Andy_Bryan"
},
{
"code": "",
"text": "Running an empty facet might blow the stage",
"username": "Pavel_Duchovny"
},
{
"code": "db.adminCommand({setParameter: 1, internalQueryFacetMaxOutputDocSizeBytes: 335544320})\n",
"text": "Unfortunately, that didn’t resolve the issue. After doing a bit of digging, this actually sorted my problem.",
"username": "Andy_Bryan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Facet exceeding it's limit | 2022-10-19T15:23:19.863Z | Facet exceeding it’s limit | 2,473 |
null | [
"dot-net",
"time-series",
"unity"
]
| [
{
"code": "",
"text": "Hi! I’ve recently started integrating Realm into my multiplayer Unity project. My task is to save the positions of all players (the number of players can be from 4 to 30) along with some information about the current player’s activity every 0.1 seconds. This information is sent by the game server and is not used in the game, so I just have to write it to the database. However, it is important to keep the same sampling frequency.My question is what is a better approach to keep the stored data points evenly spaced?Hopefully, my question makes sense and this is the right place to ask it.Any recommendations and resources will come in handy!",
"username": "Valerii"
},
{
"code": "",
"text": "Hi @Valerii, thanks for your message. Sorry for the late reply, your post flew under our radar I think we don’t have any specific recommendation for your use case. Probably the best approach would be to store all these events in a concurrent queue and continuously dequeue it in a background thread that takes care of storing it in Realm.\nRegarding your frequency limits… probably it would be better if you group multiple events in one write transaction instead of having one transaction per event, as it could be too much depending on the platforms in which the game will be run.Let us know how it goes!",
"username": "papafe"
},
{
"code": "",
"text": "Hi @papafe, thank you for your reply! I have shelved the implementation of the solution that includes Realm and MongoDB and am using the PlayFab capacity for now. Hopefully, I will make it back to the Realm solution soon!",
"username": "Valerii"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| A better approach to storing time series data from multiplayer Unity server | 2022-10-01T15:25:31.255Z | A better approach to storing time series data from multiplayer Unity server | 2,815 |
null | []
| [
{
"code": "",
"text": "Ubiquiti Controller & Omada Controller I Installed On The Same Server, the ports are different but which Controller I Start First The Other Doesn’t Work Both Controllers are using mangadb Please Help",
"username": "Abdullah_Gunaydin"
},
{
"code": "",
"text": "Hello @Abdullah_Gunaydin and welcome to the MongoDB Community forums! Are you having any problems with MongoDB itself? You can look in the log files and paste any log entries if you are.As for the controllers, those appear to be third party items and you would most likely want to reach out to companies that make those tools through their support channels if the problem resides in one of them. While there are a lot of smart people here willing to help out on issues, once you get into third party tooling that help here might be limited as you would need to find someone that has those tools installed and in use.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "I tried changing the port again and I succeeded thanks[image]",
"username": "Abdullah_Gunaydin"
}
]
| Ubiquiti Controller & Omada Controller | 2022-10-11T19:02:58.520Z | Ubiquiti Controller & Omada Controller | 1,982 |
null | [
"aggregation",
"queries",
"java"
]
| [
{
"code": "String query = ResourceLoader.loadResource(\"mongodb/search_query.txt\");\n \nList<Bson> queryPipeline = new ArrayList<>();\nBsonArray.parse(query)\n .getValues()\n .forEach(bsonValue -> queryPipeline\n .add(BsonDocument.parse(bsonValue.toString())));\n\ncollection.aggregate(queryPipeline);\n{ $match: {\n \"modified\":{$gte: new Date(978307200000)}\n }\n}\n{\"$match\": {\"modified\": {\"$gte\": {\"$date\": \"2001-01-01T00:00:00Z\"}}}}\nmodified_id:62b9e49c772160a59079ae5d\nmyId:\"26085\"\nsource:\"has some data:\nmodified: 1999-07-05T00:00:00.000+00:00\n {\"$match\": {\"modified\": {\"$gte\": new Date(978307200000)}}} \n",
"text": "I am having hard time understanding the issue i am face with making the aggregation pipeline query work. I read the query as string from a file and then convert that into Bson and then input that to aggregation() before executing it.Below is the codeIssue:Below is the matching condition that is defined in the filethe above string gets parsed into the below Bson objectProblem is that, the query doesn’t return any result from the collection. even more confusing is that the unit test I wrote passes, meaning; the same query fetch the record and the test passes.Below is a sample record that is stored in our collection. I see modified is stored as Date.I am not sure what the issue is, I would really appreciate your help in understanding the issue.if I run the query on the mongo shell setting new Date(), it works; I understand MongoShell doesn’t understand $date but Bson parser outputs the data condition as $date:",
"username": "Raj"
},
{
"code": "import com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\nimport org.bson.json.JsonWriterSettings;\n\nimport java.time.Instant;\nimport java.util.Date;\nimport java.util.function.Consumer;\n\nimport static com.mongodb.client.model.Filters.gte;\n\npublic class Community {\n\n public static void main(String[] args) {\n try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n MongoDatabase db = mongoClient.getDatabase(\"test\");\n MongoCollection<Document> coll = db.getCollection(\"coll\");\n coll.drop();\n coll.insertOne(new Document(\"date\", new Date(988307200000L)));\n\n System.out.println(\"It works like this with a timestamp and Date:\");\n coll.find(gte(\"date\", new Date(978307200000L))).forEach(printDocuments());\n\n System.out.println(\"It's also working with Instant and Date for example:\");\n Instant instant = Instant.parse(\"2000-01-01T00:00:00.000Z\");\n Date timestamp = Date.from(instant);\n coll.find(gte(\"date\", timestamp)).forEach(printDocuments());\n }\n }\n\n private static Consumer<Document> printDocuments() {\n return doc -> System.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));\n }\n}\n",
"text": "Hi @Raj and welcome in the MongoDB Community !I wrote a little example. I hope this will help:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi, I am also having the same issue, but I am trying this using Spring MongoTemplate library. Anyone has a Spring solution for this?",
"username": "Jaward_Sally"
}
]
| Java Date in MongoDB aggregation query brings no result. Collection has data and unit test works | 2022-07-05T20:49:39.651Z | Java Date in MongoDB aggregation query brings no result. Collection has data and unit test works | 4,206 |
null | []
| [
{
"code": "",
"text": "When inserting duplicates, the E11000 error is thrown.\nThis is a sample error message:\nE11000 duplicate key error collection: test.movies index: name_1_lang_1 dup key: { name: “movie1”, lang: “ENG” }However, in some places, I can see the error message does not show the field keys that caused the duplicate. Some thing like so:\nE11000 duplicate key error collection: test.movies index: name_1_lang_1 dup key: { : “movie1”,: “ENG” }Why does the error message not show the field names?\nThis causes my parser to break.",
"username": "Sameer_Khalid"
},
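For reference, a quick way to reproduce the first, field-labelled form of the message in mongosh (collection and field names are taken from the question; the field-less variant in the question is not reproduced here):

```js
// Unique compound index, then a duplicate insert triggers E11000
db.movies.createIndex({ name: 1, lang: 1 }, { unique: true })
db.movies.insertOne({ name: "movie1", lang: "ENG" })
db.movies.insertOne({ name: "movie1", lang: "ENG" })
// => E11000 duplicate key error collection: test.movies index: name_1_lang_1
//    dup key: { name: "movie1", lang: "ENG" }
```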
{
"code": "",
"text": "I suspect this is the case when _id is the duplicate.",
"username": "steevej"
},
{
"code": "",
"text": "Same issue here.\nI cannot find the message property while responding with the error object but I am able to access it with “err.message”.",
"username": "Zahid_Hussain_Khan"
}
]
| E11000 error message not showing the field keys | 2021-01-28T09:23:56.230Z | E11000 error message not showing the field keys | 5,923 |
null | []
| [
{
"code": "",
"text": "At the access log there are randomly error messages: BadValue: SCRAM-SHA-256 authentication is disabled.\nThis happens 100% of the time with mms-automation from localhost and occasionally from regular remote client even the connection string is always the same.SCRAM-SHA is not disabled and this seems like an Atlas bug.",
"username": "Danwiu"
},
{
"code": "",
"text": "Did you solve this? Have the same problem.",
"username": "Stefan_Verhagen"
},
{
"code": "SCRAM-SHA-256SCRAM-SHA-1SHA-1mms-automationmms-automationSCRAM-SHA-256mms-automation",
"text": "Hi @Danwiu,SCRAM-SHA is not disabled and this seems like an Atlas bug.Currently, Atlas does not support SCRAM-SHA-256, but does support SCRAM-SHA-1. Notably, MongoDB authentication protocols do not use SHA-1 as a raw hash function for passwords or digital signatures, but rather as an HMAC construction in, e.g., SASL SCRAM-SHA-1. While many common uses of SHA-1 have been deprecated or sunset by standards organizations, these do not typically apply to HMAC functions.At the access log there are randomly error messages: BadValue: SCRAM-SHA-256 authentication is disabled.Just to clarify, is the above message you’re seeing within the Database Access History section?This happens 100% of the time with mms-automation from localhostThe mms-automation user is used for Atlas internal automation tasks including monitoring. The source of this message is that mms-automation user initially attempts authentication using SCRAM-SHA-256 which Atlas doesn’t support, causing the “BadValue: SCRAM-SHA-256 authentication is disabled” message, before falling back to SCRAM-SHA-1. Note that there is no detrimental effect to the operation of the database, and this informational message is provided for your own auditing purposes.occasionally from regular remote client even the connection string is always the same.Other than the mms-automation user, what other application(s) from your environment are causing the same “BadValue: SCRAM-SHA-256 authentication is disabled.” message? Please provide the following details about those application(s):Regards,\nJason",
"username": "Jason_Tran"
},
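If the goal is simply to stop your own client applications from producing this informational log entry, one option (my own suggestion, not something confirmed above) is to pin the mechanism in the connection string so the driver never attempts SCRAM-SHA-256 first. For example, from mongosh (host, user and password below are placeholders):

```js
// Explicitly request SCRAM-SHA-1 so no SCRAM-SHA-256 attempt is logged
mongosh "mongodb+srv://user:pass@cluster0.example.mongodb.net/test?authMechanism=SCRAM-SHA-1"
```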
{
"code": "SCRAM-SHA-256SCRAM-SHA-1mongod",
"text": "Hi @Stefan_Verhagen - Welcome to the community As noted above in my previous response to Danwiu, Currently, Atlas does not support SCRAM-SHA-256, but does support SCRAM-SHA-1. Hopefully the previous response provides more details you were after.However, could you clarify what problem you are seeing exactly? Please provide the following so we are able to assist with narrowing down what the particular issue could be:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you for your quick response Jason, indeed it is the mms-automation user creating the culprit.",
"username": "Stefan_Verhagen"
},
{
"code": "",
"text": "It seems to me a bug for Atlas to report a problem for a problem Atlas caused.",
"username": "Steve_Hand1"
},
{
"code": "",
"text": "A post was split to a new topic: “BadValue: SCRAM-SHA-256 authentication is disabled”",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| “BadValue: SCRAM-SHA-256 authentication is disabled” | 2022-09-12T14:15:26.924Z | “BadValue: SCRAM-SHA-256 authentication is disabled” | 5,007 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Hello,\nI am using a collection with aprox 0.5 Million documents inside the collection.\nI just want to get the sum of the some of columns on the whole collection with one lookup where I need to filter based on that lookup collection.\nMy concern is that for only 0.5 million records the query take about 1 min time to execute it.\nI have also used the indexed on the columns but still it takes this much time.\nWhat is the issue with and what’s wrong with it?\nIs there any suggestion or idea how to improve the performance?My Collections:",
"username": "Ghanshyam_Ashra"
},
{
"code": "executionStatsmongod",
"text": "Hello @Ghanshyam_Ashra ,Welcome to The MongoDB Community Forums! Could you please help me with below details to get better understanding of your query slowness?Along with information about the query and collections, could you also provide some information about your hardware specifications and serverRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "\n{ stages:\n\n [ { '$cursor':\n\n { query:\n\n { start_time:\n\n { '$gte': 2022-09-06T04:28:31.779Z,\n\n '$lte': 2022-10-06T04:28:31.779Z } },\n\n fields:\n\n { distance: 1,\n\n harsh_breaking_counts: 1,\n\n idling_counts: 1,\n\n rpm_counts: 1,\n\n speeding_incident_counts: 1,\n\n start_time: 1,\n\n vehicle_id: 1,\n\n _id: 0 },\n\n queryPlanner:\n\n { plannerVersion: 1,\n\n namespace: 'kpi_rps.journeys',\n\n indexFilterSet: false,\n\n parsedQuery:\n\n { '$and':\n\n [ { start_time: { '$lte': 2022-10-06T04:28:31.779Z } },\n\n { start_time: { '$gte': 2022-09-06T04:28:31.779Z } } ] },\n\n queryHash: '1367AE20',\n\n planCacheKey: '4A44DFBB',\n\n winningPlan:\n\n { stage: 'FETCH',\n\n inputStage:\n\n { stage: 'IXSCAN',\n\n keyPattern: { start_time: 1 },\n\n indexName: 'start_time_1',\n\n isMultiKey: false,\n\n multiKeyPaths: { start_time: [] },\n\n isUnique: false,\n\n isSparse: false,\n\n isPartial: false,\n\n indexVersion: 2,\n\n direction: 'forward',\n\n indexBounds: { start_time: [ '[{ $date: { $numberLong: \"1662438511779\" } }, { $date: { $numberLong: \"1665030511779\" } }]' ] } } },\n\n rejectedPlans: [] },\n\n executionStats:\n\n { executionSuccess: true,\n\n nReturned: 0,\n\n executionTimeMillis: 2,\n\n totalKeysExamined: 0,\n\n totalDocsExamined: 0,\n\n executionStages:\n\n { stage: 'FETCH',\n\n nReturned: 0,\n\n executionTimeMillisEstimate: 0,\n\n works: 1,\n\n advanced: 0,\n\n needTime: 0,\n\n needYield: 0,\n\n saveState: 1,\n\n restoreState: 1,\n\n isEOF: 1,\n\n docsExamined: 0,\n\n alreadyHasObj: 0,\n\n inputStage:\n\n { stage: 'IXSCAN',\n\n nReturned: 0,\n\n executionTimeMillisEstimate: 0,\n\n works: 1,\n\n advanced: 0,\n\n needTime: 0,\n\n needYield: 0,\n\n saveState: 1,\n\n restoreState: 1,\n\n isEOF: 1,\n\n keyPattern: { start_time: 1 },\n\n indexName: 'start_time_1',\n\n isMultiKey: false,\n\n multiKeyPaths: { start_time: [] },\n\n isUnique: false,\n\n isSparse: false,\n\n isPartial: false,\n\n indexVersion: 2,\n\n direction: 'forward',\n\n indexBounds: { start_time: [ '[{ $date: { $numberLong: \"1662438511779\" } }, { $date: { $numberLong: \"1665030511779\" } }]' ] },\n\n keysExamined: 0,\n\n seeks: 1,\n\n dupsTested: 0,\n\n dupsDropped: 0 } } } } },\n\n { '$lookup':\n\n { from: 'vehicles',\n\n as: 'vehicle',\n\n localField: 'vehicle_id',\n\n foreignField: 'id',\n\n unwinding: { preserveNullAndEmptyArrays: false },\n\n matching: { status: { '$in': [ 'Roadworthy', 'Roadworthy (with defects)', 'VOR' ] } } } },\n\n { '$group':\n\n { _id: { '$dateToString': { date: '$start_time', format: { '$const': '%Y-%m-%d' } } },\n\n total_distance: { '$sum': '$distance' },\n\n total_speeding_incidents: { '$sum': '$speeding_incident_counts' },\n\n total_breaking_incidents: { '$sum': '$harsh_breaking_counts' },\n\n total_idlinging_incidents: { '$sum': '$idling_counts' },\n\n total_rpm_incidents: { '$sum': '$rpm_counts' } } } ],\n\n serverInfo:\n\n { host: 'DESKTOP-1IBPCFM',\n\n port: 27017,\n\n version: '4.2.23-rc0',\n\n gitVersion: 'cf91e1fbb5f45590d8e356e57522648381fea93c' },\n\n ok: 1 }\n\n\n{ _id: ObjectId(\"631360201fe5008c790afe10\"),\n\n id: 1,\n\n vehicle_id: 181,\n\n user_id: 1,\n\n start_time: 2022-02-07T12:37:11.000Z,\n\n end_time: 2022-02-07T12:43:19.000Z,\n\n start_lat: Decimal128(\"51.52837\"),\n\n start_lon: Decimal128(\"-3.07873\"),\n\n end_lat: Decimal128(\"51.530338\"),\n\n end_lon: Decimal128(\"-3.102211\"),\n\n engine_duration: 368,\n\n idle_duration: 128,\n\n fuel: Decimal128(\"0.43\"),\n\n co2: Decimal128(\"1.10\"),\n\n distance: 1890,\n\n odometer: 
17370605,\n\n odometer_start: 17368668,\n\n odometer_end: 17370605,\n\n avg_speed: 3.44,\n\n max_speed: 12,\n\n incident_count: 0,\n\n harsh_breaking_count: 0,\n\n harsh_acceleration_count: 0,\n\n harsh_cornering_count: 0,\n\n speeding_count: 0,\n\n speeding_incident_count: null,\n\n rpm_count: 0,\n\n idling_count: 0,\n\n updated_at: 2022-09-03T14:09:36.838Z,\n\n created_at: 2022-09-03T14:09:36.838Z }\n\n{ _id: ObjectId(\"631360201fe5008c790afe11\"),\n\n id: 2,\n\n vehicle_id: 181,\n\n user_id: 1,\n\n start_time: 2022-02-07T12:45:26.000Z,\n\n end_time: 2022-02-07T13:34:13.000Z,\n\n start_lat: Decimal128(\"51.53034\"),\n\n start_lon: Decimal128(\"-3.10221\"),\n\n end_lat: Decimal128(\"51.656443\"),\n\n end_lon: Decimal128(\"-3.337073\"),\n\n engine_duration: 2927,\n\n idle_duration: 1261,\n\n fuel: Decimal128(\"2.98\"),\n\n co2: Decimal128(\"7.70\"),\n\n distance: 32559,\n\n odometer: 17403336,\n\n odometer_start: 17370605,\n\n odometer_end: 17403336,\n\n avg_speed: 8.81,\n\n max_speed: 26,\n\n incident_count: 2,\n\n harsh_breaking_count: 0,\n\n harsh_acceleration_count: 0,\n\n harsh_cornering_count: 0,\n\n speeding_count: 0,\n\n speeding_incident_count: null,\n\n rpm_count: 0,\n\n idling_count: 2,\n\n updated_at: 2022-09-03T14:09:36.839Z,\n\n created_at: 2022-09-03T14:09:36.839Z }\n\n{ _id: ObjectId(\"631360201fe5008c790afe12\"),\n\n id: 3,\n\n vehicle_id: 181,\n\n user_id: 1,\n\n start_time: 2022-02-07T13:48:58.000Z,\n\n end_time: 2022-02-07T13:59:11.000Z,\n\n start_lat: Decimal128(\"51.65644\"),\n\n start_lon: Decimal128(\"-3.33707\"),\n\n end_lat: Decimal128(\"51.656464\"),\n\n end_lon: Decimal128(\"-3.337021\"),\n\n engine_duration: 613,\n\n idle_duration: 608,\n\n fuel: Decimal128(\"0.00\"),\n\n co2: Decimal128(\"0.00\"),\n\n distance: 0,\n\n odometer: 17403336,\n\n odometer_start: 17403336,\n\n odometer_end: 17403336,\n\n avg_speed: 0,\n\n max_speed: 0,\n\n incident_count: 1,\n\n harsh_breaking_count: 0,\n\n harsh_acceleration_count: 0,\n\n harsh_cornering_count: 0,\n\n speeding_count: 0,\n\n speeding_incident_count: null,\n\n rpm_count: 0,\n\n idling_count: 1,\n\n updated_at: 2022-09-03T14:09:36.840Z,\n\n created_at: 2022-09-03T14:09:36.840Z }\n\n{ _id: ObjectId(\"631360201fe5008c790afe13\"),\n\n id: 4,\n\n vehicle_id: 181,\n\n user_id: 1,\n\n start_time: 2022-02-07T14:30:12.000Z,\n\n end_time: 2022-02-07T14:47:54.000Z,\n\n start_lat: Decimal128(\"51.65646\"),\n\n start_lon: Decimal128(\"-3.33702\"),\n\n end_lat: Decimal128(\"51.696191\"),\n\n end_lon: Decimal128(\"-3.346751\"),\n\n engine_duration: 1062,\n\n idle_duration: 248,\n\n fuel: Decimal128(\"0.59\"),\n\n co2: Decimal128(\"1.50\"),\n\n distance: 9755,\n\n odometer: 17413179,\n\n odometer_start: 17403336,\n\n odometer_end: 17413179,\n\n avg_speed: 8.05,\n\n max_speed: 21,\n\n incident_count: 0,\n\n harsh_breaking_count: 0,\n\n harsh_acceleration_count: 0,\n\n harsh_cornering_count: 0,\n\n speeding_count: 0,\n\n speeding_incident_count: null,\n\n rpm_count: 0,\n\n idling_count: 0,\n\n updated_at: 2022-09-03T14:09:36.841Z,\n\n created_at: 2022-09-03T14:09:36.841Z }\n\n{ _id: ObjectId(\"631360201fe5008c790afe14\"),\n\n id: 5,\n\n vehicle_id: 181,\n\n user_id: 1,\n\n start_time: 2022-02-07T15:11:53.000Z,\n\n end_time: 2022-02-07T15:14:38.000Z,\n\n start_lat: Decimal128(\"51.69619\"),\n\n start_lon: Decimal128(\"-3.34675\"),\n\n end_lat: Decimal128(\"51.696226\"),\n\n end_lon: Decimal128(\"-3.346785\"),\n\n engine_duration: 165,\n\n idle_duration: 0,\n\n fuel: Decimal128(\"0.00\"),\n\n co2: Decimal128(\"0.00\"),\n\n distance: 
0,\n\n odometer: 17413179,\n\n odometer_start: 17413179,\n\n odometer_end: 17413179,\n\n avg_speed: 0,\n\n max_speed: 0,\n\n incident_count: 0,\n\n harsh_breaking_count: 0,\n\n speeding_incident_count: null,\n\n rpm_count: 0,\n\n idling_count: 0 }\n\n\n{ _id: ObjectId(\"6313601f1fe5008c790afbc5\"),\n\n id: 1,\n\n status: 'Roadworthy',\n\n telematics_status: 'tm8.gps.ign.off',\n\n last_location_lat: Decimal128(\"51.353667\"),\n\n last_location_lon: Decimal128(\"-0.482931\"),\n\n last_location_time: 2022-09-02T17:38:59.000Z,\n\n updated_at: 2022-09-03T14:09:35.564Z,\n\n created_at: 2022-09-03T14:09:35.564Z }\n\n{ _id: ObjectId(\"6313601f1fe5008c790afbc6\"),\n\n id: 2,\n\n status: 'Roadworthy',\n\n telematics_status: 'tm8.gps.ign.off',\n\n last_location_lat: Decimal128(\"56.206825\"),\n\n last_location_lon: Decimal128(\"-3.17141\"),\n\n last_location_time: 2022-06-14T10:27:16.000Z,\n\n updated_at: 2022-09-03T14:09:35.566Z,\n\n created_at: 2022-09-03T14:09:35.566Z }\n\n{ _id: ObjectId(\"6313601f1fe5008c790afbc7\"),\n\n id: 3,\n\n status: 'Roadworthy',\n\n telematics_status: null,\n\n last_location_lat: null,\n\n last_location_lon: null,\n\n last_location_time: null,\n\n updated_at: 2022-09-03T14:09:35.566Z,\n\n created_at: 2022-09-03T14:09:35.566Z }\n\n{ _id: ObjectId(\"6313601f1fe5008c790afbc8\"),\n\n id: 4,\n\n status: 'Archived',\n\n telematics_status: null,\n\n last_location_lat: null,\n\n last_location_lon: null,\n\n last_location_time: null,\n\n updated_at: 2022-09-03T14:09:35.567Z,\n\n created_at: 2022-09-03T14:09:35.567Z }\n\n{ _id: ObjectId(\"6313601f1fe5008c790afbc9\"),\n\n id: 5,\n\n status: 'Roadworthy',\n\n telematics_status: 'tm8.gps.ign.off',\n\n last_location_lat: Decimal128(\"53.382753\"),\n\n last_location_lon: Decimal128(\"-2.189482\"),\n\n last_location_time: 2022-09-02T11:56:21.000Z,\n\n updated_at: 2022-09-03T14:09:35.568Z,\n\n created_at: 2022-09-03T14:09:35.568Z }\n\n\ndb.journeys.aggregate([\n\n {\n\n \"$match\": {\n\n \"start_time\": {\n\n \"$gte\": { \"$date\": { \"$numberLong\": \"1662440173747\" } },\n\n \"$lte\": { \"$date\": { \"$numberLong\": \"1665032173747\" } }\n\n }\n\n }\n\n },\n\n {\n\n \"$lookup\": {\n\n \"from\": \"vehicles\",\n\n \"localField\": \"vehicle_id\",\n\n \"foreignField\": \"id\",\n\n \"as\": \"vehicle\"\n\n }\n\n },\n\n { \"$unwind\": { \"path\": \"$vehicle\" } },\n\n {\n\n \"$match\": {\n\n \"vehicle.status\": {\n\n \"$in\": [\"Roadworthy\", \"Roadworthy (with defects)\", \"VOR\"]\n\n }\n\n }\n\n },\n\n {\n\n \"$group\": {\n\n \"_id\": {\n\n \"$dateToString\": { \"format\": \"%Y-%m-%d\", \"date\": \"$start_time\" }\n\n },\n\n \"total_distance\": { \"$sum\": \"$distance\" },\n\n \"total_speeding_incidents\": { \"$sum\": \"$speeding_incident_counts\" },\n\n \"total_breaking_incidents\": { \"$sum\": \"$harsh_breaking_counts\" },\n\n \"total_idlinging_incidents\": { \"$sum\": \"$idling_counts\" },\n\n \"total_rpm_incidents\": { \"$sum\": \"$rpm_counts\" }\n\n }\n\n }\n\n])\n\n",
"text": "@Tarun_Gaur Thanks for the attention.My Mongodb version is: MongoDB server version: 4.2.23-rc0. MongoDB compass is version 1.33.0.Explain Response:Journeys Documents :Vehicles Documents:My Query that takes approx 1 min:Indexes applied on all the columns that used inside query:Start_timespeeding_incident_countharsh_breaking_countdistanceidling_countrpm_countAnd other fields applied Indexes and also it is used when query run.Currently it is in my local system and still it takes time to load. My system has 8 GB RAM and using i5 processor.Thanks.",
"username": "Ghanshyam_Ashra"
},
{
"code": " nReturned: 0,\n\n executionTimeMillis: 2,\n\n totalKeysExamined: 0,\n\n totalDocsExamined: 0,\n0executionStatsstart_time_1",
"text": "@Ghanshyam_Ashra , why is most of the parameters 0 in executionStats of Explain Response?\nJust an observation that the only index used in this is start_time_1.",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "So what is the issue there tat this statistics come as Zero?",
"username": "Ghanshyam_Ashra"
},
{
"code": "{\n \"stages\" : [\n {\n \"$cursor\" : {\n \"query\" : {\n \"start_time\" : {\n \"$gte\" : ISODate(\"2022-08-01T00:00:00Z\"),\n \"$lte\" : ISODate(\"2022-09-20T00:00:00Z\")\n }\n },\n \"fields\" : {\n \"COLUMN_1\" : 1,\n \"COLUMN_2\" : 1,\n \"COLUMN_3\" : 1,\n \"COLUMN_4\" : 1,\n \"COLUMN_5\" : 1,\n \"COLUMN_6\" : 1,\n \"COLUMN_7\" : 1,\n \"collection_2_id\" : 1,\n \"_id\" : 0\n },\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"db.journeys_collection\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"start_time\" : {\n \"$lte\" : ISODate(\"2022-09-20T00:00:00Z\")\n }\n },\n {\n \"start_time\" : {\n \"$gte\" : ISODate(\"2022-08-01T00:00:00Z\")\n }\n }\n ]\n },\n \"queryHash\" : \"1367AE20\",\n \"planCacheKey\" : \"4A44DFBB\",\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"start_time\" : 1\n },\n \"indexName\" : \"start_time_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"start_time\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"start_time\" : [\n \"[new Date(1659312000000), new Date(1663632000000)]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : [ ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 82120,\n \"executionTimeMillis\" : 14810,\n \"totalKeysExamined\" : 82120,\n \"totalDocsExamined\" : 82120,\n \"executionStages\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 82120,\n \"executionTimeMillisEstimate\" : 5,\n \"works\" : 82121,\n \"advanced\" : 82120,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 656,\n \"restoreState\" : 656,\n \"isEOF\" : 1,\n \"docsExamined\" : 82120,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 82120,\n \"executionTimeMillisEstimate\" : 2,\n \"works\" : 82121,\n \"advanced\" : 82120,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 656,\n \"restoreState\" : 656,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"start_time\" : 1\n },\n \"indexName\" : \"start_time_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"start_time\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"start_time\" : [\n \"[new Date(1659312000000), new Date(1663632000000)]\"\n ]\n },\n \"keysExamined\" : 82120,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n }\n }\n },\n {\n \"$lookup\" : {\n \"from\" : \"COLLECTION_2\",\n \"as\" : \"result\",\n \"localField\" : \"collection_2_id\",\n \"foreignField\" : \"id\",\n \"unwinding\" : {\n \"preserveNullAndEmptyArrays\" : false\n },\n \"matching\" : {\n \"status\" : {\n \"$in\" : [\n \"STATUS_1\",\n \"STATUS_2\",\n \"STATUS_3\"\n ]\n }\n }\n }\n },\n {\n \"$group\" : {\n \"_id\" : {\n \"$dateToString\" : {\n \"date\" : \"$start_time\",\n \"format\" : {\n \"$const\" : \"%Y-%m-%d\"\n }\n }\n },\n \"total_COLUMN_1_envents\" : {\n \"$sum\" : \"$COLUMN_1\"\n },\n \"total_COLUMN_2_envents\" : {\n \"$sum\" : \"$COLUMN_2\"\n },\n \"total_COLUMN_3_envents\" : {\n \"$sum\" : \"$COLUMN_3\"\n },\n \"total_COLUMN_4_envents\" : {\n \"$sum\" : \"$COLUMN_4\"\n },\n \"total_COLUMN_5_envents\" : {\n \"$sum\" : \"$COLUMN_5\"\n },\n \"total_COLUMN_6_envents\" : {\n \"$sum\" : \"$COLUMN_6\"\n }\n }\n },\n {\n \"$sort\" : {\n \"sortKey\" : {\n \"_id\" : 1\n }\n }\n }\n ],\n 
\"serverInfo\" : {\n \"host\" : \"DESKTOP-KIPRKDI\",\n \"port\" : 27017,\n \"version\" : \"4.2.23-rc0\",\n \"gitVersion\" : \"cf91e1fbb5f45590d8e356e57522648381fea93c\"\n },\n \"ok\" : 1\n}\n",
"text": "@Tarun_Gaur\nTotal documents scanned: 82k\nEstimated time: 14810 MS ( Approx 14 seconds )\nIt seems too much low performanceExplain Stats:Thanks.",
"username": "Ghanshyam_Ashra"
},
{
"code": "",
"text": "@Tarun_Gaur\nI have added new explain result here.\nAnd it seems that $lookup takes time to execute the relationship.\nIs there any possible way to enhance the performance with $lookup and $match on multiple fields with $lookup data?Thanks.",
"username": "Ghanshyam_Ashra"
}
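One concrete thing worth trying (the field names come from the posted pipeline, but this exact index is my own suggestion, not something confirmed in the thread): make sure the vehicles side of the $lookup has an index that covers both the join key and the status filter, so each per-document lookup does not scan the foreign collection:

```js
// Supports the $lookup on vehicles.id and the status filter in one index
db.vehicles.createIndex({ id: 1, status: 1 })
```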
]
| Enhance Performance for 1 Million records | 2022-10-04T12:23:20.739Z | Enhance Performance for 1 Million records | 4,645 |
null | [
"production",
"c-driver",
"atlas-data-lake"
]
| [
{
"code": "",
"text": "Announcing 1.22.2 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.Bug fixes:Thanks to everyone who contributed to this release.",
"username": "Colby_Pike"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB C Driver 1.22.2 Released | 2022-10-19T23:35:44.940Z | MongoDB C Driver 1.22.2 Released | 1,951 |
null | [
"atlas-search"
]
| [
{
"code": "",
"text": "I’m trying to define synonyms mapping on my MongoDB Cloud freetier cluster using Atlas Search and I got into this weird error. So, I was wondering if Atlas Search only supports english when defining synonyms in the collection.\nI have some english and korean words as “equivalent” synonyms. I thought I messed up with the json configuration but whenver I enter korean word as an element of synonyms, the build failed.{ “mappingType” : “equivalent” , “synonyms”: [“car”,“차”] } ← built w/o errorBut the weird part is that, if I use korean word that has 1 letter it passed. So, basically this fails.{ “mappingType” : “equivalent” , “synonyms”: [“car”,“차차”] } ← failedcar menas “차” in korean. (car === “차”). I just copied and pasted the same word twice to demonstrate the my condition. It looks like Atlas doesn’t support korean in synonyms collection but on the other hand, it works with only 1 letter word. Is it some sort of bug?",
"username": "Polar_swimming"
},
{
"code": "\"차차\"db.synonyms.find()\n[\n {\n _id: ObjectId(\"634de9f26343f96ab5838209\"),\n mappingType: 'equivalent',\n synonyms: [ 'car', 'vehicle', 'automobile', '차차' ]\n }\n]\n\"차차\"db.cars.aggregate([{$search:{text:{query:'차차',path:\"name\",synonyms:\"mySynonyms\"}}}])\n[\n { _id: ObjectId(\"634de7486343f96ab5838200\"), name: 'vehicle' },\n { _id: ObjectId(\"634de7416343f96ab58381fe\"), name: 'car' },\n { _id: ObjectId(\"634de7446343f96ab58381ff\"), name: 'car 2' }\n]\n\"View status details\"",
"text": "Hi @Polar_swimming - Welcome to the community But the weird part is that, if I use korean word that has 1 letter it passed. So, basically this fails.\n{ “mappingType” : “equivalent” , “synonyms”: [“car”,“차차”] } ← failedCan you provide the index details in JSON format and the full error message when you attempted to add the additional character?I wasn’t able to reproduce any particular error when adding the dual character text \"차차\":Running a search query using synonyms for the text query value \"차차\":I believe you should be able to get the error message on the index build failure from the UI by clicking the \"View status details\" message on the Search Index page.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n \"name\": {\n \"fullName\": \"some full name\",\n \"tag\": \"some tag\"\n }\n}\n{\n \"analyzer\": \"lucene.nori\",\n \"searchAnalyzer\": \"lucene.nori\",\n \"mappings\": {\n \"fields\": {\n \"name\": {\n \"dynamic\": true,\n \"type\": \"document\"\n }\n }\n },\n \"synonyms\": [\n {\n \"analyzer\": \"lucene.nori\",\n \"name\": \"vendorSynonyms\",\n \"source\": {\n \"collection\": \"vendorSynonyms\"\n }\n }\n ]\n}\n{\n \"mappingType\":\"equivalent\",\n \"synonyms\":[\n \"samsung\",\n \"삼성\",\n \"삼성전자\"\n ]\n}\n{\n \"mappingType\":\"equivalent\",\n \"synonyms\":[\n \"micron\",\n \"마이크론\"\n ]\n}\n{\n \"mappingType\":\"equivalent\",\n \"synonyms\":[\n \"asus\",\n \"아수스\",\n \"에이수스\",\n \"에이서스\"\n ]\n}\n",
"text": "@Jason_Tran I saw you were replying to my question. Here’s what you asked for.\n“name” field is document type. It has “fullName” and “tag” as its properties (fields).Index configurationYou know what. I was trying to show you some more examples but it seems some of them work just fine.\nHere’s a list of documents from my synonyms collection.#1#2#3 (this causes error)\nimage824×636 25 KB\nlet me know if you need more info",
"username": "Polar_swimming"
},
{
"code": "",
"text": "Thank you for providing those details, i’ll do some testing on my system and update here accordingly.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n \"mappingType\":\"equivalent\",\n \"synonyms\":[\n \"asus\",\n \"아수스\",\n \"에이수스\",\n \"에이서스\"\n ]\n}\n\"아수스\"\"아\"",
"text": "Hi @Polar_swimming,#3 (this causes error)I believe the specific entry on the synonym mapping you provided which is causing the index failure is \"아수스\". More specifically, after I had done some testing, it appears to be due to this character \"아\". The character is functioning as a stop word and as per the synonyms options documentation:To use synonyms with stop words, you must either index the field using the Standard Analyzer or add the synonym entry without the stop word.Regards,\nJason",
"username": "Jason_Tran"
},
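Based on the second option in the documentation Jason quoted ("add the synonym entry without the stop word"), one possible fix is simply to drop the entry that contains the stop-word character from that mapping document. A sketch, using the data from the post above; whether losing the 아수스 spelling is acceptable for your search needs is of course your call:

```js
// Same mapping as #3, minus the entry that starts with the stop word 아
db.vendorSynonyms.insertOne({
  mappingType: "equivalent",
  synonyms: ["asus", "에이수스", "에이서스"]
})
```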
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Atlas Search synonyms collection language support? | 2022-10-17T04:16:18.779Z | Atlas Search synonyms collection language support? | 1,857 |
null | [
"aggregation",
"serverless"
]
| [
{
"code": "db.mycoll.aggregate([{ $sample: { size: 20 } }])\n",
"text": "In a collection I have about 10 million documents.I use this code to find random 20 of them:How many RPU MongoDB Atlas needs to do this?",
"username": "tri_be"
},
{
"code": "RPU$sampleCOLLSCAN$sampleN$sampleN$sampleN",
"text": "Hi @tri_be - Welcome to the communityHow many RPU MongoDB Atlas needs to do this?I would recommend going over the Serverless - Usage Cost Summary documentation. In regards to RPU’s specifically (as of the time of this message):You are charged one RPU for each document read (up to 4KB) or for each index read (up to 256 bytes).So in terms of RPU for your question, one of the factors you will need to consider is document and index read size(s).In a collection I have about 10 million documents.\ndb.mycoll.aggregate([{ $sample: { size: 20 } }])There are several conditions in which the $sample stage will do a COLLSCAN / use all documents from preceding aggregation stage or use a pseudo-random cursor. As per the documentation linked:If all of the following conditions are true, $sample uses a pseudo-random cursor to select the N documents:If any of the previous conditions are false, $sample Whether the RPU usage is higher when the pseudo-random cursor is used versus when it is not would differ on a case-by-case basis.As serverless costs may be a concern to you, you may wish to set up a billing alert.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| How many RPU MongoDB Atlas uses to find random records using aggregate? | 2022-09-27T04:04:37.890Z | How many RPU MongoDB Atlas uses to find random records using aggregate? | 2,035 |
null | [
"data-modeling"
]
| [
{
"code": "user {\n\t_id: 123,\n\tname: ‘bob’,\n\ttasks: [987, 1, 2, 3, …, N] // task reference _ids\n}\n\ntask {\n\t_id: 987,\n\tname: ‘water the plants’,\n\tusers: [123, 1, 2, 3, …, N] // user reference _ids\n}\n",
"text": "I’m new to NOSQL and Mongo and am aware of the unbounded arrays anti pattern and the 16MB size limit for documents. After doing some research, I noticed that objects with a many:many relationship need to reference each other. For example, for argument’s sake, let’s say there are users and tasks - a user can have many tasks, and a task can have many users. I would think to structure the database like this:So each task can reference its users and each user can reference its tasks. However, for argument’s sake, let’s say that both the number of users and tasks are unbounded. This would mean that the users array in the a task object is unbounded and could exceed the 16MB limit, and the tasks array in a user object is also unbounded and could exceed the 16MB limit. So my question is how could this many:many relationship scale without exceeding the 16MB limit?I feel like I could be missing something here. Thanks for any help",
"username": "Nick_Smith"
},
{
"code": "",
"text": "Hi @Nick_Smith ,Yes you are correct that incase of an “unbounded” arrays using one document to form this relationship will fail and is not advisable.For this scenario we have a pattern called the outlier pattern where large relationship is bucket into several documents each holding a portion of the array. The other documents called overflow document .The Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data setThis works for both sides of the relationship.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
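To make the idea concrete, here is a rough sketch of how the overflow documents could look for the users/tasks example from the question (field names such as has_task_overflow and bucket are my own illustration, not part of the pattern's required vocabulary):

```js
// Main user document keeps a bounded slice of task ids plus an overflow flag
{ _id: 123, name: "bob", tasks: [987, 1, 2 /* ...up to a fixed cap */], has_task_overflow: true }

// Overflow documents in a separate collection hold the rest, in capped buckets
{ user_id: 123, bucket: 1, tasks: [501, 502, 503 /* ...next cap-sized chunk */] }
{ user_id: 123, bucket: 2, tasks: [601, 602, 603] }
```

The same bucketing can be applied on the task side for its users array, so neither document ever grows without bound.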
{
"code": "",
"text": "Perfect, that should do it - thank you",
"username": "Nick_Smith"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unbounded many:many relationships | 2022-10-18T07:49:59.654Z | Unbounded many:many relationships | 1,545 |
null | [
"php"
]
| [
{
"code": "",
"text": "Hello,\nIs anyone aware of a complete calendaring function that can be combined with MongoDB and PHP? We are trying to build an app (registering events and tasks) using PHP accessing MongoDB collections without building a calendar function from scratch? I would appreciate any feedback or ideas.\nPlease stay safe during these difficult times\nMany thanks\nSimon",
"username": "Simon_Adams"
},
{
"code": "",
"text": "Hi @Simon_Adams, welcome!Is anyone aware of a complete calendaring function that can be combined with MongoDB and PHP?Would you care to elaborate more on what do you mean by calendaring function. Generally an application’s user interface would have a calendar interface, then only the value selected is stored into a database. There is a separation between the user interface and the database.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hi Wan,Many thanks indeed for your reply, much appreciated.What I am trying to achieve at my work place is this:In order to shorten development time, I am seeking either a library/function that I can use to act as a go-between our app and MongoDB collections, or simply a script that can be modified to suit our needs. It needs to be well-presented front end to display calendar entries with ability to email, display alerts etc. I know there are some tailored calendars designed for clinic bookings and the like. However, I was just wondering if there is something that standout which could be recommended by MongoDB folks like yourself.\nWan, if the above is inadequate I will be happy to elaborate further.\nOnce again, many thanks for your kind assistance.\nRegards\nSimon Adams",
"username": "Simon_Adams"
},
{
"code": "",
"text": "there have been calendar MODs and extesions over the years but apparently they are not popular enough to keep updating etc by their authors.the main reason a calendar is not part of the phpbb code is that the original and still valid concept of phpbb is to make a basic bulletin board system. that requires forums and posts and not much else.\nanything else is an addon. that includes a calendar.I would assume the reason why you can’t find too much about a current calendar extension is because there is apparently not that much interest in one.",
"username": "Betina_Jessen"
},
{
"code": "",
"text": "Please stay safe during these difficult times in 2023Thanks a lot",
"username": "Betina_Jessen"
}
]
| Calendaring function for PHP | 2020-04-19T23:40:12.193Z | Calendaring function for PHP | 3,319 |
null | [
"atlas-device-sync"
]
| [
{
"code": "",
"text": "In testing, I can achieve synchronization across my 2 mobile apps, creating accounts, logging in and storing data that can be changed and reflected across both native apps.But I have one account, the main account with which I’ve done the most testing, where synchronization has broken. After entering 4 records, synchronization has stopped. Logging in and out doesn’t restore synchronization. However, the records I add and edit on the device are maintained. So I can login and logout without losing the data on the device, it just won’t get synchronized to Realm and reflected across apps. I can even create another account on the same device and create records that are synched accurately, then logout and log back into the broken account and see all of the correct records, just still not synched.So something happened at some random point that broke synchronization on that specific account. I don’t see any errors in the logs that I can link to this. The only thing I can think to work on at the moment is trying to verify if the realm on the device is the same as the one that is partially synched and visible on the web admin.",
"username": "Ryan_Goodwin"
},
{
"code": "",
"text": "Hi Ryan,Are you getting any errors in your client app?Please also check your Realm Logs and filter by Errors and Sync categories, you can also filter using the user ID you’re logging in with. Are there any errors which occurred around the time your reproduced this issue?Have you tried terminating sync recently? The client will need to be reset after a termination happens. If this is the case, please try uninstalling the app on the client and re-installing. See article below regarding setting up client reset handling automatically.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "Failed to parse, or apply received changeset: ERROR: AddColumn 'class_Direction.%3' which already exists\n indent preformatted text by 4 spaces\n Exception backtrace:\n 0 Realm 0x000000010110ed5c _ZNK5realm4sync18InstructionApplier19bad_transaction_logIJNS_10StringDataERS3_EEEvPKcDpOT_ + 320\n 1 Realm 0x000000010110eb80 _ZN5realm4sync18InstructionApplierclERKNS0_5instr9AddColumnE + 860\n 2 Realm 0x00000001010c0bc4 _ZN5realm4sync18InstructionApplier5applyIS1_EEvRT_RKNS0_9ChangesetEPNS_4util6LoggerE + 136\n 3 Realm 0x00000001010bd03c _ZN5realm5_impl17ClientHistoryImpl27integrate_server_changesetsERKNS_4sync12SyncProgressEPKyPKNS2_11Transformer15RemoteChangesetEmRNS2_11VersionInfoERNS2_21ClientReplicationBase16IntegrationErrorERNS_4util6LoggerEPNSE_20SyncTransactReporterE + 828\n 4 Realm 0x00000001010cf548 _ZN5realm5_impl14ClientImplBase7Session29initiate_integrate_changesetsEyRKNSt3__16vectorINS_4sync11Transformer15RemoteChangesetENS3_9allocatorIS7_EEEE + 180\n 5 Realm 0x0000000101105fdc _ZN12_GLOBAL__N_111SessionImpl29initiate_integrate_changesetsEyRKNSt3__16vectorIN5realm4sync11Transformer15RemoteChangesetENS1_9allocatorIS6_EEEE + 48\n 6 Realm 0x00000001010cdf34 _ZN5realm5_impl14ClientImplBase7Session24receive_download_messageERKNS_4sync12SyncProgressEyRKNSt3__16vectorINS3_11Transformer15RemoteChangesetENS7_9allocatorISA_EEEE + 692\n 7 Realm 0x00000001010cad74 _ZN5realm5_impl14ClientProtocol22parse_message_receivedINS0_14ClientImplBase10ConnectionEEEvRT_PKcm + 4640\n 8 Realm 0x00000001010c5e10 _ZN5realm5_impl14ClientImplBase10Connection33websocket_binary_message_receivedEPKcm + 60\n 9 Realm 0x0000000101196178 _ZN12_GLOBAL__N_19WebSocket17frame_reader_loopEv + 1528\n 10 Realm 0x00000001010d34bc _ZN5realm4util7network7Service9AsyncOper22do_recycle_and_executeINSt3__18functionIFvNS5_10error_codeEmEEEJRS7_RmEEEvbRT_DpOT0_ + 260\n 11 Realm 0x00000001010d2fa8 _ZN5realm4util7network7Service14BasicStreamOpsINS1_3ssl6StreamEE16BufferedReadOperINSt3__18functionIFvNS8_10error_codeEmEEEE19recycle_and_executeEv + 240\n 12 Realm 0x00000001011882a8 _ZN5realm4util7network7Service4Impl3runEv + 400\n 13 Realm 0x00000001010fb5c0 _ZN5realm4sync6Client3runEv + 36\n 14 Realm 0x0000000100dd03cc _ZZN5realm5_impl10SyncClientC1ENSt3__110unique_ptrINS_4util6LoggerENS2_14default_deleteIS5_EEEERKNS_16SyncClientConfigEENKUlvE0_clEv + 232\n 15 Realm 0x0000000100dd029c _ZNSt3__1L8__invokeIZN5realm5_impl10SyncClientC1ENS_10unique_ptrINS1_4util6LoggerENS_14default_deleteIS6_EEEERKNS1_16SyncClientConfigEEUlvE0_JEEEDTclclsr3std3__1E7forwardIT_Efp_Espclsr3std3__1E7forwardIT0_Efp0_EEEOSE_DpOSF_ + 28\n 16 Realm 0x0000000100dd01fc _ZNSt3__1L16__thread_executeINS_10unique_ptrINS_15__thread_structENS_14default_deleteIS2_EEEEZN5realm5_impl10SyncClientC1ENS1_INS6_4util6LoggerENS3_ISA_EEEERKNS6_16SyncClientConfigEEUlvE0_JEJEEEvRNS_5tupleIJT_T0_DpT1_EEENS_15__tuple_indicesIJXspT2_EEEE + 32\n 17 Realm 0x0000000100dcf930 _ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5realm5_impl10SyncClientC1ENS2_INS7_4util6LoggerENS4_ISB_EEEERKNS7_16SyncClientConfigEEUlvE0_EEEEEPvSJ_ + 116\n 18 libsystem_pthread.dylib 0x00000001d516bc74 _pthread_start + 288\n 19 libsystem_pthread.dylib 0x00000001d5170878 thread_start + 8",
"text": "I’ve gone through the Realm logs and there are no error logs.I’m now receiving a bad changeset error.",
"username": "Ryan_Goodwin"
}
]
| Synch disabled for a specific account | 2021-07-30T14:38:31.390Z | Synch disabled for a specific account | 3,015 |
null | [
"production",
"c-driver",
"atlas-data-lake"
]
| [
{
"code": "",
"text": "Announcing 1.23.1 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.No changes since 1.23.0. Version incremented to match the libmongoc version.Bug fixes:Thanks to everyone who contributed to this release.",
"username": "Colby_Pike"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB C Driver 1.23.1 Released | 2022-10-19T19:15:03.516Z | MongoDB C Driver 1.23.1 Released | 1,893 |
[
"monitoring"
]
| [
{
"code": "",
"text": "\nimage1452×174 12.6 KB\n\nWhat is causing this error to generate?",
"username": "Rajitha_Hewabandula"
},
{
"code": "\"s\":\"I\"",
"text": "Hi @Rajitha_HewabandulaThis is not an error, it is an informational (\"s\":\"I\" ) message. It is notifying that a checkpoint is being written.A checkpoint is creating a consistent state for the data files, you’ll see this log every minute when a checkpoint occurs.You can read more about checkpoints in Kevins’s response and the manual.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error generate in live server | 2022-10-19T07:11:10.425Z | Error generate in live server | 2,187 |
|
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n {\n \"_id\": 1,\n \"priority\": 1,\n \"number_of_products_to_display\": 3\n },\n {\n \"_id\": 2,\n \"priority\": 2,\n \"number_of_products_to_display\": 4\n },\n {\n \"_id\": 3,\n \"priority\": 3,\n \"number_of_products_to_display\": 7\n }\n ]\n[\n {\n \"_id\": 1,\n \"company_id\": 1\n },\n {\n \"_id\": 2,\n \"company_id\": 1\n },\n {\n \"_id\": 3,\n \"company_id\": 2\n },\n {\n \"_id\": 4,\n \"company_id\": 3\n },\n {\n \"_id\": 5,\n \"company_id\": 2\n }\n]\nprioritynumber_of_products_to_display3252323",
"text": "I have a bit of a tricky use case and would love to hear suggestions from the community on the best way to do it.So, I have 2 collections:CompaniesProductsI want to paginate over the products in chucks of 10, but based on the company priority and number_of_products_to_display.For example:We should circle over all companies in this manner until all the products are fetch.All suggestions are welcomed! ",
"username": "NeNaD"
},
{
"code": "db.companies.find({}).sort({priority : -1})\ndb.products.aggregate([{\n $match: {\n company_id: <HIGH_PRIORITY_ID>\n }\n}, {\n $limit: <CORRESPONDING_LIMIT>\n}, {\n $unionWith: {\n coll: 'products',\n pipeline: [\n {\n $match: {\n company_id: <NEXT_HIGH_PRIORITY_ID>\n }\n },\n {\n $limit: <CORRESPONDING_LIMIT>\n }\n...\n// The next N companies\n\n ]\n }},\n{ $skip : 0 },\n{ $limit : 10}\n])\ndb.companies.aggregate(\n[{\n $sort: {\n priority: -1\n }\n}, {\n $lookup: {\n from: 'products',\n localField: '_id',\n foreignField: 'company_id',\n as: 'products'\n }\n}, {\n $addFields: {\n products: {\n $slice: [\n '$products',\n '$number_of_products_to_display'\n ]\n }\n }\n}, {\n $unwind: {\n path: '$products'\n }\n}, {\n $skip: 0\n}, {\n $limit: 10\n}]);\n",
"text": "Hi @NeNaD ,So running such a logic in one aggregation will not be the most efficient way.The efficient way is to get the first 10 companies based on priorityAnd then get the relevant 10 documents from the product:Now if you want to still try a one go show here is the agg , but its complex and not super efficient:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "$unionWith$unionWith[\n {\n \"_id\": 1,\n \"priority\": 1,\n \"number_of_products_to_display\": 7\n },\n {\n \"_id\": 2,\n \"priority\": 2,\n \"number_of_products_to_display\": 8\n }\n ]\n",
"text": "@Pavel_Duchovny Ingenious! I am working with MongoDB for a long time and used $unionWith many times, but I didn’t know you can use $unionWith on the same collection you are doing the aggregation! The only thing that is missing here is the option to iterate over in the next cycle once all the company products are fetched at least once.So for example, let’s say I have 2 companies like this:With you current solution, the whole aggregate will stop in the second iteration and it will return only 5 results (all five from the second company).What I would like to happen is that the cycle would continue, where the second aggregate would also return 10 results (5 from second company and 5 from first company again).Any idea how to add on top of your current answer that would cover that as well? Btw, I am referencing your first solution with company data prefetched before the aggregation.",
"username": "NeNaD"
},
{
"code": "",
"text": "Hi @NeNaD ,2 options :2 . With the skip and limit When First round it is skip : 0 limit : 10 next one is skip: 10 limit : 10 and so on…Ty\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "$limit$unionWith$limit$skip$limit$unionWith",
"text": "Hi @Pavel_Duchovny,Thanks for the quick response! As I tried to said, this will not work since it will stop after the first cycle.In your solution, the $limit is used on each $unionWith, so when we got to the last stage for global $limit and $skip, we will always have only the data from the first cycle to iterate.In my example above, the second iteration will be the final, since it will not continue to iterate over the data because of the inner $limit in each $unionWith stage.Did I explained it properly? ",
"username": "NeNaD"
},
{
"code": "",
"text": "Oh so the display size is actually a batch size?If so then add a skip of x*display(batch)for every union with that you rerunWhy not to precalculate the priority of each product document so that you will just use sorr.For the highest priority company all first x products will get priority 50 , the next will get priority 49 etc…Consider this as a pagination score pattern",
"username": "Pavel_Duchovny"
}
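A rough sketch of that precomputed-score idea (the display_priority field name and the score values are made up for illustration):

```js
// One-off (or scheduled) pass: stamp each product with a sortable score
// derived from its company's priority and its position within that company.
db.products.updateOne({ _id: 1 }, { $set: { display_priority: 50 } })
db.products.updateOne({ _id: 2 }, { $set: { display_priority: 49 } })

// Pagination then becomes a plain indexed sort plus skip/limit
db.products.createIndex({ display_priority: -1 })
db.products.find({}).sort({ display_priority: -1 }).skip(0).limit(10)
```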
]
| Paginate over chunked data of different size | 2022-10-19T07:31:51.596Z | Paginate over chunked data of different size | 1,569 |
null | [
"queries",
"data-modeling"
]
| [
{
"code": "",
"text": "In my application, I need to delete very huge amount of data at a single time by running a query. I need some optimized solution. I read here BulkOperations delete is very effiencient when companritive with deleteMany() function. Even in these we have two options. Which one is very optimised? Bulk.remove() or bulkWrite(delete:{})?",
"username": "Imrankhan_M"
},
{
"code": "",
"text": "I would be very surprise if there are huge differences between the different approaches. You mentioneddelete very huge amount of dataThe total amount of work is more or less the same in all approaches. And with huge amount of data, you will end up being I/O bound anyway to update the permanent storage of your collection and its indexes.If you can specify a single query that matches all documents to delete, you should be using deleteMany with that single query. If you need multiple queries, a bulkWrite with one deleteMany per query is your safe bet.",
"username": "steevej"
},
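For illustration, a minimal sketch of the two shapes being compared (collection name and filter values are placeholders):

```js
// Single filter that matches everything to delete
db.events.deleteMany({ status: "archived" })

// Several independent filters batched into one round trip
db.events.bulkWrite([
  { deleteMany: { filter: { status: "archived" } } },
  { deleteMany: { filter: { createdAt: { $lt: ISODate("2020-01-01") } } } }
], { ordered: false })
```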
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Bulk.remove() or bulkWrite(delete:{}) | Which one is efficient? | 2022-10-19T06:02:07.208Z | Bulk.remove() or bulkWrite(delete:{}) | Which one is efficient? | 2,804 |
null | []
| [
{
"code": "",
"text": "Good Morning. I need to store and save pdf and doc documents. Which is the best option? Thank you",
"username": "francisco_jose"
},
{
"code": "binData",
"text": "Good Morning @francisco_jose, welcome to MongoDB forum.In MongoDB, you use GridFS for storing files larger than 16 MB.If the files (i.e., each of them) to be stored are within the 16 MB limit of BSON document size, then you can store the files within a collection’s document as a field of type Binary Data (binData).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for quick reply. Still separate the data into two custer? One for data related to pdf and the other for pdf? Best regards",
"username": "francisco_jose"
},
{
"code": "",
"text": "What are you considering - GridFS or within a document? What is the maximum size of the PDF documents?",
"username": "Prasad_Saya"
},
{
"code": "fileschunksfileschunks",
"text": "In case of GridFS, the file and its information is stored in two collections (the files and chunks collections). The files stores the file’s metadata (information about the file like, id, length / size, filename, content type, etc.) and the chunks stores the actual file data.In case of a document as a file store, you can have additional fields specifying the file name, description etc., that is within the same document of the collection.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I tried importing pdf files to gridfs. As everything is stored in BSON, is it possible to export the pdf file from mongodb which I imported ?",
"username": "Helena_N_A"
},
{
"code": "",
"text": "Take a look at https://www.mongodb.com/docs/database-tools/mongofiles/#mongodb-binary-bin.mongofiles",
"username": "steevej"
},
{
"code": "",
"text": "Thank you. mongofiles command works",
"username": "Helena_N_A"
},
{
"code": "",
"text": "Hi,\ncan you please provide a link that explains how to store the file (the functions) and then display it to the user from the DB.",
"username": "Laeek_Ahmed"
}
]
| Store and serve pdf documents | 2020-07-11T01:49:10.736Z | Store and serve pdf documents | 32,200 |
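A small PyMongo/GridFS sketch of the two options described above: storing a PDF via GridFS and reading it back, versus embedding it as binary data in a regular document when it is safely under the 16 MB limit. File names, paths, and database names are assumptions.

```python
import gridfs
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["files_demo"]  # assumed database

# Option 1: GridFS (required for files over 16 MB, fine for smaller ones too).
fs = gridfs.GridFS(db)
with open("report.pdf", "rb") as f:
    file_id = fs.put(f, filename="report.pdf", contentType="application/pdf")

# Read it back out of GridFS and write it to disk.
with open("report_copy.pdf", "wb") as out:
    out.write(fs.get(file_id).read())

# Option 2: embed a small PDF directly in a document as binary data.
with open("small.pdf", "rb") as f:
    db["documents"].insert_one(
        {"filename": "small.pdf", "contentType": "application/pdf", "data": f.read()}
    )
```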
null | [
"replication",
"flexible-sync"
]
| [
{
"code": "",
"text": "Can we sync a mongodb database to another incrementally based on a schedule? Say, once every night at 12 am?Background:\nWe have a MongoDB replica cluster consisting of 3 instances, all located within one data center. For backup, the company is acquiring another server in a different country. This rises two questions for us.We initially thought of adding the new server to the replica set as a passive replica. However, we got input that it will slow down the cluster as the new server will have high latency.",
"username": "Safwan"
},
{
"code": "",
"text": "Hello,Synchronisation in MongoDB is an ongoing process, once a new instance is added and an initial sync is done, no need to replace or resync an instance in order to keep it up to date, so you don’t have to schedule sync on specific timestamps.While it’s true that for the newly added remote instance in a different country will take longer to acknowledge write which will increase the overall write latency for your workload, but that’s only if that instance is an active participant in write acknowledgment.You could consider setting the priority of the new instance to 0, this will prevent the instance from becoming primary and it cannot trigger elections, for write concern of “majority” if the instance is a non-voting member it will not contribute in write acknowledgment. and so, application workload will not wait for this instance’s write acknowledgment.This situation is explained in more details actually in this documentation link about priority 0 replica set members, they mention exactly the situation of a replica set member in a different “remote” data center as in your case.You can also check below documentation link about initial sync in MongoDB:And this link about replica set deployment architectures:This one about replica set distributed across two or more data centers:I hope you find this helpful.",
"username": "Mohamed_Elshafey"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can we do incremental sync based on a schedule? | 2022-10-17T10:37:16.832Z | Can we do incremental sync based on a schedule? | 2,062 |
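A hedged sketch of the priority-0 / non-voting configuration suggested above, driven from PyMongo via replSetGetConfig and replSetReconfig; the member index and host names are assumptions, and trying this on a test deployment first is advisable.

```python
from pymongo import MongoClient

# Assumed: connect to the current primary of the replica set.
admin = MongoClient("mongodb://primary.example.net:27017")["admin"]

cfg = admin.command("replSetGetConfig")["config"]
remote = cfg["members"][3]          # assumption: index 3 is the new remote member
remote["priority"] = 0              # can never become primary or trigger elections
remote["votes"] = 0                 # does not count toward w:"majority" acknowledgment
cfg["version"] += 1

admin.command("replSetReconfig", cfg)
```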
null | [
"rust"
]
| [
{
"code": "",
"text": "Hey how’s it going. Been experimenting with the rust driver lately and have been running into this problem when I try to do a find_one_and_update() call where I get thrown a BsonDeserialization(EndOfStream) error result. The query and update seem to go through and makes the changes that I intend, but I returned an Err type I am just curious if there is much documentation on EndOfStream error. Haven’t really found much on it, but I am assuming it has something to do with one of the docs that I am passing to the function that’s causing problems.If this is a more complicated issue, let me know and I can give more details on filter/query/schema (I just need permission from my boss if I can post certain things).",
"username": "Steven_Zhdanov"
},
{
"code": "let devices_collection = self.db.database(\"deviceManagement\").collection::<models::device_schema::Device>(\"devices\");\npub struct Device{\n pub id: String,\n pub name: String,\n pub tags: Vec<String>,\n}\n{\n id: 1,\n name: Bob\n}\n let devices_collection = self.db.database(\"deviceManagement\").collection::<Document>(\"devices\");\n\n",
"text": "oops, acidentally deleted my own post lolFixed my problem. Dumb mistake, basically what was happening was coming from here:Basically, to test some queries, I set up a slimmed down version of my schema but told bson, deserialize the output from the update call as the fully fleshed out schema. Obviously was getting EndOfStream error since…the deserializer was done deserializing but had missing values!So imagine I had something like the following as my schemabut the documents I inserted into my database are as suchI think the deserializer realized there were values missing, and thus couldn’t package into the models::device_schema::Device format.What I did to temporarly solve the issue (just to see my queries in action), was I did the following:basically told the deserializer to just hand me back as type bson document.",
"username": "Steven_Zhdanov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Find_one_and_update returns EndOfStream error | 2022-10-19T04:55:11.250Z | Find_one_and_update returns EndOfStream error | 2,312 |
null | [
"aggregation",
"queries",
"indexes"
]
| [
{
"code": "[\n... other steps ...\n{ '$addFields': { 'list.a_new_field': { ... } },\n{ '$addFields': { 'list.other_new_field': { '$sum': [ { '$max': '$list.a_new_field } ] } } },\n{ '$sort': { 'list.other_new_field': -1 } },\n... other steps ...\n]\n{ '$sort': { sortKey: { 'list.other_new_field': -1 } },\n nReturned: 3053,\n executionTimeMillisEstimate: 60667 } ]\n",
"text": "Hello,I’m trying to understand how to improve an inside sorting operation for an aggregation, made on new fields created by an $addFields step. I’ve got a very articulated pipeline, which I’ll just show the part that I’m interested in:The sort is taking 60s to compute, as explain’d:The collection has 464 documents.\nThe problem here is that I don’t really know how to index the sorting, cause it’s on a new field. Is there any way I can optimize the query without messing with the logic of the pipeline?",
"username": "Marco_D_Agostino"
},
{
"code": "{ '$sort': { sortKey: { 'list.other_new_field': -1 } },\n nReturned: 3053,\n executionTimeMillisEstimate: 60667 } ]\n",
"text": "Hi @Marco_D_Agostino,Welcome to the MongoDB Community forums The problem here is that I don’t really know how to index the sorting, cause it’s on a new fieldAn index cannot be created on a field that is generated within a pipeline, only on a field that resides in a collection.The sort is taking 60s to compute, as explain’d:I am curious, though, about what explain() return if you remove the $sort stage. Is it only the presence $sort stage that slows down the query? 60 seconds seems excessive to sort 3000 documents.Alternatively, if speed is your main concern, you may be able to use the $merge / $out aggregation stage to create a materialized view and then create an index on the resulting collection. Note that you will have to periodically update the materialized view collection in order to get the most recent data.I hope it helps!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hello Kushagra,thanks a lot for your reply. I thought just like you did, with $merge, but it’s a bit complicated cause the pipeline has other articulated steps and other sorts to do. I’ll think of a way to combine it with $merge.\nThank you very much!",
"username": "Marco_D_Agostino"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sorting on new fields created dynamically by an $addFields step in an aggregation pipeline | 2022-10-14T11:03:12.482Z | Sorting on new fields created dynamically by an $addFields step in an aggregation pipeline | 2,894 |
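A rough PyMongo sketch of the materialized-view direction discussed above: run the expensive part of the pipeline once, $merge it into a side collection, index the computed field there, and sort against that index. Collection and field names mirror the snippet in the question but are otherwise assumptions.

```python
from pymongo import MongoClient, DESCENDING

db = MongoClient("mongodb://localhost:27017")["mydb"]  # assumed database

# Refresh the materialized view (run periodically).
db["source"].aggregate([
    # ... earlier stages of the original pipeline ...
    {"$addFields": {"list.other_new_field": {"$sum": [{"$max": "$list.a_new_field"}]}}},
    {"$merge": {"into": "source_scored", "whenMatched": "replace", "whenNotMatched": "insert"}},
])

# Index once; subsequent sorted reads avoid the blocking in-memory sort.
db["source_scored"].create_index([("list.other_new_field", DESCENDING)])
for doc in db["source_scored"].find().sort("list.other_new_field", DESCENDING).limit(10):
    print(doc)
```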
null | [
"atlas",
"api"
]
| [
{
"code": "",
"text": "Hello,I am integrating Project IP Access List, of MongoDB Atlas Administration API (1.0),What I achieved:I am not able to find how to prepare and pass DigestAuth (HTTP Authorization Scheme) in API call, I found this Authentication Note but did not help,Any practical example or documentation would be great,Let me know if I am missing anything,\nThanks.",
"username": "turivishal"
},
{
"code": "--digestrequests.auth.HTTPDigestAuth()",
"text": "Hi @turivishalIt will depend what you are using for your api calls. cURL for example has --digest flag, python requests has requests.auth.HTTPDigestAuth() class. Its not generally something you’ll have to do for yourself.",
"username": "chris"
},
{
"code": "Digest Auth<h1>Bad Message 400</h1>\n<pre>reason: Ambiguous URI empty segment</pre>\n",
"text": "Currently, I am executing this call in postman, I have selected Digest Auth type in the Authorization tab and added username and password, When I execute its response:My end goal is to implement it in nodejs/expressjs (got NPM)!",
"username": "turivishal"
},
{
"code": "",
"text": "I found the tutorial in MongoDB Developer Resources, and it is working for me,Learn how to use digest authentication for the MongoDB Atlas Administration API from Python, Node.js, and Ruby.The below deprecated document helped me instead of the new one,Thanks, @chris",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to prepare auth for MongoDB Atlas Administration API? | 2022-10-18T15:11:30.711Z | How to prepare auth for MongoDB Atlas Administration API? | 2,805 |
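Since the reply above mentions requests.auth.HTTPDigestAuth, here is a minimal Python sketch of calling the Atlas Administration API's project IP access list endpoint with an API key pair; the project ID and keys are placeholders, and the endpoint path reflects the v1.0 API referenced in the thread.

```python
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
PROJECT_ID = "<PROJECT-ID>"                             # placeholder
auth = HTTPDigestAuth("<PUBLIC-KEY>", "<PRIVATE-KEY>")  # programmatic API key pair

# List the project's IP access list entries.
resp = requests.get(f"{BASE}/groups/{PROJECT_ID}/accessList", auth=auth)
resp.raise_for_status()
for entry in resp.json().get("results", []):
    print(entry.get("ipAddress") or entry.get("cidrBlock"))
```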
[
"sharding"
]
| [
{
"code": "ShardingTaskExecutorPoolMaxSize",
"text": "Hi!I have a sharded MongoDB cluster with tens of shards. Sometimes hardware or hypervisor failure leads to a situation when some random replica starts lagging. Queries to the problematic replica start to queue up. At some point they saturate connection pools on mongoses and it leads to cluster-wide denial of service.Singe shard failure shouldn’t lead to entire cluster failure. After some considerations I came to the following options:Are these good solutions for the single shard partial failure problem? Does anybody know other solutions?Thanks!",
"username": "Sergey_Zagursky"
},
{
"code": "",
"text": "Hello @Sergey_Zagursky ,Welcome to The MongoDB Community Forums! Queries to the problematic replica start to queue up. At some point they saturate connection pools on mongoses and it leads to cluster-wide denial of service.Could you please help me with below details to know more about your use-case?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "We are currently on MongoDB 4.4.10 with PSS configuration.The problem is that if single primary suddenly goes 100x slower connections to it immediately saturate mongos connection pools and client connection pools. And from client perspective it looks like all MongoDB cluster is not responding. What I want here is that MongoDB cluster remained operational with the only exception of serving requests to degraded shard.",
"username": "Sergey_Zagursky"
},
{
"code": "",
"text": "single primary suddenly goes 100x slower connections to it immediately saturate mongos connection pools and client connection pools.Sometimes hardware or hypervisor failure leads to a situation when some random replica starts laggingDo you find a pattern in these failures and what was the root cause for these issues(swapping, hardware issues, network issues or any other)?It could be that your cluster is running at full hardware capacity and for some reason a small failure leads to a much larger one? Have you considered upgrading hardware just to see if failure still occurs? Alternatively, depending on the use case, is it possible to add more shards?4.4.17 is the latest in 4.4 series. There are improvements made between 4.4.10-4.4.17 that may help, so upgrading to the newest version may show us that this is not caused by any fixed issues.Lastly, what is the Driver version you are using?",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "The problem with cluster failure is not the failure I want to address.Do you find a pattern in these failures and what was the root cause for these issues(swapping, hardware issues, network issues or any other)?The main reasons of such failures are hardware failures and human mistakes during manual maintenance procedures.It could be that your cluster is running at full hardware capacity and for some reason a small failure leads to a much larger one? Have you considered upgrading hardware just to see if failure still occurs? Alternatively, depending on the use case, is it possible to add more shards?No, it is not running at full capacity. Adding more shards will just increase probability of single shard failure.4.4.17 is the latest in 4.4 series. There are improvements made between 4.4.10-4.4.17 that may help, so upgrading to the newest version may show us that this is not caused by any fixed issues.No, we haven’t tried newer versions and the problem I’m talking about is irrelevant to MongoDB version. Hardware failures are equally deadly to any MongoDB version and what I’m asking here is how to continue serving requests to alive shards.Lastly, what is the Driver version you are using?We are using latest version of official Go MongoDB driver. But again this is hardly relevant.",
"username": "Sergey_Zagursky"
},
{
"code": "",
"text": "No, it is not running at full capacity. Adding more shards will just increase probability of single shard failure.Without knowing your full deployment details and use case, since originally the question is about single shard failure, I was thinking that providing more horizontal scaling might alleviate the issue. But again this depends on details that we are not familiar with.No, we haven’t tried newer versions and the problem I’m talking about is irrelevant to MongoDB version.Newer MongoDB versions have bugfixes and new features that might alleviate certain issues. Upgrading to a newer version ensures that you are not experiencing issues that are already fixed.Hardware failures are equally deadly to any MongoDB versionHardware failures are just as deadly to any other database and/or applications so this is not limited to MongoDB what I’m asking here is how to continue serving requests to alive shards.If some queries depends on an unavailable shard, it may be that the application floods the database with requests that doesn’t timeout or have long timeouts. One possible solution is to limit the timeout for queries for example by using wtimeout for write operations and/or maxTimeMS() for read operations, but this needs to be balanced with possible network latencies or disk latencies so the app doesn’t give up too quickly when the hardware is just preparing to answer the query.PSS replica sets are usually reasonably resilient to failure, but if you’re having trouble with operational issues and you’re open to using a hosted service, you might want to consider using MongoDB Atlas which will take care of these operational concerns for you.",
"username": "Tarun_Gaur"
},
{
"code": "ShardingTaskExecutorPoolMaxSizemongodmongos",
"text": "If some queries depends on an unavailable shard, it may be that the application floods the database with requests that doesn’t timeout or have long timeouts. One possible solution is to limit the timeout for queries for example by using wtimeout for write operations and/or maxTimeMS() for read operations, but this needs to be balanced with possible network latencies or disk latencies so the app doesn’t give up too quickly when the hardware is just preparing to answer the query.Thanks! As I said previously, we’re already considering tightening our timeouts to mitigate the extent of problem. Are there any pool settings that would prevent single shard from saturating entire mongos pools? I found ShardingTaskExecutorPoolMaxSize setting but it only limits connections to mongod and incoming pool on mongos still saturates.",
"username": "Sergey_Zagursky"
}
]
| Single shard partial failure handling | 2022-10-04T14:59:33.496Z | Single shard partial failure handling | 2,594 |
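A brief PyMongo sketch of the timeout-bounding idea discussed above (maxTimeMS for reads, wtimeout for writes), so a degraded shard fails requests fast instead of tying up connection pools; the hosts, namespace, and specific thresholds are assumptions to be tuned.

```python
from pymongo import MongoClient, WriteConcern

client = MongoClient("mongodb://mongos1:27017,mongos2:27017")  # assumed mongos hosts
coll = client["appdb"]["events"]                               # assumed namespace

# Reads: give up after 2 seconds instead of queueing behind a degraded shard.
docs = list(coll.find({"user_id": 42}).max_time_ms(2000))

# Writes: bound how long we wait for majority acknowledgment.
fast_fail = coll.with_options(write_concern=WriteConcern(w="majority", wtimeout=5000))
fast_fail.insert_one({"user_id": 42, "payload": "..."})
```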
|
[
"storage"
]
| [
{
"code": "",
"text": "I have confirmed that the page provides a brief description of what WiredTiger uses Memory for.\n'1. Index\n'2. CollectionAnd I also confirmed that the rest of the file system area is used to reduce disk I/O.I think WiredTiger’s internal cache will also store plans.\nIs there anything else I can save?Also, in one article, I saw that MongoDB recommended memory size is index size.\nBut if I don’t have the minimum size of memory available, will there be a very minimum recommended size?\n(Size up to just before the server stumbles)",
"username": "Kim_Hakseon"
},
{
"code": "dbPathmongod",
"text": "Hi @Kim_Hakseon,I also confirmed that the rest of the file system area is used to reduce disk I/O.This is dependent on the O/S, but generally filesystem cache will be helpful for storing most recently accessed data files in memory (including collections and indexes in your dbPath).I have an older answer which could be a useful reference: Does the MongoDB 3.2 WiredTiger compression include stuff stored in RAM - Server Fault.Is there anything else I can save?Aside from the WiredTiger cache, a mongod process will also need to allocate memory temporarily for processing requests (queries, JavaScript evaluation, in-memory sorts, etc).I saw that MongoDB recommended memory size is index size.The general recommendation is to try to avoid accessing disk for commonly used data and indexes, as this is orders of magnitude slower than cache or memory access. However, it is not a strict rule and there are definitely access patterns where you may only need a subset of a particular index (for example, Indexes that hold only recent values in RAM).Resource usage is highly variable depending on your workloads and use case, so the best way to predict usage is by testing with representative workload and environment. You can mitigate some concerns of becoming I/O bound by having faster disks or more available RAM.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Q. Memory used by mongodb | 2022-10-18T11:11:27.702Z | Q. Memory used by mongodb | 1,610 |
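To illustrate the "indexes that hold only recent values in RAM" idea linked above, here is a hedged PyMongo sketch of a partial index whose filter keeps only recent documents indexed, so the hot part of the index stays small; the namespace, field, and 30-day window are assumptions.

```python
from datetime import datetime, timedelta
from pymongo import MongoClient, ASCENDING

coll = MongoClient("mongodb://localhost:27017")["logs"]["events"]  # assumed namespace

# The filter expression is fixed at creation time, so a rolling scheme or periodic
# rebuild is needed to keep "recent" meaningful.
cutoff = datetime.utcnow() - timedelta(days=30)
coll.create_index(
    [("createdAt", ASCENDING)],
    name="createdAt_recent_partial",
    partialFilterExpression={"createdAt": {"$gte": cutoff}},
)
```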
|
null | [
"queries"
]
| [
{
"code": "",
"text": "When I do some tests between MongoDB and DocumentDB, I find some performance issue in DocumentDB.like these , about 800 elements\ndb.xxx.find({code:{$in:[“1”,“2”,“3”…]}});\nor\ndb.xxx.find({$or:[{code:“1”},{code:“2”},{code:“3”}…]});the documentDB has bad performance which compare to DocumentDB .And who can tell me why ? Looking forward your response , thank you .",
"username": "Huang_Huang"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Huang_Huang !Amazon DocumentDB is an independent emulation of a subset of features for the associated MongoDB server version they claim compatibility with.The server implementations do not have any code in common, so it is not surprising if there are differences (sometimes significant) in behaviour and feature support.If you are trying to understand performance issues for DocumentDB, I recommend asking on Stack Overflow or an AWS product community: Newest 'aws-documentdb' Questions - Stack Overflow.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks a lot. I read the file",
"username": "Huang_Huang"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| $in between MongoDB and DocumentDB | 2022-10-18T14:23:47.535Z | $in between MongoDB and DocumentDB | 1,701 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "Host has index suggestions",
"text": "What should I do if I’m getting Host has index suggestions alerts for monitoring aggregation I am running against the oplog?\nAs I know, it will never be going to use indexes because this is the nature of this collection.\nHow can I avoid these atlas alerts being fired?Thanks.",
"username": "Shay_I"
},
{
"code": "Host has index suggestions",
"text": "Hi @Shay_I and welcome to the community!!How can I avoid these atlas alerts being fired?The log alerts Host has index suggestions can be avoided if the Performance Advisor is disabled for the specific cluster. Please follow the documentation for the same.May I ask why you are querying the oplog to begin with? The oplog is for MongoDB internal use, and may change without notice (since it’s not for general usage). For these type of purposes, it’s generally best to use Change Stream which is a lot more configurable than querying the oplog.\nFor example, change streams can monitor a single collection, a server, or a whole sharded cluster. You can also manipulate the output of the change stream using an aggregation pipeline. This is not possible to do using the oplog.However if your use case does not allow the use of change streams, do you mind sharing it?The Performance Advisor monitors the slow query and help in improving the performance.Also, the Index suggestion in performance advisor help in performance improvement which the suggested index would bring. Hence, disabling the alerts would also result in disabling of other related alerts.Please let us know if you have any further questions.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thank you @Aasawari,I am using oplog queries, mainly for monitoring whether the change stream is working fine or becoming idle,\nfor example: if I see the oplog contains a newer data from the last processed change and it has past a significant amount of time - I can get to the conclusion that something is wrong with the cursor and usually I perform a restart in that case.do you have maybe a better way to monitor change stream health? and keep the change stream running (forever)?",
"username": "Shay_I"
},
{
"code": "",
"text": "Hi @Shay_I and welcome to the MongoDB community forum!!It would be very helpful if you could share example for your use case which would help me understand the issue further.Thanks\nAasawari",
"username": "Aasawari"
}
]
| `Host has index suggestions` alerts on oplog queries (Secondary node) | 2022-09-11T16:16:11.685Z | `Host has index suggestions` alerts on oplog queries (Secondary node) | 2,634 |
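A compact PyMongo sketch of the change-stream approach recommended above, as an alternative to polling the oplog for liveness; the URI and namespace are assumptions, and resume-token persistence is left out for brevity.

```python
from pymongo import MongoClient

coll = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")["appdb"]["orders"]

# Blocks and yields one event per change; wrap in your own watchdog/heartbeat logic.
with coll.watch(full_document="updateLookup") as stream:
    for change in stream:
        print(change["operationType"], change.get("documentKey"))
```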
null | [
"compass",
"mongodb-shell",
"atlas-cluster"
]
| [
{
"code": "",
"text": "Busy with the training M001 I tried to connect to the database sandbox, created in the tutorial, with compass to get a better view.\nAfter enter in te connection part this string:\nmongodb+srv://sandbox.gnwvowo.mongodb.net/Sandbox --apiVersion 1 --username M001-Student\nI didn’t get asked about a password and add --passwd “password” does not allow me to login\nWith mongosh “mongodb+srv://sandbox.gnwvowo.mongodb.net/myFirstDatabase” --apiVersion 1 --username M001-Student I do get the request for a passwd and can access all informationHow to solve the login?",
"username": "GentleRV_N_A"
},
{
"code": "",
"text": "Please show screenshot of the exact command you fired.\nIt should prompt for password",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you for reading my post.\nI added a screenshot from my connection string.\n\nMongo-Compass login screen1352×731 73.3 KB\n",
"username": "GentleRV_N_A"
},
{
"code": "",
"text": "Which option you used to connect?\nIf it is Compass the uri string will have userid:pwd embedded in the string.You have to update your password",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "thanks.\nI succeded to login",
"username": "GentleRV_N_A"
}
]
| Connecting to MongoDB with Compass | 2022-10-15T10:56:57.634Z | Connecting to MongoDB with Compass | 1,691 |
null | [
"queries",
"sharding",
"performance",
"transactions"
]
| [
{
"code": "[mongos] ni4cc2> db.data_events.getIndexes()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { 'commonEventHeader.eventFields.msisdn': 'hashed' },\n name: 'commonEventHeader.eventFields.msisdn_hashed'\n },\n {\n v: 2,\n key: { start_epoch_microsec: 1 },\n name: 'start_epoch_microsec_1',\n expireAfterSeconds: 3024000\n }\n]\n[mongos] ni4cc2> db.data_events.getShardDistribution()\nShard ni4cc2-rs4 at ni4cc2-rs4/dbnode4:27050,dbnode8:27051\n{\n data: '2636.88GiB',\n docs: 543863130,\n chunks: 24672,\n 'estimated data per chunk': '109.44MiB',\n 'estimated docs per chunk': 22043\n}\n---\nShard ni4cc2-rs3 at ni4cc2-rs3/dbnode3:27040,dbnode7:27041\n{\n data: '3970.52GiB',\n docs: 1194578619,\n chunks: 24672,\n 'estimated data per chunk': '164.79MiB',\n 'estimated docs per chunk': 48418\n}\n---\nShard ni4cc2-rs1 at ni4cc2-rs1/dbnode1:27020,dbnode5:27021\n{\n data: '2640.67GiB',\n docs: 544733199,\n chunks: 24673,\n 'estimated data per chunk': '109.59MiB',\n 'estimated docs per chunk': 22078\n}\n---\nShard ni4cc2-rs2 at ni4cc2-rs2/dbnode2:27030,dbnode6:27031\n{\n data: '2633.88GiB',\n docs: 543462431,\n chunks: 24672,\n 'estimated data per chunk': '109.31MiB',\n 'estimated docs per chunk': 22027\n}\n---\nTotals\n{\n data: '2.6368893047463394e+42GiB',\n docs: 2826637379,\n chunks: 98689,\n 'Shard ni4cc2-rs4': [\n '0 % data',\n '19.24 % docs in cluster',\n '5KiB avg obj size on shard'\n ],\n 'Shard ni4cc2-rs3': [\n '0 % data',\n '42.26 % docs in cluster',\n '3KiB avg obj size on shard'\n ],\n 'Shard ni4cc2-rs1': [\n '0 % data',\n '19.27 % docs in cluster',\n '5KiB avg obj size on shard'\n ],\n 'Shard ni4cc2-rs2': [\n '0 % data',\n '19.22 % docs in cluster',\n '5KiB avg obj size on shard'\n ]\n}\n[mongos] ni4cc2> db.data_events.explain(\"allPlansExecution\").find({\n... \"commonEventHeader.eventFields.msisdn\":'8stringOFnums8',\n... \"start_epoch_microsec\": {\n..... \"$gte\": new Date(\"2022-10-09T00:00:00+10:00\"),\n..... \"$lt\" : new Date(\"2022-10-11T00:00:00+10:00\")\n..... }\n... 
})\n{\n queryPlanner: {\n mongosPlannerVersion: 1,\n winningPlan: {\n stage: 'SINGLE_SHARD',\n shards: [\n {\n shardName: 'ni4cc2-rs3',\n connectionString: 'ni4cc2-rs3/dbnode3:27040,dbnode7:27041',\n serverInfo: {\n host: '1c921c1e4ec1',\n port: 27017,\n version: '6.0.2',\n gitVersion: '94fb7dfc8b974f1f5343e7ea394d0d9deedba50e'\n },\n namespace: 'ni4cc2.data_events',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n {\n 'commonEventHeader.eventFields.msisdn': { '$eq': '8stringOFnums8' }\n },\n {\n start_epoch_microsec: { '$lt': ISODate(\"2022-10-10T14:00:00.000Z\") }\n },\n {\n start_epoch_microsec: { '$gte': ISODate(\"2022-10-08T14:00:00.000Z\") }\n }\n ]\n },\n queryHash: 'C12E761C',\n planCacheKey: '3E585050',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'FETCH',\n filter: {\n '$and': [\n {\n 'commonEventHeader.eventFields.msisdn': { '$eq': '8stringOFnums8' }\n },\n {\n start_epoch_microsec: { '$lt': ISODate(\"2022-10-10T14:00:00.000Z\") }\n },\n {\n start_epoch_microsec: { '$gte': ISODate(\"2022-10-08T14:00:00.000Z\") }\n }\n ]\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'commonEventHeader.eventFields.msisdn': 'hashed' },\n indexName: 'commonEventHeader.eventFields.msisdn_hashed',\n isMultiKey: false,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'commonEventHeader.eventFields.msisdn': [ '[3554060079612239688, 3554060079612239688]' ]\n }\n }\n },\n rejectedPlans: [\n {\n stage: 'FETCH',\n filter: {\n 'commonEventHeader.eventFields.msisdn': { '$eq': '8stringOFnums8' }\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { start_epoch_microsec: 1 },\n indexName: 'start_epoch_microsec_1',\n isMultiKey: false,\n multiKeyPaths: { start_epoch_microsec: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n start_epoch_microsec: [\n '[new Date(1665237600000), new Date(1665410400000))'\n ]\n }\n }\n }\n ]\n }\n ]\n }\n },\n executionStats: {\n nReturned: 0,\n executionTimeMillis: 22,\n totalKeysExamined: 6,\n totalDocsExamined: 6,\n executionStages: {\n stage: 'SINGLE_SHARD',\n nReturned: 0,\n executionTimeMillis: 22,\n totalKeysExamined: 6,\n totalDocsExamined: 6,\n totalChildMillis: Long(\"21\"),\n shards: [\n {\n shardName: 'ni4cc2-rs3',\n executionSuccess: true,\n nReturned: 0,\n executionTimeMillis: 21,\n totalKeysExamined: 6,\n totalDocsExamined: 6,\n executionStages: {\n stage: 'FETCH',\n filter: {\n '$and': [\n {\n 'commonEventHeader.eventFields.msisdn': { '$eq': '8stringOFnums8' }\n },\n {\n start_epoch_microsec: { '$lt': ISODate(\"2022-10-10T14:00:00.000Z\") }\n },\n {\n start_epoch_microsec: { '$gte': ISODate(\"2022-10-08T14:00:00.000Z\") }\n }\n ]\n },\n nReturned: 0,\n executionTimeMillisEstimate: 11,\n works: 8,\n advanced: 0,\n needTime: 6,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n docsExamined: 6,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 6,\n executionTimeMillisEstimate: 1,\n works: 7,\n advanced: 6,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n keyPattern: { 'commonEventHeader.eventFields.msisdn': 'hashed' },\n indexName: 'commonEventHeader.eventFields.msisdn_hashed',\n isMultiKey: false,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 
'commonEventHeader.eventFields.msisdn': [ '[3554060079612239688, 3554060079612239688]' ]\n },\n keysExamined: 6,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n },\n allPlansExecution: [\n {\n nReturned: 0,\n executionTimeMillisEstimate: 11,\n totalKeysExamined: 6,\n totalDocsExamined: 6,\n score: 2.0002,\n executionStages: {\n stage: 'FETCH',\n filter: {\n '$and': [\n {\n 'commonEventHeader.eventFields.msisdn': { '$eq': '8stringOFnums8' }\n },\n {\n start_epoch_microsec: { '$lt': ISODate(\"2022-10-10T14:00:00.000Z\") }\n },\n {\n start_epoch_microsec: { '$gte': ISODate(\"2022-10-08T14:00:00.000Z\") }\n }\n ]\n },\n nReturned: 0,\n executionTimeMillisEstimate: 11,\n works: 7,\n advanced: 0,\n needTime: 6,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n docsExamined: 6,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 6,\n executionTimeMillisEstimate: 1,\n works: 7,\n advanced: 6,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n keyPattern: { 'commonEventHeader.eventFields.msisdn': 'hashed' },\n indexName: 'commonEventHeader.eventFields.msisdn_hashed',\n isMultiKey: false,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'commonEventHeader.eventFields.msisdn': [ '[3554060079612239688, 3554060079612239688]' ]\n },\n keysExamined: 6,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n },\n {\n nReturned: 0,\n executionTimeMillisEstimate: 6,\n totalKeysExamined: 7,\n totalDocsExamined: 7,\n score: 1.0002,\n executionStages: {\n stage: 'FETCH',\n filter: {\n 'commonEventHeader.eventFields.msisdn': { '$eq': '8stringOFnums8' }\n },\n nReturned: 0,\n executionTimeMillisEstimate: 6,\n works: 7,\n advanced: 0,\n needTime: 7,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 0,\n docsExamined: 7,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 7,\n executionTimeMillisEstimate: 2,\n works: 7,\n advanced: 7,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 0,\n keyPattern: { start_epoch_microsec: 1 },\n indexName: 'start_epoch_microsec_1',\n isMultiKey: false,\n multiKeyPaths: { start_epoch_microsec: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n start_epoch_microsec: [\n '[new Date(1665237600000), new Date(1665410400000))'\n ]\n },\n keysExamined: 7,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n ]\n }\n ]\n },\n allPlansExecution: [\n {\n shardName: 'ni4cc2-rs3',\n allPlans: [\n {\n nReturned: 0,\n executionTimeMillisEstimate: 11,\n totalKeysExamined: 6,\n totalDocsExamined: 6,\n score: 2.0002,\n executionStages: {\n stage: 'FETCH',\n filter: {\n '$and': [\n {\n 'commonEventHeader.eventFields.msisdn': { '$eq': '8stringOFnums8' }\n },\n {\n start_epoch_microsec: { '$lt': ISODate(\"2022-10-10T14:00:00.000Z\") }\n },\n {\n start_epoch_microsec: { '$gte': ISODate(\"2022-10-08T14:00:00.000Z\") }\n }\n ]\n },\n nReturned: 0,\n executionTimeMillisEstimate: 11,\n works: 7,\n advanced: 0,\n needTime: 6,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n docsExamined: 6,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 6,\n executionTimeMillisEstimate: 1,\n works: 7,\n advanced: 6,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n keyPattern: { 'commonEventHeader.eventFields.msisdn': 'hashed' },\n indexName: 'commonEventHeader.eventFields.msisdn_hashed',\n 
isMultiKey: false,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'commonEventHeader.eventFields.msisdn': [ '[3554060079612239688, 3554060079612239688]' ]\n },\n keysExamined: 6,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n },\n {\n nReturned: 0,\n executionTimeMillisEstimate: 6,\n totalKeysExamined: 7,\n totalDocsExamined: 7,\n score: 1.0002,\n executionStages: {\n stage: 'FETCH',\n filter: {\n 'commonEventHeader.eventFields.msisdn': { '$eq': '8stringOFnums8' }\n },\n nReturned: 0,\n executionTimeMillisEstimate: 6,\n works: 7,\n advanced: 0,\n needTime: 7,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 0,\n docsExamined: 7,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 7,\n executionTimeMillisEstimate: 2,\n works: 7,\n advanced: 7,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 0,\n keyPattern: { start_epoch_microsec: 1 },\n indexName: 'start_epoch_microsec_1',\n isMultiKey: false,\n multiKeyPaths: { start_epoch_microsec: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n start_epoch_microsec: [\n '[new Date(1665237600000), new Date(1665410400000))'\n ]\n },\n keysExamined: 7,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n ]\n }\n ]\n },\n serverInfo: {\n host: '5d950768592c',\n port: 27017,\n version: '6.0.2',\n gitVersion: '94fb7dfc8b974f1f5343e7ea394d0d9deedba50e'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n command: {\n find: 'data_events',\n filter: {\n 'commonEventHeader.eventFields.msisdn': '8stringOFnums8',\n start_epoch_microsec: {\n '$gte': ISODate(\"2022-10-08T14:00:00.000Z\"),\n '$lt': ISODate(\"2022-10-10T14:00:00.000Z\")\n }\n },\n lsid: { id: UUID(\"bcf5d84a-6a4e-49f9-b53b-dd606a732347\") },\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1665616627, i: 347 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n '$db': 'ni4cc2',\n '$readPreference': { mode: 'secondaryPreferred' }\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1665616950, i: 404 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1665616950, i: 402 })\n}\n[mongos] ni4cc2> db.data_events.latencyStats()\n[\n {\n ns: 'ni4cc2.data_events',\n shard: 'ni4cc2-rs4',\n host: '947f417ee9e7:27017',\n localTime: ISODate(\"2022-10-13T00:40:04.611Z\"),\n latencyStats: {\n reads: { latency: Long(\"30374180\"), ops: Long(\"351\") },\n writes: { latency: Long(\"0\"), ops: Long(\"0\") },\n commands: { latency: Long(\"15483\"), ops: Long(\"8\") },\n transactions: { latency: Long(\"0\"), ops: Long(\"0\") }\n }\n },\n {\n ns: 'ni4cc2.data_events',\n shard: 'ni4cc2-rs2',\n host: '1198a80e3a0d:27017',\n localTime: ISODate(\"2022-10-13T00:40:04.611Z\"),\n latencyStats: {\n reads: { latency: Long(\"5680449\"), ops: Long(\"306\") },\n writes: { latency: Long(\"0\"), 
ops: Long(\"0\") },\n commands: { latency: Long(\"0\"), ops: Long(\"0\") },\n transactions: { latency: Long(\"0\"), ops: Long(\"0\") }\n }\n },\n {\n ns: 'ni4cc2.data_events',\n shard: 'ni4cc2-rs1',\n host: 'b5b60ccde536:27017',\n localTime: ISODate(\"2022-10-13T00:40:04.612Z\"),\n latencyStats: {\n reads: { latency: Long(\"14913376\"), ops: Long(\"319\") },\n writes: { latency: Long(\"0\"), ops: Long(\"0\") },\n commands: { latency: Long(\"0\"), ops: Long(\"0\") },\n transactions: { latency: Long(\"0\"), ops: Long(\"0\") }\n }\n },\n {\n ns: 'ni4cc2.data_events',\n shard: 'ni4cc2-rs3',\n host: '1c921c1e4ec1:27017',\n localTime: ISODate(\"2022-10-13T00:40:04.611Z\"),\n latencyStats: {\n reads: { latency: Long(\"541987744\"), ops: Long(\"2957\") },\n writes: { latency: Long(\"1531416588\"), ops: Long(\"444521\") },\n commands: { latency: Long(\"4452502327\"), ops: Long(\"13\") },\n transactions: { latency: Long(\"0\"), ops: Long(\"0\") }\n }\n }\n]\n[mongos] config> db.chunks.latencyStats()\n[\n {\n ns: 'config.chunks',\n host: 'e89bcffaddb9:27017',\n localTime: ISODate(\"2022-10-13T00:44:30.624Z\"),\n latencyStats: {\n reads: { latency: Long(\"3099752\"), ops: Long(\"3273\") },\n writes: { latency: Long(\"0\"), ops: Long(\"0\") },\n commands: { latency: Long(\"0\"), ops: Long(\"0\") },\n transactions: { latency: Long(\"0\"), ops: Long(\"0\") }\n }\n }\n]\n[mongos] config> db.chunks.getIndexes()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { uuid: 1, min: 1 },\n name: 'uuid_1_min_1',\n unique: true\n },\n {\n v: 2,\n key: { uuid: 1, shard: 1, min: 1 },\n name: 'uuid_1_shard_1_min_1',\n unique: true\n },\n {\n v: 2,\n key: { uuid: 1, lastmod: 1 },\n name: 'uuid_1_lastmod_1',\n unique: true\n }\n]\n",
"text": "Hi All,I am having an issue where I make a simple find query and the first attempt for particular items might take 20 to 30 seconds, but then if I query for the same items it is under half a second. Pretty sure this is due to caching but the initial query taking so long is causing be quite a lot of grief.I am aware the first query is likely being pulled from disk, but I don’t believe it should take as long as it does given there is at most about 10K documents for any particular indexed item. So even if I only filtered on the indexed item for all records it should only pull in a reasonably small set.I am using fairly high spec VMs backed by an enterprise SSD storage data centre.DB is split across 4 shards each having 2 RS+Arb. 8CPU/32GB/3TB ea\nUsing 2 mongos. 4CPU/16GB eaI did notice it seems to favor the Hashed index based on the partitioning key. Ignoring the date index.I am seeing a lot of IOWait on some nodes with top and vmstat, yet iostat shows not very much and also the data rate for the disks is only around 50MB/s yet it can push upwards of 250MB/s on a file copy.Also wondered if the config DB was contributing! How much of the chunk distrubution is cached? I don’t see any index on the config database for the partition key, how does the mongos determine the RS?I have set the following tuning suggestions, Which didn’t seem to make any difference:It doesn’t seem to make a difference if I specify date range or not, So I suspect this ties in with the unutilized date index.Some options I have thought about:Many hours spent already investigating, Appreciate any suggestions on checks, strategies, issues?",
"username": "John_Torr"
},
{
"code": "",
"text": "Hi @John_Torr and welcome to the MongoDB community forum!!Firstly would appreciate you sharing a post with much detailed information.I am having an issue where I make a simple find query and the first attempt for particular items might take 20 to 30 secondsThere might be more than one reason why you are seeing the issue.\nFirstly read operation in MongoDB, it needs to fetch from disk if the documents are not in memory yet and since disk is usually the slowest part of a machine, loading a lot of data from disk might take some time.Secondly, the next time you fetch the same data, it would be faster since they’re already cached in memory.The other reason which might cause the delay in the query operation may be because of the way the shard key has been defined in your sharding deployment.I did notice it seems to favor the Hashed index based on the partitioning key. Ignoring the date index.By “partition key” do you mean shard key?MongoDB typically attempt to involve as few shards as possible when answering a query since generally it’s more performant and allow better parallelisation, thus it will select the index which can avoid a scatter-gather response.To answer a your questions on:Also wondered if the config DB was contributing!The config DB is basically responsible for storing the metadata and only has internal use. The application and administration should not modify or depend on the content in course of normal operation.It basically stores information like routing information, list of sharded collections, status of the balancer etc.\nThe shards and the mongos are the ones mostly responsible for the performance of the sharding deployment. However, they use the config servers DB to get the copy of the metadata.I don’t see any index on the config database for the partition key, how does the mongos determine the RS?The mongos queries the config servers for cluster information and then routes queries to the respective shards. You can visit the documentation for more information.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| Slow first query on huge collection | 2022-10-13T00:46:48.336Z | Slow first query on huge collection | 2,282 |
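Not something the thread settles, but the explain output above (an IXSCAN on the hashed shard-key index, with the date range applied only during the FETCH stage) suggests trying a compound index that covers the equality field plus the date range; a hedged PyMongo sketch, with the namespace taken from the question and the host assumed:

```python
from pymongo import MongoClient, ASCENDING

coll = MongoClient("mongodb://mongos-host:27017")["ni4cc2"]["data_events"]  # host assumed

# Equality field first, range field second (ESR guideline), so the date bounds are
# resolved inside the index instead of by fetching and filtering documents.
coll.create_index(
    [
        ("commonEventHeader.eventFields.msisdn", ASCENDING),
        ("start_epoch_microsec", ASCENDING),
    ],
    name="msisdn_1_start_epoch_microsec_1",
)
```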
null | [
"dot-net"
]
| [
{
"code": "[Serialized]\npublic class Sale : RealmObject\n {\n private DateTime _businessDay;\n private DateTime _startTime;\n private DateTime _endTime;\n public Sale()\n {\n _id = ObjectId.GenerateNewId();\n Payment = new List<Tender>().ToArray();\n SalesItems = new List<LineItem>().ToArray();\n }\n public enum SaleIndexes\n {\n SALE_REF_ID_INDEX,\n BUSINESS_DATE_INDEX,\n LOCATION_REF_ID_INDEX,\n WHO_START_INDEX,\n WHO_END_INDEX,\n SALE_TYPE_INDEX,\n SERVICE_AREA_INDEX,\n START_TIME_INDEX,\n END_TIME_INDEX,\n GUEST_COUNT_INDEX,\n SALE_TIME_BLOCK_INDEX,\n NET_TOTAL_INDEX,\n GROSS_TOTAL_INDEX,\n TENDER_TYPE_INDEX,\n TENDER_INDEX,\n TIP_INDEX,\n GRATUITY_INDEX,\n TAX_INDEX,\n ITEMS_INDEX\n }\n \n [BsonId]\n [PrimaryKey]\n public ObjectId _id { get; set; }\n\n [BsonElement(\"sale_ref_id\")]\n public int SaleRefId { get; set; }\n [BsonElement(\"business_day\")]\n public DateTime BusinessDay\n {\n get { return DateTime.SpecifyKind(_businessDay, DateTimeKind.Utc); }\n set { _businessDay = (DateTime)value; }\n }\n \n [BsonElement(\"location_ref_id\")]\n public ObjectId LocationRefId;\n [BsonElement(\"who_start\")]\n public int? WhoStart { get; set; }\n [BsonElement(\"who_end\")]\n public int? WhoEnd { get; set; }\n [BsonElement(\"sale_type\")]\n public string? SaleType { get; set; }\n [BsonElement(\"service_area\")]\n public int? ServiceArea { get; set; }\n [BsonElement(\"start_time\")]\n public DateTime? StartTime\n {\n get { return DateTime.SpecifyKind(_startTime, DateTimeKind.Utc); }\n set { _startTime = (DateTime)value; }\n }\n [BsonElement(\"end_time\")]\n public DateTime? EndTime\n {\n get { return DateTime.SpecifyKind(_endTime,DateTimeKind.Utc); }\n set { _endTime = (DateTime)value; }\n }\n [BsonElement(\"guest_count\")]\n public double GuestCount { get; set; }\n [BsonElement(\"time_block\")]\n public double TimeBlock { get; set; }\n [BsonElement(\"net_total\")]\n public double NetTotal { get; set; }\n [BsonElement(\"gross_total\")]\n public double GrossTotal { get; set; }\n [BsonElement(\"payment\")]\n\n public Tender[] Payment { get; set; }\n [BsonElement(\"tax\")]\n public double Tax { get; set; }\n [BsonElement(\"gratutity\")]\n public double Gratuity { get; set; }\n [BsonElement(\"sales_items\")]\n public LineItem[] SalesItems { get; set; }\n\n }\n[Serializable]\n public class Tender : EmbeddedObject\n {\n public Tender()\n {\n this._id = ObjectId.GenerateNewId();\n }\n [BsonId]\n public ObjectId _id { get; set; }\n [BsonElement(\"sale_ref_id\")]\n public int SaleRefId { get; set; }\n [BsonElement(\"tender_type\")]\n public string TenderType { get; set; } = \"\";\n [BsonElement(\"value\")]\n public double Value { get; set; }\n [BsonElement(\"tip\")]\n public double Tip { get; set; }\n }\n}\nLineItem[]",
"text": "Good day. Struggling a bit to get this REALM to sync. I’m seeing some erronious, what feels to me, errors and maybe my backed instance has an issue, but lets see what the smart people can see.I have a pretty basic collection for sales in C# with two nested arrays of objects.The error in the subject is thrown when I try to initialize the REALM and the sync should happen, but we never get there.The Tender model is as such. Nothing crazy?There’s a similar model for the LineItem[] SalesItems property but that’s not complaining…yet.Is this error familiar to some of the Mongo Champs out there?Thanks in advance.\nCPT",
"username": "Colin_Poon_Tip"
},
{
"code": "IList<T>",
"text": "Realm doesn’t support arrays of objects - instead you should use IList<T> with a getter only. I realize this may be tricky if you also want to use the same models using the C# drivers, but you can probably work around it similarly to what you’re doing with dates.",
"username": "nirinchev"
},
{
"code": "",
"text": "Thanks for the response. I though I’d give it a crack but if Realm doesn’t support arrays of objects then what’s the practicle point of that? Is this on the Engineer’s whiteboard?Meaning, if I understand correctly; for example, if I want to do a Sales collection with lineItems in the sale (i.e. array of IList that’s a non-starter inheriting from RealmObject?Correct me if I’m wrong, as this question is important for me to continue, but… Is this the only way to have my Atlas database design workable with REALM? Meaning, that I’d need two separate collections? One for Sales and one for the LineItems. Then on the client side they’d need to be constucted.I must be mis-understanding something fundamental? That seems crazy.\nI though I could hack this by doing constructors passing in the IList objects since they can only be initialized in a construct since “set;” is disabled. Then just swapping out the new object with a new IList. nope.That doesn’t seem to work as soon as you inherit from RealmObject it just ignores the lists when writing to Atlas.OR, do I need separate objects with the exact same properties:That seems like a lot o’ work.Nonetheless, I’ve been trying to get that straigt as it doesn’t seem intuitive. Or I just need to keep hacking until it falls into place. Feels like I’m going backwards.Best regards,\nCPT",
"username": "Colin_Poon_Tip"
},
{
"code": "",
"text": "Hacking along, I can push into Atlas with Embeded objects in a IList and it actually makes it to the REALM visibly, but accessing them isn’t supported even though the viewer shows them?\nThis example shows the List with the 7 expected LineItems, but sure…the program which loads to atlas then initialzes the REALM I can see the data.I built a similar Model that inherits Realm Object (SyncSales). Of course I can’t use “set” on the IList properties and it complains on properties that aren’t part of that Sales Model.\nimage1131×152 127 KB\nI assume its reflecting on other data in the program (Product, Employee’s collections)So, it’s there but not accessible is what is happening? And it’s back to generating a collection around the mirrored SyncSales model I created for the client side access which inherits from RealmObject. I suspect I won’t be able to delete that collection until I kill the Realm from App services like last time.Interesting ",
"username": "Colin_Poon_Tip"
},
{
"code": "realm.Write(() =>\n{\n sale.SaleItems.Add(new LineItem(...));\n});\npublic class Sale : RealmObject\n{\n public IList<LineItem> SaleItems { get; }\n\n [BsonElement(\"sale_items\")]\n public LineItem[] SaleItemsBson\n {\n get => SaleItems.ToArray();\n set\n {\n SaleItems.Clear();\n foreach (var item in value)\n {\n SaleItems.Add(item);\n }\n }\n }\n}\n",
"text": "Maybe I’m misunderstanding something here - can you show some code examples of what you’re trying to achieve? It seems like you’re using Device Sync via the Realm SDK, but you’re also trying to use the .NET driver in the same project - while there’s nothing that inherently prevents it, it’s not clear to me why you need this. I.e. Device Sync will synchronize data two way, so whatever you write to the Realm on the device will eventually make it into Atlas.Regarding collections in Realm - they are implemented as lists rather than arrays to convey their mutability and to encourage people to add/remove/update items rather than replace the entire collection. For example, if you want to add an item, you can do:Then only that change will get synchronized and the document in atlas will be updated to reflect that. If you replace the contents of the collection every time, you’re going toFinally, if you do want to reuse the models between the Realm SDK and the .NET driver, you can do something like that for the collection properties:Again, while this is possible, I would discourage it as it is really inefficient.",
"username": "nirinchev"
},
{
"code": "",
"text": "Thanks for that insight I think it gives me ideas for sure.Well, I ended up trying to separate the two as you mentioned but while I “thought” I had a path I hit another wall. Essentially, I copied the models that were used to populate Atlas which did not inherit RealmObject, but the nested IList objects inherited EmbeddedObject.I created a console app that just opened and downloaded the Sales. Worked!!\nThen I took the copied Sales Model and inherited from RealmObject and removed the setters so it would compile. Then I just did avar sales = _Realm_Sales.All();Then it told me the schema’s didn’t matchERROR “Invalid schema change (UPLOAD): failed to add app schema for ns=‘mongodb-atlas.ONE.Sales’ for new top-level schema “Sales”: sync-incompatible app schema for the same collection already exists. This app schema is incompatible with error: %!q(). To continue, delete the app schema in Atlas App Services, or update it to match the app schema defined on the device” (error_code=225, try_again=false, error_action=ApplicationBug)As to why I’d have both? It’s for a restaurant (several. 100 to start and my parent company is 1500 with plans to double in 5 years in the US). The “idea” is that I’d put together a service that “upgrades” a POS from stand-alone to a real-time analytics (and more I could tell you;)) via MongoDB Atlas and Realm. The “problem” this company faces is that it buys 100’s or restaurants at a time but integration into central accounting systems etc is an incredably un-scalable problem. Yes, I’m just a lowly IT director of the 100 sub-companies, but have a dream. I’m far removed from “code” but love it!! So, everything is new!!So, on start it loads the entire history of Sales, Employee and product data. So far that happens just fine.Once loaded which can be 1M Sales but not limited to that. I’m working with a small set (one week) but there are 500,000 sitting there on my test box while I try to sort this next step out.Once it loads it flips over to production mode and should monitor a sliding window of two weeks of sales.\nThat’s a business requirement as sometimes people make changes late and accounting always needs to reconcile that. It’s daunting with that many restaurants. So wouldn’t it be nice it it captured the change and it was NEVER out? That’s the basic idea.So, now that the service is running everything from that day forward is Realm reads/writes.That’s the basic principalOf course I could share models code etc. All here?Thanks for responding btw!!",
"username": "Colin_Poon_Tip"
}
]
| Realm The property type Tender[] cannot be expressed as a Realm schema type (Parameter 'type') | 2022-10-17T19:56:02.274Z | Realm The property type Tender[] cannot be expressed as a Realm schema type (Parameter ‘type’) | 2,181 |
null | [
"sharding",
"mongodb-shell",
"change-streams",
"configuration"
]
| [
{
"code": "MongoServerError: Failed command { collMod: \"collName\", changeStreamPreAndPostImages: { enabled: true }, writeConcern: { w: \"majority\", wtimeout: 60000, provenance: \"implicitDefault\" } } for database 'dbName' on shard 'shard1' :: caused by :: BSON field 'changeStreamPreAndPostImages' is an unknown field.db version v6.0.1",
"text": "Hi!\nI have upgraded a system using mongo from 5.0 to 6.0.\nThen, I tried to enable pre/post Images using this command:\ndb.runCommand({collMod: ‘collName’, changeStreamPreAndPostImages: {enabled: true}})But I get this error:\nMongoServerError: Failed command { collMod: \"collName\", changeStreamPreAndPostImages: { enabled: true }, writeConcern: { w: \"majority\", wtimeout: 60000, provenance: \"implicitDefault\" } } for database 'dbName' on shard 'shard1' :: caused by :: BSON field 'changeStreamPreAndPostImages' is an unknown field.When checking mongod/mongos versions I see db version v6.0.1\nAnd mongosh version is 1.5.4On a new installation of my system (with mongo 6.0) this works fine.\nIs there anything I am missing with the configuration for this upgrade?Thanks!",
"username": "Oded_Raiches"
},
{
"code": "FCVsetFeatureCompatibilityVersionadmindb.adminCommand( { setFeatureCompatibilityVersion: \"6.0\" } )\n",
"text": "Hello @Oded_Raiches ,It seems like you missed the last step while upgrading the standalone from version 5.0 to 6.0 according to this documentation. To enable 6.0 features, set the feature compatibility version ( FCV ) to 6.0.Run the setFeatureCompatibilityVersion command against the admin database:Note: Enabling these backwards-incompatible features can complicate the downgrade process since you must remove any persisted backwards-incompatible features before you downgrade. It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period to ensure the likelihood of downgrade is minimal. When you are confident that the likelihood of downgrade is minimal, enable these feature=s.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Post 5.0->6.0 upgrade: BSON field 'changeStreamPreAndPostImages' is an unknown field | 2022-10-12T07:13:42.193Z | Post 5.0->6.0 upgrade: BSON field ‘changeStreamPreAndPostImages’ is an unknown field | 3,280 |
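The same two commands shown in the accepted answer, expressed as a small PyMongo sketch for completeness; the database and collection names are the placeholders used in the thread and the router address is assumed.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")  # assumed router address

# Step 1: finish the upgrade by raising the feature compatibility version.
client.admin.command("setFeatureCompatibilityVersion", "6.0")

# Step 2: enable pre/post images on the collection.
client["dbName"].command(
    "collMod", "collName", changeStreamPreAndPostImages={"enabled": True}
)
```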
null | [
"dot-net"
]
| [
{
"code": "Realm version: 10.17.0[DOTNET] 2022-10-18 23:44:49.174 Error: Failed to resolve 'ws.us-east-1.aws.realm.mongodb.com:443': Host not found (authoritative)\n**System.NullReferenceException:** 'Object reference not set to an instance of an object.'\n\n[mono-rt] [ERROR] FATAL UNHANDLED EXCEPTION: System.NullReferenceException: Object reference not set to an instance of an object.\n[mono-rt] at Realms.Sync.SessionHandle.<>c__DisplayClass26_0.<HandleSessionPropertyChangedCallback>b__0(Object _) in D:\\a\\realm-dotnet\\realm-dotnet\\Realm\\Realm\\Handles\\SessionHandle.cs:line 461\n[mono-rt] at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()\n[mono-rt] at System.Threading.ThreadPoolWorkQueue.Dispatch()\n[mono-rt] at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()\n[mono-rt] at System.Threading.Thread.StartCallback()\n[libc] FORTIFY: pthread_mutex_lock called on a destroyed mutex (0x7c1ed31110)\n[libc] Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 16670 (ogy.catalog.app), pid 16670 (ogy.catalog.app)\npublic async Task ConfigureAsync()\n {\n RealmApp = Realms.Sync.App.Create(\"<app-id>\");\n\n if(Utils.IsConnected())\n {\n _user = await RealmApp.LogInAsync(Credentials.EmailPassword(<email>, <password>));\n }\n else\n {\n _user = RealmApp.CurrentUser;\n }\n \n var config = new FlexibleSyncConfiguration(_user)\n {\n ClientResetHandler = new Realms.Sync.ErrorHandling.ManualRecoveryHandler(TratarErroSessao),\n Schema = new[] { typeof(Projeto), typeof(Campanha), typeof(GrupoTaxonomico), typeof(Coleta) }\n };\n\n try\n {\n Realm = Realm.GetInstance(config);\n var session = Realm.SyncSession;\n session.PropertyChanged += SyncSessionPropertyChanged;\n\n var subscriptions = Realm.Subscriptions;\n subscriptions.Update(() =>\n {\n var defaultSubscription = Realm.All<Projeto>()\n .Where(t => t.OwnerId == _user.Id);\n subscriptions.Add(defaultSubscription);\n });\n }\n catch (Exception ex)\n {\n throw; \n }\n }\n",
"text": "Hello,So I have this app where I’m trying to use the flexible sync. It works perfectly when online but I want to start the realm also when there is no connection. Do I have to do something like having two realms and somehow enable the sync when online? Because I’m trying to start the app without connection but it crashes with a message that is not possible to access the host. I have found some questions in other languages but didn’t find a proper documentation for .NET about it. Thanks in advance!Realm version: 10.17.0Here is the log when I try:And here is the code I’m using to connect",
"username": "Douglas_Breda1"
},
{
"code": "pthread_mutex_lock called on a destroyed mutexvar session = Realm.SyncSession;\nsession.PropertyChanged += SyncSessionPropertyChanged;\n",
"text": "This NRE is a bug in the .NET SDK and I have an idea for how to fix it, but I’m not sure if that’s the reason for the crash - what operating system are you running this on? I can also see pthread_mutex_lock called on a destroyed mutex in the logs, but I’m not sure if that’s coming from Realm code or from Mono - perhaps the first step would be to fix the NRE and then try again with that fix.And to clarify the issue with the NRE - you’re subscribing for notifications on a local variable:When this variable goes out of scope, the session instance will be garbage collected, so you’ll stop receiving notifications. There’s a case where we don’t check for this and attempt to dereference the already collected instance. You can work around it by storing the session in a class level variable to ensure it lives for as long as you need it.",
"username": "nirinchev"
},
{
"code": "",
"text": "I’m running on Android 10.0. But changing the session variable to a class scope worked. It does not crash anymore. Thanks a lot ",
"username": "Douglas_Breda1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Flexible Sync when offline | 2022-10-19T00:29:39.696Z | Flexible Sync when offline | 1,358 |
null | [
"python",
"compass",
"atlas-cluster"
]
| [
{
"code": "import datetime\nfrom pymongo import MongoClient\nclient = MongoClient('mongodb+srv://hduser:********@cluster0.pvpgbwp.mongodb.net/test', 27017 )\ncoll = client.db.posts_dbo\ndoc = {\"auteur\":\"Flouflou\",\n \"texte\":\"Mon premier post du mois\",\n \"tags\":[\"python\",\"mongodb\"],\n \"date\":datetime.datetime.utcnow()\n}\npost_id = coll.insert_one(doc).inserted_id\nprint(post_id)\n",
"text": "Hi,\ni’m studying and still discovering mongodb but it’s starting bad.\ni have no problem to connect directly with compass but now i’m trying to connect and add data using python and it’s just giving me Timeout error.\ni looked in other topics to solve the problem but nothing worked ( double clic on ‘Install Certificates.command’ too but nothing changed)\nwhen i used an other mac everything worked well, the problem is just with mine.\nso i’m using macOs if it matters and this is my code:",
"username": "Amine_Gharbi"
},
{
"code": "pymongo.errors.ServerSelectionTimeoutError: ac-j3n0ofn-shard-00-00.pvpgbwp.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129),ac-j3n0ofn-shard-00-01.pvpgbwp.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129),ac-j3n0ofn-shard-00-02.pvpgbwp.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129), Timeout: 30s, Topology Description: <TopologyDescription id: 634edf87a414d0dc84a44689, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-j3n0ofn-shard-00-00.pvpgbwp.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-j3n0ofn-shard-00-00.pvpgbwp.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')>, <ServerDescription ('ac-j3n0ofn-shard-00-01.pvpgbwp.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-j3n0ofn-shard-00-01.pvpgbwp.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')>, <ServerDescription ('ac-j3n0ofn-shard-00-02.pvpgbwp.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-j3n0ofn-shard-00-02.pvpgbwp.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')>]>\n",
"text": "this is the error message:Process finished with exit code 1\nso please if anyone can help thank you",
"username": "Amine_Gharbi"
},
{
"code": "SSL: CERTIFICATE_VERIFY_FAILED",
"text": "Hi @Amine_Gharbi welcome to the community!The error SSL: CERTIFICATE_VERIFY_FAILED was commonly due to outdated OS root certificate. Please have a look at the solution in Keep getting ServerSelectionTimeoutError - #10 by Priyanka_Priyadarshini and see if it helps.Best regards\nKevin",
"username": "kevinadi"
}
]
| Raise ServerSelectionTimeoutError | 2022-10-18T17:50:06.427Z | Raise ServerSelectionTimeoutError | 1,715 |
null | [
"time-series"
]
| [
{
"code": "MongoServerError: can't compact a view",
"text": "Hello,I have over 100 GB of unused disk usage (and just 7GB used) in my time series collection that isn’t being reclaimed. I am unable to compact it. When I try, I receive the error:\nMongoServerError: can't compact a viewHow can I reclaim this space?",
"username": "Ben_Devore"
},
{
"code": "",
"text": "Hi @Ben_Devore welcome to the community!I think you are correct. Currently we don’t have an easy way to reclaim unused space in a time series collection. I opened SERVER-70679 to investigate further options.In the meantime, I can suggest a couple of workarounds for this:Before doing either method on production, I strongly recommend you to test the procedure thoroughly and do a backup.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can't Compact Time Series Collection | 2022-10-17T19:08:41.670Z | Can’t Compact Time Series Collection | 1,994
null | [
"python",
"production"
]
| [
{
"code": "dnspython",
"text": "We are pleased to announce the 4.3.2 release of PyMongo - MongoDB’s Python Driver. This release adds support for expanded datetime ranges, adds dnspython as a dependency, and has some updates to AWS auth handling.See the changelog for a high level summary of what’s new and improved or see the 4.3 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongo 4.3.2 Documentation — PyMongo 4.3.2 documentation\nChangelog: Changelog — PyMongo 4.3.2 documentation\nSource: GitHub - mongodb/mongo-python-driver at 4.3.2Thank you to everyone who contributed to this release!",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "A post was split to a new topic: Mariadb_to_mongo.py: convert a MySQL/MariaDB table to a MongoDB collection",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| PyMongo 4.3.2 Released | 2022-10-18T14:57:17.004Z | PyMongo 4.3.2 Released | 2,696 |
[
"100daysofcode"
]
| [
{
"code": "",
"text": "Hi everybody,\nI have heard a lot about #100daysof code. I came across this concept via my buddy @SourabhBagrecha who briefed me and intrigued me to be a part of it.\nAfter seeing him making consistent progress and learning while he was doing the challenge, I am quite impressed and motivated to do the same.\nHence, I have decided that I will take the baton from here and will start my journey of 100daysofcode from today.I will be sharing my daily updates here in the replies to this topic.\nWish me luck .Regards,\nAnshul Bhardwaj,",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I learned about basics of web development. The role of Front-End, Back-End and server. Also I covered HTML basics such as HTML Boilerplate, VSCodes, anchor tags and comments.",
"username": "anshul_bhardwaj"
},
{
"code": "\n<head>\n <title>Div Align Attribbute</title>\n</head>\n\n<body> <ol>\n <div align=\"left\">\n Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut\n labore et dolore magna aliqua.\n </div>\n <div align=\"right\">\n <font color=\"blue\">\n Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut\n labore et dolore magna aliqua.</font></ol>\n </div>\n <img src=\"100.png\" </div>\n</body>\n\n</html> ```",
"text": "Today I learned about various elements used in HTML such as div, img src, ordered and unordered list, semantic markups and entity codes.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Good luck @anshul_bhardwaj! If you have any doubt, the developer’s community is here for help. Happy coding!",
"username": "MrTimeout"
},
{
"code": "",
"text": "Today I learnt about basics of CSS which included CSS selectors such as Universal,ID, Descendant selectors. Apart from that I also covered sections consisting Box Model.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I learnt about basics of JavaScript which consisted variables and let, Nan, constant variables, Boolean and variable naming and conventions.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I covered the JavaScript fundamentals of strings which consisted string method with arguments,undefined and null, random numbers and maths and much more.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I learnt about decision making with codes, Comparison operators , Console, alert and prompt, If and else statements, Logical AND, OR and NOT.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I exercised JavaScript arrays which consisted Lotto Numbers, Array Random Access,\nPush and Pop, Slice and Splice, Multi-dimensional and much more.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I learned about Object Literals which consisted accessing data out of objects, modifying objects and nesting arrays and objects. After that I covered repeating stuffs with loops and different examples and exercises related to this topic.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I covered various coding exercises which consisted heart function, rant values,multiple args,return keyboard,isshortsweather, return value,last element, capitalize and Sum arrays in JavaScript.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I learnt about function scope, lexical scope, block scope, function expressions, higher order functions, returning and defining functions. Apart from that I covered foreach method, map method and Arrow functions.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I learned about Default Params and Spread in Function calls in JavaScript.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I exercised spread with array literals,spread with objects and rest params.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I studied destructuring arrays,destructuring params and destructuring objects.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I covered DOM which consisted introduction,the document object and getElementByld",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I exercised getElementByTagName and Class name, querySelector and querySelectorAll.",
"username": "anshul_bhardwaj"
},
{
"code": "",
"text": "Today I exercised innerHTML, textContent and innerText.",
"username": "anshul_bhardwaj"
},
{
"code": "container.style.textAlign = 'center'\nconst img = document.querySelector('img')\nimg.style.width = '150px'\nimg.style.borderRadius = '50%' ```",
"text": "Today I exercised changing styles and attributes in JavaScript.",
"username": "anshul_bhardwaj"
},
{
"code": "const span = document.querySelectorAll('h1 span')\nfor (let i=0; i<span.length; i++) {\n span[i].style.color= colors[i]\n}",
"text": "Today I practised Rainbow Text and Classlist exercises.",
"username": "anshul_bhardwaj"
}
]
| The journey of #100DaysOfCode (@anshul_bhardwaj) | 2022-08-08T16:42:17.756Z | The journey of #100DaysOfCode (@anshul_bhardwaj) | 7,730 |
|
null | [
"performance"
]
| [
{
"code": "{\n \"title\": \"mlbgame\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"_simulation\": {\n \"bsonType\": \"bool\"\n },\n \"date\": {\n \"bsonType\": \"string\"\n },\n \"endDate\": {\n \"bsonType\": \"date\"\n },\n \"gameInfo\": {\n \"bsonType\": \"string\"\n },\n \"gameStatus\": {\n \"bsonType\": \"string\"\n },\n \"id\": {\n \"bsonType\": \"string\"\n },\n \"minutesRemaining\": {\n \"bsonType\": \"int\"\n },\n \"percentComplete\": {\n \"bsonType\": \"double\"\n },\n \"sport\": {\n \"bsonType\": \"string\"\n },\n \"startDate\": {\n \"bsonType\": \"date\"\n },\n \"status\": {\n \"bsonType\": \"string\"\n }\n }\n}\nquery GamesForCalendar($dateFrom: DateTime!, $dateTo: DateTime!) {\n mlbgames(query: {startDate_gte: $dateFrom, startDate_lte: $dateTo}) {\n id\n status\n sport\n date\n startDate\n endDate\n gameInfo\n gameStatus\n minutesRemaining\n percentComplete\n _simulation\n }\n }\nFunction Call Location:\nUS-OR\nGraphQL Query:\nquery GamesForCalendar($dateFrom: DateTime!, $dateTo: DateTime!) { mlbgames(query: {startDate_gte: $dateFrom, startDate_lte: $dateTo}) { id status sport date startDate endDate gameInfo gameStatus minutesRemaining percentComplete _simulation __typename } }\nCompute Used:\n18736722090 bytes•ms\nRemote IP Address:\n136.32.230.69\nRule Performance Metrics:\n{\n \"dfsdb.mlbgames\": {\n \"roles\": {\n \"default\": {\n \"matching_documents\": 19,\n \"evaluated_fields\": 0,\n \"discarded_fields\": 0\n }\n },\n \"no_matching_role\": 0\n }\n}\n",
"text": "I am trying to switch over from REST to GraphQL, and am experiencing horribly slow responses for queries. As in 5000-7000+ ms for a super simple, small request. I’ve spent days on this, tried everything, and am about to give up. Any help would be greatly appreciated.I have greatly reduced the schema from the huge one auto-generated to a tiny one for testing (no difference), implemented filters (no difference), simplified the query to only return a few, have indexed the main field (helped a tiny bit), and have not resolvers or anything special at all.I am on the M10 tier.The schema:The query:A sample log entry:I have an index on startDate as well. The result set is only 6k. Please help! Surely I am missing something, or there is some explanation, as this is completely unusable.Best,\nGreg",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "@Pavel_Duchovny Hi, did I post this incorrectly or in the wrong place? It would have sure been nige to give GraphGL with my application to see if it would have been best, but never could get past this issue. And no-one responded. I thought I did a decent job of describing the issue, so hoped it would have been an easy fix.Thanks.",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "Hi @Gregory_Fay sorry that we missed your original post! Do you happen to know the realm app deployment model and region that your atlas cluster is deployed in? Some things that may cause slower queries with GraphQL includeLet me know if one or more oof these things seems like it may be the issue here.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hi @Sumedha_Mehta1Thank you for the reply. I missed it, and it got buried. I had to go another direction for now but will try to recreate what I had and see if any of these suggestions solve the problem. I was so excited about GraphQL until I couldn’t get the basics to work!Thanks,\nGreg",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "Hi everyoneJust want to confirm that having an app deployed in a region different from the atlas cluster caused crazy slow GraphQL queries for me.AWS Cape Town closest region for my purposes, but Realm apps cannot be deployed in Cape Town region, so they were deployed in the N-Virginia region. Terrible performance (5-8s for most basic query of 5 simple items). After changing it so that my cluster and realm app runs in the N-Virginia region the same query now takes 30-40ms.Thanks for the advice @Sumedha_Mehta1./theuns",
"username": "Theuns_Alberts"
}
]
| Extremely slow GraphQL performance even with very simple query and schema | 2021-10-27T20:06:20.058Z | Extremely slow GraphQL performance even with very simple query and schema | 10,375 |
null | [
"golang"
]
| [
{
"code": "options.DatabaseOptions.Registrymongo.Client.Database(\"name\", databaseOptions)",
"text": "I’d like to unit test my custom BSON marshal/unmarshal code but I can’t seem to find a way to register encoders for my custom structs without using options.DatabaseOptions.Registry and mongo.Client.Database(\"name\", databaseOptions).Is there a way to test it without a full blown integration test?",
"username": "Kare_Nuorteva"
},
{
"code": "buf, err := bson.MarshalWithRegistry(YourCustomRegistry, bson.M{\"custom\": custom.String()})",
"text": "Try\nbson package - go.mongodb.org/mongo-driver/bson - Go Packages and bson package - go.mongodb.org/mongo-driver/bson - Go Packagesbuf, err := bson.MarshalWithRegistry(YourCustomRegistry, bson.M{\"custom\": custom.String()})",
"username": "Kare_Nuorteva"
}
]
| Unit testing custom marshal/unmarshal BSON functions | 2022-10-18T19:31:52.649Z | Unit testing custom marshal/unmarshal BSON functions | 1,629 |
null | [
"queries",
"python"
]
| [
{
"code": "Category.city_collection.find()Category.city_collection.find().limit(5)@router.get('/all_city')\nasync def all_city():\n \n all_category = Category.city_collection.find() \n category = convert_json(all_category)\n return category\n",
"text": "I am using pymongo. Category.city_collection.find() getting all collection of result. But I want to show only 5 result per page. I tried this Category.city_collection.find().limit(5) which showing 5 result on first page but how to go others page like second page, third page etc? I am using mongo db with my fastapi project. here is my full code",
"username": "Farhan_Ahmed1360"
},
{
"code": "",
"text": "I strongly recommend you take MongoDB Courses and Trainings | MongoDB University. These kind of aspects are covered. In the mean time take a look at https://www.mongodb.com/docs/manual/reference/method/cursor.skip/.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for this helpful tips",
"username": "Farhan_Ahmed1360"
}
]
| Fastapi mongodb how to add pagination and limit number of items per page? | 2022-10-18T16:26:40.011Z | Fastapi mongodb how to add pagination and limit number of items per page? | 2,651 |
[
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "",
"text": "Hey there! I’m trying to get all the documents that match the queries. With the following query, I’m getting all the documents, not those that only match. So what’s wrong?By the way, the current result is an empty array.Thanks For your help!\nimage1042×1504 131 KB\n",
"username": "saar_twito"
},
{
"code": "const { ObjectId } = require('mongoose').Types;\n\n...\n\n{\n $match: {\n _id: ObjectId(brandId)\n }\n}\n",
"text": "You are using Mongoose. Mongoose will not automatically convert strings to ObjectIds in aggregation queries, so you need to manually cast the string to the ObjectId:",
"username": "NeNaD"
},
{
"code": "const { ObjectId } = require('mongoose').Types;",
"text": "const { ObjectId } = require('mongoose').Types;Still, const products is an empty array.",
"username": "saar_twito"
}
]
| How can i retrieve all the documents that match my criteria? | 2022-10-18T13:21:59.721Z | How can i retrieve all the documents that match my criteria? | 1,208
|
null | [
"queries",
"dot-net"
]
| [
{
"code": " var documentsTest = collectionOfInterest.Find(new BsonDocument()).FirstOrDefault(); \n var builder = Builders<BsonDocument>.Filter;\n var newFunFilter = builder.Eq(\"IsSmoke\", true);\n var list = await collectionOfInterest.Find(newFunFilter).ToListAsync();\n var result = list.FirstOrDefault();\n List<TestSet> TestSetObjList = new List<TestSet>();\n foreach(var collection in collectionsList)\n {\n //Traverse\n foreach (var element in collection)\n {\n funShitString.Add(selectedTestSet.ToString());\n } \n }\n",
"text": "General issue:\n-Any searches on documents within a collection seems to return null. Searches for DB’s and/or collections work just fine though.Context:\n-DB version: 3.4.5\n-C# Driver version 2.4.2\n> I am aware these are ancient but I have little power to update these at this time.\n-Programming Language: C#Problem details:\n-Tried the following so far to no avail:\n-Attempt 1:-Attempt 2:-Attempt 3:",
"username": "Jon_S"
},
{
"code": "db.coll.find({})",
"text": "Hey @Jon_S , in case if your first attempt returns null, then you query an empty collection. I see 2 options, you either specify wrong db/collection names or server address. You can validate it if you try connecting to your database from the shell (with the same configuration as your c# application) and call the same db.coll.find({}) method",
"username": "Dmitry_Lukyanov"
},
{
"code": " //Get collection selected\n //Works as expected\n var collectionsList = DatabaseDesired.ListCollections().ToList();\n var collectionNameSpace = DatabaseDesired.GetCollection<BsonDocument>(selectedTestSet).CollectionNamespace; \n var collectionOfInterest = DatabaseDesired.GetCollection<BsonDocument>(selectedTestSet);\n var collectionOfInterestNonBsonDoc = DatabaseDesired.GetCollection<TestSet>(selectedTestSet);\n\n \n\n //Attempt to get documents from a given collection\n //Fails every time\n var documentsTest = collectionOfInterest.Find(new BsonDocument()).FirstOrDefault();\n\n var builder = Builders<BsonDocument>.Filter;\n\n var newFunFilter = builder.Eq(\"IsSmoke\", true);\n\n var list = await collectionOfInterest.Find(newFunFilter).ToListAsync();\n\n var result = list.FirstOrDefault();\n\n \n //Always caught here\n if (documentsTest.Equals(null))\n {\n \n MessageBox.Show(\"No documents found\");\n\n System.Threading.Thread.Sleep(10000);\n\n this.Close();\n\n }\n\n",
"text": "Thank you for your quick response. I am checking the connection and server address and I am able to get the list of the collections just before trying to select the documents to get from that collection and they all show as expected. Once I try to get document information though I just get null back every time. Here is the code I am using.",
"username": "Jon_S"
},
{
"code": "var collectionsList = DatabaseDesired.ListCollections().ToList();collectionNameSpace.CountDocuments();\n",
"text": "It doesn’t say much.\nif this line returns var collectionsList = DatabaseDesired.ListCollections().ToList(); collections you expect in further steps, then that collections are empty. Instead Find, you can just call:",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "I appreciate you sharing this option. Still new to the MongoDB space so everything i can learn is good! Unfortunately the documents number I am getting back is 0. That makes no sense to me though as I am staring at this collection in Studio3T and it has 50 documents in it.I have even paused execution and checked the database connection string to compare the server URL, username, and everything else I can and they all match. This code is tied to a UI that has a dropdown of DB’s and collections that even update on new DB selection without issue.Only thing that still stands out as odd to me is that in the dB client settings there is a field called “isFrozen” that is set to true. Could this have anything to do with it?",
"username": "Jon_S"
},
{
"code": "",
"text": "if you mean IsFrozen field in MongoClientSettings, then no, this option is indicator that underlying settings are in readonly mode so you can’t modify them",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "Ok, from the sounds of it that is all it was then…So any idea how I could be looking at documents in Studio3T and collections all match and are referenceable but I would not be able to get the associated documents then?!",
"username": "Jon_S"
}
]
| C# collection.Find() method returning null every time | 2022-10-17T19:46:58.277Z | C# collection.Find() method returning null every time | 5,151 |
null | [
"java",
"spring-data-odm"
]
| [
{
"code": "",
"text": "Hello,Currently there’s MongoClient by which we can do search queries in Java. But how to performs search queries in Spring Data MongoDB ?\nI see a post from Marcus here saying that it is not possible. But that post was more an a year back. So, I just wanted to check if it is implemented.",
"username": "Rahul_Raj2"
},
{
"code": "",
"text": "Hey @Rahul_Raj2, it’s not GA yet, but there is work happening here: Simplify usage of user provided aggregation operations. by christophstrobl · Pull Request #4059 · spring-projects/spring-data-mongodb · GitHub",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "Hi Rahul, just wanted to follow up on this. Atlas Search is now possible with Spring Data, you can check out the “Tip” section on this page for more details.https://docs.spring.io/spring-data/mongodb/docs/4.0.0-RC1/reference/html/#mongo.aggregation.supported-aggregation-operations",
"username": "Ashni_Mehta"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is Atlas Search supported in Spring Data MongoDB? | 2022-08-03T17:03:55.565Z | Is Atlas Search supported in Spring Data MongoDB? | 4,282 |
null | []
| [
{
"code": "mongod --port 27017 --dbpath /var/lib/mongodb/arb --replSet mongocluster --bind_ip localhost,XX.XX.X.X\n\"ctx\":\"conn966\",\"msg\":\"Authentication failed\",\"attr\":{\"mechanism\":\"SCRAM-SHA-1\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.0.11:36982\",\"extraInfo\":{},\"error\":\"BadValue: SCRAM-SHA-1 is disallowed for cluster authentication\"\n“stateStr” : “(not reachable/healthy)”\n{\n\t\t\t\"_id\" : 4,\n\t\t\t\"name\" : \"Citus1:27017\",\n\t\t\t\"health\" : 0,\n\t\t\t\"state\" : 6,\n\t\t\t\"stateStr\" : \"(not reachable/healthy)\",\n\t\t\t\"uptime\" : 0,\n\t\t\t\"lastHeartbeat\" : ISODate(\"2022-10-18T11:03:44.751Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"authenticated\" : false,\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : -1,\n\t\t\t\"configTerm\" : -1\n\t\t}\n",
"text": "I have run this command on arbiter server:And getting error after running this command:Because of this getting error on primary node :Do you have any idea how to solve this ? Please help me to solve this problem ASAP. I’m stuck with client to deploy our product to production. Thanks.",
"username": "Sanjay_Soni"
},
{
"code": "--keyFile",
"text": "The cluster authentication method your other nodes are using has not been configured for the arbiter.You’re probably missing --keyFile as SCRAM-SHA-1 is being attempted.",
"username": "chris"
},
{
"code": "",
"text": "@ ChrisThank you so much for your Answer.\nIts working correctly.\nAppreciate for your efforts for help.",
"username": "Sanjay_Soni"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Arbiter configuration Issue | 2022-10-18T11:35:43.622Z | Arbiter configuration Issue | 1,347 |
null | [
"node-js",
"crud"
]
| [
{
"code": "\"username\":\"test1\"\n\"email\":\"[email protected]\"\n\"username\":\"test2\"\n\"email\":\"[email protected]\"\nconst userExist = await User.findOne({username})\nconst emailExist = await User.findOne({email})\n\n\nif(userExist){\n res.status(400).json(\"User Already Existing...\")\n}else if(emailExist){\n res.status(400).json(\"Email Already Exists...\")\n}else{\n try {\n const newUser = new User({\n username: req.body.username,\n email: req.body.email,\n studentid: req.body.studentid,\n password: CryptoJS.AES.encrypt(\n req.body.password,\n 'secret_key',\n ).toString(),\n });\n const savedUser = await newUser.save()\n res.status(200).json(savedUser)\n \n } catch (error) {\n return res.status(400).json(\"Error Login\")\n }\n\n }\n",
"text": "Hello guys, I’ve been having a lot of trouble with how can I work on this. Basically, I want the user to update his information.So example, I have a user.User1Change it to this oneUser 1So in this scenario, User1, wants to change his username to “test2” only, and let his email stays the same.But the problem is, his email, “[email protected]” is already existing in the database, so when he saves it, it gets an error because the email is already existing.Now the question is, How can I allow the user to change his information either username or email, but still save his default information?Some might suggest, just remove the validation for email, but I also want the same function to email, I want the user to choose whatever he picks and update it, but still save the other input box to its current value",
"username": "Emmanuel_Cruz"
},
{
"code": "res.status(200).json(savedUser)",
"text": "You are doing something fundamentally wrong in terms of database.You are testing for the existence of things with User.findOne() before updating. In a database, another user might try to perform the same update and run its newUser.save() during the very short time period between the findOne() of the first user and its newUser.save(). You might end up with duplicate.Indexes with uniqueness exists for that purpose. You do your update and you handle correctly the duplicate value exception thrown by the server.Any reason why you define username and email as used in User.findOne() but still use req.body.username and req.body.email in new User().Finally, your database access code is mangled with your UI code. That makes it very hard to unit test. But that might be okay since with res.status(200).json(savedUser) this code might be a web API. But I still prefer separation of concerns.",
"username": "steevej"
}
]
| How can I accept user input if he only wants to change a specific input? | 2022-10-18T12:15:01.965Z | How can I accept user input if he only wants to change a specific input? | 1,841 |
null | [
"connector-for-bi"
]
| [
{
"code": "",
"text": "Hi,My customer would like to connect his MongoDB to his Power BI.He is using MongoDB community open source edition.I do not understand if this requires some kind of License? or is it free for sownload and connecting it from Power BI.Thanks,\nTamar",
"username": "Tamar_Nirenberg"
},
{
"code": "",
"text": "Hi @Tamar_Nirenberg,Per the MongoDB Connector for BI download page:The MongoDB Connector for BI is available as part of the MongoDB Enterprise Advanced subscription, which features the most comprehensive support for MongoDB and the best SLA.Use of the MongoDB Connector for BI is subject to the terms of the MongoDB Customer Agreement which does have a provision to use in your internal environment for Free Evaluation and Development purposes.However, use for any other purpose requires an Enterprise Advanced subscription or a MongoDB Atlas dedicated cluster (M10+).Please review the Customer Agreement above for legal usage details.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you @Stennie_X for the clarification.",
"username": "Tamar_Nirenberg"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB ODBC driver | 2022-10-18T08:07:46.771Z | MongoDB ODBC driver | 2,236 |
null | [
"queries",
"node-js"
]
| [
{
"code": "const dateToMongoSearch = (dateLineFromHistory) => {\n // we get this time stamp from text file\n // 2022-10-06T09:50:04.555+03:00\n return dateLineFromHistory.split(',')[1];\n}\n\n//Query itself\nupdatedAt = dateToMongoSearch(updatedAt);\n\nconst query = await Model.find({ 'updatedAt': { \n $gt: new Date(updatedAt), //.toISOString(), \n $exists: true \n } \n}, {\n allowDiskUse: false\n}).limit(limit).toArray()\n",
"text": "Need to verify the correct timestamp format for the query that finds records later for a particular dateQuery in the code does not return expected result, it returns nothing.",
"username": "Dev_INX"
},
{
"code": "{\n \"_id\" : ObjectId(\"128dec47bbd2f73014646ed1\"),\n \"firstName\" : \"first\",\n \"lastName\" : \"last\",\n \"email\" : \"[email protected]\",\n \"emailVerified\" : true,\n \"passwordCreated\" : ISODate(\"2022-10-06T09:50:03.488+0000\"),\n <...>\n \"apiRequestStatus\" : {\n \"maxNumOfKeys\" : NumberInt(20),\n \"status\" : \"UNSET\",\n \"createdAt\" : ISODate(\"2022-10-06T09:50:03.488+0000\"),\n \"updatedAt\" : ISODate(\"2022-10-06T09:50:03.488+0000\")\n },\n \"state\" : {\n \"isFrozen\" : false,\n \"adminEmail\" : null,\n \"updatedAt\" : ISODate(\"2022-10-06T09:50:03.488+0000\")\n },\n \"platform\" : \"XXX\",\n \"lastLogin\" : ISODate(\"2022-10-06T09:50:03.488+0000\"),\n \"accountType\" : \"XXX\",\n \"beneficialOwnerName\" : null,\n \"enableTFA\" : false,\n \"isReviewed\" : false,\n <...>\n \"popupToShow\" : [\n\n ],\n \"marketingRefferal\" : {\n \"referralCode\" : \"CCCC-CCCC\",\n \"advocateReferralCode\" : \"DDDD-DDDD\"\n },\n \"isMigratedFromS\" : false,\n \"password\" : \"...\",\n \"__v\" : NumberInt(0),\n \"createdAt\" : ISODate(\"2022-10-06T09:50:03.555+0000\"),\n \"updatedAt\" : ISODate(\"2022-10-06T09:50:05.555+0000\")\n}\n",
"text": "DB snippet (some data is hidden for security reasons and does not affect on this query):and we need updatedAt that is in the end (a root one)",
"username": "Dev_INX"
},
{
"code": "",
"text": "While using the mongo native driver, the time is truncated from the date object",
"username": "Dev_INX"
},
{
"code": "async function run() {\n await client.connect();\n console.log(\"Connected correctly to server\");\n const db = client.db(dbName);\n const coll = db.collection(\"coll\");\n try {\n\t\t const query = await coll.find({ 'updatedAt': {\n \t $gt: new Date(\"2022-01-01\"), //.toISOString(),\n \t\t $exists: true\n \t}\n\t\t },\n\t\t {\n \t\t \tallowDiskUse: false\n\t\t }).toArray()\n\t\t console.log(query)\n }\n catch (e) {\n console.dir(`Failed to drop collection: ${e}`);\n }\n}\nnew Date(\"2021-01-01\")$gtConnected correctly to server\n[\n {\n _id: new ObjectId(“128dec47bbd2f73014646ed1”),\n firstName: ‘first’,\n lastName: ‘last’,\n email: ‘[email protected]’,\n emailVerified: true,\n passwordCreated: 2022-10-06T09:50:03.488Z,\n apiRequestStatus: {\n maxNumOfKeys: 20,\n status: ‘UNSET’,\n createdAt: 2022-10-06T09:50:03.488Z,\n updatedAt: 2022-10-06T09:50:03.488Z\n },\n state: {\n isFrozen: false,\n adminEmail: null,\n updatedAt: 2022-10-06T09:50:03.488Z\n },\n platform: ‘XXX’,\n lastLogin: 2022-10-06T09:50:03.488Z,\n accountType: ‘XXX’,\n beneficialOwnerName: null,\n enableTFA: false,\n isReviewed: false,\n popupToShow: [],\n marketingRefferal: { referralCode: ‘CCCC-CCCC’, advocateReferralCode: ‘DDDD-DDDD’ },\n isMigratedFromS: false,\n password: ‘...’,\n __v: 0,\n createdAt: 2022-10-06T09:50:03.555Z,\n updatedAt: 2022-10-06T09:50:05.555Z\n }\n]\nupdatedAt = dateToMongoSearch(updatedAt);\nnew Date()updatedAt",
"text": "Hi @Dev_INX - Welcome to the community I inserted the test document into my test environment and did the query using the official MongoDB Node Driver (Version 4.8.1).Here’s a snippet of the query portion of the code I had used against my test environment:I have passed through a value of new Date(\"2021-01-01\") to the $gt operator as shown above and tried to keep the query portion as close to what you had provided as possible which did return a result below.\nThe output:Query in the code does not return expected result, it returns nothing.\nRegarding the following in your code:Could you log this value and advise the output including the data type? Please also review the Date() documentation in regards to the new Date() function.If you still require further assistance, please provide the following:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "users,2022-10-06T09:50:04.555+03:00find({updatedAt: new Date(updatedAt})",
"text": "Hi Jason!updatedAt BEFORE executing a query returns 2022-10-06T09:50:04.555+03:00I get this data from a file in node.js (app requires he file to temporarily store it)\nFile contents:\nusers,2022-10-06T09:50:04.555+03:00\nand we cut “users,” string => then store the date in updatedAt variableand we put in updatedAt to query like that => find({updatedAt: new Date(updatedAt})driver version: 4.10.0 (the latest one for now)server version on local PC: 4.4 (locally from mongo’s docker container)\nserver version on remote machine: ~Ubuntu 20+, will write later (I am not maintaining it and do not have access - but it should be not differ too much)",
"username": "Dev_INX"
},
{
"code": "",
"text": "You show in your example that it is only date in query, without time. Does it matter?",
"username": "Dev_INX"
},
{
"code": "",
"text": "typeof updatedAt is string",
"username": "Dev_INX"
},
{
"code": "",
"text": "UPD: Mongo is in MongoAtlas (remote url), and version is 4.4.17",
"username": "Dev_INX"
},
{
"code": "",
"text": "All fixed, thanks. The problem was not only in timestamp but we fixed.",
"username": "Dev_INX"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Resolve updatedAt timestamp in node.js | 2022-10-16T07:51:08.206Z | Resolve updatedAt timestamp in node.js | 4,358 |
null | [
"java",
"compass"
]
| [
{
"code": "",
"text": "Steps to replicateImpactMongo int32 maps to Java Integer datatype which results in issue when parsing at Java side",
"username": "churamani_prasad"
},
{
"code": "",
"text": "Try mongoexport with –jsonFormat rather than Compass to export. The EJSON should keep type information.Using mongodump/mongorestore might be more appropriate depending of the use-case.",
"username": "steevej"
}
]
| Datatype changes when exporting and reimporting document using compass | 2022-10-18T02:33:01.506Z | Datatype changes when exporting and reimporting document using compass | 1,449 |
null | [
"queries"
]
| [
{
"code": "db.adminCommand(\"listDatabases\").databases.forEach(function(d){\n let mdb = db.getSiblingDB(d.name);\n mdb.getCollectionInfos({ type: \"collection\" }).forEach(function(c){\n let currentCollection = mdb.getCollection(c.name);\n currentCollection.getIndexes().forEach(function(idx){\n let idxValues = Object.values(Object.assign({}, idx.key));\n\n if (idxValues.includes(\"hashed\")) {\n print(\"Hashed index: \" + idx.name + \" on \" + idx.ns);\n printjson(idx);\n };\n });\n });\n});\n",
"text": "Hi all,\nI want to return all the TTL Indexes in my schema but cannot find a way to restrict to TTL only.\nI found this for hashed Indexes but it doesn’t help. Tried using idxValues.hasProperty as well any ideas?// The following finds all hashed indexes",
"username": "Claire_Moore1"
},
{
"code": "expireAfterSecondsmongoshDB> db.log_events.getIndexes() /// Get indexes for reference\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { createdAt: 1 },\n name: 'createdAt_1',\n expireAfterSeconds: 10\n },\n {\n v: 2,\n key: { lastModifiedDate: 1 },\n name: 'lastModifiedDate_1',\n expireAfterSeconds: 3600\n },\n { v: 2, key: { a: 1 }, name: 'a_1' }\n]\nDB> db.log_events.getIndexes().forEach(function(idx){ if (Object.hasOwn(idx, 'expireAfterSeconds')){ print(idx)}}) /// Look for 'expireAfterSeconds' field\n{\n v: 2,\n key: { createdAt: 1 },\n name: 'createdAt_1',\n expireAfterSeconds: 10\n}\n{\n v: 2,\n key: { lastModifiedDate: 1 },\n name: 'lastModifiedDate_1',\n expireAfterSeconds: 3600\n}\nmongoshgetIndexes()mongosh",
"text": "Hi @Claire_Moore1,Do you have some details regarding the use case for this? In the meantime, I’ve created an example version which looks for the field expireAfterSeconds in mongosh:If you are considering on running this script or similar then please test it thoroughly to verify it meets all your use case(s) and requirements.If you’re not planning to run this via the mongosh then perhaps you could alter it accordingly for listIndexes as the getIndexes() command is a helper for mongosh.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,Firstly thank you for taking time to help with this.\nI tried running your command in Mongo shell and I get an errorType error:Object.hasOwn is not a function\nI am using MongoDB Compass Version 1.28.4\nDb is Mongo 4.2.6The reason I want to return these Indexes is because we are planning to introduce more TTL Indexes and I want to keep track of how many we have, how much space they are taking up in RAM - we are using them to purge older data.",
"username": "Claire_Moore1"
},
{
"code": "db.adminCommand(\"listDatabases\").databases.forEach(function(d){\n let mdb = db.getSiblingDB(d.name);\n mdb.getCollectionInfos({ type: \"collection\" }).forEach(function(c){\n let currentCollection = mdb.getCollection(c.name);\n currentCollection.getIndexes().forEach(function(idx){\n if(idx.expireAfterSeconds){\n printjson(idx);\n }\n });\n });\n});\n",
"text": "Hello,The snippet you shared works for hashed indexes as it searches for “hashed” in the “key” field, while the attribute you are searching for “expireAfterSeconds” for TTL indexes is a top level field.I modified the snippet you shared to search for indexes that has the field “expireAfterSeconds” and returning only those.I tested it in my environment and confirmed it returns only TTL indexes:I hope you find this helpful.",
"username": "Mohamed_Elshafey"
},
{
"code": "",
"text": "Mohamed,This worked perfectly. While I had tried to supplement hashed with expireAfterSeconds I did it slightly different and I think it was returning all the indexes for a collection that had a TTL (so if collection eventlog had 3 indexes , 1 was TTL they all 3 indexes returned)\nThanks so much for your help made my day ",
"username": "Claire_Moore1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How do I write a query to return all my TTL Indexes only | 2022-10-17T13:33:08.792Z | How do I write a query to return all my TTL Indexes only | 1,336 |
null | [
"swift",
"flexible-sync",
"developer-hub"
]
| [
{
"code": "",
"text": "We recently announced the release of the Realm Flexible Sync preview – an opportunity for developers to take it for a spin and give us feedback. Realm Flexible Sync lets the developer provide queries to control exactly what the mobile app asks to sync, together with backend rules to ensure users can only access the data that they’re entitled to.I’ve recently published an article showing how to add flexible sync to the RChat mobile app. It shows how to configure the Realm backend app, and then what code needs adding to the mobile app.",
"username": "Andrew_Morgan"
},
{
"code": "ChatMessage",
"text": "Hey Andrew,Following up on this from your linked post:Anyone can read any ChatMessage . Ideally, we’d restrict it to just members of the chat room, but permissions don’t currently support arrays—this is another feature that I’m keen to see added.It looks like flexible permissions support arrays now. Would you be able to share how you might use them to implement these permissions for RChat?",
"username": "Campbell_Bartlett"
}
]
| New post: Using Realm Flexible Sync in Your App – an iOS Tutorial | 2022-02-25T12:50:03.037Z | New post: Using Realm Flexible Sync in Your App – an iOS Tutorial | 3,285
null | [
"api"
]
| [
{
"code": "",
"text": "I want to create a program that restores snapshots from one organization (in a project called project1) to another organization (in a project called project2).\nHence, I thought about creating an API key in project2, and then inviting that key to project1 (following this guide https://www.mongodb.com/docs/atlas/configure-api-access/#std-label-invite-org-app-api-keys).Unfortunately, it seems that only keys under organization1 are visible to project1.What should I do?",
"username": "Yuval_Lavie"
},
{
"code": "project1project2",
"text": "Welcome to the MongoDB Community @Yuval_Lavie !Since these are two different Atlas organisations, you should create an API key for each project and call those from your program: use project1 API key to fetch snapshots and project2 API key to restore snapshots.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "UNEXPECTED_ERROR\ncurl --user \"{PROJECT2_PUBLIC}:{PROJECT2_PRIVATE}\" --digest \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{PROJECT2}/clusters/{DEST_CLUSTER}/backup/restoreJobs?pretty=true\" \\\n --data '\n {\n \"delivery\" : {\n \"methodName\" : \"AUTOMATED_RESTORE\",\n \"targetGroupId\" : \"{PROJECT2}\",\n \"targetClusterId\" : \"{DEST_CLUSTER}\"\n },\n \"snapshotId\": \"{SOURCE_SNAP_ID}\"\n }'\n",
"text": "Thank you for your answer\nI’m following this API.I’m using project2’s API key as you suggested.\nThe body params are clear - the target group is project 2.What about the URL?\nIf I use project1 and cluster1, I get USER_CANNOT_ACCESS_ORG\nAnd if I use project2 and cluster2, I get UNEXPECTED_ERROR “Internal Server Error”This is my second request:",
"username": "Yuval_Lavie"
},
{
"code": "curl --user \"{PROJECT2_PUBLIC}:{PROJECT2_PRIVATE}\" --digest \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{PROJECT2}/clusters/{DEST_CLUSTER}/backup/restoreJobs\" \\\n --data '\n {\n \"deliveryType\": \"automated\",\n \"snapshotId\": \"{SOURCE_SNAP_ID}\",\n \"targetClusterName\": \"{DEST_CLUSTER}\",\n \"targetGroupId\": \"{PROJECT2}\"\n }''\n{\"detail\":\"Unexpected error.\",\"error\":500,\"errorCode\":\"UNEXPECTED_ERROR\",\"parameters\":[],\"reason\":\"Internal Server Error\"}\n",
"text": "I’m sorry, this is my request:And I get:",
"username": "Yuval_Lavie"
},
{
"code": "",
"text": "can you please answer? ",
"username": "Yuval_Lavie"
},
{
"code": "Project1Project2Project Owner",
"text": "Hi @Yuval_Lavie,What are the role’s associated with each of the API keys?Additionally, is the use case to automate this procedure or are you wanting to specifically just perform a restore from Project1 (in Organization1) to Project2 (in Organization2)?If it is for the latter, you can try following the Restore your Snapshot to an Atlas Cluster procedure. You’ll need to be a Project Owner in both organizations.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_TranEach API key has a project owner role.\nActually, I was expecting that one API key would have permissions on both organizations, because the action of restoring requires permissions for accessing the source snapshot, and deploying on the dest projectI want to automate this procedure.",
"username": "Yuval_Lavie"
},
{
"code": "",
"text": "Hey @Jason_Tran @Stennie_XThis topic is really a blocker for us. We want to start automating this procedure with an API key but we can’t.\nWe would really appreciate if you could answer it soon ",
"username": "Yuval_Lavie"
},
{
"code": "",
"text": "Hi @Yuval_Lavie,I am still checking to see if restore from one Organization to another Organizaiton using the Atlas Administration API is possible. It may not be possible as the all API keys exists within the Atlas Organization (which can be invited to the Projects within the same Organization). However based on my limited testing, I cannot see that the API keys and resources (snapshots in this case) of one Org can be used in another Org with its own set of API keys.In saying so, could you provide further details on the use case regarding the automatic restore from Organization1 to another Organization2 rather than restoring from Project1 to Project2 (within the same single Organization)?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Yuval_Lavie ,I have confirmed with our engineering team that it is not currently possible to restore from one Organization to another Organization using the Atlas Administration API due to the fact that each API key belongs to only one organization.We would be interested in understanding your use case of automating restores from Organization1 to another Organization2 rather than restoring from Project1 to Project2 (within the same single Organization) as @Jason_Tran mentioned above as that may help here.Best regards,\nEvin",
"username": "Evin_Roesle"
},
{
"code": "",
"text": "Hi @Evin_Roesle @Jason_Tran\nThanks for your reply.We are a data security company, which supplies our customers with analytics about their data saved in the cloud.These days we’re expanding our support to MongoDB Atlas.\nFor our needs, we need to clone our customer’s cluster into our environment, which will be in our control, and our own billing (and that’s why we need it to be transferred between different organizations).We know we can do that with user permissions (which will be invited by the customer’s project). Still, we can’t automate it (even atlasCLI asks for web authentication at the beginning).Is it possible to support sharing API keys between different organizations?",
"username": "Yuval_Lavie"
},
{
"code": "",
"text": "Hey @Evin_Roesle @Jason_Tran\nI’d like to have a response ",
"username": "Yuval_Lavie"
},
{
"code": "",
"text": "Hi @Yuval_Lavie ,This is not supported today. I see that you already submitted this as a feedback on Share API key cross organizations – MongoDB Feedback Engine . These feedback ideas are seen and evaluated by the appropriate team so this is the best way to highlight ideas/suggestions to our teams so that they can be considered for prioritization.I am not aware of any current ongoing work to enable this functionality but your feedback suggestion is the best place to see any update as we try to keep those as updated as possible.Best regards,\nEvin",
"username": "Evin_Roesle"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sharing API key between different organizations | 2022-10-02T09:43:39.809Z | Sharing API key between different organizations | 4,582 |
null | [
"aggregation"
]
| [
{
"code": " let resultData = await geoModel.aggregate([{\n $geoNear: {\n near: { \"type\": \"location.Point\", \"location.coordinates\": [-70.33013568836199, 43.63477528143518 ] },\n distanceField: \"dist.calculated\",\n maxDistance: 64373.8,\n query: { storeManager: \"Sanket\" },\n includeLocs: \"dist.location\",\n spheical: true\n }\n }]);\n",
"text": "I hace the following:\nlocation\": {\n“coordinates”: [-70.09639117297635, 44.02152086199026],\n“type”: “Point”\n},My pipeline is:This gives me an error. If I remove “location” from coordinates, it works!!What wrong with location.cordinates??",
"username": "Chris_Job1"
},
{
"code": "nearnear",
"text": "Hello @Chris_Job1 ,As shown in this $geoNear documentation, the near field is described as the point for which to find the closest documents. If using a 2dsphere index, you can specify the point as either a GeoJSON point or legacy coordinate pair. If using a 2d index, specify the point as a legacy coordinate pair. The correct syntax to add coordinates in near filed isnear: { type: “Point”, coordinates: [ X , Y ] }\nwhere X and Y are coordinatesI think that you are confusing this with accessing the objects of array and that is a different operation.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Accessing array inside of an object | 2022-10-12T21:15:16.765Z | Accessing array inside of an object | 970 |
null | []
| [
{
"code": "",
"text": "Hi Team,How to find DB creation date. Also if we using $external DB for authetication then how to find user creation date.",
"username": "BM_Sharma"
},
{
"code": "mongosh>var db1 = db.sample.find( {}).sort( { _id: 1}).limit(1).next()\n>db1._id.getTimestamp()\nISODate(\"2022-10-18T06:12:02.000Z\")\n",
"text": "Hi @BM_Sharma and welcome back to the MongoDB community forum!!MongoDB does not store the database or the collection creation date. But if you are aware about the oldest document in the collection and if you are using the standard ObjectId() for _id, the value of that field could probably be used to extract the creation data for the collection or database.The following commands in mongoshNote that this method does not guarantee correctness, since the oldest document could be already deleted. If collection/user creation information is important for your use case, it’s best to record this information separately to ensure accuracy.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| How to find DB creation DB | 2022-10-15T17:27:05.225Z | How to find DB creation DB | 2,777 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[ \n\t[\n\t '$geoNear'=> [\n\t\t'near'=> [ 'type'=> 'Point', 'coordinates'=> [1.2744485, 1.5845001] ],\n\t\t'distanceField'=> 'distance',\n\t\t'$maxDistance'=> 10,\n\t ]\n\t],\n\t[\n\t'$lookup'=> [\n\t 'from'=> \"like\",\n\t 'let'=> [ 'username'=> '$username' ],\n\t 'as'=> \"likes\",\n\t 'pipeline'=> [\n\t\t[\n\t\t '$match'=> [\n\t\t\t\t'$expr'=> [\n\t\t\t\t\t'$and'=> [\n\t\t\t\t\t\t[ '$eq'=> [ '$likeA', '$username' ] ]\n\t\t\t\t\t]\n\t\t\t\t]\n\t\t\t]\n\t\t]\n\t\t]\n\t]],\n\t[\n\t'$match'=> [\n\t\t'likes.likeA'=> [ '$exists'=> false ]\n\t]\n\t]\n];\n",
"text": "Hello! My problem is that I would like to find users who have not yet liked a certain username, but who are from the same area. if I try to write the code as below $ geoNear is not taken into account. How should this be done? thank you!",
"username": "Mondo_Tech"
},
{
"code": "",
"text": "Hello @Mondo_Tech ,Welcome to The MongoDB Community Forums! Could you please help me with below details to understand your use case better?Regards,\nTarun",
"username": "Tarun_Gaur"
}
]
| $geoNear with $lookup and $match | 2022-10-12T08:39:24.434Z | $geoNear with $lookup and $match | 1,113 |
null | [
"database-tools"
]
| [
{
"code": "",
"text": "HiI am trying to use mongoimport --mode-merge to apply some bulk updates to a collection. I want to update existing documents that all have _id : ObjectIDs formatted ids.eg: mongoimport -d=database -c=collection --mode=merge --file=patch.csv -type=csv - columnsHaveTypesAs the title suggests I am trying to import CSV data into a collection using mongoimport and a formatted CSV file. I know using the columnsHaveTypes switch I can include data types in my header line.But in the documentation there is no mention of how to specify an ObjectID or format it in the CSV.Any suggestions?ps. I have been able to achieve the correct result by switching to JSON but that now involves an extra step hence the interest columnsHaveTypes.",
"username": "Daniel_Alvers"
},
{
"code": "columnsHaveTypesauto, binary, boolean, date, date_go, date_ms, \ndate_oracle, decimal, double, int32, int64, string\n",
"text": "Hi @Daniel_Alvers,Welcome to the MongoDB Community forums eg: mongoimport -d=database -c=collection --mode=merge --file=patch.csv -type=csv - columnsHaveTypesAs per mongoimport documentation, the type of columnsHaveTypes can be one of:If you want to do the import manually though, you can write a script or convert the CSV file into EJSON before using the mongoimport.Alternatively, MongoDB Compass, however, lets you upload a CSV file, and then choose ObjectID from the drop-down menu when you upload your CSV file.\nMongoDB Compass1176×1286 205 KB\nFurthermore, I suggest you post this feature on feedback.mongodb.com.I hope it helps!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Brilliant response Kushagra!",
"username": "Daniel_Alvers"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongoimport using CSV and columnsHaveTypes - how to specify an ObjectID? | 2022-10-12T06:06:18.342Z | Mongoimport using CSV and columnsHaveTypes - how to specify an ObjectID? | 2,839 |
null | []
| [
{
"code": "",
"text": "Hello,\nmy mongod.log file has been increased a lot and due to that mongodb has been stopped and not able to serve any request. It has grown up to 90 GB and there is no space on server now. I want to clean the file to free up the space but couldn’t find the option to do so. I found a post saying roate the file, but my server doesn’t have any space. And rotating will create one more copy of 90 GB file. So, I have 2 questions",
"username": "Nabha_61843"
},
{
"code": "cat /dev/null > mongod.log\n",
"text": "Hello,Firstly, you may want to check why the log grew so big, suspecting you have high logging level, you could check the log level with the below:\ndb.getLogComponents()rotating will not create a new copy, it will simply rename the existing one with a timestamp when the log rotation was initiated and start writing to a fresh mongo.log, you can then gzip the old log file or move the it to another disk if you have free space or you can delete it at this point if it’s no longer needed.For your second question if the old log entries are not needed, I have also tried in my testing environment to empty the logfile entirely by issuing the below:and then rotating the logfile as below:\nmongo --eval ‘db.adminCommand( { logRotate : “server” } )’and found that mongod process rotated the emptied file and created a fresh mongod.log and started writing to it.",
"username": "Mohamed_Elshafey"
},
{
"code": "",
"text": "I have the same issue, I had already check the server log level and it is in default level 0. I have also an hourly logrotation with logrotate.\nOur /var/log/ partition could be full after 10 minutes with multi-process scripts using 64 workers.\nOur MongoDB server shutdown when it couldn’t write in log partition.Is there a way to handle this more than those two tips?",
"username": "Yohann_Streibel"
},
{
"code": "",
"text": "Thanks @Mohamed_Elshafey for your reply. I will check the things as you suggested.",
"username": "Nabha_61843"
}
]
| Deleting log file | 2022-10-17T07:42:47.169Z | Deleting log file | 3,917 |
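A short mongosh sketch of the sequence discussed in the thread above: confirm the verbosity really is the culprit, reset it, then rotate so mongod reopens a fresh file. This only illustrates commands already mentioned; truncating the old file first (cat /dev/null > mongod.log) is still needed when the disk is completely full.

    // check current verbosity per component (0 is the quiet default)
    db.getLogComponents()

    // reset the global verbosity to 0 if it was raised
    db.setLogLevel(0)

    // ask mongod to close the current log file and start a new one
    db.adminCommand({ logRotate: 1 })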
null | [
"aggregation",
"views"
]
| [
{
"code": "const filterPosts = await Model.aggregate([\n { $match: matchObj },\n { $sort: sortType },\n { $limit: 10 },\n ]);\nconst filterPosts = await Model.aggregate([\n { $limit: 10 },\n { $match: matchObj },\n { $sort: sortType },\n ]);\n",
"text": "i have the model which is a view basically. Here i want the pagination but i did pagination for it but it give me same time for 5 records as it give me for entire collection records 5 seconds. even i have implemented limit still its giving me same timethis is my querynow if i use limit first then use my match query object it work fastkindly help me in that im struck in that since week",
"username": "Mehmood_Zubair"
},
{
"code": "",
"text": "Hi @Mehmood_Zubair and welcome to the MongoDB community forum!!For better understanding of the delay being seen, it would be great, if you could share how would you like the aggregation response to look like.For instance, the first query you mentioned:const filterPosts = await Model.aggregate([\n{ $match: matchObj },\n{ $sort: sortType },\n{ $limit: 10 },\n]);This matches the the condition from the entire collection and then displays the 10 records after the sort operation has been applied.where on the other hand, the following queryconst filterPosts = await Model.aggregate([\n{ $limit: 10 },\n{ $match: matchObj },\n{ $sort: sortType },\n]);first filters out only 10 documents and then sorts and matches the condition to meet. This explains why the second query is faster as this only processes the first 10 documents of the collection.Please note that the two queries above are not the same and will not result in the same result setIf possible, please share the same document from the collection for better understanding.Best Regads\nAasawari",
"username": "Aasawari"
}
]
| After put sorting and query its become slow to fetch | 2022-10-16T05:34:34.012Z | After put sorting and query its become slow to fetch | 2,237 |
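A hedged mongosh sketch of the usual fix for a slow $match + $sort + $limit pipeline on a view: create a compound index on the view's source collection that covers the match field(s) first and the sort field second. The collection and field names below (posts, status, createdAt) are placeholders, since the thread only shows the matchObj and sortType variables:

    // index the collection the view is built from, not the view itself
    db.posts.createIndex({ status: 1, createdAt: -1 })

    // verify the pipeline now uses the index instead of a collection scan
    db.postsView.explain("executionStats").aggregate([
      { $match: { status: "published" } },
      { $sort: { createdAt: -1 } },
      { $limit: 10 }
    ])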
null | [
"queries"
]
| [
{
"code": "",
"text": "I have a View in mongodb i need to create indexes in that view model how can i do",
"username": "Mehmood_Zubair"
},
{
"code": "",
"text": "Hi @Mehmood_Zubair and welcome to the MongoDB community forum!!MongoDB views are basically read-only objects whose contents are formed through the aggregation pipeline being defined in the createView command.Also, if you wish use indexes on the collection, you can define indexes on the original collection and utilise the indexes on the views by specify.\nPlease visit the documentation to learn more on the Indexes in MongoDB..Alternatively, you can also create materialised view and create indexes on the same. See materialised view for details.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| How to give indexes to mongodb View | 2022-10-16T13:46:17.219Z | How to give indexes to mongodb View | 872 |
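A minimal mongosh sketch of the materialised-view alternative mentioned in the thread above: $merge writes the pipeline's output into a real collection, which can then be indexed like any other collection. Collection and field names are placeholders:

    // rebuild the materialised view on demand
    db.orders.aggregate([
      { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
      { $merge: { into: "orderTotals", whenMatched: "replace", whenNotMatched: "insert" } }
    ])

    // a normal index on the output collection
    db.orderTotals.createIndex({ total: -1 })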
null | [
"node-js",
"compass"
]
| [
{
"code": "",
"text": "Hello.\nI try to make my question as clear as possible. So I created a free tier of MongoDB Cluster. It was a cool experience. I decided to Install an instance of Community Edition on an Ubuntu server. It worked and I can enter the shell by mongo command. It shows some warnings, though. I created some empty collections there.\nThe point is I was using the free tier in a Node.JS Express app through a URI string provided by MongoDB Cluster. I also was able to browse my collections through Compass GUI via a similar URI string.\nBut how can I connect my app to my MongoDB which is installed on Ubuntu server? And how to browse its collections through Compass?\nThanks for your help.",
"username": "mj69"
},
{
"code": "mongodb://localhost:27017mongodb://hostname:27017mongod2022-10-05T09:37:09.126-06:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\nauthorization",
"text": "Hi @mj69, and welcome to the MongoDB Community forums! You would connect to your MongoDB instance in a similar fashion. If your application and MongoDB instance are running on the same server, then your connection string would be mongodb://localhost:27017. If the application and MongoDB instance are on different servers then the connection string would be mongodb://hostname:27017.In the above examples I’m assuming you started the mongod process on the default port of 27017. If you chose a different port, then you would need to change that. I’m also assuming one of the warning you are getting is similar to the following:If you did enable authorization, then you would need to pass your credentials in similar to how you did with your Atlas connection.Let us know if you have any other questions.",
"username": "Doug_Duncan"
},
{
"code": "mongodmongomongodbind_ip0.0.0.0/etc/mongodb.confmongodb://<IP address of ubuntu server>:27017/?tls=true\nmongodb+srv://<username>:<password>@<some address>.mongodb.net/?retryWrites=true&w=majorityAuthentication Method",
"text": "Thank you @Doug_Duncan for your response.\nSince I’m very new to this topic of working with MongoDB on a remote Virtual Private Server, things are a bit more complicated than that on my side.Yes, my Node.JS app and MongoDB Community Edition are both on a remote Ubuntu server.\nI remember when working with MongoDB on my local Windows 10 machine, I had to activate it with mongod, first. Only then I could enter the Mongo shell in another PowerShell instance with mongo.A. What if my website’s end users want to add or remove data to and from my in-server database? Will the server keep mongod command active even if I don’t do that? Is such thing even necessary in this scenario?This article says I have to use the IP address of my remote server hosting MongoDB when trying to connect to it (from for example a Win10 machine) when using Compass:\nhttps://www.techrepublic.com/article/how-to-install-the-mongodb-gui-compass-and-connect-to-a-remote-server/\nI’ve changed the bind_ip to 0.0.0.0 in /etc/mongodb.confB. Is this URI string correct for connecting Compass to the remote server db:I used this URI string in my app when using the MongoDB cluster:\nmongodb+srv://<username>:<password>@<some address>.mongodb.net/?retryWrites=true&w=majorityC. Where is the username and password when MongoDB is installed on a remote server? Do I have to create them in Mongo shell on the remote server? If yes, then how?D. Does creating a user on in-server database change the URI string for Compass? Does it have something to do with Authentication Method section in Compass?E. In absence of such user, can Compass access the in-server db directly?I know that’s a lot. I appreciate your help.",
"username": "mj69"
},
{
"code": "",
"text": "Hi @mj69,I see this follow up question was also posted and addressed via Connecting a remote server to Compass?.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Running MongoDB Community Edition on Ubuntu server? | 2022-10-06T20:16:04.240Z | Running MongoDB Community Edition on Ubuntu server? | 2,176 |
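A brief mongosh sketch of the user-creation step asked about in question C above; the user name, password and database name are placeholders. After enabling security.authorization in /etc/mongod.conf and restarting mongod, the same credentials go into the Compass or driver URI:

    // run once against the admin database on the Ubuntu server
    use admin
    db.createUser({
      user: "appUser",
      pwd: "changeMe",                               // placeholder password
      roles: [ { role: "readWrite", db: "myAppDb" } ]
    })

    // connection string for Compass or the Node.js driver
    // (no +srv for a single self-hosted server)
    // mongodb://appUser:changeMe@<server IP>:27017/?authSource=admin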
null | [
"queries"
]
| [
{
"code": "body=`{\n\t\"query\": {\"account.companyName\":\"Groups\",\"_createdAt\":{\"$gte\":ISODate(\"2016-10-11T00:00:00Z\")}},\n\t\"projection\": {\n\t\t\"crn\": 1,\n\t\t\"state\": 1\n\t}\n}`; \nt= JSON.parse(body);\n var res= collection.find(t.query,t.projection,function(err, cursor){\n cursor.toArray(callback);\n db.close();\n});\n",
"text": "How does one write a JSON with Data Queries? I am building a mongodb function to accept HTTP request and parse query. I’ve tried new Date(“2016-10-11T00:00:00Z”) and ISO date function. I always end up with JSON parse errors as it can’t accept functions. How can we write a dynamic query string which can be sent in requestbody of http call? Thaks.",
"username": "SLOKA"
},
{
"code": "JSON.parse()Wed Oct 12 2022 11:04:45\nbody=`{\n\t\"query\": {\"account.companyName\":\"Groups\",\"_createdAt\":{\"$gte\":ISODate(\"2016-10-11T00:00:00Z\")}},\n\t\"projection\": {\n\t\t\"crn\": 1,\n\t\t\"state\": 1\n\t}\n}`; \nt= JSON.parse(body);\n var res= collection.find(t.query,t.projection,function(err, cursor){\n cursor.toArray(callback);\nconst { date } = req.body;\nconst res = await collections.aggregate([\n {\n $match: {\n \"account.companyName\": \"Groups\",\n _createdAt: {\n $gte: date.toISOString()\n }\n }\n },\n {\n $project: {\n \"crn\": 1,\n \"state\": 1\n }\n }\n])\n",
"text": "Hi @SLOKA,Welcome to the MongoDB Community forums Just to clarify when we use JSON.parse() on the ISODate– it gets converted toHowever, I recommend avoiding freeform queries like this on API endpoints. Instead of allowing the body of the endpoint to be a freeform JSON document that is sent directly to the server, let the endpoint accept only date values as the body.Consider the security aspect of the database, let’s say your body is empty, then it can clone your entire collection.Thus, we can write the above query considering the above endpoints:I hope it helps!Let us know if you have any further questions.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thanks Kesav. I am working on the some basic adapter like function query that can be changed at run time by the end user with the selection of the fields. So, I want to build the query outside of Mongodb.",
"username": "SLOKA"
}
]
| MongoDB functions with date query | 2022-10-10T15:21:25.626Z | MongoDB functions with date query | 1,471 |
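A small Node.js sketch of one way to keep the request body as plain JSON while still getting real dates on the server, in the spirit of the thread above: have the client send MongoDB extended JSON ($date instead of ISODate(...)) and parse it with EJSON from the bson package. The field names mirror the thread; the endpoint wiring is assumed:

    const { EJSON } = require('bson');   // the BSON library that ships with the Node.js driver

    // body sent by the client - plain JSON, so any JSON tooling accepts it
    const body = `{
      "query": {
        "account.companyName": "Groups",
        "_createdAt": { "$gte": { "$date": "2016-10-11T00:00:00Z" } }
      },
      "projection": { "crn": 1, "state": 1 }
    }`;

    const t = EJSON.parse(body, { relaxed: false });   // {"$date": ...} becomes a real Date
    // collection.find(t.query, { projection: t.projection })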
null | []
| [
{
"code": "diagnostic.datajournallocal",
"text": "This is certainly not a complaint, but I am curious as to why Mongo 6 appears to consume in general far less memory than Mongo 4, with the same hardware, replica-configuration, same data, same indexes, same user activity, etc. I am no longer seeing large memory spikes that I used to see on Mongo 4 (sometimes resulting in swap usage), but the downside is that on a 64gb RAM machine, only about 20% appears to be used and remains consistently flat.The only notable difference between my Mongo 4 setup and Mongo 6 is that I now have symbolic links of diagnostic.data, journal and local folders to a local SSD disk, separate from database data on a SAN.Has there been some notable change between Mongo 4 and Mongo 5 that should see me allocate less resources to RAM and perhaps to CPU instead?",
"username": "smock"
},
{
"code": "",
"text": "Hi @smockWell this is certainly a good thing isn’t it There are large updates to WiredTiger between MongoDB 4.0, 4.2, 4.4, 5.0, and 6.0, so it depends on which “MongoDB 4” you’re talking about.Without knowing your exact situation and use case, if I have to guess it’s perhaps due to multi document transaction, read concern majority, and replica set synchronization improvements, and constant improvement in WiredTiger internals. You mentioned that older versions have large memory spikes; typically this was caused by the need to keep multiple versions of documents in WiredTiger memory that occurs when the workload requires WiredTiger to do so (transactions is one reason, among many). Newer WiredTiger have a mechanism that does not need to keep them all in memory so that is in line with memory usage improvements you are seeing.Note that this is just a sweeping generalization and may not be what you experienced at all the downside is that on a 64gb RAM machine, only about 20% appears to be used and remains consistently flat.That could mean that your hardware is now overprovisioned for the workload with those WiredTiger improvements. However I would be very careful in changing anything and would carefully examine all the angles before concluding anything.Best regards\nKevin",
"username": "kevinadi"
}
]
| Mongo 6 consumes far less memory than Mongo 4? | 2022-10-17T09:27:14.669Z | Mongo 6 consumes far less memory than Mongo 4? | 1,410 |
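If it helps anyone verifying the same behaviour, a quick mongosh sketch for comparing the configured WiredTiger cache with what is actually in use (illustrative only; stat names as reported by serverStatus):

    const cache = db.serverStatus().wiredTiger.cache;
    print("configured bytes :", cache["maximum bytes configured"]);
    print("in cache bytes   :", cache["bytes currently in the cache"]);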
null | [
"dot-net",
"atlas-device-sync"
]
| [
{
"code": "List Data:\n{\n \"title\": \"LiveDataIsaac\",\n \"properties\": {\n \"Data\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"DriverName\": {\n \"bsonType\": \"string\"\n },\n \"TestName\": {\n \"bsonType\": \"string\"\n },\n \"User\": {\n \"bsonType\": \"string\"\n },\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_partition\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [\n \"_id\",\n \"_partition\"\n ]\n}\nList:\n public class LiveDataIsaac : RealmObject\n {\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"TestName\")]\n public string Name { get; set; }\n\n [MapTo(\"DriverName\")]\n public string ID { get; set; }\n\n [MapTo(\"User\")]\n public string MobileData { get; set; }\n\n [MapTo(\"Data\")]\n public IList<string> Data { get; }\n\n [MapTo(\"_partition\")]\n [Required]\n public string Partition { get; set; }\n }\n",
"text": "Hello,please could you help me getting list data and binary data from Atlas to a mobile device.current schemas:C# Model:thank you for all your help!",
"username": "Isaac_Cragg"
},
{
"code": "RealmList<string>",
"text": "Honestly I don’t know. I tried RealmList<string> but it didn’t create any schema for this type. Normal array doesn’t seem to be the way as you can’t just add things to it.\nI made it in the simplest way and that is - create new type that inherits from EmbeddedObject. This way it’s almost like an array of values (but in fact it’s an array of objects). Not the perfect solution but I also couldn’t find an answer to the question you have.",
"username": "Michal_Kania"
},
{
"code": "",
"text": "Didn’t you already post a similar question here?",
"username": "Jay"
},
{
"code": "",
"text": "I don’t see how it’s related and I’m also interested in the answer. How do you make an array of primitive/basic type? IList clearly doesn’t work. RealmList also didn’t make it for me. I would like to know the answer too.",
"username": "Michal_Kania"
},
{
"code": "",
"text": "Good day. Did this ever get resolved @Isaac_Cragg ?\nIt would be quite the accomplishmen if I could do array/Lists in Realm objects, but I keep hitting this limitation on RealmObjects/Embedded objects. Which puts a frustrating hole in what I want to do. Which is embed a list in favour of needing to separate the embedded list into a separate collection. THEN merge them in the application some how.I thought I saw a plan to do that on github but, being new-ish, it’s dabilitating to be stuck on this limitation.While I managed to get the nested objects into Atlas and a sync to Realm I know I won’t be able to get them out easily. Once you start inheriting Embeded objects the “unsupported” errors start. Why can I even sync them if I can’t use them on the client side? I’m either using the wrong something, I’m missing something (likely) or it’s a limitation that requires some excessive writes to hack it. To much wasted time ",
"username": "Colin_Poon_Tip"
},
{
"code": "",
"text": "@Colin_Poon_TipDo you have the exact same use case and question, which wasgetting list data and binary data from Atlas to a mobile deviceIf so, can you present some code you’ve attempted so we can see where the issue is? If not, perhaps posting a separate question with your use case would be in order.",
"username": "Jay"
},
{
"code": "",
"text": "Hey there,I believe it should work, as long as it matches up to the online schema, also please remember to set the primitive data arrays as required, as I think this threw an error without.C# property:[MapTo(“permission_ids”)]\n[Required]\npublic IList permissionIds { get; }Mongo Schema:“permission_ids”: {\n“bsonType”: “array”,\n“items”: {\n“bsonType”: “string”\n}\n},hope this helps,Isaac",
"username": "Isaac_Cragg"
},
{
"code": "",
"text": "Thanks very much @Isaac_Cragg .\nI’ve been hacking around and I think while I managed ot store array’s of objects I guess, what I’m wondering is:\nAnd forgive me if I hijack your thread, but you’re cool and helpful When you’re writting back to a local REALM though…you have to inherrit EmbeddedObject, but Lists only allow get’s but not set’s. Which I belive means every write with a change ot the embeded list means you have to create a new recored with the list initialized in construction.It certainly doesn’t compile using a set’s.\nIf that’s the case the so beit. It’s odd, but I imagine there’s a technical reason?\nI’m just trying to avoid writting the workaround to a List.Add which would be wonderbar!! Unless, I’m missing something?Thanks for you time!!\nC",
"username": "Colin_Poon_Tip"
},
{
"code": "Tender[] Payment {get; set;}",
"text": "Just as an adendum. Turns out you can’t [Require] arrays with RealmObjects.\nHowever, I hacked around the IList issue.\nI declared my properties as arrays. For example[BsonElement(“payment”)\nTender[] Payment {get; set;}\n…\nThat satisfies the compiler with inheritance on RealmObject.\nThe hack, and I assume they’ll finish IList in another driver update, is to work with Lists and when setting the property just invoke ToArray() and vise versa (ToList()).Having said all that, I’ve still yet to see a replication of my REALM to my client side. Some weird errors I’ve opened a case about. I think something’s bunged up Cheerio!!\nCPT",
"username": "Colin_Poon_Tip"
}
]
| Schema Help, for getting arrays to realm | 2022-02-07T09:10:04.768Z | Schema Help, for getting arrays to realm | 4,767 |
null | [
"dot-net"
]
| [
{
"code": "",
"text": "I am using the C# Driver (2.10.1) and I periodically get this error and I cannot find anything online so far.“Command update failed: bson length doesn’t match what we found in object with unknown _id.”I am updating a single document. I am really at a loss, any help would be appreciated.Mike",
"username": "Michael_Harris"
},
{
"code": "[BsonId]\n[BsonRepresentation(BsonType.ObjectId)]\npublic string Id { get; set; }\n",
"text": "I believe this might be to do with the BSON.ObjectId not matching to the class you’re using, are you decorating the model with something like:Mongo will automatically create an Id property when you don’t have one on your model but I’ve always found it safer to set one explicitly",
"username": "Will_Blackburn"
},
{
"code": "",
"text": "Hi @Michael_Harris, welcome!I am updating a single document. I am really at a loss, any help would be appreciated.If the suggestion from @Will_Blackburn does not solve your problem, could you provide a code snippet of the update operation that could reproduce the problem ?Regards,\nWan.",
"username": "wan"
},
{
"code": "MongoError: bson length doesn't match what we found in object with unknown _id\nbson length doesn't match what we found in object with unknown _id: MongoError: bson length doesn't match what we found in object with unknown _id\n at MessageStream.messageHandler (/usr/src/app/indy/Indy/node_modules/mongodb/lib/cmap/connection.js:253:20)\n at MessageStream.emit (events.js:311:20)\n at processMessage (/usr/src/app/indy/Indy/node_modules/mongodb/lib/cmap/message_stream.js:140:12)\n at MessageStream._write (/usr/src/app/indy/Indy/node_modules/mongodb/lib/cmap/message_stream.js:66:7)\n at doWrite (_stream_writable.js:441:12)\n at writeOrBuffer (_stream_writable.js:425:5)\n at MessageStream.Writable.write (_stream_writable.js:316:11)\n at Socket.ondata (_stream_readable.js:714:22)\n at Socket.emit (events.js:311:20)\n at addChunk (_stream_readable.js:294:12)\n at readableAddChunk (_stream_readable.js:275:11)\n at Socket.Readable.push (_stream_readable.js:209:10)\n at TCP.onStreamRead (internal/stream_base_commons.js:186:23)\n",
"text": "Hi,We also experience this issue on a regular basis. It seems to occur mostly (if not only) within a Docker container. We use the 4.2.1-bionic image from Docker Hub, but have tested with a bunch more, including 4.2.3 and the latest 4.2.5Driver: NodeJS 3.5.2\nServer: 4.2.1\nProblem: Incidental error about unknown_id, see stacktrace below\nEnvironment: Kubernetes v1.14.8. Mongo uses a single-RW persistent EXT4-volume.Note that we have not seen this on a non-Dockerized environment before. The only occasions in which this has occurred so far is within our CI (Gitlab-CI, same Docker image as on Kubernetes) and within Kubernetes.The stacktrace we get is:",
"username": "Piet_van_Agtmaal"
},
{
"code": "",
"text": "Same issue with Golang mongo-driver v1.10.1 and mongo:4.2 docker image.",
"username": "Gleb_Khodurev"
}
]
| Command update failed: bson length doesn't match what we found in object with unknown _id | 2020-02-23T04:05:27.092Z | Command update failed: bson length doesn’t match what we found in object with unknown _id | 3,813 |
null | []
| [
{
"code": "",
"text": "ERROR: child process failed, exited with 1#even after I had created /data/db manually and change permission to 777.",
"username": "Ren_Song"
},
{
"code": "",
"text": "Check mongodb.log\nThe switch/flag preceding each parameter should be double hypen\nMay be it ignored those params and trying to start mongod on default port/dbpath but you may be already have a running instance there",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Ren_Song,\nHave you disabled SElinux?\nIs the dbpath consistent with the one in the configuration file?Best Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "It is still a directory permission issue. I have already got it resolved. Thank you!",
"username": "Ren_Song"
},
{
"code": "",
"text": "It is not about SElinux. Thank you for giving me a chance to learn about SElinux!",
"username": "Ren_Song"
},
{
"code": "",
"text": "It is not about hypen. It is still a directory permission issue. Thank you for trying to help!",
"username": "Ren_Song"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongod --dbpath /data/db --fork --logpath /data/db/mongodb.log | 2022-09-25T01:53:57.374Z | Mongod –dbpath /data/db –fork –logpath /data/db/mongodb.log | 1,919 |
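For anyone hitting the same error, a hedged shell sketch of the two fixes that came up in this thread (double hyphens before long options and proper directory ownership); the mongodb user and group name may differ per install:

    # give the mongod service user ownership instead of chmod 777
    sudo mkdir -p /data/db
    sudo chown -R mongodb:mongodb /data/db

    # note the double hyphens before every long option
    mongod --dbpath /data/db --fork --logpath /data/db/mongodb.log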
null | [
"data-modeling"
]
| [
{
"code": "",
"text": "Hello,\nI am new to MongoDB and I have a question around storing image files in Mongo.I have read and I understand that there are 3 different ways to store the file:GridFSInline: Basically inbed the picture in the actual document… (I know I need to be mindful of the 16M size limit for the document)Reference: This would entail storing a Reference URL instead of the actual file (At least that is my understanding).While I am familiar with the second two methods, I am not at all familiar with the GridFS method.Here is what I need to do… I am designing an online e-commerce website that will sell products… I initially wanted to include a photo gallery of up to 3 to 5 images of the product. Because of this, I am concerned with using the inline method and I lean toward the reference method. I just wanted to check and see if I am on the right track.If Reference is the most optimal method, is there any recommendations on what storage to use with the URL? I am developing on Azure.Thanks.Respectfully,David Thompson",
"username": "David_Thompson"
},
{
"code": "",
"text": "Hi @David_Thompson ,You are correct in leaning towards referencing a hosted URL in MongoDB rather then using GridFS. Today the image hosting solutions like S3 or others are so much easier to use and cheaper so storing this data in MongoDB does not make real sense.The GridFS is a general solution when storing Binary data in MongoDB , but I would recommend it for systems that does not have access to API’s and services therefore have no other way of storing the data rather then in the database…Ty\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you… I have one more question that I need guidance on.Currently I am storing my user in a collection… I wanted to have two addresses stored with each user. One for physical address and the other for shipping. I am currently looking at embedding the address object into the User collection.My current schema is like this.User Collection\n_id\nfirst_name\nlast_name\nphone\nemail\nAddress (This is an object from the address class)\nphysical\nshippingThere are other fields, but for brevity sake I limited them to the bare necessary ones.Currently I can create a User object and load that into the database using a creatUser method and I pass it both the address object and the user object.I can get the address objects (both addresses) to load and update in the database.Where I am struggling is I created a getUser method using the email address of the user. I can get the object to the cursor so I know there is something there, but I am struggling with getting the information stored in the object. I am trying to find out if I query for the user object using the email address, does that return the user AND the address object embedded? If so… How do I access all the information in the record.I have tried the\nfor obj in user:\nprint(obj.get(user.first_name)This doesn’t return anything.Any suggestions?",
"username": "David_Thompson"
},
{
"code": "",
"text": "Hi @David_Thompson ,What driver are you using? Python?Can you provide more code ?If the query has embedded documents and you are not projecting specific fields you should get back the entire document and should be able to iterate that as any other json object in your client.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I am sorry for the late response… I figured it out. I am new to Python and I didn’t realize I needed to access fields using so basically I had the object in memory, and to access it I needed to enclose the fields in like this. print(obj[‘first_name’]) where obj is the variable that was created with the query.",
"username": "David_Thompson"
}
]
| Working with Image Files in MongoDB | 2022-09-27T08:42:29.435Z | Working with Image Files in MongoDB | 3,739 |
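A tiny mongosh sketch of the reference pattern recommended in the thread above: store only the hosted image URLs on the product document. The URLs, collection and field names are made up for illustration:

    db.products.insertOne({
      name: "Example product",
      price: 19.99,
      // gallery of 3-5 externally hosted images (Azure Blob Storage, S3, etc.)
      images: [
        "https://example.blob.core.windows.net/catalog/prod-1-front.jpg",
        "https://example.blob.core.windows.net/catalog/prod-1-back.jpg"
      ]
    })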
null | [
"indexes"
]
| [
{
"code": "{\n \"LookupId\": \"2713b525-d8f3-4f20-8bbd-5a5ec02108c5\",\n \"storeDate\": \"2022-10-10T04:30:38.394Z\",\n \"count\": 6871\n}\n",
"text": "I have a simple modelbut want to put a constraint on creating more than one LookupId per day.\nIf I index on storeDate then I can have a record every millisecond, I only want one LookupId records for each day.\nDo I have to create another property with just the date and then index on that … or ?",
"username": "Glen_Worrall"
},
{
"code": "\"storeDate\": \"2022-10-10T00:00:00.000Z\",",
"text": "I only see 2 different ways to do that.\"storeDate\": \"2022-10-10T00:00:00.000Z\",I would prefer option 2.",
"username": "steevej"
},
{
"code": " const aDate = new Date();\n aDate.setHours(0,0,0,0)\n",
"text": "Thanks Steeve,\nI went for Option 2 which was quite easily controlled from my app",
"username": "Glen_Worrall"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unique Constraint for daily records | 2022-10-16T05:54:01.440Z | Unique Constraint for daily records | 1,579 |
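A compact mongosh/Node sketch of option 2 as it was adopted in the thread above: normalise storeDate to midnight before writing and back it with a unique compound index, so a second insert for the same LookupId on the same day is rejected. The collection name is assumed, and UTC midnight is used here rather than the local-time setHours shown in the thread:

    // one-time index
    db.lookupCounts.createIndex({ LookupId: 1, storeDate: 1 }, { unique: true })

    // app side: truncate the date to UTC midnight before inserting
    const day = new Date();
    day.setUTCHours(0, 0, 0, 0);
    db.lookupCounts.insertOne({
      LookupId: "2713b525-d8f3-4f20-8bbd-5a5ec02108c5",
      storeDate: day,
      count: 6871
    })
    // a second insert for the same LookupId + day now fails with an E11000 duplicate key error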
null | [
"swift"
]
| [
{
"code": "let drugMonographURLs = [1.json,2.json,.......6000.json]\nvar drugs = [RLMDrug]()\n\ndrugMonographURLs.forEach { item in\n queue.async(group: group) {\n do {\n let data = try Data(contentsOf: item)\n let drug = try JSONDecoder().decode(RLMDrug.self, from: data)\n drugs.append(drug)\n }\n catch {\n print(error)\n }\n }\n }\nlet drug = try JSONDecoder().decode(RLMDrug.self, from: data)\n",
"text": "There are close to 6000 JSON files I am downloading from backend. Later I will parse and insert the same records into RealmDB with below piece of codeAt some point of time app is getting crashed immediately after this lineand getting error saying malloc: double free for ptr 0x7f8d4d1c5e00",
"username": "Basavaraj_KM1"
},
{
"code": "queueRLMDrug",
"text": "Hi @Basavaraj_KM1,Your code snippet is rather incomplete (what’s queue, for example?), and doesn’t seem to involve Realm at all: while presumably RLMDrug is an object that would ultimately end up in a Realm DB, within this specific sample all the objects are still unmanaged, so the SDK doesn’t seem to touch anything.Last but not least, what’s your use case? What’s the rationale of having thousands of individual JSON files, instead of more manageable alternatives (for example, applying the data directly on the backend, and let Device Sync fill up the clients)?",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Yes this issue we are getting before accessing Realm instance at the time of parsing the datalet drug = try JSONDecoder().decode(RLMDrug.self, from: data)",
"username": "Basavaraj_KM1"
}
]
| App is getting crashed randomly with error malloc: double free for ptr | 2022-10-13T16:42:17.156Z | App is getting crashed randomly with error malloc: double free for ptr | 2,169 |