image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [] | [
{
"code": "",
"text": "Small db less than 1000 documents\"Your cluster is being upgradedIt will be unavailable for a few minutes during the conversion\"Stuck for over 12 hours now. Chat support unresponsive. Need help",
"username": "Acs"
},
{
"code": "",
"text": "Hi @AcsWelcome to MongoDB community.This metter is best covered by our support plans. Please register to one of our support plans and contact support.This can be done simply by clicking the support tab and registering.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Acs, This sounds really frustrating: I’m sorry to hear this happened. Did you ultimately get help?-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "@Acs did you ever resolve this? I’ve just done M0 to M2 and it’s stalled upgrading with the message:Your cluster is being upgraded…\nIt will be unavailable for a few minutes during the conversion.",
"username": "Neil_Docherty"
},
{
"code": "",
"text": "I’ve also done an M0 to M2 upgrade and its been stuck for serveral hours with the same message. My prod DB is down and I cannot even access the backups. This is really bad.",
"username": "Jordan_Burgess"
},
{
"code": "",
"text": "Hi @Jordan_Burgess and @Neil_Docherty ,As mentioned before, those issues are covered by Atlas support.Please note that the M0 cluster is a for development and getting started purposes, it is not for production use.thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Were you able to figure out the issue and resolved it?\nI am stuck now at the same place. This is such idiocity of MonGo to make user get stuck and then push them to buy such highly expensive Support.",
"username": "Rajath_S_K"
},
{
"code": "",
"text": "I think I ended up just scrapping it and repopulating the data (luckily that was an option). I left the company I was doing this for over a year ago so I don’t have access to notes, etc… anymore to give a more concrete answer. We were moving to the hire tier to get backups working as we moved out of the development and into the production phase of the project. The irony of the situation wasn’t lost.",
"username": "Neil_Docherty"
},
{
"code": "",
"text": "The developer support tier has a FREE trial as well as the existing FREE in-app support:\nimage1699×1005 136 KB\n",
"username": "chris"
},
{
"code": "",
"text": "Hello, i’m facing the same issue and I don’t mind paying for support because we have our users database on there.Tried to register for support plan but I couldn’t create a case.",
"username": "Jeroid_Limited"
},
{
"code": "Developer & Premium Support Plans",
"text": "Hi @Jeroid_Limited,Please contact the atlas in-app chat support regarding raising a support case but also mention the stuck upgrade. In saying so, the same linked page has the instructions on how to create a support case in the Developer & Premium Support Plans tab.Regards,\nJason",
"username": "Jason_Tran"
}
] | Cluster upgrade stuck for over 12 hours | 2021-04-12T03:14:23.279Z | Cluster upgrade stuck for over 12 hours | 4,831 |
null | [
"aggregation",
"performance"
] | [
{
"code": "",
"text": "Hi, I’m trying to find why one of my aggregation is choking the server on 5.0.18 but is working fine on 5.0.13 (the issue was discovered after yesterday’s upgrade and fixed today by downgrading).I’m using Java (reactive) driver, latest version. No changes were applied to indexes.From my initial match, I’m doing 2 lookups using a $match and $eq with a variable. One of the lookup has a sub-lookup using the basic form (local/foreign field).i.e. Collection A → Collection B and Collection C matching indexed fields of B and C from a variable of A. Then Collection C → D using the simple form on a indexed field of D.It jumps from a sub-sec. query to choking one CPU for minutes.My current guess is that the 2 $match using a variable from A are not using the indexes on B/C anymore…",
"username": "Jean-Francois_Lebeau"
},
{
"code": "db.collection.explain(\"executionStats\").aggregate(...)db.collection.explain(\"executionStats\").aggregate(...)",
"text": "Hi @Jean-Francois_Lebeau,I assume nothing changed (same query, same server) except the MongoDB version. Could you provide the following information if possible (possibly from a v5.0.18 test server for 2.):Otherwise if a 5.0.18 test environment is not available for 2. above then please provide:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Major performance hit for aggregation with lookup from 5.0.13 to 5.0.18 | 2023-07-04T19:30:40.432Z | Major performance hit for aggregation with lookup from 5.0.13 to 5.0.18 | 590 |
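For readers hitting a similar regression, the explain output Jason asks for is the main diagnostic. A minimal mongosh sketch, using placeholder collection and field names (orders, items, itemId, status) rather than the poster's actual schema:

```javascript
// Capture execution stats for the slow pipeline on each server version and diff them.
const pipeline = [
  { $match: { status: "active" } },
  { $lookup: {
      from: "items",
      let: { pid: "$itemId" },
      pipeline: [
        { $match: { $expr: { $eq: ["$_id", "$$pid"] } } }
      ],
      as: "itemData"
  } }
];

const stats = db.orders.explain("executionStats").aggregate(pipeline);
printjson(stats); // look for the inner $match switching from IXSCAN to COLLSCAN between versions
```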
null | [] | [
{
"code": "",
"text": "When I try to update:\natlas alerts settings update --projectId --notificationType MICROSOFT_TEAMS\nI get the following error:\n400 (request “MISSING_ATTRIBUTE”) The required attribute eventTypeName was not specified.\nBut when I try to add this:\nError: unknown flag: --microsoftTeamsWebhookURL",
"username": "Sandro_Silva1"
},
{
"code": "atlascliatlascliatlascli",
"text": "Thanks for raising this one @Sandro_Silva1,This has recently been added to version 1.9.1 of the atlascli. Please check out the 1.9.1 change logs for more details regarding the ability to configure microsoft teams alerts via the atlascli.Of course you’ll need to update atlascli to 1.9.1 first and then test out the options. Let me know if you require further assistance with configuring the microsoft teams alert.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | (request "MISSING_ATTRIBUTE") The required attribute eventTypeName was not specified | 2023-05-23T16:45:52.170Z | (request “MISSING_ATTRIBUTE”) The required attribute eventTypeName was not specified | 458 |
null | [
"java",
"containers"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-06-28T11:46:29.202+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22988, \"ctx\":\"conn41\",\"msg\":\"Error receiving request from client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":141,\"codeName\":\"SSLHandshakeFailed\",\"errmsg\":\"SSL handshake received but server is started without SSL support\"},\"remote\":\"10.84.0.25:33564\",\"connectionId\":41}}\n{\"driver\": {\"name\": \"mongo-java-driver|sync|spring-boot\", \"version\": \"4.8.2\"}, \"os\": {\"type\": \"Linux\", \"name\": \"Linux\", \"architecture\": \"amd64\", \"version\": \"5.10.162+\"}, \"platform\": \"Java/BellSoft/17.0.7+7-LTS\"}\n",
"text": "Hi there,My coworkers and I have configured a replica-set on Docker using MongoDB 6.0. We are facing an error when we connect a microservice using java client. Our config files for each member have tls.mode=disabled and microservice connection-string have tls=false, however primary server logs show this:We asume, we should configure tls on server (it looks like it is mandatory) but, why is mongo driver trying to open a connection using tls if connection string sets option to false? this is the connection-string: mongodb+srv://customers:[email protected]/customers?tls=falseThe driver information we can see in microservice logs:Thanks for your time!",
"username": "Alejandro_Nino"
},
{
"code": "",
"text": "What did you see when the java code tries to connect to the server using that connection string? Can the code successfully connect?i see the server log says severity info, not an error.",
"username": "Kobe_W"
},
{
"code": "Unsatisfied dependency expressed through method 'createCustomerUseCase' parameter 0: Error creating \nbean with name 'customerRepository' defined in com.company.pocmongodb.infrastructure.CustomerRepository defined in @EnableMongoRepositories declared on MongoRepositoriesRegistrar.EnableMongoRepositoriesConfiguration: Cannot resolve refere\nnce to bean 'mongoTemplate' while setting bean property 'mongoOperations' \n at com.mongodb.ConnectionString.<init>(ConnectionString.java:410) \n at org.springframework.boot.autoconfigure.mongo.MongoPropertiesClientSettingsBuilderCustomizer.applyHostAndPort(MongoPropertiesClientSettingsBuilderCustomizer.java:62) \n at org.springframework.boot.autoconfigure.mongo.MongoPropertiesClientSettingsBuilderCustomizer.customize(MongoPropertiesClientSettingsBuilderCustomizer.java:51) \n at org.springframework.boot.autoconfigure.mongo.MongoClientFactorySupport.customize(MongoClientFactorySupport.java:55) \n at org.springframework.boot.autoconfigure.mongo.MongoClientFactorySupport.createMongoClient(MongoClientFactorySupport.java:49) \n at org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration.mongo(MongoAutoConfiguration.java:52) \n at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) \n at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) \n at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) \n at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:139) [1 skipped] \n at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:655) \n ... 94 common frames omitted\n2023-06-29 09:18:06,600 [.com:30002] DEBUG [o.m.d.cluster] Updating cluster description to {type=REPLICA_SET, servers=[{address=mongo_3.ns.company.com:30003, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.Mon\ngoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake}, caused by {java.io.EOFException: SSL peer shut down incorrectly}}, {address=mongo_2.ns\n.company.com:30002, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake}, caused by {ja\nva.io.EOFException: SSL peer shut down incorrectly}}, {address=mongo_1.ns.company.com:30001, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {jav\nax.net.ssl.SSLHandshakeException: Remote host terminated the handshake}, caused by {java.io.EOFException: SSL peer shut down incorrectly}}] \n2023-06-29 09:18:06,601 [.com:30001] DEBUG [o.m.d.connection] Closing connection connectionId{localValue:21} \n2023-06-29 09:18:06,601 [.com:30001] DEBUG [o.m.d.cluster] Updating cluster description to {type=REPLICA_SET, servers=[{address=mongo_3.ns.company.com:30003, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.Mon\ngoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake}, caused by {java.io.EOFException: SSL peer shut down incorrectly}}, {address=mongo_2.ns\n.company.com:30002, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake}, caused by 
{ja\nva.io.EOFException: SSL peer shut down incorrectly}}, {address=mongo_1.ns.company.com:30001, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {jav\nax.net.ssl.SSLHandshakeException: Remote host terminated the handshake}, caused by {java.io.EOFException: SSL peer shut down incorrectly}}] \n2023-06-29 09:18:06,602 [.com:30002] DEBUG [o.m.d.connection] Closing connection connectionId{localValue:20} \n2023-06-29 09:18:06,602 [.com:30003] DEBUG [o.m.d.connection] Closing connection connectionId{localValue:24}\n",
"text": "Hi @Kobe_W, thanksOur microservice does not run, we see this in the log:If we remove tls option from connection-string, microservice deploys but it fails to connect to the databaseI’ve been re-reading this part of the Mongo documentation DNS seed list Note and checking the ConnectionString.java at 410 line. And, it looks like in order to works we have to add ssl AND tls option.",
"username": "Alejandro_Nino"
},
{
"code": "",
"text": "Our microservice does not run, we see this in the log:tls needs to be disabled in connection string as server doesn’t have the support. So you will have to fix this exception.",
"username": "Kobe_W"
},
{
"code": "+srvtlsssltruefalsetls=falsessl=falsemongodb+srv://customers:[email protected]/customers?tls=false&ssl=false&readPreference=secondary\n",
"text": "At the end, this is the solution. We had to disable tls and ssl explicitly in the connection string. We misunderstood this part of the mongo documentation:Use of the +srv connection string modifier automatically sets the tls (or the equivalent ssl) option to true for the connection. You can override this behavior by explicitly setting the tls) (or the equivalent ssl) option to false with tls=false (or ssl=false ) in the query string.This is connection string we are using now:",
"username": "Alejandro_Nino"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB community disabled TLS Java client error | 2023-06-28T22:04:54.708Z | MongoDB community disabled TLS Java client error | 981 |
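As a quick sanity check, the final connection string can be tested from mongosh before wiring it back into the Java service; this is only a sketch and reuses the placeholder host/credentials from the thread:

```sh
mongosh "mongodb+srv://customers:[email protected]/customers?tls=false&ssl=false" \
  --eval "db.runCommand({ ping: 1 })"
```

A `{ ok: 1 }` reply confirms the handshake works without TLS, which isolates any remaining failure to the driver configuration.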
null | [
"aggregation",
"compass",
"golang"
] | [
{
"code": "bson.A{\n bson.D{\n {\"$lookup\",\n bson.D{\n {\"as\", \"lwData\"},\n {\"from\", \"invUnits\"},\n {\"let\", bson.D{{\"pid\", \"$_id\"}}},\n {\"pipeline\",\n bson.A{\n bson.D{\n {\"$match\",\n bson.D{\n {\"$expr\",\n bson.D{\n {\"$eq\",\n bson.A{\n \"$productId\",\n \"$$pid\",\n },\n },\n },\n },\n {\"productType\", \"assort\"},\n {\"transfers.soldInfo\", bson.D{{\"$exists\", false}}},\n },\n },\n },\n bson.D{{\"$group\", bson.D{{\"_id\", 1}}}},\n },\n },\n },\n },\n },\n}\n[\n [\n {\n \"Key\": \"$lookup\",\n \"Value\": [\n {\n \"Key\": \"as\",\n \"Value\": \"lwData\"\n },\n {\n \"Key\": \"from\",\n \"Value\": \"invUnits\"\n },\n {\n \"Key\": \"let\",\n \"Value\": [\n {\n \"Key\": \"pid\",\n \"Value\": \"$_id\"\n }\n ]\n },\n {\n \"Key\": \"pipeline\",\n \"Value\": [\n [\n {\n \"Key\": \"$match\",\n \"Value\": [\n {\n \"Key\": \"$expr\",\n \"Value\": [\n {\n \"Key\": \"$eq\",\n \"Value\": [\n \"$productId\",\n \"$$pid\"\n ]\n }\n ]\n },\n {\n \"Key\": \"productType\",\n \"Value\": \"assort\"\n },\n {\n \"Key\": \"transfers.soldInfo\",\n \"Value\": [\n {\n \"Key\": \"$exists\",\n \"Value\": false\n }\n ]\n }\n ]\n }\n ],\n [\n {\n \"Key\": \"$group\",\n \"Value\": [\n {\n \"Key\": \"_id\",\n \"Value\": 1\n }\n ]\n }\n ]\n ]\n }\n ]\n }\n ]\n]\nbson.A{\n\t\tbson.M{\"$lookup\": bson.M{\n\t\t\t\"from\": \"invUnits\",\n\t\t\t\"let\": bson.M{\"pid\": \"$_id\"},\n\t\t\t\"as\": \"lwData\",\n\t\t\t\"pipeline\": bson.A{\n\t\t\t\tbson.M{\"$match\": bson.M{\n\t\t\t\t\t\"productType\": product.Type.Assortment,\n\t\t\t\t\t\"$expr\": bson.M{\"$eq\": bson.A{\"$productId\", \"$$pid\"}},\n\t\t\t\t\t\"transfers.soldInfo\": bson.M{\"$exists\": false}},\n\t\t\t\t},\n\t\t\t\tbson.M{\"$group\": bson.M{\"_id\": 1}},\n\t\t\t},\n\t\t}},\n\t}\n[\n {\n \"$lookup\": {\n \"as\": \"lwData\",\n \"from\": \"invUnits\",\n \"let\": {\n \"pid\": \"$_id\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n \"$productId\",\n \"$$pid\"\n ]\n },\n \"productType\": \"assort\",\n \"transfers.soldInfo\": {\n \"$exists\": false\n }\n }\n },\n {\n \"$group\": {\n \"_id\": 1\n }\n }\n ]\n }\n }\n]\n{$match: {\n $or: [ { type: \"product\"}, {type: \"virtual\"} ],\n $or: [ { sku: \"1234\"}, {sku: \"5678\"} ],\n}}\n",
"text": "It’s great that you’ve added the export to Go language, but I think we should have the option to export in bson.M{} instead of the bson.D{} it is right now.\nThe reason is that it’s a lot easier on big pipelines to simply export it on a file, and then copy and paste that directly to Compass. While the bson.D requires extensive manipulations of that export before injecting it into Compass.\nHere is an example of an export from Compass:The output of that is:With all the Key and Value, we need to go through each of them and modify, or have to rewrite the full thing, which can cause some problem if you do any mistakes, not even talking about the time to do it on a 10+ pipeline!Now the same thing with bson.MAnd the output:As you can see, it’s a lot more condensed, also you can just use that output directly!I’m aware that in some situation it may fail, for exampleBut it seems that this is already not allowed, so I think it’s safe.",
"username": "Shadoweb_EB"
},
{
"code": "",
"text": "Is it something that you can consider doing? It would save me quite some time to have it, because for now I can’t use the Go export as it takes 5x more time to format the data than it is with a normal JSON.",
"username": "Shadoweb_EB"
},
{
"code": "",
"text": "For people curious how to do it, you can use ChatGPT, it converts very well.",
"username": "Shadoweb_EB"
}
] | Compass Export Pipeline to bson.M{} instead of bson.D{} | 2022-12-18T13:36:51.648Z | Compass Export Pipeline to bson.M{} instead of bson.D{} | 1,905 |
null | [] | [
{
"code": "",
"text": "Hello Team @Shane_McAllisterI have been looking for a document that will help in configuring a Maximum Availability for reliability for my MongoDB clusters(combination of Azure cloud and on-premise nodes), Can anyone help with an official document of the implementation or any other based on experience.Thank you,\nAyo",
"username": "Ayo_Exbizy"
},
{
"code": "",
"text": "Not sure what else you need apart from this.",
"username": "Kobe_W"
}
] | Hybrid Implementation of MongoDB clusters(between On-Premises and Azure) | 2023-07-08T13:26:56.621Z | Hybrid Implementation of MongoDB clusters(between On-Premises and Azure) | 372 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I am building an API backend for a social media platform where users can post blogs and other user can like, share and comment on it.Along with the basic details of the blog post (title, image, author name and photo), the API should returnI would like to know the best schema design for this requirement. I am planning to store blog content, like and comment in one collection. Please suggest",
"username": "Ummer_Irshad"
},
{
"code": "{\n_id : ...\nTitle : ...\nPostTime : ... \nnLikes : ...\nNumComments : ...,\nAuthor : { \n UserId : ...,\n ProfilePic : ..., \n Name : ....\n}\n...\n}\n{\n _id : ...,\n BlogId : ... // Reference to blogs\n PostTime : ... ,\n Text : ... \n Author : { \n UserId : ...,\n ProfilePic : ..., \n Name : ....\n}\n,\nnLikes : ...\n}\n{\n _id : ...,\n ReferenceId : ... // Reference to blogs or comments\n LikeTime : ... ,\n Author : { \n UserId : ...,\n ProfilePic : ..., \n Name : ....\n}\n",
"text": "Hi @Ummer_Irshad ,Schema design may vary between different requirements and application data access.The main guidance is that data that is queried together should be stored together in a document while not hitting any known antipattern.With the example you describe here and the limited information it sounds like you can use the following schema (extended reference pattern):Blogs collectionComments collectionLikes collection :Let me know if that makes sense.Read the following:Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!Get a summary of the six MongoDB Schema Design Anti-Patterns. Plus, learn how MongoDB Atlas can help you spot the anti-patterns in your databases.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{\n\tid: { type: String, },\n\ttype: { type: String, required: true, enum: ['post', 'comment'] },\n\tpost_uid: { type: String, required: true },\n\tcontent: { type: String, required: false },\n\tauthor_id: { type: Number, required: true },\n\ttimestamp: { type: Date, required: true },\n\tvote_count: { type: Number, required: true },\n\tcomment_count: { type: Number, required: true },\n\tparent_id: { type: String, required: false },\n\tvotes: [\n\t\t{\n\t\t\tvote_id: { type: String},\n\t\t\tvoted_user_id: { type: Number, required: true },\n\t\t\ttimestamp: { type: String, required: true },\n\t\t},\n\t],\n})\n",
"text": "Thanks Pavel for your quick response.One drawback I could see here is about embedding Author info inside collection. So, as and when a user updates the profile pic, we have to update it in all of his blogs, comments and likes? Is there any way to handle this?What about the below schema? Here, we are storing blog and comments in a single collection, with a field “type” to distinguish it. If it is a comment, then we will keep the blogId in it’s “parent_id” field. Then, we need to have separate collection for “Users”. And, of course, we need to run a second query to get the profile pic.Any suggestions?",
"username": "Ummer_Irshad"
},
{
"code": "https://hosting.com/user_123456/profile_current.jpg\n",
"text": "Hi @Ummer_Irshad ,Well having the documents in a single collection or 2 seperate collections is not that impacting as long as you will need the same amount of reads/queries to fetch it.You can potentially store just pointers to the profile pic of each user in the users collection.However, another idea is to keep the most updated profile pic in a users generic name and to overwrite this picture name with the new one each time. This way you will not need to have the updates. And you will get history of pictures as well .Regarding the votes (likes) in an array just note that there is a possible unbound arrays there if this grows to thousands or more in a post or comment that is highly popular … Consider if that is ok to go that path or you need outlier pattern…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny\ni think he means embedding Author in the document (that exists in the Blogs collection)\nso what about this situation ?\nif an Author’s name for example change, we have then to query all the blogs of that author and update them\nis there a better way of doing this ??\ncould you explain further this idea : You can potentially store just pointers to the profile pic of each user in the users collection.",
"username": "Khammassi_HoussemEdd"
},
{
"code": "",
"text": "Hi @Khammassi_HoussemEdd ,How many scenarios do change previous posts auther name? Does it make sense for applications?In case you still want to follow this scenario you can have several options:The idea is to avoid any kind of lookup for user details for a generic posts view.Ty",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "you can use populate() of mongoose.\nUsing this we just have to provide reference of the author in the blogs.\nyou can read more about it here mongoose populate",
"username": "Aditya_Dubey"
},
{
"code": "",
"text": "I’m not a specialist in this domain; however, I think you should get in touch with content creators with a large fanbase that can help you with your problem. Social media platforms are used for various purposes, from entertainment to business, and if you want to become famous, you should post quality content.",
"username": "Jennifer_Colocase"
},
{
"code": "",
"text": "It’s been a hot minute since this thread was active. Anyway, here’s my two cents: I’d recommend splitting your data into separate collections - one for ‘Posts’, another for ‘Comments’, and finally ‘Likes’. This would make your model more flexible and efficient.",
"username": "Alex_Deer"
},
{
"code": "",
"text": "Hey all, I have a question, when we have separate collections for comments and likes, but also want to keep track of the number of likes and comments that each Blog document has (like in @Pavel_Duchovny 's example), what would be a good way to keep the new comment/like document insertion/deletion operation and the number of comments/likes incrementation/drecrementation operation atomic so that they are in sync?",
"username": "fried_empanada"
}
] | What is the best schema for a blog post for storing the blog content, like, share and comment? | 2021-11-11T11:18:44.837Z | What is the best schema for a blog post for storing the blog content, like, share and comment? | 21,944 |
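On the last question about keeping the like document and the counter in sync: one option is a multi-document transaction. A rough mongosh sketch, where the database name, collection names and the ids/author values are placeholders following Pavel's earlier layout; a replica set is assumed, since transactions do not run on a standalone mongod:

```javascript
// Insert a like and bump the post's counter in one transaction.
const blogId = ObjectId("64a84bbf6182051f094132ae"); // placeholder _id of the post being liked
const session = db.getMongo().startSession();
const blogs = session.getDatabase("blog").getCollection("blogs");
const likes = session.getDatabase("blog").getCollection("likes");

session.startTransaction();
try {
  likes.insertOne({
    ReferenceId: blogId,
    LikeTime: new Date(),
    Author: { UserId: "user_123", Name: "Some User" } // placeholder author
  });
  blogs.updateOne({ _id: blogId }, { $inc: { nLikes: 1 } });
  session.commitTransaction(); // both writes become visible together
} catch (e) {
  session.abortTransaction();  // neither write is applied on failure
  throw e;
} finally {
  session.endSession();
}
```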
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "I have loaded a json file in mongo then i have performed some operation on that data and i have to save the query result into a textfile but when i am runing that query in mongo shell it is working fine and showing me data as per the condition but when i come out of mongo shell and try to store my query data into txt file it is showing syntax error. I am using eco",
"username": "ANIKESH_KUMAR"
},
{
"code": "",
"text": "What error are you getting and how are you trying to extract the data? Redirect console output from a shell or using mongoexpprt or something else?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Redirecting consol output from shell .",
"username": "ANIKESH_KUMAR"
},
{
"code": "",
"text": "And whats the error? Whats the command you are running?",
"username": "John_Sewell"
},
{
"code": "",
"text": "(echo “use survey”; echo “db.details.find({age:{$gt:60}})”) | mongo > medical.txt",
"username": "ANIKESH_KUMAR"
},
{
"code": "",
"text": "In medical.txt file its showing syntax error",
"username": "ANIKESH_KUMAR"
},
{
"code": "JSON.stringify(db.getSiblingDB('survey').getCollection('details').find({age:{$gt:60}}).toArray())",
"text": "Can you paste the error exactly you’re getting?Try doing:JSON.stringify(db.getSiblingDB('survey').getCollection('details').find({age:{$gt:60}}).toArray())Or something similar.",
"username": "John_Sewell"
},
{
"code": "",
"text": "\n168885042222663994852849430712691920×886 321 KB\nAttaching image of the error accuring. Please let me know if you find the image",
"username": "ANIKESH_KUMAR"
},
{
"code": "",
"text": "\n168885101788755191851196067618731920×886 257 KB\nThis is the command i am using",
"username": "ANIKESH_KUMAR"
},
{
"code": "mongosh --quiet --eval \"db.getSiblingDB('survey').getCollection('details').find()\"\n",
"text": "I could not get that working like that either on a command line, however this does work:I’d recommend swapping to the new mongosh as opposed to the legacy mongo shell, for a start the quiet option now works so doing this results in output that does not have extra text.You could also use mongoexport:With the filtering options.",
"username": "John_Sewell"
}
] | Mongodb storing file into .txt | 2023-07-08T18:08:44.606Z | Mongodb storing file into .txt | 403 |
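As an alternative to redirecting shell output, mongoexport can apply the same filter and write straight to a file; a sketch assuming the survey database and details collection from the thread:

```sh
# Export only the matching documents to a file (one JSON document per line).
mongoexport --db=survey --collection=details \
  --query='{ "age": { "$gt": 60 } }' \
  --out=medical.json
```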
null | [
"database-tools",
"schema-validation"
] | [
{
"code": "",
"text": "I created a collection also defined validation rule for the collection using $jsonSchema now when I exported the collection and import in another database i am not seeing the #jsonschema defined in my source database not getting imported in destination DB is there any other way of doing it or i am missing anything here please do guide mekeyan",
"username": "karthikeyan"
},
{
"code": "",
"text": "Mongoexport is a tool for generally just getting data out, I’m not sure it’ll do that. We use it for exporting to CSV etc for feeding to other systems or it’ll also output to JSON format.Mongodump however, will do a lot more, i.e. rebuild indexes on restore etc.Testing locally, I created a collection with a validator, dumped it using mongodump and restored it somewhere else, the new location had the schema validator included.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks john for the reply didn’t try with Mongodump . I thought how oracle export works similarly Mongoexport also works.",
"username": "karthikeyan"
}
] | Why jsonSchema metadata not exported along with mongoexport | 2023-07-08T10:24:10.416Z | Why jsonSchema metadata not exported along with mongoexport | 564 |
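For reference, a sketch of the dump/restore round trip described above, with placeholder hosts and names, plus a mongosh call to confirm the validator made it across:

```sh
# Dump the source database; collection options such as $jsonSchema validators are included.
mongodump --uri="mongodb://source-host:27017" --db=mydb --out=./dump

# Restore into the destination deployment.
mongorestore --uri="mongodb://dest-host:27017" ./dump

# Verify on the destination (run in mongosh):
#   db.getCollectionInfos({ name: "mycollection" })[0].options.validator
```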
null | [
"aggregation",
"node-js",
"mongoose-odm",
"transactions"
] | [
{
"code": "withdrawadd\n{\n \"data\": [\n {\n \"_id\": \"64a84bbf6182051f094132ae\",\n \"amount\": 850,\n \"transactionType\": \"add\",\n \"createdAt\": \"2023-07-06T17:30:39.065Z\"\n },\n {\n \"_id\": \"64a84bc76182051f094132b2\",\n \"amount\": 2850,\n \"transactionType\": \"add\",\n \"createdAt\": \"2023-07-06T17:30:47.379Z\"\n },\n {\n \"_id\": \"64a84bd16182051f094132b6\",\n \"amount\": 740,\n \"transactionType\": \"add\",\n \"createdAt\": \"2023-07-07T17:30:57.994Z\"\n },\n {\n \"_id\": \"64a84c2c6182051f094132c0\",\n \"amount\": 1400,\n \"transactionType\": \"withdraw\",\n \"createdAt\": \"2023-07-07T17:32:28.868Z\"\n }\n ]\n}\nconst data = await Activity\n .aggregate()\n .match({\n createdAt: {$gt: new Date(new Date(new Date().getTime() - 1000*60*60*24*30))},\n transactionType: {$ne: \"convert\"},\n createdBy: {$eq: new ObjectId(req.user.userId)}\n })\n .project(\"transactionType createdAt amount\")\ngroup(\n {\n _id: { date: \"$createdAt\", type: \"$transactionType\"},\n totalAmount: { $sum: \"$amount\"}\n }\n )\n{\n \"data\": [\n {\n \"_id\": {\n \"date\": \"2023-07-07T17:32:28.868Z\",\n \"type\": \"withdraw\"\n },\n \"totalAmount\": 1400\n },\n {\n \"_id\": {\n \"date\": \"2023-07-06T17:30:47.379Z\",\n \"type\": \"add\"\n },\n \"totalAmount\": 2850\n },\n {\n \"_id\": {\n \"date\": \"2023-07-06T17:30:39.065Z\",\n \"type\": \"add\"\n },\n \"totalAmount\": 850\n },\n {\n \"_id\": {\n \"date\": \"2023-07-07T17:30:57.994Z\",\n \"type\": \"add\"\n },\n \"totalAmount\": 740\n }\n ]\n}\nwithdrawadd{\n \"_id\": {\n \"date\": \"2023-07-07T17:30:57.994Z\",\n \"xxx\": [{\n \"type\": \"add\",\n \"totalAmount\": 4450\n }, {\n \"type\": \"withdraw\",\n \"totalAmount\": 1400\n }\n },\n \n },\n\n \"_id\": {\n \"date\": \"other date\",\n \"xxx\": \"same data as above\"\n },\n .......\n },\n\n}\n",
"text": "Hello everyone. I’m currently learning Express and MongoDB using Mongoose in my apps.I have a case in that I want to get data grouped by date and type.\nin another explanation:\nI want to get the total amount for each withdraw and add per day.this is the data that I have at this stage:I’m having this result from this query:I tried to add this group condition but its group just the date:this is what I got from my group function:My wish is to have a result like this or any better way that could provide me the total amount for each withdraw and add per day.I’ll be grateful for any kind of help and thanks",
"username": "Oussama_Louelkadi"
},
{
"code": "db.getCollection(\"Test\").aggregate([\n{\n $group:{\n _id:'$createdAt',\n transactions:{$push:{type:'$transactionType', totalAmount:'$amount'}}\n }\n},\n{\n $project:{\n _id:0,\n date:'$_id',\n transactions:1\n }\n}\n])\n",
"text": "Something like this?Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "John_Sewell"
},
{
"code": "createdAt.group(\n {\n _id: { \n day: {$dateToString: { format: \"%Y-%m-%d\", date: \"$createdAt\"}} ,\n type: \"$transactionType\"\n },\n totalAmount: { $sum: \"$amount\"}\n }\n )\n",
"text": "I have had an issue with the createdAt. since it’s a DateTime type I couldn’t group my records with it.I have changed the group function and date format, and I find an acceptable result:",
"username": "Oussama_Louelkadi"
},
{
"code": "createdAt",
"text": "Hello, John. Thanks for your time and your efforts.I have had an issue with the createdAt. since it’s a DateTime type I couldn’t group my records with it.please check my solution below. I have changed the date format so I can group my records.then let me know your comments there if I can do that in a better way.Thanks, Again!",
"username": "Oussama_Louelkadi"
},
{
"code": "",
"text": "I did wonder about the date time precision in the output!\nDid you really want things grouped by day with an array of summed transaction types or what you currently have, as per your last message?",
"username": "John_Sewell"
},
{
"code": "db.getCollection(\"Test\").aggregate([\n{\n $sort:{\n createdAt:1\n }\n},\n{\n $group:{\n _id:{\n day:{$dateToString: { format: \"%Y-%m-%d\", date: \"$createdAt\"}},\n tranType:'$transactionType'\n \n },\n amount:{$sum:'$amount'}\n }\n},\n{\n $group:{\n _id:'$_id.day',\n transactions:{\n $push:{\n type:'$_id.tranType',\n amount:'$amount'\n }\n }\n }\n},\n{\n $project:{\n _id:0,\n date:'$_id',\n transactions:1\n }\n}\n])\n",
"text": "Something like this will get you your original desired output:Note if you add an index on the createdDate and start with a sort, it’ll hit it if that’s as desired to avoid a colscan. You had a $match in the original post so as long as that’s supported by indexes you should be good.",
"username": "John_Sewell"
}
] | Group By Multiple fields | 2023-07-07T18:23:06.218Z | Group By Multiple fields | 587 |
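Following John's note about index support, a minimal sketch; the collection name is assumed from the Mongoose Activity model and may differ:

```javascript
// Lets the leading $match on createdBy/createdAt (and the sort on createdAt) be served by an index.
db.activities.createIndex({ createdBy: 1, createdAt: 1 });
```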
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "I accidentally removed the admin account and decided that the easiest solution was to reinstall mongodb community edition version 6.After a series of unsuccessful attempts I have managed to get the service running. But when I\n$ mongoshI’m presented with:-bash: /usr/bin/mongosh: No such file or directoryI’ve been trying to resolve what I thought would be a simple reinstall, for hours.Somebody. Anybody. PLEASE help. Thank you,\nSam",
"username": "sam_ames"
},
{
"code": "find / -iname mongosh\nwget https://downloads.mongodb.com/compass/mongodb-mongosh-1.10.1.x86_64.rpm\n",
"text": "Hi @sam_ames,\nFirst verify that it is indeed not there, then run the following command:If it is not actually installed you can do:If your os is centos or rhelRegards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Oh my operating system is Ubuntu 22, how can I install?Many thanks for your support",
"username": "sam_ames"
},
{
"code": "",
"text": "Hi @sam_ames,\nDepends from the architetture.\nI link you the download page:The MongoDB Shell is a modern command-line experience, full with features to make it easier to work with your database. Free download. Try now!After you’ve choose the correct link, run the following command from bash:wget linkRegards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Thanks so much.It looks like I do have it installed but it’s not working when I try\n$ mongosh\nHow do I repair this problem, please advise meThanks,\nSam",
"username": "sam_ames"
},
{
"code": "",
"text": "It looks like I do have it installed but it’s not working when I tryShow us the output of ls -lrt mongosh\nDid you try to run it from the bin directory?",
"username": "Ramachandra_Tummala"
}
] | No such file or directory error | 2023-07-07T16:35:35.997Z | No such file or directory error | 882 |
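On Ubuntu 22.04, if the MongoDB apt repository is already configured (as it would be after installing the server from it), mongosh can usually be reinstalled from that same repository; a sketch:

```sh
sudo apt-get update
sudo apt-get install -y mongodb-mongosh   # package name in MongoDB's apt repository
which mongosh                             # confirm it is back on the PATH
```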
null | [
"security",
"realm-web"
] | [
{
"code": "",
"text": "Hi All,\nI have a simple question. When using the realm sdk for web, I need to specify the Realm app id in my frontend app. Can this be misused? If the realm app rules allow inserts, then what prevents someone from spamming my collection with inserts from his own app?Rgds,\nDebashish",
"username": "Debashish_Palit"
},
{
"code": "",
"text": "@Debashish_Palit : Welcome to the community.Realm App id can considered as private property your app and shouldn’t be exposed, but can it misused is hard to answer.Multiple features are available that can help you prevent such situation like",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "its safe to share the APP ID in to the frontend code?, its a good practice to paste the APP_ID direct in the js script or its necessary to use a environment variable .envPD: assuming that authentication by email or other system is enabled\nthanks",
"username": "Freddy_Mansilla"
},
{
"code": "",
"text": "@Freddy_Mansilla: I wouldn’t recommend hard coding the APP ID.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "if im working only with a frontend webapp page, without a backend server.how is the best alternative to connect with mongodb api and don’t share the APP_ID in the js script? is there an alternative? does mongodb app services hosting has a tool to save environment variables?what is the best scheme to work and get safe?\nthanks to answer I really appreciate.",
"username": "Freddy_Mansilla"
}
] | How to protect the Realm app id? | 2021-07-19T15:05:29.456Z | How to protect the Realm app id? | 3,577 |
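On the question of hard-coding the App ID: most bundlers can inject it from an environment variable at build time. A sketch for Realm Web, where the variable name is illustrative; note this only keeps the ID out of source control, since it is still visible in the shipped bundle, so the rules and auth providers mentioned above remain the real protection:

```javascript
import * as Realm from "realm-web";

// REALM_APP_ID is injected at build time (for example from a .env file by the bundler);
// the variable name is an example, not something Realm requires.
const app = new Realm.App({ id: process.env.REALM_APP_ID });

export async function loginAnonymous() {
  // The configured auth providers and rules are what actually limit what a caller can do.
  return app.logIn(Realm.Credentials.anonymous());
}
```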
null | [] | [
{
"code": "",
"text": "Hi, i cant start my mongodb on amazon ec2 linux. Anyone can help?",
"username": "Ahmad_Asyraf"
},
{
"code": "",
"text": "Hi @Ahmad_Asyraf,\nCan you paste here your configuration file, and the error?Regards",
"username": "Fabio_Ramohitaj"
}
] | Failed to start mongodb with code=exited status=2 | 2023-07-07T11:28:57.164Z | Failed to start mongodb with code=exited status=2 | 243 |
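Since no configuration or error was posted, the usual first step for an exit status like this is to read the service and log output; a sketch of the commands on a systemd-based EC2 instance, assuming the default log path:

```sh
sudo systemctl status mongod                   # shows the exit code and the last log lines
sudo journalctl -u mongod --no-pager -n 50     # recent service output
sudo tail -n 50 /var/log/mongodb/mongod.log    # default logpath; adjust to your mongod.conf
```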
null | [
"aggregation",
"golang"
] | [
{
"code": "[\n {\n $match:\n /**\n * query: The query in MQL.\n */\n {\n \"subjects.type\": \"accessgroup_id\",\n type: \"access\",\n $or: [\n {\n resources: {\n $elemMatch: {\n attributes: {\n $elemMatch: {\n name: \"serviceName\",\n operator: \"equals\",\n values: {\n $elemMatch: {\n $in: [\n \"mcmp:core-lite:service\",\n ],\n },\n },\n },\n },\n },\n },\n },\n ],\n },\n },\n {\n $unwind:\n /**\n * path: Path to the array field.\n * includeArrayIndex: Optional name for index.\n * preserveNullAndEmptyArrays: Optional\n * toggle to unwind null and empty values.\n */\n {\n path: \"$subjects\",\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $lookup:\n /**\n * from: The target collection.\n * localField: The local join field.\n * foreignField: The target join field.\n * as: The name for the results.\n * pipeline: Optional pipeline to run on the foreign collection.\n * let: Optional variables to use in the pipeline field stages.\n */\n {\n from: \"accessgroup\",\n let: {\n group_id: {\n $toObjectId: \"$subjects.value\",\n },\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\"$_id\", \"$$group_id\"],\n },\n },\n },\n ],\n as: \"groups\",\n },\n },\n {\n $unwind:\n /**\n * path: Path to the array field.\n * includeArrayIndex: Optional name for index.\n * preserveNullAndEmptyArrays: Optional\n * toggle to unwind null and empty values.\n */\n {\n path: \"$groups\",\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $match:\n /**\n * query: The query in MQL.\n */\n {\n \"groups.members\": {\n type: \"iam_id\",\n value: \"2012\",\n },\n },\n },\n]\n/** \n* Paste one or more documents here\n*/\n{\n \"created_at\": {\n \"$date\": \"2023-07-07T05:56:00.319Z\"\n },\n \"description\": \"\",\n \"etag\": \"20-36db8fc8b7d07df08a1b38404bd9654ce40b430f\",\n \"resources\": [\n {\n \"attributes\": [\n {\n \"name\": \"serviceName\",\n \"values\": [\n \"mcmp:core-lite:service\"\n ],\n \"operator\": \"equals\"\n }\n ],\n \"accesstags\": []\n }\n ],\n \"roles\": [\n {\n \"type\": \"platform\",\n \"role_id\": \"krn:v1:mcmp:public:core-lite:iam:::role:administrator\"\n }\n ],\n \"subjects\": [\n {\n \"type\": \"accessgroup_id\",\n \"value\": \"64a6ec1ffcbb0b7bfb7111ff\"\n }\n ],\n \"type\": \"access\",\n \"updated_at\": {\n \"$date\": \"2023-07-07T05:56:00.319Z\"\n }\n}\n{\n \"_id\": {\n \"$oid\": \"64a6ec1ffcbb0b7bfb7111ff\"\n },\n \"created_at\": {\n \"$date\": \"2023-07-07T07:39:00.239Z\"\n },\n \"description\": \"\",\n \"etag\": \"20-4d9756036d550c3e0cc0d931df80cefbbefda793\",\n \"isFederated\": false,\n \"members\": [\n {\n \"type\": \"iam_id\",\n \"value\": \"2011\"\n },\n {\n \"type\": \"iam_id\",\n \"value\": \"2012\"\n },\n {\n \"type\": \"iam_id\",\n \"value\": \"2013\"\n }\n ],\n \"name\": \"testapikey\",\n \"rules\": [],\n \"tenantId\": \"64a6e9fffcbb0b7bfb7111d4\",\n \"updated_at\": {\n \"$date\": \"2023-07-07T07:39:00.239Z\"\n }\n}\n",
"text": "Problem: My aggregate pipeline is giving an inconsistent result intermittently when called multiple times through a go code.Below is the pipeline that we are executing through mongodb Go Driver. This is resulting in wrong results occassionally.This is basically working on two collections in a db. → Document in “policy” collection → Document in “accessgroup” collectionIs there any known issue with go-mongodriver with aggregation module?",
"username": "Tirumalesh_Killamsetty"
},
{
"code": "",
"text": "When you say “Wrong Results” or inconsistent, is it just a completely different output or different order or sometimes more or less documents?\nCan you repro this strangeness via Compass or the shell if you execute it multiple times?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks John.\nAlways I should be getting N documents, but intermittently I get less than N.",
"username": "Tirumalesh_Killamsetty"
},
{
"code": "",
"text": "That is strange indeed! I don’t use the go driver so unaware of any issues, the only think I can think to suggest is to try running the query repeatedly from the shell and see if you can repro it to try and narrow if down if you’ve not already done that.\nHopefully someone else has some more useful ideas!",
"username": "John_Sewell"
}
] | Aggregate pipeline returning inconsistent results every time when we hit mongo db | 2023-07-07T15:42:46.468Z | Aggregate pipeline returning inconsistent results every time when we hit mongo db | 609 |
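One way to follow the suggestion above and rule the server in or out is to re-run the pipeline from mongosh in a loop and compare result counts; a sketch where `pipeline` stands in for the aggregation from the post pasted in as a JavaScript array:

```javascript
// Re-run the aggregation and record how many documents come back each time.
const pipeline = [ /* the aggregation from the post */ ];
const counts = [];
for (let i = 0; i < 20; i++) {
  counts.push(db.policy.aggregate(pipeline).toArray().length);
}
print(counts.join(", ")); // stable counts here point at the driver/app side rather than the server
```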
null | [
"node-js"
] | [
{
"code": "userDataappDatathis.dbPath = join(app.getPath(\"appData\"), \"deep\", \"realm\");\nRealm.defaultPath = this.dbPath;\n",
"text": "I have some questions regarding how to create a realm file / directory for desktop apps. I’m building an electron app for both Mac and Windows. I’m testing on Mac at the moment but can’t seem to find the info I need in the docs or the sample Electron QuickStart info.For a Mac desktop app to work properly, if your app creates files they have to be written to a directory that is allowed by the OS. And, especially in the case of Electron apps where they have their code / data bundled into ASAR archive, you can’t just use the default ‘current working directory’ and allow Realm to create its file(s) in the app installed bundle location.You have to write your files into the user’s app data directory. Here’s the info from Electron docs:So on my test machines this is - /Users/MY_NAME/Library/Application Support/deepI’ve created a folder inside this user data directory with the name “realm” - /Users/MY_NAME/Library/Application Support/deep/realm.I then use this to try and get Realm to create its files in that location:Then, I proceed with not specifying a “path” in my realm configuration and then open the Realm. I initially tried to use just the path mentioned above in the “path” field of the Realm configuration object and it gave me an error saying “Is a directory”.So, I tried to change defaultPath for the Realm object and point it to the correct user directory. However, this must silently fail bc I don’t get an exception and Realm opens/creates a Realm in my app’s current working directory while I’m debugging. This works bc when a desktop app / electron / etc is being debugged, Realm and MacOS can and will allow my app to create files in its current working directory.But, that won’t work in production. In the release build, the Realm files/DB must be created in the user’s data directory. So, how do I configure Realm to use the directory I need it to use?",
"username": "d33p"
},
{
"code": "this.dbPath = join(app.getPath(\"appData\"), \"deep\", \"realm\", \"deep.db.realm\");\n",
"text": "This worked:Have to specify a filename, even if it doesn’t exist. I need to test whether it will create the directory as well next…",
"username": "d33p"
},
{
"code": "",
"text": "Hmm, this does not work in the production build. When I have Realm enabled in the app, do the configuration as noted above to open a Realm in the user’s app directory the app will not run… When I remove the Realm related code, it runs fine in production / install mode.So, there’s some kind of permissions problem where the app cannot create Realm files on my app’s behalf, at least on MacOS.If anyone has experience building production electron apps and using Realm, please lmk. Debugging this is far from simple… I can’t even get console.log output from an installed MacOS app. And, I can’t create log files… I’m stuck/blocked at the moment. Don’t want to abandon Realm on the desktop but I may have to…",
"username": "d33p"
},
{
"code": "~/users/username/Library/Containers/app name/data/library/application support/default.realm",
"text": "Some additional clarity is needed; is this an iOS app or macOS app?If macOS, you have full control over where Realm files are written; desktop, documents etc.For example, during development, we routinely store our test Realms in Dropbox - that we I can access them from any workstations.By default, local only Realm data is stored in the~/users/username/Library/Containers/app name folderWithin that folder the actual path is/data/library/application support/default.realmDo you need to change the location of where the Realm is stored?Jay",
"username": "Jay"
}
] | Create Realm file on MacOS with Electron | 2023-07-06T14:37:10.029Z | Create Realm file on MacOS with Electron | 569 |
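On the directory question raised mid-thread: ensuring the folder exists before opening the Realm, rather than relying on Realm to create it, avoids one class of failure. A sketch for an Electron main process; it is illustrative only and not a confirmed fix for the production crash described above:

```javascript
const { app } = require("electron");
const path = require("path");
const fs = require("fs");
const Realm = require("realm");

// Minimal placeholder schema so the example is self-contained.
const LinkSchema = { name: "Link", properties: { originalURL: "string" } };

async function openAppRealm() {
  // Build a path under the per-user data directory and make sure the folder exists first.
  const realmDir = path.join(app.getPath("appData"), "deep", "realm");
  fs.mkdirSync(realmDir, { recursive: true });

  return Realm.open({
    path: path.join(realmDir, "deep.db.realm"), // a file name, not just the directory
    schema: [LinkSchema],
  });
}
```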
[] | [
{
"code": "",
"text": "\nproblemamongo1154×911 37.1 KB\n",
"username": "Sebastian_Valencia"
},
{
"code": "",
"text": "Try 127.0.0.1 instead of localhost\nAlso read about Ipv6 vs Ipv4 support",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Nodejs 18 is not supporting to MongoDB 4.4.0.\nPlease let us know which version of MongoDB version will support. what are the changes required for upgrade NodeJS version 18",
"username": "Vivekanand_Kommineni"
}
] | Problem connecting to mongo | 2022-12-21T14:41:46.890Z | Problem connecting to mongo | 1,728 |
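For later readers, a common cause on Node.js 17+ is that localhost resolves to the IPv6 address ::1 while mongod listens only on IPv4; a sketch of the suggested workaround with the Node.js driver, using a placeholder database name:

```javascript
const { MongoClient } = require("mongodb");

// Point at the IPv4 loopback explicitly instead of relying on how `localhost` resolves.
const client = new MongoClient("mongodb://127.0.0.1:27017/mydb");

async function main() {
  await client.connect();
  console.log("connected");
  await client.close();
}
main();
```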
null | [] | [
{
"code": "",
"text": "I am currently working on a data visualization task in MongoDB Atlas, where I need to create a line graph to track the total count of successful entries over a series of hours.Here’s the challenge I’m facing: Each document in the data cluster has a numerical field, and for a document to be considered ‘successful,’ this field must fall within a specific range of values. I want to count only those documents that meet this criterion in order to generate the graph. Additionally, the count should be incremented at the corresponding timeframe specified by each document, resulting in a cumulative graph.However, the complexity arises when I try to visualize cumulative data across multiple categories, which are also specified by the documents. Using a grouped column chart, I was able to create a cumulative graph, but it sums up the successful entries across all categories without distinguishing between them.To overcome this, my goal is to create a cumulative line graph with multiple lines, where each line represents a different category. This way, I can analyze the cumulative success counts for each category separately.I would greatly appreciate any guidance or suggestions on how to achieve this visualization in MongoDB Atlas. Thank you!",
"username": "Matthew_Taylor"
},
{
"code": "$match$group$setWindowFields[\n {\n $match: { \n released: { $ne: null },\n rated: { $ne: null },\n runtime: { $gt: 100 } \n },\n },\n {\n $group: { \n _id: { year: { \"$year\": \"$released\" }, rated: \"$rated\" },\n runtime: { \"$sum\": \"$runtime\" } \n }\n },\n {\n $setWindowFields: {\n partitionBy: \"$_id.rated\",\n sortBy: { \"_id.year\": 1 },\n output: {\n cumulativeTotal: {\n $sum: \"$runtime\",\n window: { documents: [\"unbounded\", \"current\"] }\n }\n }\n }\n }\n]\n",
"text": "Hi @Matthew_Taylor -Charts has a “Compare Value” option that you can use to calculate cumulative totals, but currently it only works on single series charts. If you want to make this work for multi-series charts you’ll need to write a custom aggregation pipeline.I’m not 100% sure I understand your requirements, but here’s an attempt to build a chart similar to what you are describing using the Movies sample dataset. Basically:Query:Resulting chart:\n\nimage1469×812 88.6 KB\nDoes this help?",
"username": "tomhollander"
},
{
"code": "",
"text": "This helps. I was able to replicate that graph with the sample movies data. The main thing I am stumped on now is that my project handles the dates differently. Your code contains a $year field as well as a $released field, whereas my data has a $timestamp field with nothing else to compare it to in $group. The timestamp is also a date rather than a number. I am using your strategy but my X-axis just says “Invalid Date”.",
"username": "Matthew_Taylor"
},
{
"code": "released$year$month",
"text": "Glad you’re making progress. In the sample I shared there is only a single date field called released. $year is an aggregation function which extracts the year part of a date. Charts does similar things in its pipelines but it gets a bit more complex if you need to extract multiple parts like $month as well. How do you want your dates displayed on the X axis?",
"username": "tomhollander"
},
{
"code": "chart.setFilter()",
"text": "Currently, our team requires timestamps to be as granular as possible, accommodating user-selected timeframes ranging from months to hours. However, this granularity is causing problems due to Charts’ 5000 document limit. We are in need of a workaround to overcome this limitation. Any suggestions or alternative approaches would be greatly appreciated.There is also another issue we are encountering. When embedding the chart using the JavaScript SDK, we encounter an issue with the Y-axis not always starting at zero. We have tied using a filter in the Atlas Charts UI that allows us to choose the earliest date for the chart to display. However, because the data has already been accumulated since a certain date, the Y-axis doesn’t consistently begin at zero. As a temporary fix, we hardcoded a specific date in the aggregation pipeline to ensure the chart starts at zero. However, a new issue arises when users select a date via the JavaScript-based UI we implemented. If the selected date is after the hardcoded aggregation date, the chart starts above zero, which is undesirable. Currently, we are using the chart.setFilter() function in the JavaScript SDK.I would like to know if there’s a reliable way to ensure the graph always starts at zero, or if it’s possible to modify the aggregation pipeline dynamically from the JavaScript SDK to address this issue.Any guidance, suggestions, or code examples would be immensely helpful. Thank you in advance for your support.",
"username": "Matthew_Taylor"
}
] | Cumulative line graph with multiple categories | 2023-06-13T21:05:02.270Z | Cumulative line graph with multiple categories | 766 |
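On the embedding side, the filter set through the SDK can carry the same lower bound that drives the cumulative window, so the pipeline does not need a hardcoded date. A sketch with the Charts embedding SDK, where the base URL, chart ID and the timestamp field are placeholders:

```javascript
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({ baseUrl: "https://charts.mongodb.com/charts-yourproject-abcde" });
const chart = sdk.createChart({ chartId: "00000000-0000-0000-0000-000000000000" });

async function showFrom(startDate) {
  await chart.render(document.getElementById("chart"));
  // The same start date that drives the cumulative window is applied as the filter,
  // so the Y axis starts at zero for the selected range.
  await chart.setFilter({ timestamp: { $gte: startDate } });
}

showFrom(new Date("2023-01-01"));
```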
null | [
"aggregation"
] | [
{
"code": "const searchField = \"$data.product\";\n\nconst searchField = \"$data.\" + key;$data.${key}",
"text": "But If I try to use concatenation and add a variable name const searchField = \"$data.\" + key; then it fails with the error:MongoServerError: PlanExecutor error during aggregation:: caused by:: $regexMatch needs ‘input’ to be of type stringI have tried using template format but it also does not work $data.${key}",
"username": "AbdulRahman_Riyaz"
},
{
"code": "",
"text": "What does the actual query that’s sent to the server look like? Can you output the full aggregation created and show that?",
"username": "John_Sewell"
}
] | regexMatch fails with a variable string | 2023-07-07T14:20:47.599Z | regexMatch fails with a variable string | 215 |
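One way around this is to assemble the field path in JavaScript before the pipeline is built, so the server only ever sees a literal string, and to coerce the value so $regexMatch always gets a string input. A sketch with placeholder names:

```javascript
// `key` is only known at runtime, so the path is built in JS first; the server then sees
// a literal string such as "$data.product". Also worth verifying `key` is a defined string,
// since "$data.undefined" would make `input` missing and trigger the same error.
const key = "product"; // placeholder
const searchField = `$data.${key}`;

const pipeline = [
  {
    $match: {
      $expr: {
        $regexMatch: {
          // Coerce missing / non-string values so 'input' is always a string.
          input: { $toString: { $ifNull: [searchField, ""] } },
          regex: "widget", // placeholder pattern
          options: "i",
        },
      },
    },
  },
];
```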
null | [
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "mycollection.find({}, { sort: { createdAt: 1 }, skip: 110, limit: 10 })",
"text": "Currently at my company we are using MongoDB as our primary data storage.So I have a collection that maps all our data regarding items and an item can be anything available for sale on shop. So I am using Pagination to fetch the data from database but whenever I use the sort option with a field createdAt it is giving me duplicate entries along with that?I am using Mongoose ORM on Node JS.Here is a sample query:\nmycollection.find({}, { sort: { createdAt: 1 }, skip: 110, limit: 10 })",
"username": "Vedant_Gandhi"
},
{
"code": "",
"text": "That sounds odd, are you sure that duplicates don’t exist and it’s the sorting that’s highlighting the fact you have duplicates?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Yes, I have checked if duplicates exist and also when I remove the sort option there is no duplication.",
"username": "Vedant_Gandhi"
},
{
"code": "",
"text": "Can you pick the IDs of two items that are duplicates and look those up directly on the collection and verify what their values are?",
"username": "John_Sewell"
},
{
"code": "",
"text": "There are no duplicates in database. Only while returning results its repeating the documents. I actually found a solution for this- mongodb - Mongo DB duplication issue while using sorting with limit and skip in aggregation - Stack OverflowWhen we sort a field that consist of repeating values we need to specify a unique field along with that.So I had to add _id:1 along with the my sort fields.",
"username": "Vedant_Gandhi"
},
{
"code": "",
"text": "Ahhh, so it was duplicating over multiple pages of data as the sort was not unique?You may also want to look at avoiding using .skip and add a criteria that’s indexed, when you get a lot of data and skipping many many pages it can get slow.Anyway, glad you got it sorted!",
"username": "John_Sewell"
},
{
"code": "",
"text": "Yes Sure I’ll do that too. Thanks for your help.",
"username": "Vedant_Gandhi"
}
] | Sorting data causes duplication | 2023-07-07T10:00:51.696Z | Sorting data causes duplication | 721 |
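For reference, the accepted fix expressed against the sample query from the first post; a sketch:

```javascript
// Adding _id as a tie-breaker makes the sort deterministic, so skip/limit pages never
// repeat documents that share the same createdAt value.
mycollection.find({}, { sort: { createdAt: 1, _id: 1 }, skip: 110, limit: 10 });
```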
null | [
"node-js",
"atlas-functions",
"app-services-cli",
"app-services-data-access"
] | [
{
"code": "",
"text": "Hello,I have recently made public an app of ours ( IOS and Android). I would like to be able to get all verified accounts to see how many users have made an account in our app.I have tried both realm-CLI and cloud functions (MongoDB Atlas App Services Admin API) to get all app users but I only seem to get back 50 accounts.Is there a way to get all accounts in order for me to count them and return a number?Any help pointing in the right direction will be greatly appreciated.Best regards,\nRasvan",
"username": "Rasvan_Andrei_Dumitr"
},
{
"code": "nodeconst realm = require(\"realm\");\nconst app = new Realm.app(\"<your-app-id>\");\nconst totalUsers = app.allUsers().length;\nconsole.log(\"Total users: \", totalUsers);\n",
"text": "Hi @Rasvan_Andrei_Dumitr! I have asked internally if this is somehow capped. An alternate method I can offer is to create a small node script.EDIT: this won’t work, see response below",
"username": "Andrew_Meyer"
},
{
"code": "after",
"text": "I will redact my previous workaround, since this only returns a list of users that have logged onto the device (reference).\nHowever, the Admin API supports pagination through the after parameter. It should be possible to write a script to build a list of all users and get the count.",
"username": "Andrew_Meyer"
}
] | How to get all registered users | 2023-07-07T10:20:58.526Z | How to get all registered users | 630 |
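A sketch of the pagination approach in Node.js, assuming the Admin API users endpoint and its after parameter behave as the linked docs describe; the group ID, app ID and access-token handling are placeholders to fill in from your own project:

```javascript
// Count all app users by walking the paginated users endpoint with `after`.
async function countAllUsers(groupId, appId, accessToken) {
  const base = `https://realm.mongodb.com/api/admin/v3.0/groups/${groupId}/apps/${appId}/users`;
  let total = 0;
  let after;

  while (true) {
    const url = after ? `${base}?after=${after}` : base;
    const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
    const page = await res.json();
    if (!Array.isArray(page) || page.length === 0) break;

    total += page.length;
    after = page[page.length - 1]._id; // next page starts after the last user returned
  }
  return total;
}
```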
[
"react-native",
"android",
"app-services-cli"
] | [
{
"code": " LOG Running \"SyncTutorial\" with {\"rootTag\":1}\n ERROR Error: Exception in HostFunction: Cannot write to class Link when no flexible sync subscription has been created., js engine: hermes\nError: ENOENT: no such file or directory, open 'C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanVersionFour\\template-app-react-native-todo-main\\JavaScript'\n at Object.openSync (node:fs:585:3)\n at Object.readFileSync (node:fs:453:35)\n at getCodeFrame (C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanVersionFour\\template-app-react-native-todo-main\\node_modules\\metro\\src\\Server.js:1047:18)\n at Server._symbolicate (C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanVersionFour\\template-app-react-native-todo-main\\node_modules\\metro\\src\\Server.js:1133:22)\n at async Server._processRequest (C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanVersionFour\\template-app-react-native-todo-main\\node_modules\\metro\\src\\Server.js:468:7) {\n errno: -4058,\n syscall: 'open',\n code: 'ENOENT',\n path: 'C:\\\\Users\\\\matth\\\\Documents\\\\ReactNativeAppDevelopment\\\\TeeScanVersionFour\\\\template-app-react-native-todo-main\\\\JavaScript'\n}\nError: ENOENT: no such file or directory, open 'C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanVersionFour\\template-app-react-native-todo-main\\http:\\10.0.2.2:8081\\index.bundle?platform=android&dev=true&minify=false&app=com.synctutorial&modulesOnly=false&runModule=true'\n at Object.openSync (node:fs:585:3)\n at Object.readFileSync (node:fs:453:35)\n at getCodeFrame (C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanVersionFour\\template-app-react-native-todo-main\\node_modules\\metro\\src\\Server.js:1047:18)\n at Server._symbolicate (C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanVersionFour\\template-app-react-native-todo-main\\node_modules\\metro\\src\\Server.js:1133:22)\n at async Server._processRequest (C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanVersionFour\\template-app-react-native-todo-main\\node_modules\\metro\\src\\Server.js:468:7) {\n errno: -4058,\n syscall: 'open',\n code: 'ENOENT',\n path: 'C:\\\\Users\\\\matth\\\\Documents\\\\ReactNativeAppDevelopment\\\\TeeScanVersionFour\\\\template-app-react-native-todo-main\\\\http:\\\\10.0.2.2:8081\\\\index.bundle?platform=android&dev=true&minify=false&app=com.synctutorial&modulesOnly=false&runModule=true'\n}\nimport { BSON } from 'realm';\n\nexport class Link extends Realm.Object<Link> {\n _id!: BSON.ObjectId;\n user!: string;\n LinkID!: string;\n originalURL!: string;\n\n static schema: Realm.ObjectSchema = {\n name: 'Link',\n primaryKey: '_id',\n properties: {\n _id: {type: 'objectId', default: () => new BSON.ObjectId()},\n user: { type: 'string', default: '[email protected]' },\n LinkID: { type: 'string', default: 'lnk_3dhs_9dCPkpOgUmB' },\n originalURL: 'string',\n },\n };\n}\n\nimport {createRealmContext} from '@realm/react';\nimport {Item} from './ItemSchema';\nimport {Link} from './LinkSchema';\n\nexport const realmContext = createRealmContext({\n schema: [Item, Link],\n});\n// createItem() takes in a summary and then creates an Item object with that summary\n // const createItem = useCallback(\n // ({summary}: {summary: string}) => {\n // // if the realm exists, create an Item\n // realm.write(() => {\n // return new Item(realm, {\n // summary,\n // owner_id: user?.id,\n // });\n // });\n // },\n // [realm, user],\n // );\n\n const createItem = useCallback(\n ({summary}: 
{summary: string}) => {\n // if the realm exists, create an Item\n realm.write(() => {\n return new Link(realm, {\n user: '[email protected]',\n LinkID: 'lnklnk',\n originalURL: 'youtube.com',\n });\n });\n },\n [realm, user],\n );\n",
"text": "Hello, I tried cloning this repo: https://github.com/mongodb/template-app-react-native-todo (downloaded it as a zip file, uncompressed it and then changed the app id in the atlasConfig.json to match what I have for my example app under app services in MongoDB atlas). I also did a realm-cli login with a private and public API key in the root directory of the project. However, when I try writing data to a schema I want to create called Links, I get the following error:\n\nimage394×846 68.6 KB\n\nThis is the error in my js server:Essentially, the only things I added to the code after downloading the repo and running it are a file titled LinkSchema.tsx that looks like so:I also changed my RealmContext.ts file to look like so:And in my itemlistview.tsx file, I changed the createItem function to write to my Link schema (I remembered to do an import link from linkschema.tsx at the top of my file). You can see the original function (that works by the way and writes to my cluster: todo->item in MongoDB atlas) that I commented out and the function I wrote that’s not commented out:Even after turning developer mode on, I still can’t write to the Links schema. I also tried setting my devices sync to flexible via my app services UI but that didn’t change anything.I uploaded the repo of the project I’m currently working on to Github so that you can easily navigate my code and tell me where I can modify it to fix this issue: GitHub - MatthewGerges/TeeScan: A react native app that uses Realm Sync to connect to MongoDBTo get realm SDK to work, I also tried downloading the app services zip file from the app services UI but the Zip folder had nothing in it. I also tried using the realm cli to create a template app with a given app id but after creating the app and doing a pull and push command, it gave me the error group not found.",
"username": "Matthew_Gerges"
},
{
"code": "C/C++: ninja: error: manifest 'build.ninja' still dirty after 100 tries\n\nFAILURE: Build completed with 2 failures.\n\n1: Task failed with an exception.\n-----------\n* What went wrong:\nExecution failed for task ':expo-modules-core:buildCMakeDebug[armeabi-v7a]'.\n> com.android.ide.common.process.ProcessException: ninja: Entering directory `C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanDbSync2\\teescandbsynchro2\\node_modules\\expo-modules-core\\android\\.cxx\\Debug\\1v13e6r6\\armeabi-v7a'\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: 
C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n [0/1] Re-running CMake...\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/Users/matth/Documents/ReactNativeAppDevelopment/TeeScanDbSync2/teescandbsynchro2/node_modules/expo-modules-core/android/.cxx/Debug/1v13e6r6/armeabi-v7a\n\n C++ build system [build] failed while executing:\n @echo off\n \"C:\\\\Users\\\\matth\\\\AppData\\\\Local\\\\Android\\\\Sdk\\\\cmake\\\\3.22.1\\\\bin\\\\ninja.exe\" ^\n -C ^\n \"C:\\\\Users\\\\matth\\\\Documents\\\\ReactNativeAppDevelopment\\\\TeeScanDbSync2\\\\teescandbsynchro2\\\\node_modules\\\\expo-modules-core\\\\android\\\\.cxx\\\\Debug\\\\1v13e6r6\\\\armeabi-v7a\" ^\n expo-modules-core\n from C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanDbSync2\\teescandbsynchro2\\node_modules\\expo-modules-core\\android\n ninja: error: manifest 'build.ninja' still dirty after 100 tries\n\n* Get more help at https://help.gradle.org\n\nDeprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.\n\nYou can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.\n\nSee https://docs.gradle.org/7.5.1/userguide/command_line_interface.html#sec:command_line_warnings\n\nExecution optimizations have been disabled for 1 invalid unit(s) of work during this build to ensure correctness.\nPlease consult deprecation warnings for more details.\n\nBUILD FAILED in 1m 32s\n227 actionable tasks: 35 executed, 192 up-to-date\nC:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanDbSync2\\teescandbsynchro2\\android\\gradlew.bat exited with non-zero code: 1\nError: C:\\Users\\matth\\Documents\\ReactNativeAppDevelopment\\TeeScanDbSync2\\teescandbsynchro2\\android\\gradlew.bat exited with non-zero code: 1\n at ChildProcess.completionListener (C:\\Users\\matth\\AppData\\Roaming\\npm\\node_modules\\expo-cli\\node_modules\\@expo\\spawn-async\\src\\spawnAsync.ts:65:13)\n at Object.onceWrapper (node:events:642:26)\n at ChildProcess.emit (node:events:527:28)\n at ChildProcess.cp.emit (C:\\Users\\matth\\AppData\\Roaming\\npm\\node_modules\\expo-cli\\node_modules\\cross-spawn\\lib\\enoent.js:34:29)\n at maybeClose (node:internal/child_process:1092:16)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5)\n ...\n at spawnAsync 
(C:\\Users\\matth\\AppData\\Roaming\\npm\\node_modules\\expo-cli\\node_modules\\@expo\\spawn-async\\src\\spawnAsync.ts:26:19)\n at spawnGradleAsync (C:\\Users\\matth\\AppData\\Roaming\\npm\\node_modules\\expo-cli\\src\\commands\\run\\android\\spawnGradleAsync.ts:83:28)\n at assembleAsync (C:\\Users\\matth\\AppData\\Roaming\\npm\\node_modules\\expo-cli\\src\\commands\\run\\android\\spawnGradleAsync.ts:57:16)\n at actionAsync (C:\\Users\\matth\\AppData\\Roaming\\npm\\node_modules\\expo-cli\\src\\commands\\run\\android\\runAndroid.ts:145:22)\n",
"text": "I also tried approaching this problem from a different angle and I followed this tutorial: Build an Offline-First React Native Mobile App with Expo and Realm React Native to create a Realm SDK from scratch. Cloning the repo that this tutorial followed and changing the app id did not work so I decided to follow the webpage step by step and just use the repo as a reference. Everything was going well until I got to the step on “Prebuilding our Expo App.” This is what the section outlines:Prebuilding our Expo App\nOn save we’ll find this error:\nCode Snippet\n1\nError: Missing Realm constructor. Did you run “pod install”? Please see https://realm.io/docs/react-native/latest/#missing-realm-constructor for troubleshooting\ncopyIcon\nRight now, Realm React Native is not compatible with\nExpo Managed Workflows\n. In a managed Workflow Expo hides all iOS and Android native details from the JavaScript/React developer so they can concentrate on writing React code. Here, we need to\nprebuild\nour App, which will mean that we lose the nice Expo Go App that allows us to load our app using a QR code.\nThe Expo Team is working hard on improving the compatibility with Realm React Native, as is our React Native SDK team, who are currently working on improving the compatibility with Expo, supporting the Hermes JavaScript Engine and expo-dev-client. Watch this space for all these exciting announcements!\nSo to run our app in iOS we’ll do:\nCode Snippet\n1\nexpo run:ios\ncopyIcon\nWe need to provide a Bundle Identifier to our iOS app. In this case we’ll use com.realm.read-later-maybe\nThis will install all needed JavaScript libraries using yarn, then install all native libraries using CocoaPods, and finally will compile and run our app. To run on Android we’ll do:\nCode Snippet\n1\nexpo run:androidHowever, when I do an expo run:android, I get the following error (I tried following advice online to delete node modules and do another npm or yarn install but that didn’t help):Essentially, all I’m trying to do is to write to MongoDB atlas (my cloud database) from my react native code without having to write a backend in node (I thought RealmSDK would solve this issue for me but I’m having more problems with it than I thought). Ideally, if you could solve the issue I’m experiencing with the repo from my personal github that I shared above, it would be great. Otherwise, telling me why expo run:android in my second app is not working could also help. Also, please provide me with all the code I need to write to my LinksSchema and read a Link from LinksSchema along with any other modifications I need to make to my code (I want to eliminate all the unneeded code from the repo I copied from the react-native template to do application).",
"username": "Matthew_Gerges"
},
{
"code": "LinkuseRealm",
"text": "Hello @Matthew_Gerges! Sorry you are having issues. Luckily the solution is quite simple. You must subscribe to the Link collection in order to do anything with it (read/write/update). The easiest way to do it would be to configure an initial subscription on the RealmProvider. Alternatively, you can use useRealm within a child component to add a subscription.\nHope this helps! Keep in mind, any model you add to your application will need a subscription in order to access the collection.",
"username": "Andrew_Meyer"
}
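For readers hitting the same error: below is a minimal sketch of Andrew's suggestion, assuming the template's realmContext from RealmContext.ts and the Link class shown above. The subscription name and the idea of wrapping it in a hook are illustrative assumptions, not part of the original answer; an initialSubscriptions block on the RealmProvider achieves the same thing.

```javascript
// Hypothetical sketch: register a flexible sync subscription for Link so the
// client is allowed to read and write Link objects.
import {useEffect} from 'react';
import {Link} from './LinkSchema';
import {realmContext} from './RealmContext';

const {useRealm} = realmContext;

export function useLinkSubscription() {
  const realm = useRealm();

  useEffect(() => {
    realm.subscriptions.update(mutableSubs => {
      // Subscribe to every Link object; narrow the query if needed.
      mutableSubs.add(realm.objects(Link), {name: 'allLinks'});
    });
  }, [realm]);
}
```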
] | Cannot Write to New Schema Using Realm Sync in React Native | 2023-07-05T22:44:33.618Z | Cannot Write to New Schema Using Realm Sync in React Native | 752 |
|
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hello Team,[MongoDB.Driver] (NuGet Gallery | MongoDB.Driver 2.20.0)\nIn last 9 months (Start of 2023) there has been 4 major version released for MongoDB Driver SDK supporting .NET Core.\nWe are currently using v 2.18.0 and planning to migrate to some time next year due to vulnerability tagged for the versions < 2.19.0Can you suggest or give insights on the EOL or Support for versions v2.18.0 and above for better decision making.",
"username": "Mukesh_Kumar10"
},
{
"code": "",
"text": "Hi Mukesh,We follow Semantic Versioning, and so the last four releases have all been minor (e.g. 2.19.0) or patch (e.g. 2.19.1) releases.We don’t have an official EOL policy for drivers, however server versions follow a 3 year EOL policy from date of major release. We generally advise users to upgrade to the latest version of the driver whenever upgrading.I hope this helps! Let me know if you have any additional questions.",
"username": "Patrick_Gilfether1"
},
{
"code": "",
"text": "Thank you @Patrick_Gilfether1. This helps!",
"username": "Mukesh_Kumar10"
}
] | MongoDB Driver EOL or Support (v 2.18.0) | 2023-07-06T16:40:17.655Z | MongoDB Driver EOL or Support (v 2.18.0) | 459 |
[
"compass",
"connecting"
] | [
{
"code": "",
"text": "Getting error on my node server - Server selection timed out after 30000 ms. Similarly not able to access Db from compass, ssh and the website\nScreenshot 2023-03-25 at 2.53.55 PM2588×1110 166 KB\n",
"username": "Suraj_Jorwekar"
},
{
"code": "",
"text": "is 2/3 of your replicas also down? im facing the same issue",
"username": "Darren_Zou"
},
{
"code": "",
"text": "The issue automatically got resolved after 30 mins",
"username": "Suraj_Jorwekar"
},
{
"code": "",
"text": "In the future, if you have this issue go straight to opening a support ticket.",
"username": "Brock"
},
{
"code": "",
"text": "I have the same problem right now! Mongodb Website shows my db is down!",
"username": "Saeid_Mohadjer"
}
] | IMPORTANT - MongoDB Cluster down | 2023-03-25T09:24:47.299Z | IMPORTANT - MongoDB Cluster down | 1,221 |
|
null | [
"crud"
] | [
{
"code": "db.users.updateMany(\n <filter>,\n <update>,\n {\n limit : 100\n }\n);\n",
"text": "Hi\nI want to limit the number of document updates in one command.for examplehttps://jira.mongodb.org/browse/SERVER-55967",
"username": "Abolfazl_Ziaratban"
},
{
"code": "",
"text": "Check this link.",
"username": "Ramachandra_Tummala"
},
{
"code": "tmp{\n \"_id\" : 1,\n \"name\" : \"A\",\n},\n{\n \"_id\" : 2,\n \"name\" : \"B\",\n},\n{\n \"_id\" : 3,\n \"name\" : \"C\",\n},\n{\n \"_id\" : 4,\n \"name\" : \"D\",\n},\n{\n \"_id\" : 5,\n \"name\" : \"E\",\n},\n{\n \"_id\" : 6,\n \"name\" : \"F\",\n},\n{\n \"_id\" : 7,\n \"name\" : \"G\",\n}\n3 Time ****************************** [id of docs modified]\n |SV1 -----[Update CMD]------------> 1,2,4\n Requests|SV2 ----------[Update CMD]-------> 5\n |SV3 -------[Update CMD]----------> 3,6,7\n",
"text": "Thanks Ramachandra_Tummala.\nThis is not my answer but this post is near.\nactually , i want update only 100 document in one command in php app.Like Limit clause in MariaDBFor example:\ni have 7 documents in tmp collection and have 3 server that want connect to the MongoDB server for run that command(one update command).In a hypothesis concurrency (limit of update is 3):",
"username": "Abolfazl_Ziaratban"
},
{
"code": "",
"text": "Do you have not any other solution?",
"username": "Abolfazl_Ziaratban"
},
{
"code": "",
"text": "Please vote … Hi\nI want to limit the number of document updates in one command.\n\nfor example\n\ndb.users.updateMany(\n ,\n ,\n {\n limit : 100\n }\n);\n\n\nhttps://www.mongodb.com/community/forums/t/how-to-limit-the-number-of-document-updates/102204/3",
"username": "Abolfazl_Ziaratban"
},
{
"code": "",
"text": "These code doesn’t works\nUpdate many doesn’t work wit limit",
"username": "harsh_jaiswal"
},
{
"code": "db.getCollection(\"Test\").aggregate([\n{\n $sort:{\n name:1\n }\n},\n{\n $limit:3\n},\n{\n $addFields:{\n active:true\n }\n},\n{\n $merge:{\n into:'Test',\n on:'_id',\n whenMatched:'merge'\n }\n}\n])\n",
"text": "Could try something like this:Mongo playground: a simple sandbox to test and share MongoDB queries onlineUse the $merge operator and have a limit in the pipeline:Edit: 4.4 introduced the merge into same collection which is needed.MongoDB on-demand materialized view, SELECT INTO",
"username": "John_Sewell"
}
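For readers landing here without MongoDB 4.4+ (or who prefer plain update commands), a common two-step workaround is sketched below. It is not from the original thread, and the collection, filter, and update are assumptions. Note it is not atomic: with several clients running concurrently, two of them can still pick overlapping documents between the find and the update, which is exactly the gap the linked SERVER-55967 request is about.

```javascript
// Hypothetical mongosh-style sketch: update at most 100 matching documents
// per run by collecting their _ids first, then targeting exactly those.
const ids = db.users
  .find({ active: false }, { _id: 1 }) // filter and projection are assumptions
  .limit(100)
  .toArray()
  .map(doc => doc._id);

db.users.updateMany(
  { _id: { $in: ids } },      // only the 100 selected documents
  { $set: { active: true } }  // example update
);
```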
] | How to limit the number of document updates? | 2021-04-09T16:22:00.247Z | How to limit the number of document updates? | 20,993 |
[] | [
{
"code": "",
"text": "\nimage1578×204 7.69 KB\n\n31708×904 209 KB\n\n41069×811 129 KB\n\n21402×349 18.8 KB\nbecause a problem occurred\nI used the script as instructed, but it doesn’t work.\nWhat part am I missing?please help me thank you",
"username": "Park_49739"
},
{
"code": "",
"text": "Hi @Park_49739 -Thanks for using Relational Migrator, and sorry to hear you are having problems. It looks like this sync job may have failed for some other reason, if the script is not having the desired effect. Can you send the text of any error messages you see in the Issues list below the red banner?thanks\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "\nimage789×265 4 KB\nAs you said, the first problem was solved.\nHowever, the next time I had the same problem as above. What can i check?",
"username": "Park_49739"
},
{
"code": "",
"text": "Did it successfully migrate some data before failing? The timeout exception could be the result of poor network connectivity between Relational Migrator and your MongoDB instance, or it could be that your MongoDB instance is underpowered. In general it’s a good idea to run Relational Migrator in a network location that is as close as possible to your source and destination databases.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "As @tomhollander mentioned, this might be related to another reason. As the read times out can you please check the following:Go back to the Data Migration pane and click on create new sync job.\nThe connection data of you last connection should be filled in already, please verify and provide the correct password and click on test connection\nIf this fails, you need to check the connection string and/or credentials.\nimage560×782 32.9 KB\n\nI this is the window I am referring to.Best Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "It was a network problem. All issues were resolved and synchronization confirmed.Thanks for your help.",
"username": "Park_49739"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Relational Migrator it does not work | 2023-07-07T02:09:45.960Z | Relational Migrator it does not work | 638 |
|
null | [
"swift"
] | [
{
"code": "",
"text": "We have a Mac app that relies on Realm and Device Sync. Users are authenticated via email/password. The app’s customers are enterprises that pay for a certain number of user accounts (one account for each employee, etc.)To combat credential-sharing, we’d like to impose a condition such that when a user logs in on one Mac, any active sessions on other devices are automatically logged out. This way, a customer can’t purchase 3 user accounts and share them across 17 employees, for instance. (This app is designed such that online access is required—the app will gracefully shutdown if disconnected from Device Sync.)I know the Atlas APIs have a “custom user authentication” flow and that I can list all users and all devices for a user. But before I dive into implementing all that, is there a simple way in the Realm Swift SDK to say: “Log this user in, and terminate any other sessions he might already have running”?",
"username": "Bryan_Jones"
},
{
"code": "/logout",
"text": "Hi @Bryan_Jones,The Realm SDKs don’t have this functionality built in, but you can accomplish something similar using the App Services Admin API /logout endpoint which will revoke all sessions for the given user ID. You would likely want to call this before login since it will otherwise revoke the session that was just logged in.",
"username": "Kiro_Morkos"
},
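A rough Node.js sketch of that flow follows. Everything here (the base URL, the login route for programmatic API keys, and the HTTP verb on the logout endpoint) should be verified against the current App Services Admin API documentation; treat the details as assumptions for illustration.

```javascript
// Hypothetical sketch: exchange an API key pair for an admin token, then
// revoke every session for a given App Services user id.
const BASE = 'https://realm.mongodb.com/api/admin/v3.0'; // assumed base URL

async function revokeSessions({groupId, appId, userId, publicKey, privateKey}) {
  // 1. Authenticate with a programmatic API key pair.
  const loginRes = await fetch(`${BASE}/auth/providers/mongodb-cloud/login`, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({username: publicKey, apiKey: privateKey}),
  });
  const {access_token} = await loginRes.json();

  // 2. Revoke all sessions for the user (verb assumed to be PUT).
  await fetch(`${BASE}/groups/${groupId}/apps/${appId}/users/${userId}/logout`, {
    method: 'PUT',
    headers: {Authorization: `Bearer ${access_token}`},
  });
}
```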
{
"code": "",
"text": "Thanks! I did see that endpoint, but it takes the ID of a user. Is there a simple way to look up that ID by email address?At the app’s log-in screen, I’ll have the user’s email and password. To revoke their other sessions I need the associated userID. There are only two ways I see to get that:Log the user in, then inspect the user metadata to get the id. Revoke all sessions using the ID, then log in AGAIN. (Not great.)Use the endpoint to dump ALL users and then enumerate them until I find the one with a given email. (Also not great—potentially many users, paginated responses, etc.)Is there a cleaner way?",
"username": "Bryan_Jones"
},
{
"code": "",
"text": "Unfortunately there’s no way currently to programmatically look up a user from their auth provider metadata. Your best bet for now would be to do the first option (which I recognize is not great), or alternatively you could use custom user data that lives in your cluster to lookup the user ID for a given email address.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "You can also feel free to upvote this feature request which may help with your use case.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Just spit balling… what if you have a collection dedicated to storing user data including fields like the user id, email address, etc at the point of registration. Inserts to the collection could be done via an auth create trigger whenever a new user registers.When a login happens, you could call a function from the SDK sending the email address as a function argument before the login happens in the sdk. The function can do a query to look up the email address in the user collection and find the id before using the logout API within the function itself.",
"username": "Mansoor_Omar"
},
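A sketch of what such an Atlas Function could look like is below. The service, database, and collection names, the shape of the stored user document, and the idea of delegating the actual revocation to a separate revokeSessions function are all assumptions for illustration.

```javascript
// Hypothetical Atlas Function sketch: map an email address to a user id via a
// collection maintained by an auth-create trigger, then revoke that user's sessions.
exports = async function (email) {
  const users = context.services
    .get('mongodb-atlas')            // assumed linked data source name
    .db('myapp')                     // assumed database
    .collection('user_custom_data'); // assumed collection written by the trigger

  const userDoc = await users.findOne({ email });
  if (!userDoc) {
    return { revoked: false, reason: 'no user found for that email' };
  }

  // Delegate to another function that calls the Admin API logout endpoint
  // (see the earlier sketch); the function name is an assumption.
  await context.functions.execute('revokeSessions', userDoc.userId);
  return { revoked: true };
};
```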
{
"code": "",
"text": "I considered that, but the docs for calling functions seem to require that a user first be logged into the Realm:",
"username": "Bryan_Jones"
},
{
"code": "",
"text": "That’s true, perhaps then a call to a https endpoint that runs the function?\nIt still requires user auth but this can be a server API key",
"username": "Mansoor_Omar"
}
] | Limit User Credentials to One Device at a Time | 2023-07-06T04:53:17.752Z | Limit User Credentials to One Device at a Time | 663 |
null | [
"queries",
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": " for(let i=0; i<users.length; i++){\n let usr = await User.findOneAndUpdate({usuarioID: users[i].usuarioID}, {$push: {broadcast_message: {id: idNotificacion._id, seen: false}}} )\n\n if(usr === null ){\n \n users[i].broadcast_message = [{id: idNotificacion._id, seen: false}]\n const usuarioID = await User.create(users[i]);\n }\n }\n}\nconst UserSchema = new mongoose.Schema({\n\t_id,\n message_broadcast: [{ \n id: {\n type: mongoose.Schema.Types.ObjectId ,\n ref: 'BroadcastNotification', //another collection, containing only Notification Messages.\n required: true\n },\n }\n ]}\n}\n",
"text": "Hi community.\nI’m developing a Notification system. I have basically a User schema containing an array of references to Notification messages.The server sends to my API endpoint an big list of Users and the message to notify them.\nI want to add the Message ID to each User’s array if the user exists, or create it if the user doesn’t exist.Since I am a new comer to the NoSQL world, it is taking me time to realize the best way of doing it and not fall into a Big O^N problem.The User collection is expected to have more than 100K entries.\nRight now my algorithm is as follows:\nNew broadcast notification arrives\nI query mongo to check if the user exists.\nIf it does, I update the array\nIf it doesn’t I create the new User and add the notification to its array.\nProblem: The more users I add to the db, the longer it takes to query for existing users.\nThat is the Big O^N problem.Is there any smart or clever solution? I am not really proficient at NoSQL yet.This is the short version of the schema:",
"username": "Maxi_dziewa"
},
{
"code": "",
"text": "Is there an index on the collection on field usuarioID? And take a look at upserts which I believe should do the kind of thing you want.",
"username": "John_Sewell"
},
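For readers following along, here is a sketch of both suggestions together; the field and collection come from the posts above, but the Mongoose model shape is simplified and partly assumed.

```javascript
// Hypothetical sketch: index the field used in the filter and let the update
// create the document when it does not exist yet (upsert).
const mongoose = require('mongoose');

const UserSchema = new mongoose.Schema({
  usuarioID: { type: String, index: true }, // index the lookup field
  broadcast_message: [{ id: mongoose.Schema.Types.ObjectId, seen: Boolean }],
});
const User = mongoose.model('User', UserSchema);

// One call per user: push the notification, or insert the user if missing.
async function addNotification(usuarioID, notificationId) {
  await User.findOneAndUpdate(
    { usuarioID },
    { $push: { broadcast_message: { id: notificationId, seen: false } } },
    { upsert: true }
  );
}
```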
{
"code": "const bulkOperations = usuarios.map( (usuario : any) => {\n return {\n updateOne: {\n filter: { usuarioID: usuario.usuarioID },\n update: { $push: {mensajes_broadcast: idNotificacion } },\n upsert: true\n }\n };\n});\n await Usuario.bulkWrite(bulkOperations)\n}\n",
"text": "Thats it!\nThe index was by default on _id, but I was using userID for my search.\nI just added an index on userID and that was it.\nJust inserted over 70K documents in 17 seconds, and 30K in less than 10 seconds. More than enough to move forward.Thanks again John!",
"username": "Maxi_dziewa"
},
{
"code": "",
"text": "You could also look at doing an upsert in a bulk update…",
"username": "John_Sewell"
},
{
"code": "",
"text": "I just saw you are…my reading skills failed there!",
"username": "John_Sewell"
},
{
"code": "",
"text": "You could have a look at the bulk options though, if your updates don’t HAVE to be in order you could set the options for this…which could improve performance.Documentation for mongodb",
"username": "John_Sewell"
},
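To illustrate that last suggestion concretely (a sketch; whether unordered execution is acceptable depends on the application), the bulkWrite from the earlier post can be made unordered by passing an option:

```javascript
// Hypothetical sketch: unordered bulk writes let the server process operations
// without preserving order or stopping at the first error, which can be faster.
await Usuario.bulkWrite(bulkOperations, { ordered: false });
```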
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to create it if it doesn't exist or update it if it does exist? | 2023-07-06T18:00:22.831Z | How to create it if it doesn’t exist or update it if it does exist? | 579 |
null | [] | [
{
"code": "",
"text": "The flowcontrol document (returned db.runCommand({serverStatus: 1}) for one of my mongo deployments is reporting a value of 100 for the targetRateLimit value. On other mongo deployments the value is 1,000,000,000.How do I set the targetRateLimit ? Is it as simply as updating the document in the database?Thanks.",
"username": "Charles_Clayton"
},
{
"code": "",
"text": "Maybemongod --setParameter flowControlTargetLagSeconds=20",
"username": "Kobe_W"
},
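Pulling the two replies together: targetRateLimit is derived by the flow control mechanism rather than set directly, so the knob to turn is the lag target (or disabling flow control). A mongosh sketch follows, assuming these parameters are runtime-settable on your server version; the 20-second value is just the example from the previous post.

```javascript
// Hypothetical mongosh sketch, run against the primary.

// Inspect the current flow control state.
db.adminCommand({ serverStatus: 1 }).flowControl;

// Raise the target lag at runtime instead of restarting mongod;
// targetRateLimit is recalculated from this setting.
db.adminCommand({ setParameter: 1, flowControlTargetLagSeconds: 20 });

// Flow control can also be disabled entirely if required.
// db.adminCommand({ setParameter: 1, enableFlowControl: false });
```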
{
"code": "",
"text": "That change is being put through the approvals process.",
"username": "Charles_Clayton"
}
] | Flow Control Status - Increase targetRateLimit value | 2023-07-05T20:00:44.553Z | Flow Control Status - Increase targetRateLimit value | 516 |
null | [
"aggregation",
"queries",
"node-js",
"crud",
"change-streams"
] | [
{
"code": "mongodbwtimeoutjfsyncwtimeoutMSjournalsslCRLcrltlsCertificateFiletlsCertificateKeyFilecertcatlsCAFilesslCAstringcrlsslCRLstringcerttlsCertificateKeyFilesslCertstringkeytlsCertificateKeyFilesslKeystringpassphrasetlsCertificateKeyFilePasswordsslPassstringrejectUnauthorizedtlsAllowInvalidCertificatessslValidatebooleanincludeResultMetadatafindOneAnd...trueModifyResultfalsenullfindOneAndDeletefindOneAndUpdatefindOneAndReplace// With a document { _id: 1, a: 1 } in the collection\nawait collection.findOneAndDelete({ a: 1 }, { includeResultMetadata: false }); // returns { _id: 1, a: 1 }\nawait collection.findOneAndDelete({ a: 2 }, { includeResultMetadata: false }); // returns null\nawait collection.findOneAndDelete({ a: 1 }, { includeResultMetadata: true }); // returns { ok: 1, lastErrorObject: { n: 1 }, value: { _id: 1, a: 1 }}\nchangeStreamPreAndPostImages$changeStreamSplitLargeEventawait db.createCollection('test', { changeStreamPreAndPostImages: { enabled: true }});\nconst collection = db.collection('test');\nconst changeStream = collection.watch([{ $changeStreamSplitLargeEvent: {} ], {\n fullDocumentBeforeChange: 'required'\n});\n\nfor await (const change of changeStream) {\n console.log(change.splitEvent); // If changes over 16MB: { fragment: n, of: n }\n}\nCollectionconst indexes = await collection.listSearchIndexes().toArray(); // produces an array of search indexes\nawait collection.createSearchIndex({ name: 'my-index', definition: < index definition > } ); \nawait collection.updateSearchIndex('my-index', < new definition >);\nawait collection.dropSearchIndex('my-index');\nbsonlistDatabasesnameOnlylistDatabasesnameOnlydb.admin().listDatabases({ nameOnly: true });\n// [\n// { name: 'local' },\n// { name: 'movies' },\n// ...\n// ]\nsaslprepsaslprepsaslprepTypeErrorTypeErrorsaslprepmapconst cursor = collection.find({ name: 'john doe' }).map(({ name }) => name);\nfor await (const document of cursor) {\n console.error(document); // only prints the `name` field from each document\n}\nCursor.mapconst cursor = collection.find({ name: 'john doe' }).map(() => {\n throw new Error('oh no! error here'); // \n});\nawait cursor.next(); // process crashes with uncaught error\ntransformCursor.hasNext()Cursor.next()const cursor = collection.find({ name: 'john doe' }).map((document) => document.name);\nwhile (await cursor.hasNext()) { // this transforms the first document in the cursor once\n\tconst doc = await cursor.next(); // the second document in the cursor is transformed again\n}\nCursor.hasNexthasNextmongodb",
"text": "The MongoDB Node.js team is pleased to announce version 5.7.0 of the mongodb package!wtimeout, j, and fsync options have been deprecated, please use wtimeoutMS and journal instead.In an effort to simplify TLS setup and use with the driver we’re paring down the number of custom options to the ones that are common to all drivers. This should reduce inadvertent misconfiguration due to conflicting options.The legacy “ssl-” options have been deprecated, each has a corresponding “tls-” option listed in the table below (except for sslCRL, you may directly use the Node.js crl option instead). tlsCertificateFile has also been deprecated, please use tlsCertificateKeyFile or pass the cert directly to the MongoClient constructor.In addition to the common driver options, the Node.js driver also passes through Node.js TLS options provided on the MongoClient to Node.js’ tls.connect API, which may be convenient to reuse with other Node.js APIs.This option defaults to true, which will return a ModifyResult type. When set to false, which will\nbecome the default in the next major release, it will return the modified document or null if nothing matched.\nThis applies to findOneAndDelete, findOneAndUpdate, findOneAndReplace.When change stream documents exceed the max BSON size limit of 16MB, they can be split into multiple fragments in order to not error when sending events over the wire. In order to enable this functionality, the collection must be created with changeStreamPreAndPostImages enabled and the change stream itself must include an $changeStreamSplitLargeEvent aggregation stage. This feature requires a minimum server version of 7.0.0.Example:This PR adds support for managing search indexes (creating, updating, deleting and listing indexes). The new methods are available on the Collection class.Take a look at the bson package’s release notes!Unlike our other compression mechanisms snappy was loaded at the module level, meaning it would be optionally imported whether or not the driver was configured to use snappy compression. Snappy is now aligned with our other optional peer dependencies and is only loaded when enabled.This allows users who do not use these features to not have them installed. Users who do use these feature will now have them lazy loaded upon first use.The listDatabases API exposes the nameOnly option which allows you to limit its output to only the names of the databases on a given mongoDB deployment:Prior to this fix, the option was not being set properly on the command, so the output was always given in full.Thanks to @redixhumayun for submitting this fix!saslprep is an optional dependency used to perform Stringprep Profile for User Names and Passwords for SCRAM-SHA-256 authentication. The saslprep library breaks when it is bundled, causing the driver to throw TypeErrors.This release includes a fix that prevents the driver throwing TypeErrors when attempting to use saslprep in bundled environments.The cursor API provides the ability to apply a map function to each document in the cursor:Starting in version 4.0 of the driver, if the transform function throws an error, there are certain scenarios where the driver does not correctly catch this error and an uncaught exception is thrown:This release adds logic to ensure that whenever we transform a cursor document, we handle any errors properly. 
Any errors thrown from a transform function are caught and returned to the user.Version 4.0 introduced a bug that would apply a transform function to documents in the cursor when the cursor was iterated using Cursor.hasNext(). When combined with Cursor.next(), this would result in transforming documents multiple times.This release removes the transform logic from Cursor.hasNext, preventing cursor documents from being transformed twice when iterated using hasNext.We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "Warren_James"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB NodeJS Driver 5.7.0 Released | 2023-07-06T20:28:46.837Z | MongoDB NodeJS Driver 5.7.0 Released | 1,197 |
null | [
"flexible-sync"
] | [
{
"code": "[\n {\n \"name\": \"admin\",\n \"apply_when\": {\n \"%%user.custom_data.isTeamAdmin\": true\n },\n \"document_filter\": {\n \"read\": {\n \"team\": \"%%user.custom_data.team\"\n },\n \"write\": {\n \"team\": \"%%user.custom_data.team\"\n }\n },\n \"read\": true,\n \"write\": true\n },\n {\n \"name\": \"user\",\n \"apply_when\": {},\n \"document_filters\": {\n \"read\": {\n \"team\": \"%%user.custom_data.team\"\n },\n \"write\": {\n \"owner_id\": \"%%user.id\"\n }\n },\n \"read\": true,\n \"write\": true\n }\n]\n{\n owner_id: 1,\n team: \"A\"\n}\n",
"text": "Hi,So I was reading the flexible sync permissions guide and in the Tiered Privileges I was wondering how you would stop users from inserting documents for other teams.In the Tiered Privileges section there’s an admin and user role. The admin can read/write to the team they belong to, while the user role can only write to what they own and read from the team they belong to.My question is what stops a user role from updating the document to have a team they do not belong to?i.e\nThe doc hasIf the user has owner_id:1 and belongs to team A what stops them from updating the team field in the doc from “A” to “B”? I guess you can have field level permissions so that the user role can’t update the Team field but then how does it get set to Team: “A” in the first place?Thanks,",
"username": "Tam_Nguyen1"
},
{
"code": "document_filters.readdocument_filters.write{\n \"write\": {\n \"$and\": [\n { \"owner_id\": \"%%user.id\" },\n { \"team\": \"%%user.custom_data.team\" }\n ]\n }\n}\n",
"text": "Hi @Tam_Nguyen1,Flexible sync prevents you from writing what you cannot read, so in this case the document_filters.read would prevent a user from writing to a document with a team that does not match the one in their custom user data. It would not, however, prevent a user from moving a document out of their team with that scheme. If you wanted to enforce that, you could add an additional predicate to the document_filters.write:",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Makes sense to me! Thanks.",
"username": "Tam_Nguyen1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Flexible sync security | 2023-07-06T16:42:14.005Z | Flexible sync security | 598 |
null | [
"java",
"atlas-cluster",
"spring-data-odm"
] | [
{
"code": ".**pem**application.propertiesspring.data.mongodb.uri=mongodb+srv://cluster0.uzmiipr.mongodb.net/demo?authSource=%24external&authMechanism=MONGODB-X509&ssl=true&tls=true&tlsCertificateKeyFile=classpath:CX509-cert-9188001417325818856.pem\n Command failed with error 8000 (AtlasError): 'certificate validation failed' on server\n",
"text": "I am trying to connect MongoDB from a Spring Boot application using an X.509 certificate downloaded from the MongoDB Atlas site. I have placed the .**pem** file in the resource directory. Additionally, I have added the following URI to the application.properties file:However, when I attempt to perform any operation, I receive the following error message:I have also tried connecting to the database using a username and password, and in that case, I am able to connect and access the database successfully. I would appreciate your guidance on resolving this issue.",
"username": "janarthanan_s"
},
{
"code": "tlsCertificateKeyFile",
"text": "Hi @janarthanan_sThe Java driver doesn’t support the tlsCertificateKeyFile query parameter. You have to add the certificate to the system key store. See https://www.mongodb.com/docs/drivers/java/sync/current/fundamentals/connection/tls/#configure-the-jvm-key-store for details.Regards,\nJeff",
"username": "Jeffrey_Yemin"
}
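For anyone landing here, a rough sketch of what "add the certificate to the system key store" can look like for the Atlas-issued PEM. The file names, alias and password below are placeholders, and the exact steps depend on your setup; this is an assumption-laden outline, not a verified Spring Boot recipe.

# Convert the Atlas PEM (client certificate + private key) into a PKCS#12 key store
openssl pkcs12 -export \
  -in X509-cert-9188001417325818856.pem \
  -out mongo-client.p12 \
  -name mongo-client \
  -password pass:changeit

# Point the JVM at that key store when starting the application, and remove
# tlsCertificateKeyFile from the connection string (the driver will then pick
# the client certificate up from the default SSLContext)
java -Djavax.net.ssl.keyStore=/path/to/mongo-client.p12 \
     -Djavax.net.ssl.keyStoreType=pkcs12 \
     -Djavax.net.ssl.keyStorePassword=changeit \
     -jar app.jar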
] | MongoDB Atlas spring boot X.509 certificate based connection issue | 2023-07-06T08:56:17.156Z | MongoDB Atlas spring boot X.509 certificate based connection issue | 786 |
null | [
"compass",
"golang"
] | [
{
"code": "incomplete read of message header: context deadline exceeded \ncursorErr := dbCursor.All(childctx, result); \n",
"text": "I have a collection with 763 documents which I fetch 200 at a time using cursor based pagination.\nI am using MongoDB Find Function and then using cursor.All to unmarshall the response into their respective data structures.\nThe first 3 pages (200*3 = 600) work smoothly however on the last page I see a context deadline exceeded issue.Upon further inspection it’s coming from the following line of codeThis happens when the limit is set as 200, however if I change it to 163 (the exact number of documents on the last page) it works perfectly.I tried the exact same query on Compass and it works fine.",
"username": "Hrishikesh_Thakkar"
},
{
"code": "incomplete read of message header: context deadline exceeded \nchildctx",
"text": "@Hrishikesh_Thakkar Thank you for the question! It is possible that a context deadline is exceeded after the driver receive the server response but before it can finish reading it, which would result in the error:In this particular case, the underlying net error would be “i/o timeout”. Does increasing the deadline on the childctx context resolve the issue?",
"username": "Preston_Vasquez"
},
{
"code": "",
"text": "Hi @Preston_Vasquez, I tried doing that but the issue isn’t with the timeout right? Also I have set the timeout as 60 seconds from 30 and the same issue persists.I observed that if there’s 163 documents left to query and the limit set is 163 that works. However even changing the limit to 164 causes the context deadline exceeded issue",
"username": "Hrishikesh_Thakkar"
},
{
"code": "",
"text": "@Hrishikesh_Thakkar This error indicates that a context deadline has exceeded, the context times out the io causing a network error. Here is an example in gist that will occasionally reproduce the issue (you may have to adjust the deadline). In the case of this example, the solution is to extend the deadline. Does your implementation look similar?",
"username": "Preston_Vasquez"
},
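For reference, a minimal Go sketch of the pattern described above, giving the whole find-and-drain-cursor step its own, longer deadline. The URI, database and collection names are placeholders.

package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	client, err := mongo.Connect(context.Background(),
		options.Client().ApplyURI("mongodb://localhost:27017")) // placeholder URI
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	coll := client.Database("test").Collection("items") // placeholder names

	// The deadline must cover the initial find *and* every getMore that cursor.All issues.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cur, err := coll.Find(ctx, bson.D{}, options.Find().SetLimit(200))
	if err != nil {
		log.Fatal(err)
	}

	var results []bson.M
	if err := cur.All(ctx, &results); err != nil {
		log.Fatal(err)
	}
	log.Printf("fetched %d documents", len(results))
}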
{
"code": "filter_criteria = {'address': '0x9c8ff314c9bc7f6e59a9d9225fb22946427edc03', '_id':{'$gt': '10x9c8ff314c9bc7f6e59a9d9225fb22946427edc03639'}} # Example filter\nsort_criteria = [('_id', pymongo.ASCENDING)] # Example sort criteria\n\nresult = collection.find(filter_criteria, cursor_type=pymongo.CursorType.NON_TAILABLE).limit(200).sort(sort_criteria)\n\nprint(result)\n# Print the fetched data\nfor document in result:\n print(\"Printing Id\")\n print(document[\"_id\"])\n\n# Close the MongoDB connection\nclient.close()\n",
"text": "Hi @Preston_Vasquez actually it does, thank you very much for sharing the gist. My only concern is that even without having a context deadline (i.e using context.Background()) there’s still no data returned. I even created a Python Implementation of the same query and its returning the first batch of 101 and after that the service just hangs. There’s no prompt. I waited for a couple of minutes.\nSharing the Python snippet as wellNot sure if this helps but when I exported the sample set to a separate collection the queries worked as expected. It seems that the issue arises when they are in larger collections.",
"username": "Hrishikesh_Thakkar"
}
] | Context Deadline Exceeded Issue Cursor All Golang | 2023-06-30T07:55:40.535Z | Context Deadline Exceeded Issue Cursor All Golang | 1,478 |
[
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi I.m using Jammy (22.04.2 LTS) OS and whenever I’m trying to execute mongosh or mongod command i’m getting core-dumped error. Several times tried reinstalling the packages but still the issue persists. Can anyone help in fixing this.\n\nservice1075×210 67.5 KB\n",
"username": "Tushar_Jain1"
},
{
"code": "",
"text": "ILL means illegal instructions\nCheck whether mongodb you are installing is supported or not on the OS\nCheck pre installation checks,compatibility matrix,cpu microarchitecture requirements etc",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "used https://repo.mongodb.com/apt/ubuntu/dists/jammy/mongodb-enterprise/7.0/multiverse/binary-amd64/mongodb-mongosh_1.9.1_amd64.deb this package after uninstalling mongodb following How To Uninstall MongoDB | MongoDB | MongoDB page. Seems issue is fixed as of now.",
"username": "Tushar_Jain1"
}
] | Core Dumped error while trying to connect to documentDB | 2023-07-06T07:47:06.260Z | Core Dumped error while trying to connect to documentDB | 427 |
|
null | [
"connecting"
] | [
{
"code": "connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&connectTimeoutMS=100000&gssapiServiceName=mongodb&socketTimeoutMS=100000\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection timed out :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\niredTiger message\",\"attr\":{\"message\":\"[1688542311:637059][10933:0x7f3a7403f700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 341775, snapshot max: 341827 snapshot count: 3, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2023-07-05T07:32:02.744+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:46572\",\"connectionId\":19534,\"connectionCount\":77}}\n{\"t\":{\"$date\":\"2023-07-05T07:40:00.570+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:56164\",\"connectionId\":19535,\"connectionCount\":78}}\n{\"t\":{\"$date\":\"2023-07-05T08:40:17.003+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn20\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":1854086}}\n{\"t\":{\"$date\":\"2023-07-05T08:40:17.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn20\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:58894\",\"connectionId\":20,\"connectionCount\":78}}\n# file size\nLimitFSIZE=infinity\n# cpu time\nLimitCPU=infinity\n# virtual memory size\nLimitAS=infinity\n# open files\nLimitNOFILE=64000\n# processes/threads\nLimitNPROC=64000\n# locked memory\nLimitMEMLOCK=infinity\n# total threads (user+kernel)\nTasksMax=infinity\nTasksAccounting=false\n# mongod.conf\n\n\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n\n\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\ndb version v4.4.5\nBuild Info: {\n \"version\": \"4.4.5\",\n \"gitVersion\": \"ff5cb77101b052fa02da43b8538093486cf9b3f7\",\n \"openSSLVersion\": \"OpenSSL 1.1.1n 15 Mar 2022\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"debian10\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\nMongoDB shell version v4.4.5\nBuild Info: {\n \"version\": \"4.4.5\",\n \"gitVersion\": \"ff5cb77101b052fa02da43b8538093486cf9b3f7\",\n \"openSSLVersion\": \"OpenSSL 1.1.1n 15 Mar 2022\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"debian10\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n",
"text": "We have a standalone MongoDB server and for some reason we are seeing that Mongodb is no longer accepting any new connections the weird part is thatI’m attempting to make a Mongodb connectionmongo ‘mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb&socketTimeoutMS=100000&connectTimeoutMS=100000’Received TimeoutI see the following resultbut nothing is visible on the mongodb.logThe only thing is visible are …I can see that Connection Count is 78 but I don’t know the max value for it.I suspect this could be because of Max Open Files but I doubt it because of the following\na) No such error is visible on the logs\nb) The clients are timing out instead of returning failure immediately.\nc) the systemd unit has the value set to 64000I have a feeling if I restart the server everything will become normal but we would like to know\nwe can prevent such problem in future.And I was right after the restart things did work Also, I’m not sure what the max open files limit for Mongodb user from systemclt unit it seems like 64000Mongo db version we haveMongo shell versionI can confirm with systemd unit file the File limit is set to 640000",
"username": "Virendra_Negi"
},
{
"code": "",
"text": "Did you see anything useful in driver logs? it might be in the driver side if the connection request never reaches the server side. e.g. the driver has run out of connections in the pool and a new request has to wait for one. (and eventually times out)",
"username": "Kobe_W"
},
{
"code": "",
"text": "@Kobe_W : problem persisted on every service that was trying to establish connection with mongoDBAll services or commands were timing out with no explanation on mongo logs.",
"username": "Virendra_Negi"
},
{
"code": "",
"text": "Did you try to set the logLevel to something higher? The higher the number (1-5) the more logs you see. This might show a hidden message or something else going on in the future. Just note it generates a lot of logs the higher you go so don’t keep it on all the time. Only if you are seeing a problem.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "@tapiocaPENGUIN I believe even for that mongo shell has to connect which was not happening at all.As this was a production server I could not have let this happen for too long as it was affecting our other services and I had to restart the MongoDB server which solved the issue.My intention was to understand if there is anyone that has seen issue something similar which could help us in future if we are affected with the problem again.",
"username": "Virendra_Negi"
}
] | Mongo no longer accepting the connection | 2023-07-05T09:15:55.044Z | Mongo no longer accepting the connection | 770 |
null | [
"python",
"production"
] | [
{
"code": "",
"text": "We are pleased to announce the 4.4 release of PyMongo - MongoDB’s Python Driver. This release adds support for MongoDB 7.0, experimental support for Queryable Encryption range queries, improves support for type-checking, updates our pymongocrypt min version to 1.6.0, and improves the performance of bson encoding.See the changelog for a high level summary of what’s new and improved or see the 4.4 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongo 4.4 documentation \nSource: GitHub - mongodb/mongo-python-driver at 4.4.0Thank you to everyone who contributed to this release!",
"username": "Shane"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | PyMongo 4.4 Released | 2023-06-21T18:10:22.625Z | PyMongo 4.4 Released | 1,181 |
null | [
"java",
"atlas-functions",
"android",
"realm-web",
"app-services-data-access"
] | [
{
"code": " @Override\n public void onBeforeReset(Realm realm) {\n Log.w(\"auth\", \"Beginning client reset for \" + realm.getPath());\n }\n @Override\n public void onAfterReset(Realm before, Realm after) {\n Log.w(\"auth\", \"Finished client reset for \" + before.getPath());\n }\n @Override\n public void onError(SyncSession session, ClientResetRequiredError error) {\n Log.e(\"auth\", \"Couldn't handle the client reset automatically.\" +\n \" Falling back to manual client reset execution: \"\n + error.getErrorMessage());\n // close all instances of your realm -- this application only uses one\n dbApp..close();\n try {\n Log.w(\"auth\", \"About to execute the client reset.\");\n // execute the client reset, moving the current realm to a backup file\n error.executeClientReset();\n Log.w(\"auth\", \"Executed the client reset.\");\n } catch (IllegalStateException e) {\n Log.e(\"auth\", \"Failed to execute the client reset: \" + e.getMessage());\n // The client reset can only proceed if there are no open realms.\n // if execution failed, ask the user to restart the app, and we'll client reset\n // when we first open the app connection.\n Log.e(\"auth\", \"Sync error. Restart the application to resume sync.\");\n }\n // open a new instance of the realm. This initializes a new file for the new realm\n // and downloads the backend state. Do this in a background thread so we can wait\n // for server changes to fully download.\n ExecutorService executor = Executors.newSingleThreadExecutor();\n executor.execute(() -> {\n Realm newRealm = Realm.getDefaultInstance();\n // ensure that the backend state is fully downloaded before proceeding\n /*try {\n dbApp.getSync().getSession(globalConfig).downloadAllServerChanges(10000,\n TimeUnit.MILLISECONDS);\n } catch (InterruptedException e) {\n e.printStackTrace();\n }*/\n Log.w(\"auth\",\"Downloaded server changes for a fresh instance of the realm.\");\n newRealm.close();\n });\n // execute the recovery logic on a background thread\n try {\n executor.awaitTermination(20000, TimeUnit.MILLISECONDS);\n } catch (InterruptedException e) {\n e.printStackTrace();\n }\n }\n })\n",
"text": "I have setup a Realm App and Atlas database correctly. I activated device sync and developer mode and put read and write access to just “true”. I added some documents to some collections with MongoDB Compass.\nNow I try to connect in Android Studio with Java SDK via a user email and password (that I created in the Realm UI). The user gets logged in. Then I initialize Realm etc. and it works. Then I try a symple query on the realm and no data at all gets found. No error, just no data there. It behaves as if the entire realm was empty. I also have Client Reset strategy like this, but it didn’t help. The client just does not get ANY data.The client reset also works but still, I don’t get any data.",
"username": "SirSwagon_N_A"
},
{
"code": "",
"text": "i have the same issue…did you solve it?",
"username": "rouuuge"
},
{
"code": "",
"text": "Hi, can you send a link to your application and/or explain more about what you are doing. Generally when this happens one of a few things are going on:Thanks,\nTyler",
"username": "Tyler_Kaye"
}
] | Realm data not visible for Client Java SDK | 2022-07-28T16:18:11.560Z | Realm data not visible for Client Java SDK | 2,531 |
null | [
"replication"
] | [
{
"code": " [rsBackgroundSync] sync producer problem: 13106 nextSafe(): { $err: \"getMore executor error: UnknownError no details available\", code: 17406 }\n",
"text": "",
"username": "Soni_Singh1"
},
{
"code": "",
"text": "Hey @Soni_Singh1,Thank you for reaching out to the MongoDB Community forums [rsBackgroundSync] sync producer problem: 13106 nextSafe(): { $err: “getMore executor error: UnknownError no details available”, code: 17406 }Based on the error log, it appears that the issue you are experiencing is related to replication. To further address this issue, could you please share the following details:Looking forward to hearing back from you!Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Can any one help for this error | 2023-07-05T13:31:07.818Z | Can any one help for this error | 554 |
null | [
"data-modeling"
] | [
{
"code": "// Students collection\n{\n _id: \"joe\",\n name: \"Joe Bookreader\"\n},\n{\n _id: \"jane\",\n name: \"Jane Shaw\"\n}\n\n// Classes collection\n{\n _id: \"Physics 101\",\n year: \"2023\",\n semester: \"first\"\n professor: \"Dr James Grunfield\"\n location: \"Room 123\"\n students: [\"joe\", \"jane\", ....] // references Students collection _id\n}\n// Classes Attendance collection\n{\n _id: \"Physics - Session 1\",\n class_id: \"Physics 101\" // references the classes collection _id\n date: new ISODate(\"2023-07-03T08:00:00Z\"),\n attendance: [\n {\n student_id: \"jane\", // references Students collection _id\n time_in: new ISODate(\"2023-07-03T08:00:00Z\"),\n time_out: new ISODate(\"2023-07-03T11:00:00Z\")\n },\n {\n student_id: \"joe\", // references Students collection _id\n time_in: new ISODate(\"2023-07-03T08:00:00Z\"),\n time_out: new ISODate(\"2023-07-03T11:00:00Z\")\n },\n ]\n}\n",
"text": "Hi,\nI am trying to create my own schema design and I have searched on Google on how to do this and read the MongoDB documentation to the best of my ability.\nThis is my first MongoDB schema design that is based on what I wanted to do.I want to model my school attendance management system and I should be able to satisfy the following requirements.Create a record or list of students enrolled in my universityCreate a list of classes and the students enrolled per each classRecord the attendance per each class session including the time in and time out of each student.I have developed the following schema design and would like to ask for expert advice if this is optimal.In terms of data query, I would like to do the followingExecute CRUD operations on my studentsExecute CRUD operations on my class collection. Able to add, edit, and delete, students enrolled in the class.View the list of students that joined each class scheduleLet me know what you think of my design. I am very much eager for expert comments, please. Thank you!",
"username": "Nel_Neliel"
},
{
"code": "",
"text": "Any advice, please? Thanks",
"username": "Nel_Neliel"
},
{
"code": "",
"text": "Hey @Nel_Neliel,Welcome to the MongoDB Community forums In terms of data query, I would like to do the followingOverall, your design aligns well with the requirements you’ve mentioned. It will let you efficiently manage student enrollment, track attendance, and perform the required queries. Also, please note that the effectiveness of a schema design also depends on the specific usage patterns and query requirements of your application.You can also consider indexing fields that are frequently used in queries to optimize performance. Additionally, it will be helpful to keep re-evaluating the schema design as the data grows.Also if you’re starting your MongoDB journey, I would recommend the following resources:Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
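To make the indexing suggestion concrete, a possible starting point in mongosh for the queries listed in the question. The collection names follow the original post and are only illustrative; the right indexes depend on the actual query patterns.

// find the classes a given student is enrolled in
db.classes.createIndex({ students: 1 })

// list the sessions of a class, filtered/sorted by date
db.classes_attendance.createIndex({ class_id: 1, date: 1 })

// pull a student's attendance history across sessions
db.classes_attendance.createIndex({ "attendance.student_id": 1 })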
] | Can you comment on my University Schema Design? | 2023-07-03T08:50:16.948Z | Can you comment on my University Schema Design? | 746 |
null | [
"aggregation"
] | [
{
"code": "{\n \"branchId\": \"NLR001\",\n \"monthYear\": [\n {\n \"dateMonthYear\": \"1-01-2023\",\n \"standardId\": [\n {\n \"standardId\": \"UKG\",\n \"stAttendanceStatus\": [\n {\n \"stRollNo\": \"RoleNumber-000\",\n \"attendanceStatus\": \"true\"\n }\n ]\n }\n ]\n }\n ]\n}\n\"dateMonthYear\": \"1-01-2023\"\"2-01-2023\"\"2-01-2023\"\"monthYear.dateMonthYear\"\"1-01-2023\"{ \"branchId\": \"NLR001\" },\n { \"monthYear.dateMonthYear\": \"1-01-2023\" }\n",
"text": "above is my collectionQuestion: Find all students with \"dateMonthYear\": \"1-01-2023\"I added $match as below: I am getting other Dates lists also \"2-01-2023\" and \"2-01-2023\" etc…How to find only collects with \"monthYear.dateMonthYear\": \"1-01-2023\"",
"username": "Mohammed_Ali4"
},
{
"code": "Atlas atlas-b8d6l3-shard-0 [primary] test> db.post233547.aggregate([ {\n... $match: {\n... \"branchId\": \"NLR001\",\n... \"monthYear.dateMonthYear\": {\n... $eq: ISODate(\"2022-12-31T18:30:00.000+00:00\")\n... }\n... }\n... }])\n[\n {\n _id: ObjectId(\"64a6654cfb77c7c470e09843\"),\n branchId: 'NLR001',\n monthYear: [\n {\n dateMonthYear: ISODate(\"2022-12-31T18:30:00.000Z\"),\n standardId: [ { standardId: 'UKG', stAttendanceStatus: [ [Object] ] } ]\n }\n ]\n }\n]\n{\n \"branchId\": \"NLR001\",\n \"dateMonthYear\": \"1-01-2023\",\n \"standardId\": \"UKG\",\n \"attendanceData\": [\n {\n \"stRollNo\": \"RoleNumber-000\",\n \"attendanceStatus\": true\n }\n ]\n}\n",
"text": "Hi @Mohammed_Ali4 and welcome to MongoDB community forums!!I tried to replicate the above json into my local environment and tried to execute the aggregation pipeline stage as:However, I would like to suggest a more efficient schema design to optimize the querying process and avoid potential limitations.\nI would suggest you reconsider the schema design to something like this:which would make the query more proficient and easy.Please reach out if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
}
] | How to retrieve Students with a Specific Date in a Collection | 2023-07-02T14:15:02.583Z | How to retrieve Students with a Specific Date in a Collection | 493 |
[] | [
{
"code": "",
"text": "Hello,I’m trying to delete a single entry in my array of “unions”, it’s marked orange in the screenshot:\nmongodb docs890×721 62.7 KB\nI have tried to delete it with a delete query by using the the embedded documents “_id”. But then the whole toplevel document is beeing deleted aswell. After some searching i know this is just the way MongoDB works.Is there a way i can accomplish this?",
"username": "SaltyPigeon"
},
{
"code": "",
"text": "Hi @SaltyPigeon,You can use an update / updateOne Query in combination with the $pull operator:\nSomething like that:db.“your_collection”.updateOne( { _id: “your_root_id” }, { $pull: { “unions”: { “_id”: “array_id” } }} )For more: https://www.mongodb.com/docs/manual/reference/operator/update/pull/Hope this helps!Greetings,\nNiklas",
"username": "NiklasB"
},
{
"code": "",
"text": "Thanks for the fast reply.This solution seemes to make a lot of sense, i will try this first thing tomorrow!I will let you know if it worked ",
"username": "SaltyPigeon"
},
{
"code": "",
"text": "@NiklasB works like a charm!!Adding the “root_id” of the document did the trick, as i reviewed my old code i actually worked with the $pull operator but the missing root id was the faulty chain here!\nimage769×377 33.5 KB\nThanks for the great help SaltyPigeon ",
"username": "SaltyPigeon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Delete embedded document from array | 2023-07-05T11:55:43.685Z | Delete embedded document from array | 359 |
|
null | [
"node-js",
"connecting",
"atlas-cluster"
] | [
{
"code": "Connection failed Error: queryTxt ETIMEOUT cluster0.49bhjhk.mongodb.net\n at QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:251:17) {\n errno: undefined,\n code: 'ETIMEOUT',\n syscall: 'queryTxt',\n hostname: 'cluster0.49bhjhk.mongodb.net'\n}\n",
"text": "",
"username": "Kainat_Malik"
},
{
"code": "Connection failed Error: queryTxt ETIMEOUT [cluster0.49bhjhk.mongodb.net](http://cluster0.49bhjhk.mongodb.net)\nat QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:251:17) {\nerrno: undefined,\ncode: ‘ETIMEOUT’,\nmongodb://<username>:<password>....",
"text": "Hey @Kainat_Malik,Welcome to the MongoDB Community forums The error message implies a possible SRV lookup failure.Could you try using the connection string from the connection modal that specifies all 3 hostnames instead of the SRV record?To get this, please head to the Atlas UI within the Database Deployments section and follow the below steps:Replace the original connection string you used with the version 2.2.12 or later node.js connection string copied from the above steps and then restart your application.If it returns a different error, please share that error message here.In addition to the above, I would recommend also checking out the Atlas Troubleshoot Connection Issues documentation.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Connection problem with database | 2023-07-06T03:37:07.026Z | Connection problem with database | 717 |
null | [
"java"
] | [
{
"code": "",
"text": "We are using Quarkus which has a dependency on mongodb-crypt. This library is quite huge… around 29MB. Our application is packaged as an Azure Function and this library is half the size of our total application. We would like to keep our application as small as possible. Therefor the question what this library does and are there other options to provide this library? Maybe to split this up in smaller libraries so that Quarkus has options what needs to be implemented?",
"username": "Dominique_Claes"
},
{
"code": "",
"text": "mongodb-crypt is an optional dependency for the Java driver itself, and is only required for applications that are using https://www.mongodb.com/docs/manual/core/queryable-encryption/ or https://www.mongodb.com/docs/manual/core/csfle/.I’m not sure why (or if) Quarkus takes a hard dependency on this library, so it’s possible that there is something in Quarkus itself that relies on it.Regards,\nJeff",
"username": "Jeffrey_Yemin"
}
] | Quarkus - library mongodb-crypt | 2023-06-26T12:50:16.315Z | Quarkus - library mongodb-crypt | 517 |
null | [
"dot-net"
] | [
{
"code": "EmbeddedDocument",
"text": "Hello,I am currently an active user of the MongoDB .NET driver for my development projects. I noticed that the EmbeddedDocument feature is available in BSON but not in the MongoDB .NET driver.This feature would be incredibly useful for working with nested data structures, making the overall development experience smoother and more efficient.Is there a plan to include this feature in an upcoming release of the .NET driver? If so, could you share an approximate timeline for its availability?Thank you very much for your attention to this matter. I appreciate the ongoing effort you put into improving this indispensable tool.Best Regards,",
"username": "C_Steeven"
},
{
"code": "EmbeddedDocumentEmbeddedDocumentSearch",
"text": "Hi, @C_Steeven,Welcome to the MongoDB Community Forums. I understand that you’re asking about EmbeddedDocument support in the .NET/C# Driver.If you mean support for EmbeddedDocument in Atlas Search (CSHARP-4477), then you are correct that it is available via BSON but not as part of our fluent Search API. That work is scheduled for this quarter and should be available in the coming months. Please follow CSHARP-4477 for progress on this feature.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Hi @James,Thank you for your swift and detailed response. You correctly understood that I was referring to EmbeddedDocument support in Atlas Search.I’m glad to hear that work on this feature is ongoing and I look forward to the forthcoming improvements. In the meantime, I have indeed implemented a workaround to utilize EmbeddedDocument, which suffices for my current needs.Nonetheless, I agree with you that a more comprehensive, integrated solution would make the experience much more enjoyable. I will closely follow CSHARP-4477 to keep up to date on the progress.Once again, thank you for your assistance and for the work you do in improving the MongoDB experience in .NET/C#.Best regards,Steeven,",
"username": "C_Steeven"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Availability of EmbeddedDocument Feature for MongoDB .NET Driver | 2023-07-05T13:04:27.982Z | Availability of EmbeddedDocument Feature for MongoDB .NET Driver | 430 |
null | [
"graphql"
] | [
{
"code": "",
"text": "OK so I’ve just added a new field “X” to an object “Y” in my realm schema.The new field now appears in the graphql schema in the web UI. I can go to the Realm (now App Services) web UI, go to GraphQL and use the GraphiQL interface to confirm I can query my new field. It works, fine.When I try to query the new field in my client javascript app using apollo however, I get the error Cannot query field “X” on type “Y”.The only mention of this error I can find in the forums is this one, but I think the solution suggested is for an older version of Realm.Is this a schema caching issue? How to I clear the schema cache in the current latest version of realm?What I’m trying to do is so simple. Am I missing something here?Thanks",
"username": "Shea_Dawson"
},
{
"code": "",
"text": "i have same issue,did you fix it?",
"username": "hou_andy"
}
] | Cannot query field "X" on type "Y" | 2022-07-02T10:10:57.759Z | Cannot query field “X” on type “Y” | 3,099 |
null | [
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": "const convertBalance = async (req, res) => {\n //Get the amount that I want to convert, and the budget that I want to convert to\n const { balanceToConvert, convertTo } = req.body\n if (!balanceToConvert || !convertTo) {\n throw new BadRequest(\"Please provide amount and budget to convert\")\n }\n\n // current budget is the budget I want to convert from\n const currentBudgetId = req.params.id\n\n const currentBudget = await Budget.findOne({\n _id: req.params.id,\n createdBy: req.user.userId\n })\n\n if (!currentBudget) {\n throw new NotFound(\"current budget not found, please select a valid and active budget\")\n }\n\n // get the budget that I want to convert to\n const convertToBudget = await Budget.findOne({\n _id: convertTo,\n createdBy: req.user.userId\n })\n\n if (!convertToBudget) {\n throw new NotFound(\"The budget you wanted to convert to not found, please select a valid and active budget\")\n }\n\n // cut the amount from the budget\n await Budget.findOneAndUpdate({\n _id: req.params.id,\n createdBy: req.user.userId\n },\n {\n balance: currentBudget.balance - balanceToConvert\n },\n {\n new: true,\n runValidators: true\n })\n\n // added that amount to the other budget\n await Budget.findOneAndUpdate(\n {\n _id: convertTo,\n createdBy: req.user.userId\n }, \n {\n balance: convertToBudget.balance + balanceToConvert\n },\n {\n new: true,\n runValidators: true\n }\n )\n \n res.status(StatusCodes.OK).json({ success: true, data: \"amount converted successfully\"})\n}\n",
"text": "Hello everyone. I’m currently learning Express and MongoDB using Mongoose in my apps.I have a case in that I want to convert a balance from one budget to another oneI did it in this essential way:My question is:Thanks",
"username": "Oussama_Louelkadi"
},
{
"code": "",
"text": "At the very least you should put the two updates in a transaction.",
"username": "John_Sewell"
},
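A rough sketch of what that could look like with Mongoose, using $inc so the two balance changes are applied atomically as a pair. Note that transactions require a replica set or Atlas; variable names follow the original handler, and error handling is trimmed for brevity.

const session = await mongoose.startSession();
try {
  await session.withTransaction(async () => {
    // subtract the amount from the source budget
    await Budget.updateOne(
      { _id: currentBudgetId, createdBy: req.user.userId },
      { $inc: { balance: -balanceToConvert } },
      { session }
    );
    // add the amount to the destination budget
    await Budget.updateOne(
      { _id: convertTo, createdBy: req.user.userId },
      { $inc: { balance: balanceToConvert } },
      { session }
    );
  });
} finally {
  await session.endSession();
}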
{
"code": "",
"text": "You could do this with a $merge back into the Budget collection, but be wary of trying to do everything in one call that renders the code a pain to work with or someone else to maintain.\nYou could also do it in one bulk operation, adding multiple updates to one bulk object and sending that to the server:",
"username": "John_Sewell"
}
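And a sketch of the single bulk call mentioned above. It is one round trip to the server, although unlike a transaction the two updates are not atomic as a pair; names again follow the original handler.

await Budget.bulkWrite([
  {
    updateOne: {
      filter: { _id: currentBudgetId, createdBy: req.user.userId },
      update: { $inc: { balance: -balanceToConvert } }
    }
  },
  {
    updateOne: {
      filter: { _id: convertTo, createdBy: req.user.userId },
      update: { $inc: { balance: balanceToConvert } }
    }
  }
]);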
] | Convert balance between two budgets | 2023-07-05T15:19:05.766Z | Convert balance between two budgets | 406 |
null | [
"aggregation"
] | [
{
"code": " Moddulle.aggregate([\n {\n $unwind: \"$listEpreuves\"\n },\n {\n $unwind: \"$listEpreuves.resultat\"\n },\n {\n $group: {\n _id: {\n nom: \"$listEpreuves.resultat.nom_etudiant\",\n prenom: \"$listEpreuves.resultat.prenom_etudiant\", \n },\n cursus: {\n $push: {\n designation_moddulle: \"$designation_moddulle\",\n pv_modulaire: {\n code_epreuve: \"$listEpreuves.code_epreuve\",\n valeur_note: \"$listEpreuves.resultat.valeur_note\"\n },\n moyModule:{$multiply:[{$avg:\"$listEpreuves.resultat.valeur_note\"},\"$coefficient\"]\n }\n } },\n moyGlob:{ $avg: {\n $multiply: [\n { $avg: \"$listEpreuves.resultat.valeur_note\" },\n \"$coefficient\"\n ]\n }\n },\n sommeCoef:{$sum:\"$coefficient\"},\n }\n }, \n {\n $group: {\n _id: null,\n data: {\n $push: {\n nom: \"$_id.nom\",\n prenom: \"$_id.prenom\",\n mmoy:\"$_id.moyenneGlobalee\",\n cursus:\"$cursus\",\n moy:{$sum:\"$cursus.moyModule\"},\n someCoeff:\"$sommeCoef\", \n moyenneGlobalee:{$divide: [\"$moy\",\"$someCoeff\"]}, \n } } }, \n },\n {\n $project: {\n _id: 0,\n data: 1, \n \n },},\n",
"text": "in my query the $divide returns null,knowing that the operands aren’t null",
"username": "Amina_Mesbah"
},
{
"code": "someCoeff:\"$sommeCoef\", \n moyenneGlobalee:{$divide: [\"$moy\",\"$someCoeff\"]}, \nmoyenneGlobalee:{$divide: [\"$moy\",\"$sommeCoeff\"]}\n",
"text": "Qu’est-ce que se passe si l’on ecrit( L’ordre d’évaluation)",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "operands aren’t null$sommeCoef might not be null but $someCoeff is in this stage. you cannot use a field in the same stage that is defined",
"username": "steevej"
},
{
"code": "",
"text": "that was the problem thank you",
"username": "Amina_Mesbah"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why the $divide returns null? | 2023-06-24T11:26:48.567Z | Why the $divide returns null? | 402 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.23-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.22. The next stable release 4.4.23 will be a recommended upgrade for all 4.4 users.Fixed in this release:SERVER-73943 Pin code pages in memory in memory constrained systemsSERVER-75922 Partial unique indexes created on MongoDB 4.0 can be missing index keys after upgrade to 4.2 and later, leading to uniqueness violationsSERVER-78126 For specific kinds of input, mongo::Value() always hashes to the same result on big-endian platforms4.4 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Kelsey_Schubert"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.23-rc0 is released | 2023-07-05T15:58:42.192Z | MongoDB 4.4.23-rc0 is released | 608 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I am trying to put a variable into $densify.range.unit while performing a stage of aggregation, but getting error query failed: (FailedToParse) unknown time unit value: $time_unit\nIs there a way to fix it?\nDemo: Mongo playground",
"username": "Alex_Pirogov"
},
{
"code": "The unit of time, specified as an expression that must resolve to one of these strings:\nThe unit to apply to the step field when incrementing date values in field.\n",
"text": "Comparing the documentation between dateTrunk and densify, dateTrunc says unit can take in an expression that must evaluate to one of the known units, but densify does not say this, only that you should pass in the unit.\nPerhaps densify does not allow you to pass in an expression?vs",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thank you for pointing this out, I didnt notice that. Seems I should search for a walkaround for now. However the possibility to pass expressions into $desify’s arguments seems to be useful. I’ve just discovered that $densify.range.bounds also does not accept expressions. Probably we can create a feautre request?",
"username": "Alex_Pirogov"
}
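One possible workaround while expressions are not accepted there: since the pipeline is just a document built by the application, interpolate the unit into the stage before sending it rather than referencing a field. A Node.js sketch follows (names are illustrative, and this only works when the unit is known per request rather than stored per document).

const timeUnit = "hour"; // chosen in application code
const pipeline = [
  {
    $densify: {
      field: "timestamp",
      range: { step: 1, unit: timeUnit, bounds: "full" } // a plain string by the time the server parses it
    }
  }
];
const docs = await collection.aggregate(pipeline).toArray();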
] | $densify.unit is not resolved | 2023-07-04T16:34:11.190Z | $densify.unit is not resolved | 281 |
null | [
"dot-net",
"field-encryption",
"schema-validation"
] | [
{
"code": "",
"text": "Hi,\nI’m currently implementing CSFLE in C# using the MongoDb driver.\nWe want to rotate our keys and invalidate all documents that don’t pass a specific internal policy (older than X years for example)\nWe still want the old document to be in the collection however.\nIn the documentation it states that if it fails to read the data using the provided Data Encryption key, the binary value for the field will be returned.\nThis is not what happens though - we get an error “Encryption related exception: not all keys requested were satisfied”.We are generating our JSON schema for the encrypted fields during runtime using the currently active DEK for the specified collection. If a document has “expired” in our terms (has not been re-encrypted with the new DEK), we still want it to be persisted in the database, just dead and returning the binary data instead of the decrypted values.Is this possible and is the documentation wrong, or am I misinterpreting something?\nOur dream would be this:\nTry to decrypt the data in the specified fields from the JSON schema with the provided Data Encryption Key - if it fails, set a default value like an empty string, 0, false etc for those values (or even better, a default value for each field or field type that we can set ourselves).\nThis should be a switch of course, to allow users to use it as it is today where it just throws an exception as soon as one field in the entire data set is encrypted with an old key.",
"username": "Emil_Larsson"
},
{
"code": "",
"text": "Hello Emil and welcome!The error that you are seeing is correct given your explanation. There are two different use cases for not being able to decrypt and they each return something different. If your application has access to the keys but the key is not present (ie it has been deleted) then the error you noted in your question is expected. If your application does not have access to the key then you would get the binary data back as the system would not even check to see if the key is present.If you could point me to where in the docs it says that the encrypted value would be displayed that would be helpful so that I can look at it. It sounds like we may need to add some more details in the docs about the expected behavior in these 2 different cases.I hope that helps,Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "",
"text": "Hello and thank you, and thank you for your response.\nOkay, this is the documentation I read: https://www.mongodb.com/docs/manual/core/csfle/reference/decryption/#automatic-decryption-process\nMaybe I misunderstood it but my interpretation of step #2 - “if the Key Vault collection does not contain the specified key, automatic decryption fails and the driver returns the encrypted BinData blob” - is that it tries to fetch the DEK and if it fails it falls back to just returning the encrypted data.We just need to know what is supposed to happen so we can plan and implement it accordingly!\nAlso - should we be fine just rotating our Certificate for our CMK, or should we roll the CMK or DEK:s at some interval aswell? I had trouble finding any “best practice” information, a lot of it is up to interpretation and I am having a hard time deciding for myself if we should rotate our certificate or all of our DEK:s aswellMany thanks!\nEmil",
"username": "Emil_Larsson"
},
{
"code": "",
"text": "Hello Emil,Thank you for pointing out that section, it is incorrect and I’ll get it fixed. In the case you describe, where a key has been deleted, the driver will return an error and not the encrypted blob.As a best practice, when using envelope encryption, you should rotate the CMK. Once you have rotated the CMK you can use the rotate API (also referred to as rewrap) to apply the new CMK to your keyVault. This section in the docs gives an example of how to rotate to a new CMK (after you have created the new CMK in your KMS).Sincerely,Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "",
"text": "Okay! Would it be possible to have a switch when setting up the MongoClient that would return the encrypted blob instead of throwing an error, or a default value like “Redacted” for strings, “0” for integers or something that we can set, or is this a deeper issue?Great, thank you for your help!\nEmil",
"username": "Emil_Larsson"
},
{
"code": "",
"text": "Hi Emil,That would be an enhancement request, which you can make on the FLE feedback site.Thanks,Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Reading data encrypted with old Data Encryption Key | 2023-06-22T08:10:27.932Z | Reading data encrypted with old Data Encryption Key | 913 |
[
"atlas-functions"
] | [
{
"code": "",
"text": "Even after allowing origin, why still receiving CORS prefight error?\nimage1616×159 4.78 KB\n",
"username": "Rajan_Braiya"
},
{
"code": "",
"text": "Hi @Rajan_Braiya,Are you using bearer authentication as outlined here? https://www.mongodb.com/docs/atlas/api/data-api/#authenticate-web-browser-requests",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Hi @Kiro_Morkos, thank you for your reply. I understand that we cannot use email/password for the web application.Even our requirement is to use a custom authentication function, but I am encountering an error: “Http failure response.” hence was trying with Email/Password but I would highly appreciate your assistance in resolving all of my issues. For more details, you can find additional information on the issue at No authentication methods were specified.",
"username": "Rajan_Braiya"
},
{
"code": "",
"text": "You should be able to use any of your authentication providers to get an access token (see https://www.mongodb.com/docs/atlas/app-services/users/sessions/#get-a-user-session-access-token). Once you have an access token, you can use it to authenticate your Data API request via Bearer auth from the browser.",
"username": "Kiro_Morkos"
}
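To make that concrete, a hedged sketch of the browser flow: log in with realm-web to obtain an access token, then call the Data API with Bearer auth. The App ID, Data API URL, data source, database and collection names below are placeholders and may differ for your app/region.

import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-app-id>" });
const user = await app.logIn(Realm.Credentials.emailPassword(email, password));

const res = await fetch(
  "https://data.mongodb-api.com/app/<your-app-id>/endpoint/data/v1/action/findOne",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // browsers must authenticate Data API requests with a Bearer token,
      // not the credential headers used from servers
      Authorization: `Bearer ${user.accessToken}`,
    },
    body: JSON.stringify({
      dataSource: "<your-cluster-name>",
      database: "<db>",
      collection: "<collection>",
      filter: {},
    }),
  }
);
const result = await res.json();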
] | Origin 'http://localhost:4200' has been blocked by CORS policy | 2023-07-03T10:21:49.605Z | Origin ‘http://localhost:4200’ has been blocked by CORS policy | 1,190 |
|
null | [
"python",
"atlas"
] | [
{
"code": "",
"text": "I was attending the first lab for the training Connecting to MongoDB in PythonI got the error below on “Check” button of the lab.\nfailed to sampleData load for cluster ‘myAtlasClusterEDU’I terminated the instance in Atlas and recreated and loaded all sample data using the Atlas option to do so.\nStill, I am getting the same error.Please help.Alias.",
"username": "Alias_Parackal_Kunjicheria"
},
{
"code": "",
"text": "I am in the exact same position as you. Have deleted all other projects. Removed the previous sample data and reloaded and I am still getting same error message :“failed to sampleData load for cluster ‘myAtlasClusterEDU’”",
"username": "Octavian_Zagarin1"
},
{
"code": "",
"text": "Hey @Alias_Parackal_Kunjicheria/@Octavian_Zagarin1,Thank you for reaching out to the MongoDB Community forums.“failed to sampleData load for cluster ‘myAtlasClusterEDU’”Are you still experiencing the issue? If so, could you please provide a screenshot of the error message you are encountering and the link to the lab?Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "I have the same issue. I tried deleting all of my collections (sample dataset), it didn’t work, then I deleted entire project, created a new one with the same name and credentials, I imported sample data set, it was imported and then when I try to do check on tutorial, I got the same errror message. Removed everything again and this time I didn’t import dataset, clicked check and still got the same error even though I have enough storage to import it.\n\nloing2064×864 167 KB\nThe possible issue might be the fact that I created a cluster during first videos, I didn’t wait for the practice video and then I got the issue of needing more storage size to import data, but after deleting everything I’m still getting an error and I don’t get it.",
"username": "Programming_Learning"
},
{
"code": "",
"text": "Hey @Programming_Learning,Thank you for sharing the screenshot. Could you also provide the link to the lab?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Link to the lab:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.\nThe issue is intermittent. Tried exactly the same thing today and it worked. Very annoying and unreliable and it’s very discouraging for new learners.",
"username": "Octavian_Zagarin1"
},
{
"code": "",
"text": "Hi @Octavian_Zagarin1,Thank you for sharing the link to the lab. I’m glad it worked. However, we will pass on the feedback to the relevant team.In case of any comments, concerns, or questions, please do not hesitate to reach out.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "I’m also having the same issue and I deleted the sample datasets to try to continue with learning but now nothing works in the shell.If the limit on this test server is 512mb, why does the learning program require us to import a 400+mb data set twice (once through the UI and once through command line). It’s very frustrating! Surely we could learn with just a little sample data set…The course is pitched at beginners but really assumes a lot of knowledge (that is never specified as a pre requisite) in being comfortable with a terminal command line in Linux.Please make a truly beginner friendly course or point to resources where I can fill my command line skills gap so that I can do your “beginner” course. Feeling very frustrated and demotivated.",
"username": "LR-Tokyo"
},
{
"code": "",
"text": "Hi there,I got the same issue with Lab: Connecting to an Atlas Cluster in PHP Applications from the course Connecting to MongoDB in PHP",
"username": "Mahmoud_Al-Husseiny"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Failed to sampleData load for cluster 'myAtlasClusterEDU' - Course [Connecting to MongoDB in Python] | 2023-06-24T06:53:25.392Z | Failed to sampleData load for cluster ‘myAtlasClusterEDU’ - Course [Connecting to MongoDB in Python] | 994 |
null | [
"transactions"
] | [
{
"code": "",
"text": "Hi TeamWhat is the recommended approach for utilizing triggers within a transaction to update multiple collections in MongoDB?\nIn MongoDB, if a trigger fails, will the operations performed on all collections within the transaction be rolled back?",
"username": "Alfredo_Fernando_Ajalla"
},
{
"code": "",
"text": "Hi, do you mind clarifying a little bit more what you are looking to accomplish?It sounds like you are asking what happens to a trigger if it is processing writes from a transaction and fails halfway through. If that is what you are asking, the answer is that the trigger has already fired for the first few events so there is no way for the system to roll back those events being seen. It is possible to uphold this invariant on the developers’ end by utilizing the txnNumber field in the change event https://www.mongodb.com/docs/manual/reference/change-events/insert/#mongodb-data-insert and having your function logic be capable of determining whether it is the end of the transaction or the middle and only firing at the end.Alternatively, if you are asking what happens if your trigger function code uses a transaction to update documents if that function fails halfway through, then the transaction will indeed be rolled back as long as the function does not issue a commit transaction command (this is just MongoDB doing the work).Let me know if this answers your question.\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi Tyler,\nI have three collections: collection A, collection B, and collection C.\nCollection A has a trigger that is invoked when a document is updated it updates records in collection B and collection C.If the trigger fails I need to roll back the updating in collection AI’m asking because a trigger’s response to an operation will always be out of the scope of the original operation and I want the db with data consistency and integrity",
"username": "Alfredo_Fernando_Ajalla"
},
{
"code": "",
"text": "You are correct that a trigger is a response to an action that has already been performed in MongoDB. You could try to (depending on the change) wrap your function in a try/catch where the catch block performs an un-do of the operation (see PreImages for triggers); however, there will be edge cases in which this will not always 100% run.The only way to ensure the 3 writes are made transactionally is to perform them in a single MongoDB transaction. One option might be to use custom https endpoints (https://www.mongodb.com/docs/atlas/app-services/data-api/custom-endpoints/) and you can define a function that updates collection A, B, and C in a transaction, and then you can have that be the only entry point for updates to Collection A (or you can just do this in your application code)Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "so one solution could be a transaction within a try block inside the trigger function,\nIf an error is caught, I could use the document preimages to roll back the changes\nis it the correct solution?",
"username": "Alfredo_Fernando_Ajalla"
}
] | Using triggers and transactions | 2023-06-30T18:40:18.651Z | Using triggers and transactions | 646 |
null | [
"python"
] | [
{
"code": "",
"text": "pymongo.errors.InvalidURI: Username and password must be escaped according to RFC 3986, use urllib.parse.quote_plus",
"username": "Amudhesh_D"
},
{
"code": "",
"text": "https://pymongo.readthedocs.io/en/stable/examples/authentication.html",
"username": "chris"
}
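For anyone else hitting this, a minimal example of the escaping the error message asks for; the host and credentials are placeholders.

from urllib.parse import quote_plus
from pymongo import MongoClient

# quote_plus percent-encodes characters such as spaces, '@', ':' and '/'
username = quote_plus("my user")
password = quote_plus("p@ssw0rd/with:chars")

uri = f"mongodb+srv://{username}:{password}@cluster0.example.mongodb.net/?retryWrites=true&w=majority"
client = MongoClient(uri)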
] | pymongo.errors.InvalidURI: Username and password must be escaped according to RFC 3986, use urllib.parse.quote_plus | 2023-01-05T10:11:55.695Z | pymongo.errors.InvalidURI: Username and password must be escaped according to RFC 3986, use urllib.parse.quote_plus | 5,001 |
null | [
"queries",
"crud"
] | [
{
"code": "",
"text": "Hi I am new in using MongoDB and I have a big problem:I don’t know if I am the only one in this case but whenever I try to insert some datas in my database they always send me back this message : “command insert is unsupported”I tried on MongoShell, or on the playground, it respond me always with this message…\nI am sure I am Connected to my DB because when I make requests it respond me back…I tried to use Insert(), InsertOne() or InsertMany().I am using a free M0 cluster to learn, is it the problem ?",
"username": "J_N"
},
{
"code": "",
"text": "No doubt your syntax is incorrect.\nPerform a sample mongosh session and cut and paste it into this topic, please.\nObscure any personal info or passwords.\nEnter it between a line of ``` triple-backticks before and afterwards so it will format nicely.\nShow the error as well as the commands you entered.",
"username": "Jack_Woehr"
},
{
"code": "const { MongoClient } = require('mongodb');\n\n// Set up Express middleware\n\nrouter.use(express.urlencoded({ extended: true }));\nrouter.use(express.json());\n\nconst uri = 'mongodb://....'\n\n// Create a new MongoClient\n\nconst client = new MongoClient(uri); \n\n async function addData() {\n await client.connect()\n try {\n const db = client.db('data'); // Replace with your database name\n const collection = db.collection('first'); // Replace with your collection name\n console.log(req.body)\n let task = { title: req.body.task, completed: false }\n console.log(task)\n // Insert a single document, wait for promise so we can read it back\n const push = await collection.insertOne(task)\n // Find one document\n const myDoc = await collection.findOne();\n // Print to the console\n console.log(myDoc);\n res.sendStatus(201);\n }\n catch (err) {\n console.log(err.stack);\n }\n\n finally {\n await client.close();\n }\n }\n\n addData()\n",
"text": "Thanks for your response Jack_Woehr to help a Newbie like me !My URI is correct , because with the same connection, I can find all my datas in my collection “first”.\nthe problem appears when I try to insert.\nI was wondering if my code was wrong, that’s why I did the “Create my Playground” on MongoDB for VS code.\nI took their example they gave to me but I got the same problem",
"username": "J_N"
},
{
"code": " const db = client.db('data'); // Replace with your database name\n const collection = db.collection('first'); // Replace with your collection name\n",
"text": "Did you substitute valid names here when you used the example?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Yes , it is correct.\nI can read my datas from the “first” collection in the data “data” names.",
"username": "J_N"
},
{
"code": "mongosh",
"text": "I tried on MongoShell, or on the playground, it respond me always with this message…Can you provide a very simple example that fails in mongosh that I can try?",
"username": "Jack_Woehr"
},
{
"code": "[\n {\n _id: ObjectId(\"aksnldnkandlsandl\"),\n title: 'eat',\n completed: false\n },\n {\n _id: ObjectId(\"sakndklandkland\"),\n title: 'sleep',\n completed: false\n }\n]\n",
"text": "Sure, to connect to my MongoShell I type this command : “mongosh $MDB_CONNECTION_STRING;”In my data, I just have one data name “data” with only one collection named “first”\nSo I Type to connect to this DB : “use data”In this collection (“first”), I have only 2 objects, to see it I use the command : “db.first.find()”\nIt respond me this:So now I want to add new object in this DB I use the command : “db.first.insertOne({title:“work”, completed: false})”The shell respond me with this error: \"MongoServerError: command insert is unsupported, correlationID = \"I tried to use other methods to add new object, as “bulkWrite()”, or “insertMany()” with an array as argument, or insert, I got the same message.and this error message I Got it from the “Create a new playground” with “MongoDB with VS code”Do you think I have the wrong version of MongoDB?\nI installed with npm install Mongodb , so the version I am using is 5.6.0 for node version.",
"username": "J_N"
},
{
"code": "Atlas Cluster0-shard-0 [primary] test> show collections\n/* no collections yet present */\nAtlas Cluster0-shard-0 [primary] test> db.first.insertOne({\n... _id: ObjectId(\"aksnldnkandlsandl\"),\n... title: 'eat',\n... completed: false\n... })\nBSONError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer\n/* In other words, your string for an ObjectId is not valid */\n/* So I'll let MongoDB choose an object id for me instead of assigning one */\nAtlas Cluster0-shard-0 [primary] test> db.first.insertOne({title: 'eat', completed: false })\n{\n acknowledged: true,\n insertedId: ObjectId(\"64a4a913f8a38313d2807502\")\n}\nAtlas Cluster0-shard-0 [primary] test> db.first.insertOne({title: 'sleep', completed: false})\n{\n acknowledged: true,\n insertedId: ObjectId(\"64a4a951f8a38313d2807503\")\n}\nAtlas Cluster0-shard-0 [primary] test> db.first.find()\n[\n {\n _id: ObjectId(\"64a4a913f8a38313d2807502\"),\n title: 'eat',\n completed: false\n },\n {\n _id: ObjectId(\"64a4a951f8a38313d2807503\"),\n title: 'sleep',\n completed: false\n }\n]\nAtlas Cluster0-shard-0 [primary] test> \n",
"text": "Okay, here’s what I did.\nI am not having any trouble inserting.\nAm I misunderstanding your problem?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "db.first.insertOne({title:“work”, completed: false})BTW in this line from your post above, the double quote marks in that string are “smart quotes” not standard quote marks. When I pasted in that line just now, I had to change them to standard quote marks.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I don’t think you miss something. You understand my problems.For the _id I changed it on purpose to hide it.\nWhen I do the InsertOne, let MongoDB to chose the Object ID.In your case it looks like it works very well.\nI tried to change the double to single quote it but still not working…The only thing I see different is your MongoShell is on “Atlas Cluster0-shard-0 [primary]”\nand mine is “AtlasDataFederation”, is there any impact ?I checked in my Data API I am able to Read and Write, also I checked my User account if I can change it.I sincerely don’t know why I cannot do the insert… Why this method is Unsupported?",
"username": "J_N"
},
{
"code": "readWrite",
"text": "The only thing I see different is your MongoShell is on “Atlas Cluster0-shard-0 [primary]”\nand mine is “AtlasDataFederation”, is there any impact ?I’m not an Atlas expert, I’m not sure about what the difference means.I checked in my Data API I am able to Read and Write, also I checked my User account if I can change it.So you are saying your user has the specific readWrite role?I sincerely don’t know why I cannot do the insert… Why this method is Unsupported?All I can guess is it’s some sort of authority / role problem.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I checked. For me I have all authority on this data but still not working.\nIf I remembered correctly the error message would be different if I have some authorities restrictions. It just like my Mongosh doesn’t support the method but I don’t know why…I checked the version I am using on MongoShell, it is only mongosh : 1.10.1.\nMaybe this is too old ?",
"username": "J_N"
},
{
"code": "",
"text": "1.10.1 is the same version I am using.",
"username": "Jack_Woehr"
},
{
"code": "MongoServerError: command insert is unsupportedinsertOne()AtlasDataFederation test> use db\nswitched to db db\nAtlasDataFederation db> db.collection.insertOne({title:\"test\"})\nMongoServerError: command insert is unsupported, correlationID = 176edbca56c01e28efce9c8e\nAtlasDataFederation db>\n",
"text": "Hi All,The only thing I see different is your MongoShell is on “Atlas Cluster0-shard-0 [primary]”\nand mine is “AtlasDataFederation”, is there any impact ?The error message MongoServerError: command insert is unsupported appears to be originating from the MongoDB Data Federation instance because the operation was unsupported. The list of supported operations on a Federated Instance are available at the documentation page Query and Write operation commands.Try creating and connecting to a free tier cluster to attempt the insert commands (instead of connecting directly to a data federation instance).For reference, please see the following example from my test environment in which the insertOne() command being attempted on a Data Federation Instance generates the same error:I suspect you’ll have similar output to Jack’s example provided earlier if attempting on an M0 tier cluster.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,You are right in your arguments , I changed my DB connection and it hit to my M0-cluster.\nAfter then it works well!\nSo my problem was my connection string who hit the wrong path.I am very grateful to Jack Woehr and Jason Tran for your quick responses.\nBest regards,J N.",
"username": "J_N"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB for VS code - error message = "command insert is unsupported" | 2023-07-04T15:45:46.267Z | MongoDB for VS code - error message = “command insert is unsupported” | 867 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Lets say I have a collection of documents like:\n[\n{ createdAt: 1688383980100, win: true }\n{ createdAt: 1688383980200, win: false }\n{ createdAt: 1688383980300, win: true }\n{ createdAt: 1688383980400, win: true }\n{ createdAt: 1688383980500, win: false }\n]How can I get the maximum win streak?\nI managed to get it via $group and $accumulator with custom JS functions.\nBut Digital Ocean does not allow JS on their mongo DB servers.Then I spent almost a day trying many pipeline alternatives, no success.\nAny tips or pointers are super welcome! Thanks in advance!",
"username": "Talles_Hentges"
},
{
"code": "",
"text": "I posted the question here on SO too. WIll update the forum if we get a solution there.",
"username": "Talles_Hentges"
},
{
"code": "db.getCollection(\"Streak\").aggregate([\n{\n $project:{\n outVal:{\n $cond:{\n if:{$eq:['$win', true]},\n then:'1',\n else:'0'\n }\n }\n }\n},\n{\n $group:{\n _id:null,\n allRuns:{$push:'$outVal'}\n }\n},\n{\n $project:{\n totalString:{\n $reduce:{\n input:'$allRuns',\n initialValue:'',\n in:{$concat:['$$value', '$$this']}\n }\n }\n }\n},\n{\n $project:{\n splitItems:{\n $split:[\n '$totalString',\n '0'\n ]\n }\n }\n},\n{\n $unwind:'$splitItems'\n},\n{\n $addFields:{\n length:{\n $strLenCP:'$splitItems'\n }\n }\n},\n{\n $sort:{\n length:-1\n }\n},\n{\n $limit:1\n}\n])\n if:'$win',\n",
"text": "Now I’m not proud of this…but…Mongo playground: a simple sandbox to test and share MongoDB queries onlineThis has some rather glaring issues with scalability, but I guess you could run it on a period of data at a time…Edit\nChange the condition to this instead:It annoys me to compare a boolean to true…",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks @John_Sewell! The code works! I also got a reply on SO, will play with the suggestions see what I can come up with and report back here later.",
"username": "Talles_Hentges"
},
{
"code": "",
"text": "Yes, I saw the reply on SO, was going to have a play with that reply later as not used the window interval operator previously.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Checking performance over a collection of about 200,000 records, my solution with the string splitting takes about 6s to complete, and the SO reply takes about 1s so it’s a LOT more performant.\nThey both arrive at the same answer, which is at least something for my solution!",
"username": "John_Sewell"
},
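A window-function approach along the lines of the faster SO reply can be sketched roughly as below. This is only a minimal sketch of the gaps-and-islands idea, not necessarily the exact SO answer; the collection name `games` and the fields `createdAt`/`win` are assumed from the original question, and it requires MongoDB 5.0+ for $setWindowFields.

```js
// Number all documents by time, then number only the wins; inside a win streak the
// difference between the two row numbers is constant, so it identifies the streak.
db.games.aggregate([
  { $setWindowFields: {
      sortBy: { createdAt: 1 },
      output: { rowAll: { $documentNumber: {} } }
  } },
  { $match: { win: true } },
  { $setWindowFields: {
      sortBy: { createdAt: 1 },
      output: { rowWin: { $documentNumber: {} } }
  } },
  { $addFields: { streakId: { $subtract: ["$rowAll", "$rowWin"] } } },
  { $group: { _id: "$streakId", streakLength: { $sum: 1 } } },
  { $group: { _id: null, maxWinStreak: { $max: "$streakLength" } } }
])
```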
{
"code": "",
"text": "Thanks a lot for the help and tests @John_Sewell!\nFor the of delivery of the service I was working last week,\nI implemented that calc on Node.JS.But I’ll circle back to it and move the calc to Mongo side,\nI’ll update the thread here when that is done.",
"username": "Talles_Hentges"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to calculate max 'win' streak over sequential documents | 2023-07-03T11:37:24.369Z | How to calculate max ‘win’ streak over sequential documents | 400 |
null | [
"replication",
"compass",
"mongodb-shell"
] | [
{
"code": "",
"text": "I have a mongodb cluster on 3 different VMs. When I try to access to the replicaset via compass using this GUI :mongodb://at192.168.20.1:27017,192.168.20.2:27017,192.168.20.3:27017/?replicaSet=rs0&authSource=admin&appName=mongosh+1.4.1It says getaddrinfo ENOTFOUND masternode or sometimes it says getaddrinfo ENOTFOUND client1However, if I try to connect separately to each one using :\nmongodb://at192.168.20.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.1&authMechanism=DEFAULT\n<mongodb://at192.168.20.2:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.1&authMechanism=DEFAULT/>\nmongodb://at192.168.20.3:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.1&authMechanism=DEFAULT\nIt works just fine.Is there anything wrong with my replicaset GUI and how can I fix this?\nFYI, 192.168.20.1, 192.168.20.2 and 192.168.20.3 is associated with masternode, client1 and client2 respectively as well as I use at instead of @ due to new user tag thingy\nYour reply is very appreciated.",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "I think your appNsme should be Compass\nWhy it is showing mongosh",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I tried it and the outcome is the same as I mentioned in the post that I’m able to access to separate mongodb but if I try to access to the whole replica set it shows the same error.",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "ENOTFOUNDis related to DNS information missing. The following is certainly wrong:at192.168.20.1:27017",
"username": "steevej"
},
{
"code": "",
"text": "Hello,\nI missed type the dns. The actual dns that I used was mongodb://@192.168.20.1:27017,192.168.20.2:27017,192.168.20.3:27017/?replicaSet=rs0&authSource=admin&appName=mongosh+1.4.1 for replica setmongodb://@192.168.20.X:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.1&authMechanism=DEFAULT for separate connection.",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "Before @ user:password",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "Post a screenshot that shows exactly what you are doing that shows the error you are having.Also post a screenshot that shows using the mongosh with the same connection.Please do not redact or obfuscate the user name and password you use since the redaction might hide the error you make. If you are afraid to share the password for a server running on a private network you may always create a dummy user that only has read access on a dummy database.",
"username": "steevej"
},
{
"code": "",
"text": "The first one is the one that I cannot access and error is as follows\n\nimage928×566 36.4 KB\nThis one it works just fine\n\nimage938×473 29.8 KB\n\n\nimage1866×882 44.5 KB\n",
"username": "Thanachai_Namphairoj"
},
{
"code": "ping masternode\nrs.status()\n",
"text": "I think you have an issue with your replica set configuration.I suspect that your replica set configuration uses host names rather than the IP addresses you use to connect, and that some of those are not resolved correctly by your DNS.Using a command line terminal share the output of the following.Then connect with mongosh to a single node, 192.168.20.1:27017 for example and share the output of the command:",
"username": "steevej"
},
{
"code": "",
"text": "Here is the result of rs.status()\n\nimage1266×522 31.7 KB\nThis is the result of ping masternode\n\nimage1023×446 11.4 KB\n",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "If this a replica set it should show all 3 members info but showing as standalone\nHave you run rs.initiate() and added other 2 nodes?\nPlease show rs.conf() output",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "{\nset: ‘rs0’,\ndate: ISODate(“2022-05-30T07:31:56.942Z”),\nmyState: 1,\nterm: Long(“47”),\nsyncSourceHost: ‘’,\nsyncSourceId: -1,\nheartbeatIntervalMillis: Long(“2000”),\nmajorityVoteCount: 2,\nwriteMajorityCount: 2,\nvotingMembersCount: 3,\nwritableVotingMembersCount: 3,\noptimes: {\nlastCommittedOpTime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\nlastCommittedWallTime: ISODate(“2022-05-30T07:31:48.203Z”),\nreadConcernMajorityOpTime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\nappliedOpTime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\ndurableOpTime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\nlastAppliedWallTime: ISODate(“2022-05-30T07:31:48.203Z”),\nlastDurableWallTime: ISODate(“2022-05-30T07:31:48.203Z”)\n},\nlastStableRecoveryTimestamp: Timestamp({ t: 1653895860, i: 1 }),\nelectionCandidateMetrics: {\nlastElectionReason: ‘stepUpRequestSkipDryRun’,\nlastElectionDate: ISODate(“2022-05-25T02:51:46.241Z”),\nelectionTerm: Long(“47”),\nlastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1653447105, i: 1 }), t: Long(“46”) },\nlastSeenOpTimeAtElection: { ts: Timestamp({ t: 1653447105, i: 1 }), t: Long(“46”) },\nnumVotesNeeded: 2,\npriorityAtElection: 1,\nelectionTimeoutMillis: Long(“10000”),\npriorPrimaryMemberId: 1,\nnumCatchUpOps: Long(“0”),\nnewTermStartDate: ISODate(“2022-05-25T02:51:46.260Z”),\nwMajorityWriteAvailabilityDate: ISODate(“2022-05-25T02:51:47.302Z”)\n},\nmembers: [\n{\n_id: 0,\nname: ‘masternode:27017’,\nhealth: 1,\nstate: 1,\nstateStr: ‘PRIMARY’,\nuptime: 607291,\noptime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\noptimeDate: ISODate(“2022-05-30T07:31:48.000Z”),\nlastAppliedWallTime: ISODate(“2022-05-30T07:31:48.203Z”),\nlastDurableWallTime: ISODate(“2022-05-30T07:31:48.203Z”),\nsyncSourceHost: ‘’,\nsyncSourceId: -1,\ninfoMessage: ‘’,\nelectionTime: Timestamp({ t: 1653447106, i: 1 }),\nelectionDate: ISODate(“2022-05-25T02:51:46.000Z”),\nconfigVersion: 1,\nconfigTerm: 47,\nself: true,\nlastHeartbeatMessage: ‘’\n},\n{\n_id: 1,\nname: ‘client1:27017’,\nhealth: 1,\nstate: 2,\nstateStr: ‘SECONDARY’,\nuptime: 448792,\noptime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\noptimeDurable: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\noptimeDate: ISODate(“2022-05-30T07:31:48.000Z”),\noptimeDurableDate: ISODate(“2022-05-30T07:31:48.000Z”),\nlastAppliedWallTime: ISODate(“2022-05-30T07:31:48.203Z”),\nlastDurableWallTime: ISODate(“2022-05-30T07:31:48.203Z”),\nlastHeartbeat: ISODate(“2022-05-30T07:31:55.333Z”),\nlastHeartbeatRecv: ISODate(“2022-05-30T07:31:55.333Z”),\npingMs: Long(“0”),\nlastHeartbeatMessage: ‘’,\nsyncSourceHost: ‘masternode:27017’,\nsyncSourceId: 0,\ninfoMessage: ‘’,\nconfigVersion: 1,\nconfigTerm: 47\n},\n{\n_id: 2,\nname: ‘client2:27017’,\nhealth: 1,\nstate: 2,\nstateStr: ‘SECONDARY’,\nuptime: 607285,\noptime: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\noptimeDurable: { ts: Timestamp({ t: 1653895908, i: 1 }), t: Long(“47”) },\noptimeDate: ISODate(“2022-05-30T07:31:48.000Z”),\noptimeDurableDate: ISODate(“2022-05-30T07:31:48.000Z”),\nlastAppliedWallTime: ISODate(“2022-05-30T07:31:48.203Z”),\nlastDurableWallTime: ISODate(“2022-05-30T07:31:48.203Z”),\nlastHeartbeat: ISODate(“2022-05-30T07:31:55.295Z”),\nlastHeartbeatRecv: ISODate(“2022-05-30T07:31:54.984Z”),\npingMs: Long(“0”),\nlastHeartbeatMessage: ‘’,\nsyncSourceHost: ‘masternode:27017’,\nsyncSourceId: 0,\ninfoMessage: ‘’,\nconfigVersion: 1,\nconfigTerm: 47\n}\n],\nok: 1,\n‘$clusterTime’: 
{\nclusterTime: Timestamp({ t: 1653895908, i: 1 }),\nsignature: {\nhash: Binary(Buffer.from(“ca77f671a7f355a16649c47ff0d4f500f38d0e0a”, “hex”), 0),\nkeyId: Long(“7097159497856057348”)\n}\n},\noperationTime: Timestamp({ t: 1653895908, i: 1 })\n}Here is my rs.status()",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "{\n_id: ‘rs0’,\nversion: 1,\nterm: 47,\nmembers: [\n{\n_id: 0,\nhost: ‘masternode:27017’,\narbiterOnly: false,\nbuildIndexes: true,\nhidden: false,\npriority: 1,\ntags: {},\nsecondaryDelaySecs: Long(“0”),\nvotes: 1\n},\n{\n_id: 1,\nhost: ‘client1:27017’,\narbiterOnly: false,\nbuildIndexes: true,\nhidden: false,\npriority: 1,\ntags: {},\nsecondaryDelaySecs: Long(“0”),\nvotes: 1\n},\n{\n_id: 2,\nhost: ‘client2:27017’,\narbiterOnly: false,\nbuildIndexes: true,\nhidden: false,\npriority: 1,\ntags: {},\nsecondaryDelaySecs: Long(“0”),\nvotes: 1\n}\n],\nprotocolVersion: Long(“1”),\nwriteConcernMajorityJournalDefault: true,\nsettings: {\nchainingAllowed: true,\nheartbeatIntervalMillis: 2000,\nheartbeatTimeoutSecs: 10,\nelectionTimeoutMillis: 10000,\ncatchUpTimeoutMillis: -1,\ncatchUpTakeoverDelayMillis: 30000,\ngetLastErrorModes: {},\ngetLastErrorDefaults: { w: 1, wtimeout: 0 },\nreplicaSetId: ObjectId(“627e2cebd23c7aae01154b0b”)\n}\n}>\nThis is rs.config()",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "Can you connect by hostname instead IP?\nAre other 2 nodes pingable and resolving to the ips you are using\nAlso output of cat /etc/hosts",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Here is the cat of /etc/hosts\n\nimage692×400 19.8 KB\nI had tried using the hostname instead of IPs and it resulted in the same error.Yes, every node can ping to each other using ping hostname and the ip is all correct.",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "Looks ok\nDid you try to connect to your replica set using hostnames in your connect string?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Yes I did. It gives me the same error\n\nimage915×835 57.5 KB\n",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "ENOTFOUND means that the host name masternode cannot be found.In one of your previous post, you shown that you can ping masternode and the other 2 hosts of your replica set.The only conclusion I can think of, is that you are not running Compass from the same machine as the one you used to run the ping commands.The host names of your replica set must be DNS resolvable from all machines you are using to access the replica set. They should all resolve to IP addresses that are routed from all machines you are using to access the replica set.",
"username": "steevej"
},
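In practice this means the client machine must resolve the exact host names stored in the replica set configuration. A minimal sketch of what Compass and the drivers do under the hood (host names masternode/client1/client2 are taken from the rs.conf() above; the user, password and database are placeholders):

```js
// The driver contacts a seed node, reads the replica set config, and then connects to
// the members using the host names from that config (masternode, client1, client2).
// Those names, not the seed IPs, must resolve and be routable from this machine.
const { MongoClient } = require('mongodb');

const uri = 'mongodb://user:password@masternode:27017,client1:27017,client2:27017/?replicaSet=rs0&authSource=admin';
const client = new MongoClient(uri);

async function main() {
  await client.connect(); // fails with ENOTFOUND here if the host names do not resolve
  console.log(await client.db('admin').command({ hello: 1 }));
  await client.close();
}
main();
```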
{
"code": "",
"text": "That might be the case because I run the replicaset on 3 separate VMs. Are there any ways for me to allow compass on my pc to be able to access to the set? I tried using ufw allow from 0.0.0.0/0 on every single VM but it still doesn’t work",
"username": "Thanachai_Namphairoj"
},
{
"code": "",
"text": "compass on my pc to be able to access to the setSure there is. But I can only rephrase what I wrote in my previous post. The host names use in the replica set must be know by you PC and your PC must be able to route traffic to the corresponding IP addresses.Networking, including host and domain name resolution and routing is complex. I recommend using Atlas. You would be up and running a replica set in no time.",
"username": "steevej"
}
] | Cannot connect to replica set via mongo compass | 2022-05-25T05:02:47.018Z | Cannot connect to replica set via mongo compass | 13,948 |
null | [
"kubernetes-operator"
] | [
{
"code": "",
"text": "Hi everyone,I’m looking for some assistance with my mongo replica-set setup on EKS.\nCurrently, I have two pods (primary and secondary DB instances) along with the operator. To connect from a local computer to any of the instances, I use a VPN and then specify the corresponding pod’s endpoint in the mongo URI. However, I’m facing two issues with this approach.Firstly, the endpoints keep changing whenever there’s a node restart or similar events, which requires regular maintenance to update the connection information.Secondly, connecting to the desired instance isn’t automatic. I have to manually specify the primary instance’s endpoint in the URI to connect to it. I’m looking for an alternative solution, something like using the readPreference parameter.Regarding the first issue, I’ve deployed an internal K8s Load Balancer. Now the connection can be made using the Load Balancer’s ExternalIP in the connection string. However, the problem with this is that the Load Balancer doesn’t know which instance is the primary or secondary, so it connects to either of the two regardless of the specified parameters.With all this considered, I’m currently stuck. I’d greatly appreciate any suggestions or observations you may have.Thanks in advance!",
"username": "Francisco_Bolzan"
},
{
"code": "",
"text": "Hi @Francisco_Bolzan and welcome to MongoDB community forums!!Firstly, the endpoints keep changing whenever there’s a node restart or similar events, which requires regular maintenance to update the connection information.As recommended for the production environment, hardcoding an IP address of the pod is not recommended as the pod IPs are subjected to change in the case of a pod restart.In order to avoid the manual alterations of the connection address, you can specify the DNS names to the pods and use the names for establishing the connection. You can read more about the DNS for service and Pods documentations..In addition, could you help me understand, what is the deployment type are you using between K8 deployment and K8 statefulsets resources as the later are useful for applications that need more stable network identities.\nIn my experience, the statefulsets deployments have unique network IDs that are retained through the pod restarts.I’ve deployed an internal K8s Load Balancer. Now the connection can be made using the Load Balancer’s ExternalIP in the connection string. However, the problem with this is that the Load Balancer doesn’t know which instance is the primary or secondary, so it connects to either of the two regardless of the specified parameters.If I understand correctly, I think you can make use of the official MongoDB driver to connect to the replica set instead of using a load balancer. Note that official drivers need to connect to all members of a replica set and monitor their status as per the server discovery and monitoring spec implemented by all official drivers.Considering the scenario where all the members of the replica set are up and running, the setting up the readPreference would help you to read the data from the desired node.Currently, I have two pods (primary and secondary DB instances) along with the operator.Finally, could you also confirm, if the production environment has one primary, one secondary and one load balancer pods in order to route between the replica sets’ primary and secondary node?\nIf yes, please note that deploying an even number of nodes in a replica set is not a recommended configuration as per the Deploy a Replica Set page.Please feel free to reach out if you have further questions.Regards\nAasawari",
"username": "Aasawari"
}
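As a rough illustration of the stable-DNS idea discussed above (all names here are hypothetical: they assume a StatefulSet named mongo exposed by a headless service mongo-svc in namespace db, and a replica set initialized with those host names):

```js
// Kubernetes gives each StatefulSet pod a stable DNS name of the form
// <pod>.<headless-service>.<namespace>.svc.cluster.local, so this seed list survives
// pod restarts. readPreference steers reads; writes always go to the current primary.
const { MongoClient } = require('mongodb');

const uri =
  'mongodb://mongo-0.mongo-svc.db.svc.cluster.local:27017,' +
  'mongo-1.mongo-svc.db.svc.cluster.local:27017,' +
  'mongo-2.mongo-svc.db.svc.cluster.local:27017' +
  '/?replicaSet=rs0&readPreference=primaryPreferred';

const client = new MongoClient(uri);
```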
] | Connecting to correct instance of replica-set | 2023-06-29T12:34:40.612Z | Connecting to correct instance of replica-set | 574 |
null | [
"python",
"atlas-cluster"
] | [
{
"code": "import datetime as dt\n\nimport mongoengine as me\n\nimport shoonya_details as kd\n\ntoday = dt.date.today().strftime('%d-%m-%Y')\n\nme.connect(\n alias=\"core\", db=\"algo_trading\",\n host=f\"mongodb+srv://{kd.MONGO_ID}:{kd.MONGO_PWD}@akcluster0.lw3clrm.mongodb.net/algo_trading\"\n)\n\n\nclass AccessToken(me.Document):\n date = me.StringField(default=today, unique=True)\n access_token = me.StringField(unique=True, required=True)\n\n meta = {\n \"db_alias\": \"core\",\n \"collection\": \"accesstoken\",\n \"indexes\": [\"date\"]\n }\npymongo.errors.ServerSelectionTimeoutError: ac-sa1chw2-shard-00-01.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997),ac-sa1chw2-shard-00-00.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997),ac-sa1chw2-shard-00-02.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997), Timeout: 30s, Topology Description: <TopologyDescription id: 63f49e3534051c1e57bb1c20, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-sa1chw2-shard-00-00.lw3clrm.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-sa1chw2-shard-00-00.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')>, <ServerDescription ('ac-sa1chw2-shard-00-01.lw3clrm.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-sa1chw2-shard-00-01.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')>, <ServerDescription ('ac-sa1chw2-shard-00-02.lw3clrm.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-sa1chw2-shard-00-02.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')>]>\n",
"text": "I am using the python to connect with MongoDB Atlas with mongoengine. the codes are as below.and I am getting error as below:I am using the Mac M1 chip.",
"username": "Atul_Kundaria"
},
{
"code": "Traceback (most recent call last):\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/finvasia_login.py\", line 47, in <module>\n _getting_login(shoonya)\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/finvasia_login.py\", line 32, in _getting_login\n UserToken(date=today, user_token=ret[\"susertoken\"]).save()\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/venv/lib/python3.10/site-packages/mongoengine/document.py\", line 417, in save\n _ = self._get_collection()\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/venv/lib/python3.10/site-packages/mongoengine/document.py\", line 231, in _get_collection\n if cls._meta.get(\"auto_create_index\", True) and db.client.is_primary:\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/venv/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 1115, in is_primary\n return self._server_property(\"is_writable\")\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/venv/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 881, in _server_property\n server = self._topology.select_server(writable_server_selector)\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/venv/lib/python3.10/site-packages/pymongo/topology.py\", line 272, in select_server\n server = self._select_server(selector, server_selection_timeout, address)\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/venv/lib/python3.10/site-packages/pymongo/topology.py\", line 261, in _select_server\n servers = self.select_servers(selector, server_selection_timeout, address)\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/venv/lib/python3.10/site-packages/pymongo/topology.py\", line 223, in select_servers\n server_descriptions = self._select_servers_loop(selector, server_timeout, address)\n File \"/Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/venv/lib/python3.10/site-packages/pymongo/topology.py\", line 238, in _select_servers_loop\n raise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: ac-sa1chw2-shard-00-01.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997),ac-sa1chw2-shard-00-02.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997),ac-sa1chw2-shard-00-00.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997), Timeout: 30s, Topology Description: <TopologyDescription id: 63f4a22a63c014d86b971aa2, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-sa1chw2-shard-00-00.lw3clrm.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-sa1chw2-shard-00-00.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')>, <ServerDescription ('ac-sa1chw2-shard-00-01.lw3clrm.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-sa1chw2-shard-00-01.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')>, <ServerDescription ('ac-sa1chw2-shard-00-02.lw3clrm.mongodb.net', 27017) 
server_type: Unknown, rtt: None, error=AutoReconnect('ac-sa1chw2-shard-00-02.lw3clrm.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')>]>\n",
"text": "",
"username": "Atul_Kundaria"
},
{
"code": "",
"text": "Usually this indicates to me either the ssl/tls library is out of date or the certificate store needs updating.Can’t really help as my Mac knowledge is: 0",
"username": "chris"
},
{
"code": "/Applications/Python\\ 3.10/Install\\ Certificates.command\ncd /Users/atulkundaria/PycharmProjects/pythonProject/pythonProject/finvasia/\nrm -rf venv/\npython3.10 -m venv venv\n",
"text": "Try running this command to update Python 3.10’s bundled openssl certificates:If that doesn’t resolve your issue, then delete and recreate your venv:For more options see TLS/SSL and PyMongo — PyMongo 4.3.3 documentation",
"username": "Shane"
},
{
"code": "",
"text": "The server certificate is not trusted by your “client”. Here the client may mean the lib/framework you are using to connect to that server, or the OS built-in list, depending on from where the list of trusted root certs are retrieved.You can try updating python cert lists as said in above comment, or check trusted list on your mac OS.",
"username": "Kobe_W"
}
] | Getting ssl error while connecting | 2023-02-21T10:42:57.281Z | Getting ssl error while connecting | 1,689 |
null | [
"app-services-cli"
] | [
{
"code": " \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Operation not permitted\"}}",
"text": "Hi,\nThe “mongod” command will reset the owner and group of the sock file.\nevery time when I run this command it will change the owner and group from mongod to root\ndue to this service not running and showing a service stop get in log when I restart the service\n \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Operation not permitted\"}}\nafter chown the mongod:mongod /tmp/mongodb-27017.sock it will work but I want a permanent solution. is there any way?",
"username": "Shakir_Idrisi"
},
{
"code": "",
"text": "You should not run mongod as root\nHow did you install mongodb?\nIf installed to start as service use sudo sysctl command\nCheck your installation instructions for exact command\nIf you want to start mongod manually try to run the command as normal user\nIf you run just mongod it tries start the instance on default port & default dirpath /data/db\nIf you want to start your mongod on another port & dirpath pass these parameters on command line\nEx:\nmongod --port 28000 --dbpath yourdir --logpath etc\nCheck mongodb documentation for various command line params like fork etc",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi,\nI have installed mongodb using Yum and I start the service using systemctl mongod start the service is running.but when I run mongod command as root it will change the sock file permission.",
"username": "Shakir_Idrisi"
},
{
"code": "",
"text": "If it is working with sysctl why you are running mongod as root again?\nIf you try to run it as root alk permissions on dirs,tmp files will change\nYou have to use sudo chmod commands to change it back to mongod owner",
"username": "Ramachandra_Tummala"
},
{
"code": "sudo -u mongodb mongod ....",
"text": "but when I run mongod command as root it will change the sock file permission.This is the expected outcome when running as root.If you feel the need to run mongod manually do it as mongodb. sudo -u mongodb mongod ....",
"username": "chris"
}
] | /tmp/mongodb-27017.sock file premission changed when run mongod command | 2023-07-03T07:47:06.077Z | /tmp/mongodb-27017.sock file premission changed when run mongod command | 794 |
null | [] | [
{
"code": "",
"text": "Not sure where I should email, hopefully someone here can point me in the right directionI am getting a bill for services I did not use!! The only this I did so far are 2 Mongodb University course\nI can’t find contact information for the billing dept.\nChris",
"username": "Chris_Job1"
},
{
"code": "",
"text": "Hi @Chris_Job1,If this is a MongoDB Atlas billing enquiry then I would raise it with the Atlas in-app chat support team. You can open a chat with them by opening up the chat bubble in the bottom right hand corner when you’re logged into the Atlas UI.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Getting a bill in error | 2023-07-04T23:35:56.086Z | Getting a bill in error | 583 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": " \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"index-autocomplete\",\n \"autocomplete\": {\n \"path\": \"token\",\n \"query\": \"John\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"token\": 1,\n \"Score\": { \"$meta\": \"searchScore\"}\n }\n },\n {\n \"$sort\": { \"score\": -1}\n }\n ]\n{\n \"token\": \"Johnson - Smith\",\n \"Score\": 4.894947052001953\n },\n {\n \"token\": \"Johnson\",\n \"Score\": 4.348791122436523\n },\n",
"text": "I have a collection containing a field “token” (among others).\nI defined a new index on that single field, with data type “autocomplete”.\nThen I made a very basic search query, in order to search on that field “token”, using the index. The query looks like this :I receive the following results:As you can see, the score of the first result is higher. However, as the field “token” of the second result has less characters, I expected that it would have a higher score than the first one, but it’s not the case.In MongoDB documentation, I saw that the score with autocomplete was not always accurate. So I already tried the following workarounds:If I use “Johnson” as input string, then only the second workaround works (I have a score of 10,5 for token “Johnson”, which is an exact match, and a score of 9,6 for token “Johnson – Smith”).But when I use “John” as input string, as it’s always a partial match, none of the workaround works, and I have always a higher score for token “Johnson – Smith”.Is there a way to change that behavior ?",
"username": "Nicolas_Guilitte"
},
{
"code": "scoreautocompleteautocomplete\"Johnson\"\"John\"scoreautocompleteconstantfunction",
"text": "Hi @Nicolas_Guilitte,In MongoDB documentation, I saw that the score with autocomplete was not always accurate.I believe the scoring behaviour you’ve mentioned is described as per the score portion of the autocomplete documentation:autocomplete offers less fidelity in score in exchange for faster query execution.But when I use “John” as input string, as it’s always a partial match, none of the workaround works, and I have always a higher score for token “Johnson – Smith”.The workarounds mentioned have to do with exact matches (i.e. Using \"Johnson\" as the search term rather than \"John\").Would perhaps altering the score option in the autocomplete suit your use case? One example could be to set it to a constant value so that they have the same score. There is also a function examples which may possibly help depending on the use case and other fields in the document(s) being searched.Regards,\nJason",
"username": "Jason_Tran"
},
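For reference, a minimal sketch of the constant-score option mentioned above (index and field names are taken from the original pipeline; whether this suits the use case depends on how results are ranked afterwards):

```js
// Every autocomplete match gets the same score, so the result order can then be
// controlled explicitly, e.g. by sorting on token length in a later stage.
db.collection.aggregate([
  {
    $search: {
      index: "index-autocomplete",
      autocomplete: {
        path: "token",
        query: "John",
        score: { constant: { value: 1 } }
      }
    }
  },
  { $project: { _id: 0, token: 1, score: { $meta: "searchScore" } } },
  { $addFields: { tokenLength: { $strLenCP: "$token" } } },
  { $sort: { tokenLength: 1 } }
])
```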
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Inaccurate score for autocomplete search | 2023-06-27T12:45:03.789Z | Inaccurate score for autocomplete search | 565 |
null | [
"atlas-triggers"
] | [
{
"code": "[\n {\"updatedFields\":{\"TopLevelArray.1.NestedArray.0.desc\":\"Test Value\"},\"removedFields\\\":[]}\n]\n{\"updateDescription.updatedFields.TopLevelArray.NestedArray\":{\"$exists\":true}}\n",
"text": "I’m trying to have a trigger fire when a nested array is updated. I can’t seem to get the match statement to fire when this array is modified. This is basically what is returned in the change event.I’m trying to match with the following but it doesn’t seem to work since the indexes are in the “updatedFields” document.I would like for it to match when anything at all in “NestedArray” is changed.Any ideas of how I can accomplish this?",
"username": "Tyler_Queen"
},
{
"code": "",
"text": "Hi @Tyler_Queen,I will need to test that to see if it’s possible.on trigger level , but have you considered parsing on trigger function and acting only on specific document?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Yes, I’m doing that in the attached function I just don’t want the trigger to fire every time the collection is updated.",
"username": "Tyler_Queen"
},
{
"code": "",
"text": "Hi @Tyler_Queen ,I could not find a way to do this on the match trigger expression as the field has a string path with “.” in its key So the only way I see is at the moment is in the function…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi,Sorry to re-open this post, but I have the same issue.\nIs there’s a way today to to this ?\nIt’s not really a problem to parse in the function but i’m confused about the billing of the triggers and the fact that we can’t filter a nested array.In my case, i have a collection who contain an array and i need to watch for 2/3 kinds of update of an object in this array, but if the trigger is called at every update on this collection, it will be usefull for like 5% of theses calls.So, potentially, we will be charged for a lot of useless calls in this trigger or it’s not related ?",
"username": "Pierre_More"
},
{
"code": "{\"updatedFields\":{\"TopLevelArray.1.NestedArray.0.desc\":\"Test Value\"}{\"$expr\":{\"$eq\":[{\"$let\":{\"in\":{\"$size\":\"$$foo\"},\"vars\":{\"foo\":{\"$filter\":{\"cond\":{\"$eq\":[\"topLevelField.nestedField\",\"$$this.k\"]},\"input\":{\"$objectToArray\":\"$updateDescription.updatedFields\"}}}}}},{\"$numberInt\":\"1\"}]}}",
"text": "Hi Pierre,This doesn’t appear to be possible for a nested array because the updatedField key includes the array index which can’t be accounted for, as illustrated earlier in the thread:{\"updatedFields\":{\"TopLevelArray.1.NestedArray.0.desc\":\"Test Value\"}If it were a nested field (not array) being updated, then this could be used:{\"$expr\":{\"$eq\":[{\"$let\":{\"in\":{\"$size\":\"$$foo\"},\"vars\":{\"foo\":{\"$filter\":{\"cond\":{\"$eq\":[\"topLevelField.nestedField\",\"$$this.k\"]},\"input\":{\"$objectToArray\":\"$updateDescription.updatedFields\"}}}}}},{\"$numberInt\":\"1\"}]}}Regards",
"username": "Mansoor_Omar"
},
{
"code": "matrix.items{\"updateDescription.updatedFields.matrix.items\":{\"$exists\":true}}matrix.items",
"text": "I have a similar issue with my trigger’s match expression.I have records with the following matrix field:\nI want my trigger to fire every time an update is detected in matrix.items. Is it possible?My match expression is: {\"updateDescription.updatedFields.matrix.items\":{\"$exists\":true}} . But this doesn’t seem to work (maybe because matrix.items is an array?). The trigger doesn’t fire.",
"username": "Laekipia"
},
{
"code": "i\"matrix.items.i.field\"\"matrix.items.i.fieldifield{\n \"$expr\": {\n \"$gt\": [\n {\n \"$size\": {\n \"$filter\": {\n \"input\": { \"$objectToArray\": \"$updateDescription.updatedFields\" },\n \"cond\": { \"$regexMatch\": { \"input\": \"$$this.k\", \"regex\": \"^matrix\\\\.items\\\\..*$\" } }\n }\n }\n },\n { \"$numberInt\": \"0\" }\n ]\n }\n}\nregexposcontext_iddate_addedregex\"^matrix\\\\.items\\\\..*\\\\.pos$\"",
"text": "When it comes to arrays, the challenge is the index part (i ) of the field name \"matrix.items.i.field\" , which dynamically represents what element in the array has been updated.So, the rule to match those events would involve a regular expression, matching all \"matrix.items.i.field no matter the value of i or field:By tweaking the regex part of this rule, it can be applied to any other use case where a trigger needs to fire from an update to an array element.For example, if the trigger needs to fire only when the pos field updates, ignoring updates to context_id or date_added fields, changing regex to \"^matrix\\\\.items\\\\..*\\\\.pos$\" would provide that functionality.The rest of the event handling logic must then be implemented in the corresponding Function.Best,\nAlex",
"username": "Alex_Svatukhin"
}
] | Realm Trigger match on nested array update | 2021-06-10T00:50:09.955Z | Realm Trigger match on nested array update | 5,673 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "try {\nUserSchema.findOne({ email: req.body.email }, (error, user) => {\nconsole.log(\"User:\", user)\nconst userFound = UserSchema.findOne(req.body.email)\nconsole.log(userFound)\n",
"text": "Hello, please forgive me if this is the wrong place.\nI have a login function that’s supposed to locate the user data in the db by their email, however this code is not working, after some research i was informed that mongoose removed callback support, which the function uses, and I find myself unable to figure out how to adapt the code.\nHere’s what I had:I have tried to rewrite as:however the response is a huge object with way too much info and none that I need, I can’t use userFound.name to find the name for example because that is not in the response I get.Thank you for your time",
"username": "Kanna_Istvar"
},
{
"code": "",
"text": "This seems a similar question",
"username": "John_Sewell"
}
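For what it's worth, the "huge object" in the rewritten attempt is most likely an unawaited Query, because findOne was called without awaiting it and with a bare string instead of a filter object. A minimal sketch of the usual callback-free form, assuming a Mongoose model named User (not the schema object):

```js
// findOne now returns a thenable Query; awaiting it yields the document (or null).
async function login(req, res) {
  const user = await User.findOne({ email: req.body.email }).exec();
  if (!user) {
    return res.status(404).send('User not found');
  }
  console.log('User:', user.name); // plain document fields are available here
}
```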
] | Login function with no callback | 2023-07-04T20:18:15.954Z | Login function with no callback | 742 |
null | [
"node-js"
] | [
{
"code": "",
"text": "MongoParseError: Password contains unescaped characters\nat new ConnectionString (C:\\Users\\HEWLETT-PACKARD\\Desktop\\flutter\\tysBackend\\node_modules\\mongodb-connection-string-url\\lib\\index.js:90:19)",
"username": "Martin_Ntalika"
},
{
"code": "const adminPassword = encodeURIComponent( process.env.ADMIN_PASSWORD )\n",
"text": "Try try encoding the password text with something like:",
"username": "Joe_Devlin"
},
{
"code": "",
"text": "add % followed by the hexadecimal ascii representation of the special characters you use.",
"username": "Black_Linden"
}
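Putting both suggestions together, a small sketch of building the URI (the environment variable names and cluster host are just examples):

```js
// encodeURIComponent percent-escapes characters such as @ : / ? # [ ] in the password,
// e.g. 'p@ss/word' becomes 'p%40ss%2Fword', which the connection-string parser accepts.
const user = encodeURIComponent(process.env.DB_USER);
const pass = encodeURIComponent(process.env.DB_PASS);
const uri = `mongodb+srv://${user}:${pass}@cluster0.example.mongodb.net/mydb`;
```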
] | Password contains unescaped characters | 2021-12-03T10:54:45.438Z | Password contains unescaped characters | 19,202 |
null | [
"compass"
] | [
{
"code": "",
"text": "Just updated Compass to 1.38.x from 1.32.xWhen did Stage Wizard appear?Interesting feature!",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi @Jack_Woehr!Stage wizard is a new feature introduced in 1.38. Good to hear you find it interesting If you have any feedback about it or if you have any suggestions for what other use cases you’d like to see the wizard, let us know!",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "@Massimiliano_Marcon , Yes, I have found the Suggest a new use case link … I suppose one could suggest a lot of the aggregation questions one finds here in the Developer Forum!",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Compass: When did Stage Wizard appear? | 2023-07-04T00:52:27.773Z | Compass: When did Stage Wizard appear? | 513 |
null | [
"chennai-mug"
] | [
{
"code": "",
"text": "I am delighted to introduce myself as the Chennai MongoDB User Group(MUG) leader. My name is Silambarasan Balasubramanian and I am thrilled to be a part of this amazing community.I’ve 11+ years of experience working with different database technologies and have been using MongoDB for almost 7+ years now. I am also a MongoDB Certified DBA,Developer and SI Architect by the MongoDB Inc.I’ve learned MongoDB from its great documentation and MongoDB University.I am looking forward to working closely together to make our MongoDB group a vibrant hub of knowledge and expertise. Let’s embark on this exciting journey together, embracing the power and possibilities of MongoDB.Best,\nSilambarasan Balasubramanian",
"username": "silambarasan_87259"
},
{
"code": "",
"text": "Welcome to the community @silambarasan_87259 We are glad to have someone with your experience and expertise lead the Chennai MongoDB User Group Community!Hope you’ll find this community to be a valuable resource for all things MongoDB!",
"username": "Harshit"
}
] | Hello Everyone, Introducing Myself as the New MUG Leader! | 2023-07-04T11:23:42.451Z | Hello Everyone, Introducing Myself as the New MUG Leader! | 645 |
null | [
"atlas-device-sync",
"android",
"kotlin"
] | [
{
"code": "realmApp.currentUser.accessTokenhttps://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/loginhttps://realm.mongodb.com/api/admin/v3.0/groups/$realmProjectId/apps/$realmAppId/users/verify_token",
"text": "I’m trying to verify the user token via my server, the android app sends the token to the server after retrieving it from:realmApp.currentUser.accessTokenin the server I get the token to access to the admin api adding in the body the public and private key of organization owner (added to the project) byhttps://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/loginthen ask to admin api, adding the bearer token got in previous step to the header and the token of the user in the body of requesthttps://realm.mongodb.com/api/admin/v3.0/groups/$realmProjectId/apps/$realmAppId/users/verify_tokenI always get this response: 404 {“error”:“user not found”}The client app is an android app using realm sdk in kotlin version 1.9.1 and the server is a ktor server.Thanks to anyone who can help me",
"username": "Matteo_Magnone"
},
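For context, the two calls described above look roughly like the sketch below (shown in JavaScript for brevity even though the server is Ktor; the body field names follow my reading of the Admin API docs, and PUBLIC_KEY, PRIVATE_KEY, projectId, appId and clientAccessToken are placeholders):

```js
// 1) Exchange the project API keys for an admin access token.
const loginRes = await fetch('https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ username: PUBLIC_KEY, apiKey: PRIVATE_KEY })
});
const { access_token } = await loginRes.json();

// 2) Verify the *client* user's token (the one from realmApp.currentUser.accessToken).
const verifyRes = await fetch(
  `https://realm.mongodb.com/api/admin/v3.0/groups/${projectId}/apps/${appId}/users/verify_token`, {
  method: 'POST',
  headers: { Authorization: `Bearer ${access_token}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({ token: clientAccessToken })
});
console.log(verifyRes.status, await verifyRes.json());
```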
{
"code": "users/verify_token",
"text": "What does the body you send to users/verify_token look like?",
"username": "Clemente_Tort_Barbero"
},
{
"code": "",
"text": "Hi, thanks for you reply.\nI have solved the issue checking the body content, that just was the wrong token.Thank you for your help",
"username": "Matteo_Magnone"
}
] | Verify token on Atlas return always User Not Found | 2023-06-28T16:02:55.743Z | Verify token on Atlas return always User Not Found | 744 |
[
"compass",
"mongodb-shell",
"indexes"
] | [
{
"code": "",
"text": "I’d like to know if I can do createSearchIndex from mongo compass, from mongosh. Can I connect to Atlas within compass?Where should I create the index db.cars.createSearchIndex(…)?, with a driver in a scriptThanks",
"username": "javier_ga"
},
{
"code": "",
"text": "Hi @javier_gaThat is the mongosh method. You can do that in compass using its builtin mongosh.Look to the bottom of the compass gui and you will see mongosh, click to expand this.\nimage1430×846 51.9 KB\n",
"username": "chris"
},
{
"code": "",
"text": "For now, the solution is what @chris suggested.Soon, Compass will have a built-in way to create Search indexes in the UI. Stay tuned for that!And yes, Compass can connect to Atlas.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "In the course of mongo, chapter “mongo atlas search”, we practice with atlas in the Lab. We use a .json to create a default index. My question is that if we can use compass to create this .json o refers to this json somewhere in my local machine or in the UI compass. I suppose that from yor reply I can’t create an index with a json file from the compass, is that correct?.",
"username": "javier_ga"
},
{
"code": "mongosh",
"text": "If you feel like copy/paste into the mongosh then those should work as almost all commands are supported.But for ‘tested’ repeatability you are best getting mongosh installed and use that to execute js files for labs.",
"username": "chris"
}
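For example, the lab's default index definition can be pasted into Compass's embedded mongosh roughly like this (the collection name cars comes from the question; the definition shown is the generic dynamic-mapping one, not necessarily the exact lab file, and it assumes a mongosh/Atlas combination that supports the helper):

```js
// createSearchIndex takes an optional index name plus the JSON definition from the lab file.
db.cars.createSearchIndex(
  "default",
  { mappings: { dynamic: true } }
)
```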
] | Can I create a createSearchIndex from compass? | 2023-07-03T15:32:14.438Z | Can I create a createSearchIndex from compass? | 562 |
null | [
"aggregation",
"atlas-triggers"
] | [
{
"code": "{\n members: [\n { \n _id: ....,\n email_statuses: [\n {\n _id: ....,\n prevalent_status: 'accepted'\n }\n ]\n }\n ]\n}\nupdateDescription.updatedFields.members.email_statuses.prevalent_status",
"text": "I am trying to execute a trigger function, on database update, only when a specific field in any of a document’s nested array subarray objects is changed.in my case i have a document containing the members array, and each member contains an email_statuses array, i would like the trigger to execute only when the property prevalent_status changes in any of the member’s email_statusesdocument:I have understood that the updateDescription updatedFields key will contain the indexes of the arrays that hold the element that is updated, therefor i can’t set a $match expression as follows:updateDescription.updatedFields.members.email_statuses.prevalent_status (doesn’t work)Is there a way to formulate the match expresison to achieve what i am trying to do?",
"username": "Dario_Grilli"
},
{
"code": "console.Log(EJSON.Stringify(changeEvent.updateDescription)",
"text": "Do you have any sense of how much load you expect the trigger to have? I generally think that unless this is a trigger that is incredibly performance-sensitive (thousands per second), you will be better off doing your filtering within the function code itself. The reason is that different kinds of operations will appear in the ChangeEvent differently (adding new elements to the list, updating an element, removing an element, unsetting the field, truncating the array, etc).For these reasons, defining the logic in simple JS code might be a lot easier for you if that is acceptable.If not, and you know that the types of operations being made to the document are controlled entirely by you and unlikely to change, I think the best thing to do is to first make the code for your function console.Log(EJSON.Stringify(changeEvent.updateDescription), then remove the match expression and find a few example Change Events that you would like the event to fire for. Then if you want to post them here I can try to help formulate an expression, but without seeing the exact events its hard to write an expression on them.Also, wanted to post this link in case you hadn’t read through it: https://www.mongodb.com/docs/atlas/app-services/triggers/database-triggers/#use-match-expressions-to-limit-trigger-invocationsBest,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thank you for your response Tyler, atm I am s truggling to get the actual update operations working, (opened another thread about it).\nAs soon as i solve that and have the updates working i’ll get back to this topic and paste some updateFields examples.A",
"username": "Dario_Grilli"
},
{
"code": "{\n\t\"updatedFields\": {\n\t\t\"members.1.email_status.5.prevalent_status\": \"ACCEPTED\",\n\t\t\"members.1.email_status.5.statuses.accepted\": {\n\t\t\t\"at\": \"2023-06-16 16:29:49\",\n\t\t\t\"id\": \"AAAAAQAAKY4\"\n\t\t}\n\t}\n}\n{\n\t\"updatedFields\": {\n\t\t\"members.2.email_status.5.prevalent_status\": \"SENT\",\n\t\t\"members.2.email_status.5.statuses.sent\": {\n\t\t\t\"at\": \"2023-06-16 16:29:47\"\n\t\t}\n\t}\n}\n",
"text": "Here a few examples of the updateFieldsThe trigger should fire only when pervalent_status or any param inside statutes change within a email_status of a member",
"username": "Dario_Grilli"
},
{
"code": "[\n {\n $addFields:\n {\n updatedFieldArr: {\n $objectToArray: \"$updateDescription.updatedFields\",\n },\n },\n },\n {\n $match:\n {\n $or: [\n {\n \"updatedFieldArr.k\": {\n $regex:\n \"members.[0-9]+.email_status.[0-9]+.prevalent_status\",\n },\n },\n {\n \"updatedFieldArr.k\": {\n $regex:\n \"members.[0-9]+.email_status.[0-9]+.statuses\",\n },\n },\n ],\n },\n },\n]\n",
"text": "Hi, I played around with this a bit and you can accomplish this via a normal aggregation pipeline like this:Therefore, I think you can add the first bit to the “project” and the second bit to the “match expression”. I have not tried this on a trigger through admitedly, so let me know if this works?As an aside, I found the following process helpful for this:Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Seems like I am unable tu use this inside the trigger match expression in the MongoDB realm gui\nimage1189×333 24.5 KB\n",
"username": "Dario_Grilli"
},
{
"code": "",
"text": "Hi, it does seem like it is possible to do if you manage your own change stream and use the “watch()” API, but unfortunately, triggers allow you to specify a Match and a Project, but we apply the Project after the Match (and in your case, you want the opposite). I think given this complexity I would once again push you to either:Let me know if that works for you, I think its the best option long term since it lets you more clearly define what event you want to react to in code as opposed to aggregation expressions.",
"username": "Tyler_Kaye"
},
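A rough sketch of that in-function filtering, based on the updatedFields examples earlier in the thread (the regex and the early return are illustrative only, not a drop-in implementation):

```js
exports = function (changeEvent) {
  const updated = (changeEvent.updateDescription || {}).updatedFields || {};

  // Act only when a member's email_status prevalent_status or statuses sub-document
  // changed, e.g. "members.1.email_status.5.prevalent_status".
  const relevant = Object.keys(updated).some((k) =>
    /^members\.\d+\.email_status\.\d+\.(prevalent_status|statuses)/.test(k)
  );
  if (!relevant) {
    return; // cheap no-op invocation; the downstream AWS Lambda is never called
  }

  // ... forward the relevant change to the Lambda / other handler here ...
};
```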
{
"code": "",
"text": "Ty Tyler, now that I can confirm that it can’t be handled as I hoped, I will have to test the cost of allowing the trigger to always initiate (it is bound to aws lambda do every trigger even if useless will have a cost) or if to modify data modelA",
"username": "Dario_Grilli"
}
] | Trigger match update on nested array object property | 2023-06-14T18:11:42.327Z | Trigger match update on nested array object property | 1,074 |
null | [
"database-tools",
"backup"
] | [
{
"code": " Failed: <database>.<collection>: error creating collection <database>.<collection>: error running create command: (AtlasError) parameter changeStreamPreAndPostImages is disallowed in create command\n",
"text": "So I was using mongodump and mogorestore to clone one database into another project, the mongodump work fine but when trying to sun the command mongorestore I get the following error:I have try to search for this error but havent found anyone talking about itBoth mongorestore and mongodump are with versions 100.7.0 and both clusters are in version 6.0.6.I have already done this a couple of times with no problems, suddenly it appeared and not sure why",
"username": "Angel_Trevino"
},
{
"code": "mongorestoremongodumpmongorestore",
"text": "Hi @Angel_Trevino - Welcome to the communityCan you advise the following:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "mongodump --uri <uri source cluster>/<database>\nmongorestore --uri <uri destination cluster>\n",
"text": "Hi @Jason_Tran thank you for the welcoming message.These where the following commands that I executed.Both clusters are in different projects but they are the shared tiers.The last time I perform this was like 2-3 weeks ago aprox.I also tried to do it in another system, and it produce the same error. The systems I did it with were with Windows and MacOSNot sure if its anything with my configuration or what tbh",
"username": "Angel_Trevino"
},
{
"code": "mongodumpFull DocumentDocument Preimage<database>.<collection>",
"text": "Thanks for providing those details @Angel_Trevino,Are you able to confirm if:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello @Jason_Tran thank you for your response,Neither do the Source or the destination have any triggers register to them.",
"username": "Angel_Trevino"
},
{
"code": "",
"text": "Not sure if this is helpful for debugging the issue but I managed to make my copy by using mongoexport and mongoimport, the only issue is that I had to run the command for each collection I had.",
"username": "Angel_Trevino"
},
{
"code": "",
"text": "Just updating this post - This was behaviour was identified as bug and was fixed several weeks ago.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Error on mongorestore (AtlasError) parameter changeStreamPreAndPostImages is disallowed in create command | 2023-05-22T17:38:34.793Z | Error on mongorestore (AtlasError) parameter changeStreamPreAndPostImages is disallowed in create command | 1,011 |
null | [
"schema-validation"
] | [
{
"code": "Error:\n\nfailed to validate upload changesets: field \"crop\" in table \"Farm\" should have link type \"objectId\" but payload type is \"Null\" (ProtocolErrorCode=212)\n\"crop\": {\n \"bsonType\": \"objectId\"\n }\n",
"text": "I recently noticed my Realm app was no longer syncing. Looking at the Atlas App Services showed a client token error that necessitated sync be terminated and restarted. Upon doing so, I’m now getting a BadChangeset Error;This app has been in production more than a year, so this error is odd. In the client code, the crop property is marked as nullable. In the App Services schema, the crop property is not listed under the ‘required’ property and is defined as;Within App Services, under App Settings, I’ve enabled the setting ‘Null Type Schema Validation’. This has not resolved the issue. How do you set a property as optional in the schema? Previously, if a property was not set in the schema’s ‘required’ property, it was presumed optional.",
"username": "Mauro"
},
{
"code": "",
"text": "hmm odd - the option you flagged there should handle this case. - can you share the url to your app please and we will take a look on the backend?",
"username": "Ian_Ward"
}
] | BadChangeset Error | 2023-07-01T22:55:21.865Z | BadChangeset Error | 608 |
null | [] | [
{
"code": "",
"text": "Hello!\nI am trying to create a data pipeline which uses for development only and not for production use cases. For this purpose I have an on-premise machine with a MongoDB that I want to sync to the Atlas MongoDB and from the cloud on request only. Any native support for this feature?Thanks ahead",
"username": "Guy_Dahan"
},
{
"code": "",
"text": "We have a private preview of our edge server here -would this solve your use case or what are you looking for?",
"username": "Ian_Ward"
}
] | Local MongoDB to sync only on request | 2023-06-21T06:00:10.943Z | Local MongoDB to sync only on request | 616 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "I can’t seem to find a way to unlink an auth provider from a user account. Is there any plan to add this feature?",
"username": "Sam_A98"
},
{
"code": "",
"text": "@Sam_A98 hi Sam - are you looking to do this from the user SDK or from an Admin API?",
"username": "Ian_Ward"
}
] | Is there any plan to add user.unlinkCredential to the sdk | 2023-06-14T07:29:50.345Z | Is there any plan to add user.unlinkCredential to the sdk | 745 |
null | [
"atlas-cluster"
] | [
{
"code": "{\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map {\n 'xxx-00-pri.xxx.mongodb.net:27017' => [ServerDescription],\n 'xxx-01-pri.xxx.mongodb.net:27017' => [ServerDescription],\n 'xxx-02-pri.xxx.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-7rmoin-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set {}\n }\n",
"text": "Hi I am getting the following error when nodes app is trying to access Atlas db from google app engine. Following steps I followedNote: ‘xxx’ is place holderAny suggestion on how to resolve this issue?",
"username": "Harsh_Koshti"
},
{
"code": "Active",
"text": "Hi @Harsh_Koshti,Are there 2 connections created? Or do you mean that you accepted the initiated connection and a single connection appears in the Peering tab of Atlas (that has state Active)?As per the GCP Set up a Network Peering Connection documentation:You must add your VPC CIDR block address (or subset) associated with the peer VPC to the IP access list before your new VPC peer can connect to your Atlas cluster.Did you add the Atlas CIDR to the IP access list or the GCP VPC associated with your GAE environment?Lastly, just to be sure, please go over the Network Peering between an Atlas VPC and Two Virtual Networks with Identical CIDR Blocks documentation to confirm you don’t have overlapping CIDR’s.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb-gap peering not working | 2023-07-01T07:45:12.980Z | Mongodb-gap peering not working | 456 |
null | [] | [
{
"code": "",
"text": "when I click on chart tab to do visuilzation, nothing shown on page, it is just a white page. by the way I used diffrent browsers chrome ,explore,firefox. same problem",
"username": "Rahaf_Alzahrani"
},
{
"code": "",
"text": "Hi @Rahaf_Alzahrani and welcome in the MongoDB Community !Could you please share your Atlas Project ID (it’s in the URL) so the Atlas engineers can investigate?Thanks,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi, @MaBeuLux88 the same thing is happening to me, when I click on chart tab to do visualization nothing shown on page, it is just a white page.",
"username": "Juan_Felipe_Gomez_Velez"
},
{
"code": "",
"text": "This has been fixed. All your dashboards should be working now.",
"username": "Avinash_Prasad"
},
{
"code": "",
"text": "Thanks. I can see the boards",
"username": "Juan_Felipe_Gomez_Velez"
},
{
"code": "",
"text": "Hi, @MaBeuLux88 the same thing is happening to me, when I click on chart tab it says “Could not load instance. Please try again”.\n\nScreenshot 2023-04-18 at 12.40.37 AM1992×1150 174 KB\n",
"username": "Neha_Tyagi1"
},
{
"code": "",
"text": "Oops I’m just discovering this post today.\nLooks like a completely different problem. I hope your issue was fixed in the meantime.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Chart tab doesn't work"shown white page" | 2022-06-15T23:37:32.459Z | Chart tab doesn’t work”shown white page” | 3,249 |
[
"data-modeling"
] | [
{
"code": "Schema 1\n{\n \"name\": \"Shama\",\n \"contact\": {\n \"age\": 18,\n \"grade\": \"A\",\n \"school\": \"XYZ High School\"\n }\n},\n\nSchema 2\n{\n \"name\": \"Shama\",\n \"age\": 18,\n \"grade\": \"A\",\n \"school\": \"XYZ High School\"\n},\n\nImage Schema 1\n{\n \"_id\": <ObjectId1>,\n \"username\": \"123xyz\",\n \"phone\": \"1230456-7890\",\n \"email\": \"[email protected]\",\n \"level\": 5,\n \"group\": \"dev\"\n},\n\n",
"text": "I want to know the benefits of embedding the data like phone number and email in separate contact key likeWhat is the benefits of object schema 1 before schema 2?\nI have seen this schema many place while someone learn the data modelling of mongodb.However the querying of schema is still easy as compared to schema 1, then why schema 1 is so popular. Please anyone put some light on it.Is Image Schema 1 is better or not",
"username": "Zubair_Rajput"
},
{
"code": "",
"text": "From a personal point of view I find grouping data together makes a more readable document, you’re using the structure of the document to describe the data.\nIn our main application we have up to 1700 fields on a document that are all nested within groups that describe the type of data that they represent. This makes it easy to quickly navigate to the data that we want to look at, or to simply project out the areas of a document that we want to deal with as opposed to having to specify a project statement that has 100 or so fields in the definition.",
"username": "John_Sewell"
}
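To illustrate the point about projecting whole groups, here is a small sketch using the Schema 1 shape from the question (the collection name students is an assumption, used only for the example):

```js
// With the embedded "contact" group, one projection pulls out the whole area of interest
db.students.find({}, { name: 1, contact: 1 })

// With the flat layout you would have to enumerate every field instead
db.students.find({}, { name: 1, age: 1, grade: 1, school: 1 })
```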
] | What is the benefits of storing email and phone number in contact field rather then simply using phone and email itself only | 2023-07-03T18:36:55.460Z | What is the benefits of storing email and phone number in contact field rather then simply using phone and email itself only | 457 |
|
null | [
"queries",
"transactions"
] | [
{
"code": "select * from transactions t where substring(t.voucherNumber,1,16) in (:voucherList)\n",
"text": "Hello,I’m a new guy in the NoSQL world and I need to migrate an existing project on PostgreSQL to Mongodb.I’m struggling with one specific query which is really not complicated on SQL side but I cannot manage to do the equivalent in noSql.Could you help me please ?The SQL query that I need to “adapt” to NoSql world As you can see, I store voucherNumbers in 30 characters in database but for a specific workflow, I received a list of vouchers on 16 characters and I need to find all the documents where the voucherNumber starts with one of the list.It tried regex, clause IN etc, but I cannot manage to do it.Any help would be really appreciated.",
"username": "Gwen"
},
{
"code": "db.getCollection(\"Test\").aggregate([\n{\n $match:{\n $expr:{\n $in:[\n {\n $substrCP:['$voucherNumber', 0, 3]\n },\n ['ABC', 'ABF']\n ]\n }\n }\n}\n])\n",
"text": "How about this:Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thank you so much !!",
"username": "Gwen"
},
{
"code": "|^db.collection.aggregate([\n {\n $match: {\n voucherNumber: {\n $regex: \"^ABC|^ABF\"\n }\n }\n }\n])\n$expr$expr$in",
"text": "Hello @Gwen, Welcome to the MongoDB Community Forum,You can use $regex operator with pipe | sign for or condition and ^ sign to check matching string from the start of the string,@John_Sewell, if there is a regex operator then no need to use an extra expression operators to make this happen.And the other side, There are some restrictions in $expr operator about index use, $expr can’t use an index with $in operator.",
"username": "turivishal"
},
{
"code": "",
"text": "True, in this case if it’s just the prefix we’re searching for a straight regex does make it easier, although you’ll need to form the regex string in code (which is trivial).Be interesting to see the performance difference between the two though.",
"username": "John_Sewell"
},
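Forming that regex string in application code could look like the sketch below; the voucherList values and the transactions collection name are assumptions taken from the SQL example, not part of the original answer:

```js
const voucherList = ["ABC1234567890123", "ABF1234567890123"]; // 16-character prefixes received by the workflow

// Escape regex metacharacters so each prefix is matched literally,
// then anchor every alternative at the start of the field value
const escaped = voucherList.map(v => v.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"));
const pattern = escaped.map(v => "^" + v).join("|");

db.transactions.find({ voucherNumber: { $regex: pattern } });
```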
{
"code": "",
"text": "I just created a quick collection with about 100,000 random voucher codes and ran the two approaches 10 times or so each.\nBoth come down to sub 200ms execution time (running on local workstation with lots of CPU and RAM, plus only one field in documents so the index fully covers the query in RAM)So either way should be good, if you want to compare two fields against each other you’ll want to use the $expr but in this case as Vishal said you can just form a regex.Note that unless you are using Atlas search indexes and doing something interesting, you need to anchor the start of a regex in order for Mongo to make use of an index when matching against it.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thank you both for your replies.I’m not sure to really understand what I should do concerning “Note that unless you are using Atlas search indexes and doing something interesting, you need to anchor the start of a regex in order for Mongo to make use of an index when matching against it.” ?",
"username": "Gwen"
},
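On the anchoring question, this is only an illustration of the difference, reusing the same hypothetical transactions collection as above:

```js
// Anchored with "^": MongoDB can turn the fixed prefix into index bounds,
// so an index on { voucherNumber: 1 } is scanned over a narrow range.
db.transactions.find({ voucherNumber: { $regex: "^ABC1234567890123" } })

// Unanchored: the pattern may match anywhere in the string,
// so the whole index (or collection) generally has to be scanned.
db.transactions.find({ voucherNumber: { $regex: "ABC1234567890123" } })
```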
{
"code": "",
"text": "I don’t really have a preference between the two solutions.The first solution of John is working well, and the one of Vishal works too (indeed the regex is trivial to build) and apparently there are no difference on performance side.",
"username": "Gwen"
},
{
"code": "",
"text": "And I have another question concerning the same need : the provided list can contain a lot of vouchers to compare, something like 300 vouchers.May be it can help to choose the solution between the two you proposed ?",
"username": "Gwen"
},
{
"code": "voucherNumber$expr$in",
"text": "Hi @Gwen,The first solution of John is working well, and the one of Vishal works too (indeed the regex is trivial to build) and apparently there are no difference on performance side.How did you measure the performance? I mean if you are checking just taken time is less then it does not mean it performed well, understand the explain() method to check the exact view of the analytics of your query.For more details. refer to the Performance best practices,Performance Best Practices for MongoDBAnd I have another question concerning the same need : the provided list can contain a lot of vouchers to compare, something like 300 vouchers.If you want to use the index in voucherNumber then I have already mentioned that in my previous post $expr with $in does not use an index it will do a collection scan, which means it will impact First Disk Speed, especially on a large collection, Second The size of your WiredTiger cache and the size of your overall RAM.On the other side, you can refer to the documentation about indexing strategies",
"username": "turivishal"
},
{
"code": "",
"text": "Vishal is correct, I didn’t notice that from prior use, running on a test collection, while fast on a large dataset it IS performing a colscan using $in embedded within the Expr. I’ve previously used this with other operators which WILL hit an index and so not noticed this restriction.So I agree, in this scenario don’t embed the $in within the expr and just use a regex, and something to watch out for, until they add this functionality!John",
"username": "John_Sewell"
}
] | How to perform an operation on a field and then compare this result with provided data? | 2023-06-30T19:29:55.084Z | How to perform an operation on a field and then compare this result with provided data? | 653 |
[
"queries"
] | [
{
"code": "",
"text": "I’m building a basic CRUD application using MongoDB atlas as my database. Using Postman, I’m able to send post requests and I can see that they are posting correctly because I see my document in there. (and I’m getting a http 201 created response\n\nScreenshot (25)2204×758 130 KB\n\n)But when I try to do a simple get response that is supposed to return all of my data, postman is returning nothing. Just an empty array.I tried looking online on youtube videos for help through tutorials, but all my code looks pretty parallel. Only main difference is that most tutorials use just mongoDB locally, not the atlas cloud version. But I dont think it would make much of a difference in code, especially since I am able to successfully do a post request",
"username": "James_Villamayor"
},
{
"code": "",
"text": "Hi @James_Villamayor and welcome in the MongoDB Community !It’s probably an error in your GET function. No reason to have a different behaviour between MongoDB localhost & MongoDB in Atlas unless you are using completely different versions.Maybe share the code and someone can help debug.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "\nScreenshot (27)711×1200 103 KB\n",
"username": "James_Villamayor"
},
{
"code": "",
"text": "\nScreenshot (28)698×799 48.8 KB\n",
"username": "James_Villamayor"
},
{
"code": "",
"text": "My bet is that you are missing the default constructor in your POJO but I could be wrong.Check out my equivalent starter project where I’m using Spring boot but not Spring Data (because it’s a useless proxy to the MongoDB driver in my opinion)MongoDB Blog Post: REST APIs with Java, Spring Boot and MongoDB - GitHub - MaBeuLux88/java-spring-boot-mongodb-starter: MongoDB Blog Post: REST APIs with Java, Spring Boot and MongoDB",
"username": "MaBeuLux88"
},
{
"code": "Player.find({\n full_name: { $regex: new RegExp('.*' + player_name + '.*', 'i') },\n }).sort({ rating: -1 })\n .then(players => {\n // console.log(players)\n res.status(200).send(JSON.stringify(players))\n })\n",
"text": "I may have related problem. The problem is with my search api\nWhen i queried directly on my db on atlas/compass, i got the result i wanted.\n\nimage1920×1080 216 KB\n\nBut when i used postman or search on my web, the result contains all my documents and the web got crashed.\nHere is my codePlease help me!\nThanks in advanced",
"username": "Hoang_Nguyen_Duy"
},
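One possible explanation for "all documents are returned" (not confirmed in the thread) is that player_name arrives empty or undefined, so the pattern collapses to ".*.*" and matches everything. A hedged sketch of a guard plus input escaping, reusing the hypothetical Player model from the post above:

```js
// Reject empty search terms before building the regex
if (!player_name || !player_name.trim()) {
  return res.status(400).send('player_name is required');
}

// Escape user input so it is treated literally inside the pattern
const escaped = player_name.trim().replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

Player.find({ full_name: { $regex: escaped, $options: 'i' } })
  .sort({ rating: -1 })
  .limit(50) // avoid streaming the whole collection to the browser
  .then(players => res.status(200).json(players))
  .catch(err => res.status(500).send(err.message));
```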
{
"code": "",
"text": "Hi @James_Villamayor, I faced the same issue. Generating getter and setter resolved my issue.\nI hope this helps.",
"username": "Bhargav_N_A"
},
{
"code": "",
"text": "Before\n",
"username": "Bhargav_N_A"
},
{
"code": "",
"text": "After\n",
"username": "Bhargav_N_A"
}
] | I can post to my database, but my get requests say that my database is empty on Postman | 2022-06-15T07:43:57.567Z | I can post to my database, but my get requests say that my database is empty on Postman | 5,322 |
|
null | [] | [
{
"code": "",
"text": "I’m login, correct project name: MDB_EDU selected.\nAll is normal then I click ‘Check’ so I can move on in the lab.Incorrect solution 1/1\nUnable to load tags for cluster myAtlasClusterEDU in 6…I have never had this issue before. I have done 14 course.Since I was in a diffrent location, I have updated my IP address in Atlas. It says my current address is on there.I did see this on Atlas website\nAn error occurred loading sample data: Target cluster does not have enough free space to import datasetI don’t use this atlas account for anything but MongoDB Uni",
"username": "Efrain_Davila"
},
{
"code": "",
"text": "Even I am also facing the same issue",
"username": "Manish_Arora"
},
{
"code": "",
"text": "I just added the random tag on the bottom of database tab to proceed.",
"username": "Default_Test"
},
{
"code": "",
"text": "Can you elaborate what is “random tag” and how to add it? I am still stuck ",
"username": "John_Siu"
},
{
"code": "",
"text": "I solved this issue deleting some clusters in my free tier",
"username": "javier_ga"
},
{
"code": "",
"text": "Hey @Efrain_Davila / @javier_ga / @John_Siu / @Manish_Arora / @Default_Test,Thank you for highlighting the issue. I suspect that you may be facing difficulties loading the sample dataset due to the limited space available in your free-tier shared Atlas cluster.However, could you please provide the link to the lab you are experiencing issues with? This will help us to investigate further and assist you accordingly.Thank you for your patience.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "For me, this happen when I was doing the following lab:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.I end up deleting the cluster and recreate it manually in atlas, and added tag in the process.",
"username": "John_Siu"
},
{
"code": "",
"text": "Hi, yes, It was the sample dataset, in chapter search index, free tier. Well I already solved it, I don’t have now the url, but if it happens in future I’ll share it. Mongodb with python",
"username": "javier_ga"
},
{
"code": "",
"text": "Hi @John_Siu / @javier_ga,We want to inform you that the team has implemented a fix to resolve the issue.We hope that you can now access the labs without any difficulties. If you have any further issues or questions, please don’t hesitate to reach out to us.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank you for getting back with me. The Lab I am having this issue is this one below.Unit: MongoDB Atlas Search >>\nLesson 2: Creating a Search Index with Dynamic Mapping / Practice >>\nLab: Creating a Search Index with Dynamic Field MappingEdit: I didn’t look at your last reply. I’ll check it out, thank you.",
"username": "Efrain_Davila"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lab, Unable to load tags for cluster myAtlasClusterEDU in 6 | 2023-07-01T17:28:47.012Z | Lab, Unable to load tags for cluster myAtlasClusterEDU in 6 | 818 |
[
"node-js"
] | [
{
"code": "",
"text": "Hello everyone!Hopefully I’m in the right place to ask my question So I’m trying to resolve a problem for a while now but couldn’t find much that helped me on the internet.\nSo the problem is that I can’t connect to my Mongodb atlas using my cluster, it used to work very well before.Here is the error I receive :\n\nCapture d'écran 2023-06-23 1419151292×243 18.6 KB\nI have an “ECONNREFUSED” to a specific IP.What I understood is that I’m trying to connect to my cluster through a different IP each time ( Dynamic IP changing from Mongo db I believe )I’m using O2switch as my server ( Cpanel ) and I feel that I need to allow the IP that I’m connecting to which makes it work somehow, but then the IP change again and I can’t connect anymore.I thought of allowing a MongoDB cluster DNS resolver on my Cpanel to be able to connect even if the IP changes but without success until now.For your information as well if needed, I allowed all IPs to access my cluster, my password and username are correct in my cluster as well.Hope I was clear enough, I’m still a junior trying to figure it out things Thank you !",
"username": "Takayuki_Wada"
},
{
"code": "M0/M2/M5M10",
"text": "Hello @Takayuki_Wada ,Welcome to The MongoDB Community Forums! Could you share additional details for me to understand your use-case better?Attaching a few documents you can refer to troubleshoot/fix connection error.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Can't connect my nodeJS project to the mongodb atlas cluster | 2023-06-25T10:33:32.356Z | Can’t connect my nodeJS project to the mongodb atlas cluster | 501 |
|
null | [
"replication",
"sharding"
] | [
{
"code": "sh.addShard(\"test-shardsrv-replica/shard1:27018\");MongoServerError: Could not find host matching read preference { mode: \"primary\" } for set test-shardsrv-replica{\n set: 'test-shardsrv-replica',\n date: ISODate(\"2023-06-29T13:42:41.974Z\"),\n myState: 1,\n term: Long(\"4\"),\n syncSourceHost: '',\n syncSourceId: -1,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 1,\n writeMajorityCount: 1,\n votingMembersCount: 1,\n writableVotingMembersCount: 1,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1688046154, i: 1 }), t: Long(\"4\") },\n lastCommittedWallTime: ISODate(\"2023-06-29T13:42:34.937Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1688046154, i: 1 }), t: Long(\"4\") },\n appliedOpTime: { ts: Timestamp({ t: 1688046154, i: 1 }), t: Long(\"4\") },\n durableOpTime: { ts: Timestamp({ t: 1688046154, i: 1 }), t: Long(\"4\") },\n lastAppliedWallTime: ISODate(\"2023-06-29T13:42:34.937Z\"),\n lastDurableWallTime: ISODate(\"2023-06-29T13:42:34.937Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1688046144, i: 1 }),\n electionCandidateMetrics: {\n lastElectionReason: 'electionTimeout',\n lastElectionDate: ISODate(\"2023-06-28T07:46:23.215Z\"),\n electionTerm: Long(\"4\"),\n lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1687938379, i: 1 }), t: Long(\"3\") },\n numVotesNeeded: 1,\n priorityAtElection: 1,\n electionTimeoutMillis: Long(\"10000\"),\n newTermStartDate: ISODate(\"2023-06-28T07:46:23.216Z\"),\n wMajorityWriteAvailabilityDate: ISODate(\"2023-06-28T07:46:23.217Z\")\n },\n members: [\n {\n _id: 0,\n name: 'shard1:27018',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 107779,\n optime: { ts: Timestamp({ t: 1688046154, i: 1 }), t: Long(\"4\") },\n optimeDate: ISODate(\"2023-06-29T13:42:34.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-06-29T13:42:34.937Z\"),\n lastDurableWallTime: ISODate(\"2023-06-29T13:42:34.937Z\"),\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1687938383, i: 1 }),\n electionDate: ISODate(\"2023-06-28T07:46:23.000Z\"),\n configVersion: 59857,\n configTerm: -1,\n self: true,\n lastHeartbeatMessage: ''\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1688046154, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1688046154, i: 1 })\n}\n\n",
"text": "I am getting the error while adding shard as below:Command\nsh.addShard(\"test-shardsrv-replica/shard1:27018\");Error\nMongoServerError: Could not find host matching read preference { mode: \"primary\" } for set test-shardsrv-replicaMy Shard Replica Set StatusAny Help would be Appreciated, Thanks!",
"username": "Jay_Bhanushali1"
},
{
"code": "rs.add(hostname:Port)\nsh.add(hostname:Port) \nsh.add(name_replica/hostname:Port)\n",
"text": "Hi @Jay_Bhanushali1,\nWhat are you trying to do?\nYou’re trying to add a new member in a replica set of shard or you’ re trying to add a new shard?\nIn the first case, you should use the following command:In the second case, you should use the following command:and notRegards",
"username": "Fabio_Ramohitaj"
},
{
"code": "sh.addShard(\"test-shardsrv-replica/shard1:27018\");",
"text": "sh.addShard(\"test-shardsrv-replica/shard1:27018\");is “shard1” a resolvable name?",
"username": "Kobe_W"
},
{
"code": "MongoServerError: command createUser requires authentication",
"text": "Yes it is. Also, now I am not facing this issue (used security and authentication using keyFile within each instance of config server, shard and router) but another one.\nI am unable to run any mongosh command as it shows below\nMongoServerError: command createUser requires authentication",
"username": "Jay_Bhanushali1"
},
{
"code": "",
"text": "Have you initialised config servers?\nWhat does rs.statu() show?\nHave you created user and able to login with authenticate to admin db\nAre clusterrole param ok in your config file under sharding for config&shard servers",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Jay_Bhanushali1Your first user needs to be created on the config replicaset. As you have enabled authorization you need to use the localhost exception to create the first user.Run mongosh directly on the mongos server (it has to connect on localhost) and create your user in the admin database. Also create an admin user on each shard replicaset, the localhost exception will remain enabled until this is done(or explicitly disabled on the command line).",
"username": "chris"
},
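For reference, creating that first administrative user over the localhost exception could look like the sketch below; the user name and roles are placeholders, not a prescription:

```js
// Run from mongosh connected via localhost to the mongos (cluster users are
// stored on the config replica set) and, separately, to each shard's primary.
const admin = db.getSiblingDB("admin");
admin.createUser({
  user: "clusterAdmin",                      // placeholder name
  pwd: passwordPrompt(),                     // prompt instead of embedding the password
  roles: [ { role: "root", db: "admin" } ]   // adjust roles to your needs
});
```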
{
"code": "MongoServerError: command createUser requires authenticationmongosh localhost",
"text": "Hi @chris,\nI am able to create first user in shard and config server, however unable to do so in mongos instance as it gives me the error:\nMongoServerError: command createUser requires authentication\nI am running below command to use localhost exception\nmongosh localhost",
"username": "Jay_Bhanushali1"
},
{
"code": "",
"text": "Hi @Kobe_W , yes it is. I tried to telnet and it was connecting to it",
"username": "Jay_Bhanushali1"
},
{
"code": "",
"text": "I think mongos uses user details from config server\nDid you try to login to mongos using the user you created on config db?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "@Ramachandra_Tummala is correct here. The config replicaSet is used by mongos to authenticate users using the cluster.You should be able to authenticate to mongos using the already created users and create additional users and roles for the cluster.",
"username": "chris"
},
{
"code": "MongoServerError: failed to run command { isMaster: 1 } when attempting to add shard shard1:27018 :: caused by :: HostUnreachable: Error connecting to shard1:27018 :: caused by :: Could not find address for shard1:27018: SocketException: Host not found (non-authoritative), try again latersslMode : disabled",
"text": "Hi @Ramachandra_Tummala , @chris ,\nYes i am able to use users from config server. I was using wrong password. Also, now I am getting ssl Handshake issue when i try to addShard from mongos.\nMongoServerError: failed to run command { isMaster: 1 } when attempting to add shard shard1:27018 :: caused by :: HostUnreachable: Error connecting to shard1:27018 :: caused by :: Could not find address for shard1:27018: SocketException: Host not found (non-authoritative), try again later\nI am able to telnet to shard1:27018. I tried using sslMode : disabled but still gives same error",
"username": "Jay_Bhanushali1"
},
{
"code": "sslMode : disabled",
"text": "Could not find address for shard1:27018: SocketException: Host not found (non-authoritative), try again laterLooks like you have moved to a host lookup issue.I am able to telnet to shard1:27018. I tried using sslMode : disabled but still gives same errorFrom your host or the config server? All shard and config servers need to be able to resolve and connect to each other the mongos needs to be able to resolve and connect to both the shards and the config.",
"username": "chris"
},
{
"code": "",
"text": "Hi @chris , thank you so much for your help. I wasn’t able to connect to shard from config server. I fixed it and now able to run the addShard.",
"username": "Jay_Bhanushali1"
}
] | Could not find host matching read preference { mode: "primary" } | 2023-06-29T14:04:49.430Z | Could not find host matching read preference { mode: “primary” } | 1,157 |
null | [
"connecting"
] | [
{
"code": "MongoServerSelectionError: Server selection timed out after 30000 msconst { MongoClient} = require('mongodb');\nconst client = new MongoClient(\n process.env.MONGODB_CONNECTION_STRING,\n {\n useNewUrlParser: true,\n useUnifiedTopology: true\n }\n);\nmodule.exports = client.connect();\n{\n \"errorType\":\"Runtime.UnhandledPromiseRejection\",\n \"errorMessage\":\"MongoServerSelectionError: Server selection timed out after 30000 ms\",\n \"reason\":{\n \"errorType\":\"MongoServerSelectionError\",\n \"errorMessage\":\"Server selection timed out after 30000 ms\",\n \"reason\":{\n \"type\":\"ReplicaSetNoPrimary\",\n \"servers\":{\n \n },\n \"stale\":false,\n \"compatible\":true,\n \"heartbeatFrequencyMS\":10000,\n \"localThresholdMS\":15,\n \"setName\":\"atlas-13tibr-shard-0\"\n },\n \"stack\":[\n \"MongoServerSelectionError: Server selection timed out after 30000 ms\",\n \" at Timeout._onTimeout (/var/task/node_modules/mongodb/lib/sdam/topology.js:330:38)\",\n \" at listOnTimeout (internal/timers.js:557:17)\",\n \" at processTimers (internal/timers.js:500:7)\"\n ]\n },\n \"promise\":{\n \n },\n \"stack\":[\n \"Runtime.UnhandledPromiseRejection: MongoServerSelectionError: Server selection timed out after 30000 ms\",\n \" at process.<anonymous> (/var/runtime/index.js:35:15)\",\n \" at process.emit (events.js:400:28)\",\n \" at processPromiseRejections (internal/process/promises.js:245:33)\",\n \" at processTicksAndRejections (internal/process/task_queues.js:96:32)\"\n ]\n}\n",
"text": "We have an API working with MongoDB Atlas, connection works 99% of the time, but we occasionally get a MongoServerSelectionError: Server selection timed out after 30000 ms on a specific endpoint for some reason. Like I said, it fails 2/3 times out of 100.We are connecting like so:This is running on Netlify Functions, and whenever the function needs to use Mongo, it just awaits the connection promise.Full error:",
"username": "Gonzalo_Hirsch"
},
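The snippet below is illustrative only; it is not a fix for the ReplicaSetNoPrimary condition itself (which is usually a network or IP access list problem), it simply shows how you could fail faster than the default 30 s and surface the rejection instead of letting it crash the function:

```js
const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGODB_CONNECTION_STRING, {
  serverSelectionTimeoutMS: 5000, // assumption: a quick, visible failure is preferred
});

const clientPromise = client.connect().catch((err) => {
  console.error('MongoDB connection failed', err);
  throw err; // let the awaiting handler deal with it rather than an unhandled rejection
});

module.exports = clientPromise;
```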
{
"code": "",
"text": "We have the exact same issue. Were you able to resolve it?What’s really annoying is this didn’t happen to us in the free tier. It only started happening in the paid tier",
"username": "Alfonso"
},
{
"code": "",
"text": "Did you manage to solve it somehow?",
"username": "Ben_Hason"
},
{
"code": "",
"text": "This is a serious issue for our small startup. We are hosting on Vercel and are seeing roughly the same failure rates.Has anyone solved this? We need this fixed ASAP",
"username": "nathaniel_redmon"
},
{
"code": "",
"text": "Any solution to this?",
"username": "Kevin_Schmidt"
}
] | Inconsistent MongoServerSelectionError: Server selection timed out after 30000 ms | 2021-12-16T14:40:22.742Z | Inconsistent MongoServerSelectionError: Server selection timed out after 30000 ms | 4,413 |
[
"compass"
] | [
{
"code": "",
"text": "I’m completely new to Mongo so please bear with me. I’m using Compass and I’m able to view collections, query the data, etc. Everything is working fine.When I try to export a whole collection, on most collections, it works fine.When I tried to export 1 collection (all fields) to a CSV file, I get an error:Path collision at Contact.Email remaining portion EmailI’m not selecting individual fields, or doing anything crazy. I’m just trying to export the whole collection.I noticed that when I view the documents in the collection, I don’t even see the offending fields.\n\nScreen Shot 2021-07-20 at 9.46.52 PM1520×602 64.5 KB\n",
"username": "Carmen_Malangone"
},
{
"code": "",
"text": "Path collision atWhat is your Compass version?\nCould be bug with latest version\nCheck this linkhttps://jira.mongodb.org/browse/COMPASS-4723?jql=project%20%3D%20COMPASS%20AND%20fixVersion%20%3D%201.24.1",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I’m having the same issues and I’m running the most recent Compass version.",
"username": "NFTX_Tech"
},
{
"code": "",
"text": "Did you try with lower version of Compass?or try the fix given in that link-unselect the fields causing the error",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi,I am facing the same issue while exporting a particular collection from compass version 1.30.1, I tired to omit the offending column, it stuck at next column.\nHowever, I am able to export other collections.Hope you can suggest a solution.Thank you.\nSiva",
"username": "Sri_Sivapurapu"
},
{
"code": "",
"text": "Did you try with mongoexport from command line?\nor use a lower version of Compass\nShow the sample doc\nHow many records in this collection?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Ram,Thanks for the response.No, I did not try the command line option for export, I will try.If compass version is the issue, I was able to export another document “Grades”.The document that I am trying to export is “companies” under sample_training, it is having 9.500 records and of size 15.9MB.The document and fields are too long to give here.Cheers\nSiva",
"username": "Sri_Sivapurapu"
},
{
"code": "",
"text": "I’ve got the same issue.\nMy solution is:It’s strange that some object-type is ok but some has issue… I’m not sure why",
"username": "Arthur_Chan"
},
{
"code": "",
"text": "Also, encounter the same issue, the issue is the field showing the error is not consistent in the collection, uncheck the field showing that error and export, Don’t know how it works but it will still export the field if it is available.",
"username": "Emmanuel_MECHIE"
},
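That inconsistency can be checked directly. The "Path collision" message often means the same path (e.g. Contact.Email) is a scalar in some documents and a sub-document in others; this hedged diagnostic sketch counts the types present (replace the collection and path with your own):

```js
// Count how many documents hold each BSON type at the conflicting path
db.collection.aggregate([
  { $group: { _id: { $type: "$Contact.Email" }, count: { $sum: 1 } } }
])
```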
{
"code": "",
"text": "Thanks, Your solution helped for my testing purposes.\nI received the same error and I unselected the reviews_scores (along with all its fields.)",
"username": "0462fb6165ccb00bd8c3e7985903d34"
},
{
"code": "",
"text": "I think issue in compass filter export ,For Example\n{\n“user”: {\n“name”: “bbb”,\n“firstname”: “a”\n}\n“email”: “[email protected]”\n}if we are filter with “user.name” : “test” after that export its through the error, once you got errorThanks Arthur_Chan from your comments",
"username": "Vijay_A"
}
] | Path Collision trying to Export Collection | 2021-07-21T01:48:06.414Z | Path Collision trying to Export Collection | 34,777 |
|
null | [
"aggregation",
"queries",
"kotlin"
] | [
{
"code": "# PERSON COLLECTION SCHEMA\n{\n \"_id\":{\"$oid\":\"64a20e0c65e828693428a8c1\"},\n \"username\":\"username_0\",\n}\n\n# CONTACT COLLECTION SCHEMA\n{\n \"_id\":{\"$oid\":\"64a20e0c65e828693428a8c5\"},\n \"person_id\":{\"$oid\":\"64a20e0c65e828693428a8c1\"}, <--- Person reference\n \"data\":\"data_0\",\n}\n\n# MESSAGE COLLECTION SCHEMA\n{\n \"_id\":{\"$oid\":\"64a278f583364c2f4f3c81c8\"},\n \"contact_receiver_id\":{\"$oid\":\"64a20e0c65e828693\"}, <-- Contact reference\n \"contact_sender_id\":{\"$oid\":\"64a20e0c65e82869\"}, <--Contact reference\n}\n{\n \"user\": { <--- ALL USER INFOS WRAPED IN USER FIELD\n \"_id\": { \"$oid\": \"64a20e0c65e828693428a8c1\" },\n \"username\": \"username_0\",\n },\n \"contacts\": [\n {\n \"contact\": { <--- ALL CONTACT INFOS WRAPPED IN CONTACT FIELD\n \"_id\": { \"$oid\": \"64a20e0c65e828693428a8c5\" },\n \"person_id\": { \"$oid\": \"64a20e0c65e828693428a8c1\" },\n \"data\": \"data_0\"\n },\n \"messages\": [\n {\n \"_id\": { \"$oid\": \"64a20e0c65e828693428a8c5\" },\n \"contact_receiver_id\":{\"$oid\":\"64a20e0c65e828693428a8c1\"},\n \"contact_sender_id\":{\"$oid\":\"64a20e0c65e828693428a8d3\"}, \n },\n .......\n ]\n },\n .......\n ],\n}\n{\n \"_id\": { \"$oid\": \"64a20e0c65e828693428a8c1\" },\n \"username\": \"username_0\",\n \"contacts\": [\n {\n \"_id\": { \"$oid\": \"64a20e0c65e828693428a8c5\" },\n \"person_id\": { \"$oid\": \"64a20e0c65e828693428a8c1\" },\n \"data\": \"data_0\",\n \"messages\": [\n {\n \"_id\": { \"$oid\": \"64a20e0c65e828693428a8c5\" },\n \"contact_receiver_id\":{\"$oid\":\"64a20e0c65e828693428a8c1\"}, \n \"contact_sender_id\":{\"$oid\":\"64a20e0c65e828693428a8d3\"}, \n },\n .......\n ]\n },\n .......\n ],\n}\n\n@Serializable\ndata class DbContactsMessages(\n val contact: Contact, <---- Domain class\n val messages: List<Message>,\n)\n\n@Serializable\ndata class UserConctactsMessages(\n val person: Person, <---- Domain class\n val contacts: List<DbContactsMessages>,\n)\n",
"text": "",
"username": "Uros_Jarc"
},
{
"code": "$project[\n {\n $match: {\n _id: ObjectId(\"64a20e0c65e828693428a8c1\"),\n },\n },\n { $project: { person: \"$$ROOT\" } },\n {\n $lookup: {\n from: \"Contact\",\n localField: \"_id\",\n foreignField: \"person_id\",\n pipeline: [\n {\n $project: {\n contacts: \"$$ROOT\",\n },\n },\n {\n $lookup: {\n from: \"Messages\",\n localField: \"_id\",\n foreignField: \"contact_id\",\n as: \"messages\",\n },\n },\n ],\n as: \"persons\",\n },\n },\n]\n{\n \"user\": { ... all user informations ... },\n \"contacts\": [\n {\n \"contact\": { ... all contact informations ... },\n \"messages\": [\n { ... all message informations ... },\n { ... all message informations ... },\n { ... all message informations ... },\n .......\n ]\n },\n .......\n ],\n}\n",
"text": "I have solved the problem…You have to $project object before you do any lookup…\nThe following aggregation will produce desired solution…The result of the query…",
"username": "Uros_Jarc"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to create a lookup that will be nicer for typed serialization? | 2023-07-03T08:10:14.135Z | How to create a lookup that will be nicer for typed serialization? | 543 |
null | [
"indexes"
] | [
{
"code": "{\"sha256\": {\"$type\": \"string\"}}hashed",
"text": "Hello,I am working on a service which stores meta data for a collection of files.There is one document per file in the database. Each document has a field “sha256” which is the sha256 hash of the file. This field might be null for some documents (project requirement, unfortunately this cannot be changed). If the field is set to a value other than null, then it must be unique.A solution for this requirement is a unique index with a partial filter expression {\"sha256\": {\"$type\": \"string\"}}. This works as intended.One problem I have is, that this index is never used for queries. My solution for this is to add another index on the same field as hashed index.The current solution works. It is not possible to add sha256 duplicates, it is possible to add sha256 null values and it is possible to query for a sha256 value while utilizing an index.As this database is getting bigger and bigger, my problem is the cost of storing the indexes twice. The unique partial index is 68GB, the hashed index is 14GB.Is there any way to optimize this in terms of storage? To me it seems unefficient to index one field twice.Thank your for your help in advance!",
"username": "Philipp_Kratzer"
},
{
"code": "",
"text": "It is not possible to add sha256 duplicatewhy? did you add unique constraint somewhere?\nAnd a hashed index can’t be unique.Maybe you are looking for this.",
"username": "Kobe_W"
},
{
"code": "hashed{sha256:{$exists:1}}test> doca\n{\n sha256: 'afba684fd0fa05fd48fea823aa891af26c4072d02a914a6c8c6ae2c0a6436f8a'\n}\ntest> bsonsize(doca)\n82\ntest> docb\n{\n sha256: Binary(Buffer.from(\"afba684fd0fa05fd48fea823aa891af26c4072d02a914a6c8c6ae2c0a6436f8a\", \"hex\"), 0)\n}\ntest> bsonsize(docb)\n50\n\n\n",
"text": "One problem I have is, that this index is never used for queries. My solution for this is to add another index on the same field as hashed index.You would have to have a query filter matching the query filter or hint(really try to avoid hints) to use the existing index.Is there any way to optimize this in terms of storage? To me it seems unefficient to index one field twice.I would suggest NOT inserting a null value to the sha256 field. And then instead use a unique partial index of {sha256:{$exists:1}}The missing field is implicitly null when queried for or projected.You could also save a few bytes by storing the hash as a binary object as the digest is inherently binary, but possibly not worth the application changes:",
"username": "chris"
},
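Putting that suggestion together as a sketch (the collection name files is an assumption for illustration):

```js
// Unique only among documents that actually carry the field;
// documents without sha256 are simply not indexed.
db.files.createIndex(
  { sha256: 1 },
  { unique: true, partialFilterExpression: { sha256: { $exists: true } } }
)

// Verify with explain() that your equality lookups actually select this index:
db.files.find({ sha256: "<some hash>" }).explain("executionStats")
```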
{
"code": "{sha256:{$exists:1}}",
"text": "Thank you very much for your input!Changing the partial index to {sha256:{$exists:1}} sounds like a reasonable change. I will definitely try this.Furthermore, thanks for the hint to store hashes as binary. The collection is quite large, so even small improvements have a large impact.Thanks again. I will report if the changes were a success, once I am done.",
"username": "Philipp_Kratzer"
}
] | Unique Partial Index for queries | 2023-06-30T12:28:10.667Z | Unique Partial Index for queries | 566 |
[] | [
{
"code": "",
"text": "Hi! I’m new in NoSQL world, I started with MongoDB, I have a doubtI Have 2 collectionsusers: { name: String,\nphone: String,\ndate_start_customer: Date}services: { name: String\nvalue: int}and I want to create a 3rd collection, called sales. The main idea, is this collection has info from users and services, but idk what info should insert in sales collection, for example:option 1:\nsales: {id_user: (foreignField from users),\nid_service: (foreignField from services),\ndate: Date}or maybe option 2:\nsales: {name_service: (foreignField from services),\nphone_user: (foreignFiel from users),\ndate: Date}or last option 3:\nsales: { service: (a object with info from services like name and value)\nuser: (a object with info from users like name and phone)\ndate: Date}I wanted to know what of this options is the best practice to create a collection with foreign fields. I watched a few videos, and NOSQL use nested documents, so i tought that doing this would work, but i get this error:\nimage862×195 4.39 KB\nso, i need help for this,thx ",
"username": "IGNACIA_YARITSA_RIVAS_FIGUEROA"
},
{
"code": "",
"text": "Your call to find is returning a cursor and not results, hence the error. Swap to findOne or call .toArray to get the actual records.",
"username": "John_Sewell"
}
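A small sketch of that fix, assuming the Node.js driver, an already-connected db handle, and the users collection from the question:

```js
// findOne resolves directly to a document (or null)
const user = await db.collection("users").findOne({ name: "Ana" });

// find returns a cursor; materialise it before sending it in a response
const users = await db.collection("users").find({ phone: "123" }).toArray();
res.json(users);
```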
] | Best practices to create a collection | 2023-07-03T06:38:27.878Z | Best practices to create a collection | 315 |
|
null | [
"java",
"ruby",
"mongoid-odm"
] | [
{
"code": "MONGODB | Server description for cdris-cluster-dev.cluster-clrxvva0yuqz.us-west-2.docdb.amazonaws.com:27017 changed from 'unknown' to 'unknown'.\nMONGODB | There was a change in the members of the 'Unknown' topology.\nMONGODB | Error checking cdris-cluster-dev.cluster-clrxvva0yuqz.us-west-2.docdb.amazonaws.com:27017: Java::JavaUtilConcurrent::RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@3c76cafa[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@4913ed11[Wrapped task = org.jruby.ext.timeout.Timeout$TimeoutTask@62f85778]] rejected from java.util.concurrent.ScheduledThreadPoolExecutor@2b87cd3a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]\nMONGODB | Server description for cdris-cluster-dev.cluster-clrxvva0yuqz.us-west-2.docdb.amazonaws.com:27017 changed from 'unknown' to 'unknown'.\nMONGODB | There was a change in the members of the 'Unknown' topology.\nMONGODB | Error checking cdris-cluster-dev.cluster-clrxvva0yuqz.us-west-2.docdb.amazonaws.com:27017: Java::JavaUtilConcurrent::RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@6621f2b0[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@731d9b2e[Wrapped task = org.jruby.ext.timeout.Timeout$TimeoutTask@79b6aead]] rejected from java.util.concurrent.ScheduledThreadPoolExecutor@2b87cd3a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]\nMONGODB | Server description for cdris-cluster-dev.cluster-clrxvva0yuqz.us-west-2.docdb.amazonaws.com:27017 changed from 'unknown' to 'unknown'.\n",
"text": "Connecting to Amazon DocumentDB (mongodb 5.0) from EKS pods. Some pods connect and work, others fail to connect. The failing pods stay at “server description unknown.” Log below which keeps repeating. Downgrading JRuby to 9.2 works.Ruby driver version: 2.19.0 (also tried 2.18.2)\nJruby version: 9.3.9.0\nMongoid version: 7.5.3 (also tried 8)Debug log:",
"username": "Sudhir_Rao"
},
{
"code": "",
"text": "Hey @Sudhir_Rao,Thanks for letting us know about this issue. Though our Ruby Driver’s Compatibility matrix does list JRuby 9.3 as working with both 2.19 and 2.18, we can only guarantee this behavior when connecting to a MongoDB cluster.The Ruby Driver (as well as Mongoid) are tested extensively against JRuby, however DocumentDB is an external product that is managed by AWS. They have some documentation for troubleshooting connection issues, but aside from that if you have difficulties with DocumentDB you’d need to work with their support team to determine why these issues appear to persist with JRuby 9.3 when JRuby 9.2 works as expected.",
"username": "alexbevi"
}
] | Ruby driver fails to connect with JRuby 9.3 | 2023-06-30T18:57:42.019Z | Ruby driver fails to connect with JRuby 9.3 | 629 |
null | [
"aggregation",
"queries"
] | [
{
"code": "# A\n{\n_id: ObjectId(\"123\"),\ntimestamp: \"2023-01-01T09:00:00.000000Z\",\ncount: 5,\nimported_to: ObjectId(\"789\")\n},\n{\n_id: ObjectId(\"456\"),\ntimestamp: \"2023-01-01T13:00:00.000000Z\",\ncount: 5,\nimported_to: ObjectId(\"789\")\n}\n\n# B\n{\n_id: ObjectId(\"789\")\ndate: \"2023-01-01T00:00:00.000000Z\",\ncount: 10\n}\n",
"text": "I have two collections A and B. The data in B is aggregated from A and I want to have a field on A which stores which record in B the data was aggregated to e.g. an audit trailI want to:Is this possible in a single query? Or would I have to generate the aggregate to update B with, record the A _id’s then update by an array of those _id’s?",
"username": "didienfj"
},
{
"code": "[BSONError:](#) Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer\n",
"text": "I tried to play with your model and I get errors with your ObjectId’s:Please update your sample data so that we can experiment with it directly.",
"username": "steevej"
},
{
"code": "# A\n{\n_id: ObjectId(\"649e8ba5ae40de0c0f6c56b1\"),\ntimestamp: \"2023-01-01T09:00:00.000000Z\",\ncount: 5,\nimported_to: ObjectId(\"649e8c0aac10027a16b300a4\")\n},\n{\n_id: ObjectId(\"649e8bfffcfdd2a0980c8678\"),\ntimestamp: \"2023-01-01T13:00:00.000000Z\",\ncount: 5,\nimported_to: ObjectId(\"649e8c0aac10027a16b300a4\")\n}\n\n# B\n{\n_id: ObjectId(\"649e8c0aac10027a16b300a4\")\ndate: \"2023-01-01T00:00:00.000000Z\",\ncumulative_count: 10\n}\n",
"text": "Apologies, here is the corrected sample data:",
"username": "didienfj"
},
{
"code": "",
"text": "Ignore what I had written…it would involve an array of lookups which would be nasty…see what others come up with.",
"username": "John_Sewell"
},
{
"code": "ABBA",
"text": "No probs @John_Sewell, thanks anwyay. For context there will be ~3k A documents being aggregated into a single B document. So I didn’t feel like an array of that size linking back from B to multiple As was the right way to go (amongst other considerations).",
"username": "didienfj"
},
{
"code": "",
"text": "You could do it in a two stage process:I can’t imagine this is an unusual use-case but just something I’ve not needed to do before!",
"username": "John_Sewell"
},
{
"code": "db.getCollection(\"A\").update(\n{\n 'groupingID':{$exists:false}\n},\n[ \n {\n $set:{\n groupingID:{$substrCP:['$timestamp', 0,10]}\n }\n }\n],\n{multi:true})\n\ndb.getCollection(\"A\").aggregate([\n{\n $match:{\n 'groupingID':{$exists:true}\n } \n},\n{\n $group:{\n _id:'$groupingID',\n total:{$sum:'$count'}\n }\n},\n{\n $merge:{\n into:'B',\n } \n}\n])\n",
"text": "This is the kind of thing I was thinking of:Ending up with:\nObviously you end up with a non objectID as the ID in the second table in my example which may not be to your taste.I’d be very interested if there is a more elegant solution to this.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks John, I’ll give that one a think - the current structure uses objectIDs and that is my preference in general.Btw - what app is shown in the screenshots?",
"username": "didienfj"
},
{
"code": "",
"text": "Studio3T. There is a free edition but quite a lot of the good stuff is pay walled. Things like import and export as well as schema analysis etc.\nI use the import and export constantly through my day.\nThe screenshots were the output window which can run in table, tree or json view.Its not cheap, but its paid for by the company so excellent value for me!",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks, sounds good - seems like Studio3T is the go to solution. I’m currently using MongoDB Compass which is a bit lacking in features!",
"username": "didienfj"
}
] | Aggregation pipeline - returning aggregated result and updating original pre-aggregated documents | 2023-06-27T14:56:53.888Z | Aggregation pipeline - returning aggregated result and updating original pre-aggregated documents | 616 |
null | [
"aggregation",
"queries",
"graphql"
] | [
{
"code": "totalCount const filters = { ... } // only items after a timestamp & _id\n const rs = await this.mediaObjectModel.aggregate<MediaObject>([\n filters,\n {\n $sort: { createdAt: -1, _id: -1 }\n },\n {\n $limit: first // bucket size (items per page)\n }\n ])\n",
"text": "I recently added pagination to our GQL API using the bucket pattern (timestamp + ObjectID comparsion). The paging article on Mongo blog was very useful to help me get started.I also followed the GQL recommended Relay-pattern (see docs) which calls for returning a totalCount field (Ex: total number of products, or social media connections regardless of the pagination).How do I avoid sending two separate queries to the DB server, 1 to get the total count and 1 for the current page/bucket?My current query",
"username": "V11"
},
{
"code": "$unionWith",
"text": "There is no mechanism to send back both count and cursor with result set, so you can either do two queries like you are doing now, or you could play some tricks with $unionWith to get the same data in one pipeline but it seems like it won’t really save you much (and may make it more complex to parse out the result document that represents count vs the other documents which represent your page content).Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Can you try using a facet with one stage for a count and one for the initial records?",
"username": "John_Sewell"
},
{
"code": "$facet",
"text": "Thanks everyone. I didn’t know this is a common task when doing pagination. $facet and ‘pagination’ are the search keywords.",
"username": "V11"
},
{
"code": "",
"text": "Be wary of .skip with lots of results it can be slow. Keep track of the last id and get results after that one the next time this way an index can be used.",
"username": "John_Sewell"
},
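For completeness, the "track the last id" approach John describes (essentially what the original bucket query already does) looks roughly like this; lastCreatedAt/lastId come from the final item of the previous page, and the collection name is an assumption:

```js
db.mediaObjects.find({
  $or: [
    { createdAt: { $lt: lastCreatedAt } },
    { createdAt: lastCreatedAt, _id: { $lt: lastId } }
  ]
})
.sort({ createdAt: -1, _id: -1 })
.limit(pageSize) // an index on { createdAt: -1, _id: -1 } keeps each page cheap
```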
{
"code": "$facet",
"text": "I would recommend against using $facet as performance-wise it’s going to be worse than just about any other option.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "$facet is also limited to 16MB document size so that may also create an issue if the number of documents you are returning is large enough.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Asya, I agree, you need to be wary of using Facets as only the data fed into a facet can make use of indexes and as they form the output of the facet into an array within one document you are limited to 16MB.I’ve run into the limit a few times, with a data checking report that used a first stage match then about 40 facets to run data validation checks. In this case is was pretty performant as opposed to making 40 calls over the same filtered set of data, but as you say the document size limit should be kept in mind if you have either a lot of documents or large ones.",
"username": "John_Sewell"
},
{
"code": "$facet",
"text": "I would recommend against using $facet as performance-wise it’s going to be worse than just about any other option.Can you please elaborate? As mentioned in the original question, the alternative would be making two separate calls, one to get the current page data, and another to get the total count. $facet allows you to send two queries as one. Provided the search fields are properly indexed, I’m not clear as to why it’s not as performant?@Asya_Kamsky",
"username": "V11"
},
{
"code": "db.getCollection(\"Test\").find({name:/^B/}).count()\ndb.getCollection(\"Test\").find({name:/^B/}).limit(1000)\n\ndb.getCollection(\"Test\").explain().aggregate([\n{\n $match:{\n name:/^B/\n }\n},\n{\n $facet:{\n totalDocs:[\n {\n $count:'count'\n },\n ],\n Docs:[\n {\n $limit:1000\n }\n ]\n }\n}\n])\n",
"text": "Have a play with your dataset, but testing locally with 1M records and a non-covering index that was used on the initial stage of the aggregate I get slightly different timings. Just shy of 200ms for a straight count and find vs about 250ms for the facet query.I imagine there is an overhead associated with the setting up of the facets and combining the data into the array, you then also have the overhead in the client of working with the documents within the array as opposed to just a list of docs.",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to get document count and paginated result in 1 single call | 2023-06-27T21:06:34.544Z | How to get document count and paginated result in 1 single call | 1,270 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "I’ve a datasource which is a streaming data source and I wish to make a streaming pipeline to write the data to the database using nodejs and mongoose. Is there any way i can write the data to the database constantly using streams or any other mechanism?",
"username": "deep_jagani"
},
{
"code": "",
"text": "Hi @deep_jagani and welcome to MongoDB community forums!!To understand the requirements in more details and help you provide with relevant resources, could you help me with some informations:There has been a similar question on stackoverflow which you can refer to for your application.\nHowever, please note that the answer posted in StackOverflow are from community experience so please ensure that it works for your use case.Let us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Sure,But constant writing to the database isnt’t a optimal option.DB Schema:-\nDevices :- _id,name\nVariables :- _id, name, deviceId\nValues :- _id, value, time, metadata : {deviceId, variableId} (timeseries collection)",
"username": "deep_jagani"
},
{
"code": "",
"text": "Hi @deep_jagani\nThank you for sharing the information.But constant writing to the database isnt’t a optimal option.Could you kindly help me understand your perspective on why constantly writing to the database may not be the most optimal solution?On the other hand, by utilising MongoDB’s bulkWrite feature, you can insert the data into the collection. For example, you can collect and then bulk write data for the day into the database if you prefer.Let us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
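A hedged sketch of that batching idea with Mongoose; the model name, field shape and flush interval are assumptions based on the schema above, so adjust the period to how "near real-time" the dashboard needs to be:

```js
const buffer = [];

// Called for every incoming reading from the stream
function onReading(deviceId, variableId, value) {
  buffer.push({ value, time: new Date(), metadata: { deviceId, variableId } });
}

// Flush the buffer once per second with a single round trip
setInterval(async () => {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, buffer.length);
  // ValuesModel: hypothetical Mongoose model for the Values time-series collection
  await ValuesModel.insertMany(batch, { ordered: false });
}, 1000);
```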
{
"code": "",
"text": "Constant writing because it needs to be a real-time iot dashboard which needs a near realtime system.",
"username": "deep_jagani"
}
] | Writing to the database frequently | 2023-06-12T11:04:10.207Z | Writing to the database frequently | 727 |
null | [] | [
{
"code": "",
"text": "I have a collection which has an array defined in the schema (Realm App). It is NOT a required field…“imageServiceIds”: {\n“bsonType”: “array”,\n“items”: {\n“bsonType”: “string”\n}\n},Some of my documents have imageServiceIds defined with an array of strings, some do not have it defined at all. In the realm logs, I am getting the following error for those documents that do not have the array defined.Failed to convert MongoDB document with configured schema during initial sync…Detailed Error: field update at path { table: \"Item\", fullPath: \"imageServiceIds\" } should have been an array value but was objectI should also note the collection name is “items” but the schema has a title of “Item” (not sure if its an issue or not).How should I handle this scenario? I know I can ensure the underlying document has the field defined with an empty array but this won’t help me in the future when I want to add another array field and all the existing documents do not have it defined.Thanks",
"username": "Robert_Charest"
},
{
"code": "",
"text": "Assuming you’ve verified you created your validator correctly by taking a look at it in Compass, you are most likely are upserting something but not upserting an actual array. MongoDB tends to tell the truth, albeit in somewhat unhelpful terms.",
"username": "Jack_Woehr"
},
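A quick way to locate the offending documents (a sketch only; it assumes the underlying collection is items, as described in the question):

```js
// Documents where imageServiceIds exists but is not an array
db.items.find({
  imageServiceIds: { $exists: true, $not: { $type: "array" } }
})
```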
{
"code": "",
"text": "Thank you very much. Compas was showing a value of undefined for the property (and error on my part in the way I imported the data). Not sure why the web ui didn’t display that undefined data but at least I found it and will now use Compas going forward.",
"username": "Robert_Charest"
},
{
"code": "",
"text": "Good work @Robert_Charest",
"username": "Jack_Woehr"
}
] | MongoEncodingError with Array | 2023-07-02T14:43:02.449Z | MongoEncodingError with Array | 532 |
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "Hi TeamI am trying to write a script in Atlas trigger.but getting below error when i am executing simple function call.{“message”:“‘countDocuments’ is not a function”,“name”:“TypeError”}Service name, db and collection names are correct. Can anyone help on thisthis is the scriptexports = async function() {\nconst collection = context.services.get(“XXXXXXX”).db(“XXXxxx”).collection(“Xxxccxc”);\nconst count = await collection.countDocuments();\nconsole.log(“Count:”, count);\n};Regards\nDinesh",
"username": "Dinesh_S2"
},
{
"code": "count()",
"text": "Hi Dinesh,Have you tried Count Documents in the Collection count()?Regards,\nJason",
"username": "Jason_Tran"
},
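For illustration, here is the original trigger function rewritten to use count(), which is the method documented in the App Services MongoDB API. The service, database and collection names are placeholders, just as in the original post:

```js
exports = async function () {
  const collection = context.services
    .get("<service>")        // placeholder service name
    .db("<database>")        // placeholder database name
    .collection("<collection>");

  // count() is available in Atlas Functions; countDocuments() is not.
  const count = await collection.count();
  console.log("Count:", count);
  return count;
};
```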
{
"code": "",
"text": "Hi JasonThere is issue with any function (DeleteMany, DeleteOne etc) i am using in the script.Is there any class or library that has to be included in the script.regards\nDinesh",
"username": "Dinesh_S2"
},
{
"code": "countDocuments()deleteMany()deleteOne()",
"text": "There is issue with any function (DeleteMany, DeleteOne etc) i am using in the script.With regards to the original topic about countDocuments(), I was not able to find this in the Atlas App Services - MongoDB API reference. However, this contains the deleteMany() and deleteOne() functions. Can you further clarify what you mean by “issue”?",
"username": "Jason_Tran"
}
] | Atlas Trigger Issue | 2023-06-28T05:21:24.990Z | Atlas Trigger Issue | 793 |
null | [] | [
{
"code": "",
"text": "Basically I was wondering if there is any documentation that would help me create a driver for my use in a language not supported both officially or by the community.(May not be in the correct category)",
"username": "Aidan_Pow"
},
{
"code": "",
"text": "Hi @Aidan_PowCheck out the Driver Specification that will have all the specs you need for a fully featured driver.",
"username": "chris"
},
{
"code": "",
"text": "Thank you, for your help.",
"username": "Aidan_Pow"
},
{
"code": "",
"text": "You’re welcome.May I ask what language you are intending to create a driver for ?",
"username": "chris"
},
{
"code": "",
"text": "Uh I am trying to make one for Roblox Lua when I get round to it as I rather mongodb than any in-built or other external options.",
"username": "Aidan_Pow"
}
] | How to make a driver in an unsupported language? | 2023-06-24T20:04:37.761Z | How to make a driver in an unsupported language? | 366 |
[
"aggregation",
"queries",
"java"
] | [
{
"code": "externalIdcollidObjectlastUpdatedlastUpdatedidObject._idexternalIddb.getCollection(\"CollectionOne\").aggregate(\n [\n { $match: { 'externalId': <externalId>, 'coll.idObject._id': { $in: <array of IDs> } } },\n { $set : { 'coll' : { $filter : { 'input' : '$coll', as : 'collObject', cond : { $in : [ '$$collObject.idObject_id', <array of IDs> ] } } } } },\n { $unwind : '$coll' },\n { $group : { '_id' : '$coll.idObject._id', 'lastUpdated' : { $max : '$coll.lastUpdated' } } }\n ]\n)\nexternalIdarray of IDs{\n ...\n \"externalId\": \"1\"\n ...\n \"coll\": [\n {\n \"idObject\": {\n \"_id\": \"12345678901\"\n },\n \"lastUpdated\": { // object because it's java `OffsetDateTime` that have to be mapped manually as two fields\n \"dateTime\": \"2023-05-18T00:47:00.000+00:00\", // it's Date type\n \"offset\": \"Z\"\n },\n ...\n }\n ]\n}\nSTORAGE SIZE: 500.9MB\nLOGICAL DATA SIZE: 2.04GB\nTOTAL DOCUMENTS: 868 724\nINDEXES TOTAL SIZE: 130.92MB\nCollectionOneIDX_ONE: { externalId: 1, coll.idObject._id: 1 },\nIDX_TWO: { coll.idObject._id: 1 }\n{\n \"type\": \"command\",\n \"ns\": \"not_relevant\",\n \"command\": {\n \"aggregate\": \"CollectionOne\",\n \"pipeline\": [\n {\n \"$match\": {\n \"externalId\": 1,\n \"coll.idObject._id\": {\n \"$in\": [\n // 1000 objects\n ]\n }\n }\n }\n ]\n },\n \"planSummary\": \"IXSCAN { externalId: 1, coll.idObject._id: 1 }\",\n \"cursorid\": 4581154899614066000,\n \"keysExamined\": 72131,\n \"docsExamined\": 456,\n \"fromMultiPlanner\": true,\n \"replanned\": true,\n \"replanReason\": \"cached plan was less efficient than expected: expected trial execution to take 99 works but it took at least 990 works\",\n \"numYields\": 0,\n \"nreturned\": 32,\n \"queryHash\": \"EF6EF69E\",\n \"planCacheKey\": \"EFF6500A\",\n \"queryFramework\": \"classic\",\n \"reslen\": 2796,\n \"locks\": {},\n \"readConcern\": {\n \"level\": \"local\",\n \"provenance\": \"implicitDefault\"\n },\n \"storage\": {\n \"data\": {\n \"bytesRead\": 73979545,\n \"timeReadingMicros\": 492304\n }\n },\n \"remote\": \"xxx\",\n \"protocol\": \"op_msg\",\n \"durationMillis\": 3808,\n \"v\": \"6.0.5\"\n }\n$group$unwind$set$setdb.getCollection(\"CollectionOne\").aggregate(\n [\n { $match: { 'externalId': <externalId>, 'coll.idObject_id': { $in: <array of IDs> } } },\n { $unwind : '$coll' },\n { $group : { '_id' : '$coll.idObject._id', 'lastUpdated' : { $max : '$coll.lastUpdated' } } },\n ]\n)\nexplain(\"executionStats\"){\n explainVersion: '1',\n stages: [\n {\n '$cursor': {\n queryPlanner: {\n namespace: 'prod.CollectionOne',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { externalId: { '$eq': 1 } },\n {\n 'coll.idObject._id': {\n '$in': [\n // ids\n ]\n }\n }\n ]\n },\n queryHash: 'EF6EF69E',\n planCacheKey: 'EFF6500A',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'PROJECTION_SIMPLE',\n transformBy: { coll: 1, _id: 0 },\n inputStage: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { externalId: 1, 'coll.idObject._id': 1 },\n indexName: 'IDX_ONE',\n isMultiKey: true,\n multiKeyPaths: {\n externalId: [],\n 'coll.idObject._id': [ 'coll' ]\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n externalId: [ '[1, 1]' ],\n 'coll.idObject._id': [\n // ids\n ]\n }\n }\n }\n },\n rejectedPlans: [\n {\n stage: 'PROJECTION_SIMPLE',\n transformBy: { coll: 1, _id: 0 },\n inputStage: {\n stage: 'FETCH',\n filter: {\n 
'coll.idObject._id': {\n '$in': [\n //ids\n ]\n }\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { externalId: 1, 'coll.idObject._id': 1 },\n indexName: 'IDX_ONE',\n isMultiKey: true,\n multiKeyPaths: {\n externalId: [],\n 'coll.idObject._id': [ 'coll' ]\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n externalId: [ '[1, 1]' ],\n 'coll.idObject._id': [ '[MinKey, MaxKey]' ]\n }\n }\n }\n },\n {\n stage: 'PROJECTION_SIMPLE',\n transformBy: { coll: 1, _id: 0 },\n inputStage: {\n stage: 'FETCH',\n filter: { externalId: { '$eq': 1 } },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'coll.idObject._id': 1 },\n indexName: 'IDX_TWO',\n isMultiKey: true,\n multiKeyPaths: { 'coll.idObject._id': [ 'coll' ] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'coll.idObject._id': [\n // ids\n ]\n }\n }\n }\n }\n ]\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 350,\n executionTimeMillis: 146,\n totalKeysExamined: 51835,\n totalDocsExamined: 350,\n executionStages: {\n stage: 'PROJECTION_SIMPLE',\n nReturned: 350,\n executionTimeMillisEstimate: 14,\n works: 51835,\n advanced: 350,\n needTime: 51484,\n needYield: 0,\n saveState: 61,\n restoreState: 61,\n isEOF: 1,\n transformBy: { coll: 1, _id: 0 },\n inputStage: {\n stage: 'FETCH',\n nReturned: 350,\n executionTimeMillisEstimate: 14,\n works: 51835,\n advanced: 350,\n needTime: 51484,\n needYield: 0,\n saveState: 61,\n restoreState: 61,\n isEOF: 1,\n docsExamined: 350,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 350,\n executionTimeMillisEstimate: 4,\n works: 51835,\n advanced: 350,\n needTime: 51484,\n needYield: 0,\n saveState: 61,\n restoreState: 61,\n isEOF: 1,\n keyPattern: { externalId: 1, 'coll.idObject._id': 1 },\n indexName: 'IDX_ONE',\n isMultiKey: true,\n multiKeyPaths: {\n externalId: [],\n 'coll.idObject._id': [ 'coll' ]\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n externalId: [ '[1, 1]' ],\n 'coll.idObject._id': [\n // ids\n ]\n },\n keysExamined: 51835,\n seeks: 687,\n dupsTested: 51148,\n dupsDropped: 50798\n }\n }\n }\n }\n },\n nReturned: Long(\"350\"),\n executionTimeMillisEstimate: Long(\"53\")\n },\n {\n '$unwind': { path: '$coll' },\n nReturned: Long(\"70000\"),\n executionTimeMillisEstimate: Long(\"63\")\n },\n {\n '$group': {\n _id: '$coll.idObject._id',\n lastUpdated: { '$max': '$coll.lastUpdated' }\n },\n maxAccumulatorMemoryUsageBytes: { lastUpdated: Long(\"367714\") },\n totalOutputDataSizeBytes: Long(\"575140\"),\n usedDisk: false,\n spills: Long(\"0\"),\n nReturned: Long(\"1146\"),\n executionTimeMillisEstimate: Long(\"126\")\n }\n ],\n serverInfo: {\n host: 'xxx',\n port: 12345,\n version: '6.0.5',\n gitVersion: 'xxx'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n command: {\n aggregate: 'CollectionOne',\n pipeline: [\n {\n '$match': {\n externalId: 1,\n 'coll.idObject._id': {\n '$in': [\n // ids\n ]\n }\n }\n },\n { '$unwind': 
'$coll' },\n {\n '$group': {\n _id: '$coll.idObject._id',\n lastUpdated: { '$max': '$coll.lastUpdated' }\n }\n }\n ],\n cursor: {},\n '$db': 'prod'\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1684360973, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"xxx\", \"hex\"), 0),\n keyId: Long(\"xxx\")\n }\n },\n operationTime: Timestamp({ t: 1684360973, i: 1 })\n}\n\nexplain",
"text": "Hello,\nrecently we’ve observed that one of our queries slowed down, I made a change, it helped a little, but I wanted to ask whether we can make any improvements in the aggregation pipeline query in the future.Collection stores the documents from many externalIds. Each document have a coll which contains the idObject and lastUpdated objects.The logic behind the query that executes is that we want to get the lastUpdated date for each idObject._id within all externalId documents.The query first looked like that:The params that are used as filters are the externalId and the array of IDs.Example document looks like that:Normally we execute the queries with the array of parameters around 1000.\nCollection size now looks like that:The queries started to take 2-4 seconds recently and we needed to make it a little bit faster.\n\nlonger_queries2324×1418 259 KB\nIndex that exists in CollectionOne is:Output for one of longer queries in Mongo Atlas profiler page:First I was blaming the $group stage as when I deleted it, the query to the $unwind step was pretty fast.\nLater I decided to try to delete the $set step, and it increased the time significantly.\nNow the query looks like below, I get more results than with $set aggregation step, but the final filtering of returned results is made in Java code now.And the explain(\"executionStats\") for above:If you would like to see the explain for the first query I can provide it but it will be in the afternoon ",
"username": "Bartosz_Skorka"
},
{
"code": "",
"text": "Do anybody has any suggestions here?",
"username": "Bartosz_Skorka"
},
{
"code": "{ 'input' : '$coll.idObject._id', as : 'id', cond : { $in : [ '$$id', <array of IDs> ] }\n",
"text": "Complicated use-case take more time and most of us here are not paid to do that while you probably are by your employer. So sometimes you need to be patient.1 - Doing $in on 1000 element is probably expensive no matter what\n2 - If this is a very frequent use-case, it might be worth while to keep lastUpdated per idObject in a separate collection, some kind of computed pattern\n3 - Do you really need the object idObject that holds the single field _id?\n4 - It might help simplifying the $filter to",
"username": "steevej"
},
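To illustrate suggestion 2, here is a rough sketch of the computed pattern: on every write that touches coll, also upsert a small summary document keyed by externalId and the idObject id, so the latest lastUpdated per id becomes a direct indexed read. The summary collection name and the variables (externalId, idObjectId, newLastUpdated, arrayOfIds) are assumptions for illustration, not part of the original schema:

```js
// Hypothetical summary collection: one document per (externalId, idObject._id),
// maintained on the write path of CollectionOne.
// newLastUpdated should be the Date part of lastUpdated so $max compares correctly.
await db.collection('collLastUpdated').updateOne(
  { externalId: externalId, idObjectId: idObjectId },
  { $max: { lastUpdated: newLastUpdated } }, // keeps the greatest value seen so far
  { upsert: true }
);

// Read side: no $unwind/$group needed any more.
const latest = await db.collection('collLastUpdated')
  .find({ externalId: externalId, idObjectId: { $in: arrayOfIds } })
  .toArray();
```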
{
"code": "$grouplastUpdatedexternalId_idexternalId$grouplastUpdatedexternalId",
"text": "Hello Steeve!\nI’m not pushing you to answer, just trying to find a solution for that, and wanted to refresh that topic after a month 1 - Doing $in on 1000 element is probably expensive no matter whatI was thinking about it, as lowering it up to 100 elements fastens up the query a little bit, but running it 10 times makes it similar in elapsed execution time.2 - If this is a very frequent use-case, it might be worth while to keep lastUpdated per idObject in a separate collection, some kind of computed patternThat’s good idea.\nEverything I do up to the $group stage is working extremely fast. Sometimes, after the penultimate step I have up to 120k records, and then we have a grouping which takes 10 seconds.\nI thought that maybe grouping it in Java would be more efficient.The main idea behind that query is to get lastUpdated for externalId grouped by _id.\nMaybe keeping it in different collection will be that I’m looking for. Or I will find another solution to get that information and keep it updated.3 - Do you really need the object idObject that holds the single field _id?It keeps the ID type information along with the ID in string, I have not mentioned it in example object body, sorry.4 - It might help simplifying the $filter toI will give it a go! I also tried to get that information without filtering, it is around 1kk documents to scan for some externalId before the $group stage, and it takes up to 10 seconds to group it by lastUpdated.\nGetting the documents is ~0.1/2 sec per externalId cause it is using index.\nAfter that, grouping slows down significantly. Thank you for your ideas, I’ll try something tomorrow.",
"username": "Bartosz_Skorka"
}
] | Slow $group or $set stage of aggregation pipeline | 2023-05-18T09:42:07.206Z | Slow $group or $set stage of aggregation pipeline | 750 |
|
null | [
"aggregation"
] | [
{
"code": "[\n {\n \"_id\": \"1\",\n \"name\": \"name1\"\n },\n {\n \"_id\": \"2\",\n \"name\": \"name2\"\n },\n {\n \"_id\": \"3\",\n \"name\": \"name1\"\n },\n {\n \"_id\": \"4\",\n \"name\": \"name3\"\n },\n {\n \"_id\": \"5\",\n \"name\": \"name4\"\n },\n {\n \"_id\": \"6\",\n \"name\": \"name3\"\n },\n {\n \"_id\": \"7\",\n \"name\": \"name5\"\n },\n {\n \"_id\": \"8\",\n \"name\": \"name5\"\n },\n {\n \"_id\": \"9\",\n \"name\": \"name5\"\n },\n {\n \"_id\": \"10\",\n \"name\": \"name6\"\n }\n]\ndb.collection.aggregate([\n {\n $sortByCount: \"$name\"\n }\n])\n[\n {\n \"_id\": \"name5\",\n \"count\": 3\n },\n {\n \"_id\": \"name3\",\n \"count\": 2\n },\n {\n \"_id\": \"name1\",\n \"count\": 2\n },\n {\n \"_id\": \"others\",\n \"count\": 3\n }\n]\n",
"text": "I want to get some sort of accumulation object (like Others) as last array item.Having this datai run this aggregation:As result i want to get:How can i do it?",
"username": "Stanislav_Kuznetsov"
},
{
"code": "db.getCollection(\"test\").aggregate([\n{\n $facet:{\n matchingItems:[\n {\n $match:{name:/^name/}\n },\n {\n $sortByCount: \"$name\"\n }\n ],\n otherItems:[\n {\n $match:{name:{$not:/^name/}}\n },\n {\n $group:{\n _id:'other',\n count:{$sum:1}\n }\n }\n ]\n }\n},\n{\n $project:{\n totalResults:{\n $setUnion:[\n '$matchingItems',\n '$otherItems'\n ]\n }\n }\n},\n{\n $unwind:'$totalResults'\n},\n{\n $replaceRoot:{newRoot:'$totalResults'}\n}\n])\n",
"text": "What goes into others? Do you want a sum for everything with a name prefix grouped by name and everything else in “others”?Like this?",
"username": "John_Sewell"
},
{
"code": "others[\n {\n \"_id\": \"name5\",\n \"count\": 3 // appeared 3 times in original set of documents\n \"percentage\": 30 // (3 / 10) 10 is total document number \n },\n {\n \"_id\": \"name3\",\n \"count\": 2,\n \"percentage\": 20\n },\n {\n \"_id\": \"name1\",\n \"count\": 2,\n \"percentage\": 20\n },\n {\n \"_id\": \"others\",\n \"count\": 3,\n \"percentage\": 30\n]\n",
"text": "Basically, i’m going to get top N mostly frequently appeared names, and others group should contain number of all other names. But ideally i want to have not just absolute numbers, but percentage of total numbers of elements",
"username": "Stanislav_Kuznetsov"
},
{
"code": "",
"text": "Ahh, gotcha. Shall have a play later.",
"username": "John_Sewell"
},
{
"code": "top_n = 10 \npipeline = [\n { \"$sortByCount : \"$name\" } ,\n { \"$facet\" : {\n \"top_n\" : [\n { \"$limit\" : top_n }\n ] ,\n \"others\" : [\n { \"$skip\" : top_n } ,\n { \"$group\" : {\n _id : null ,\n count : { \"$sum\" : \"$count\" } \n } }\n ] \n } } ,\n { \"$set\" : {\n \"total\" : { \"$sum\" : [ \n { \"$reduce\" : {\n \"input\" : \"$top_n.count\" ,\n \"initialValue\" : 0 ,\n \"in\" : { \"$sum\" : [ \"$$this\" , \"$$value\" ] }\n } } ,\n { \"$arrayElemAt\" : [ \"$others.count\" , 0 ] }\n ] }\n } } ,\n { \"$set\" : {\n \"top_n\" : { \"$map\" : {\n \"input\" : \"$top_n\" ,\n /* use total field to update each element with percentage */\n } }\n \"others\" : { \"$map\" :\n \"input\" : \"$others\" ,\n /* use total field to update each element with percentage */\n } }\n } } ,\n /* what ever cosmetic $project,$unwind stage to match desired output */\n]\n",
"text": "I would try with something like:",
"username": "steevej"
},
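As an illustration, the two $map placeholders in the sketch above could be filled in roughly like this (untested; it simply divides each element's count by the total computed in the previous stage):

```js
{ "$set" : {
  "top_n" : { "$map" : {
    "input" : "$top_n",
    "as" : "t",
    "in" : { "$mergeObjects" : [ "$$t",
      { "percentage" : { "$multiply" : [ { "$divide" : [ "$$t.count", "$total" ] }, 100 ] } } ] }
  } },
  "others" : { "$map" : {
    "input" : "$others",
    "as" : "o",
    "in" : { "$mergeObjects" : [ { "_id" : "others", "count" : "$$o.count" },
      { "percentage" : { "$multiply" : [ { "$divide" : [ "$$o.count", "$total" ] }, 100 ] } } ] }
  } }
} }
```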
{
"code": "db.getCollection(\"test\").aggregate([\n{\n $sortByCount: \"$name\"\n},\n{\n $facet:{\n topItems:[\n {\n $limit:3\n }\n ],\n otherItems:[\n {\n $skip:3\n },\n {\n $group:{\n _id:'Others',\n count:{$sum:'$count'}\n }\n }\n ],\n totalitems:[\n {\n $group:{\n _id:null,\n total:{$sum:'$count'}\n }\n }\n ]\n }\n},\n{\n $project:{\n allData:{\n $concatArrays:[\n '$topItems',\n '$otherItems'\n ]\n },\n totalitems:{$arrayElemAt:['$totalitems.total', 0]}\n }\n},\n{\n $unwind:'$allData'\n},\n{\n $project:{\n Item:'$allData._id',\n Count:'$allData.count',\n Percentage:{\n $round:[\n {\n $multiply:[\n 100, \n { \n $divide:[\n '$allData.count', \n '$totalitems'\n ]\n }\n ]\n }, \n 0\n ]\n } \n }\n}\n])\n",
"text": "I wanted to have a play to see what I could come up with as well, it seems I’ve a different approach.Mongo playground: a simple sandbox to test and share MongoDB queries onlineWe start with a grouping and sort, then use a facets to perform multiple operations over the same data we’ve just calculated. We calculate:We then combine the top X items and the “others” into one array ready for the next stage.We unwind the field containing the data, which means each document has the data item as well as the total items.Then it’s a final project to calculate the percentages for each item and round it to 0DP",
"username": "John_Sewell"
},
{
"code": "",
"text": "I decided to calculate totalitems (count in my code) in a separated $set stage because doing it inside the $facet implies that all incoming documents are processed another time. With the $set after, only the top_n accumulation and others are processed. One version might be more efficient than the other but I cannot tell which one.I really like your idea of using $unwind on the result of $concatArrays.Computing percentage after the $unwind is also nice since it is much simpler than doing 2 $map.",
"username": "steevej"
},
{
"code": "",
"text": "I need to play more with the reduce, most of my normal work involves reporting or data fixes so nice to look at different things.\nShall try building a large dataset and checking performance but i dont think there will bemuch in it as we both reduce the document volume so massively so quickly.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Created a collection with 1M records and a covering index…both approaches are about the same, sub-600ms for the complete analysis of all 1M records showing top 3 items + others.",
"username": "John_Sewell"
},
{
"code": "name{\n $match: {} // some match\n},\n{\n $facet: {\n name: [] // topic of our discussion,\n category: [],\n type: [],\n ...\n }\n},\n{\n // some formatting steps of results from above steps.\n}\n{\n $facet: {\n name: [{ $sortByCount: \"$name\" }],\n ...\n }\n},\n{\n $set: {\n name: {\n $reduce: {\n input: {\n $map: {\n input: {\n $zip: {\n inputs: [\n \"$name\",\n {\n $range: [\n 0,\n {\n $size: \"$name\",\n },\n ],\n },\n ],\n },\n },\n as: \"item\",\n in: {\n $mergeObjects: [\n {\n $arrayElemAt: [\n \"$$item\",\n 0\n ],\n },\n {\n index: {\n $arrayElemAt: [\n \"$$item\",\n 1\n ],\n },\n },\n ],\n },\n },\n },\n initialValue: {},\n in: {\n $mergeObjects: [\n \"$$value\",\n {\n $cond: [\n {\n $gt: [\n \"$$this.index\",\n 3\n ],\n },\n {\n other: {\n $add: [\n {\n $cond: [\n \"$$value.other\",\n \"$$value.other\",\n 0\n ],\n },\n \"$$this.count\",\n ],\n },\n total: {\n $add: [\n {\n $cond: [\n \"$$value.total\",\n \"$$value.total\",\n 0\n ],\n },\n \"$$this.count\",\n ],\n },\n },\n {\n top: {\n $concatArrays: [\n {\n $cond: [\n \"$$value.top\",\n \"$$value.top\",\n []\n ],\n },\n [\n {\n _id: \"$$this._id\",\n count: \"$$this.count\",\n },\n ],\n ],\n },\n total: {\n $add: [\n {\n $cond: [\n \"$$value.total\",\n \"$$value.total\",\n 0\n ],\n },\n \"$$this.count\",\n ],\n },\n },\n ],\n },\n ],\n },\n },\n },\n },\n},\n{\n $set: {\n name: {\n $concatArrays: [\n {\n $cond: [\n \"$name.top\",\n {\n $map: {\n input: \"$name.top\",\n as: \"item\",\n in: {\n $setField: {\n field: \"percentage\",\n input: \"$$item\",\n value: {\n $multiply: [\n {\n $divide: [\n \"$$item.count\",\n \"$name.total\"\n ],\n },\n 100,\n ],\n },\n },\n },\n },\n },\n [],\n ],\n },\n {\n $cond: [\n \"$name.other\",\n [\n {\n _id: \"__other__\",\n count: \"$name.other\",\n percentage: {\n $multiply: [\n {\n $divide: [\n \"$name.other\",\n \"$name.total\"\n ],\n },\n 100,\n ],\n },\n },\n ],\n [],\n ],\n },\n ],\n },\n },\n}\n",
"text": "Thank you for your help. $facet step with $limit, $skip and $group does good job. But unfortunately i can’t use it because in fact this computation is part of another higher level $facet step. Original documents have much more fields than just name, and i have to calculate different statistics on same amount of matched docs.\nThis is how it looks in general:I found an option to get this sort of statistics that works in my case:It can be not as optimal as possible, but for now it works. Feel free to give your feedback",
"username": "Stanislav_Kuznetsov"
},
{
"code": "",
"text": "Be wary of the data that feeds into a facet…as they canot make use of indexes.I would love to be able to embed facets within facets though…",
"username": "John_Sewell"
}
] | How to accumulate values of array items starting from specific index | 2023-06-30T04:58:16.034Z | How to accumulate values of array items starting from specific index | 444 |
null | [
"field-encryption"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-07-02T20:37:23.863+09:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread2\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.873+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread2\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.874+09:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread2\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread2\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread2\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread2\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread2\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":75751,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"sagwakeompyuteo.local\"}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"22.5.0\"}}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.876+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.877+09:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP 
fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.877+09:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.877+09:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.878+09:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.878+09:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.878+09:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.878+09:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.878+09:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.878+09:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.878+09:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.878+09:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now 
exiting\"}\n{\"t\":{\"$date\":\"2023-07-02T20:37:23.879+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n> [email protected] dev\n> nodemon --exec babel-node src/init.js\n\n[nodemon] 2.0.22\n[nodemon] to restart at any time, enter `rs`\n[nodemon] watching path(s): *.*\n[nodemon] watching extensions: js,mjs,json\n[nodemon] starting `babel-node src/init.js`\n✅ Server is listening!\n✅ Connected to DB!\nundefined\nGET / 304 104.595 ms - -\nGET /videos/upload 200 29.552 ms - 732\n",
"text": "I can’t run mongodb with those codes.in npm, says succesfully connected to DB.I can’t find any solutions on google, mongoDB docs, etc. communities…Please help me…",
"username": "kim6217"
},
{
"code": "",
"text": "Your mongodb is up & running\nLooks like you tried to start mongod again as your logs indicate address already in use which means another mongod is already up on same port\nWhich code is not working?",
"username": "Ramachandra_Tummala"
}
] | Mongod noob problem | 2023-07-02T11:56:14.577Z | Mongod noob problem | 522 |
null | [
"node-js",
"transactions"
] | [
{
"code": "…\n$set: {\n 'questions.$[question].conditionnelle': {\n $gt: [{ $size: '$questions.$[question].conditions' }, 0],\n },\n },\n…\n\"conditionnelle\": {\n \"$gt\": [\n {\n \"$size\": \"$questions.$[question].conditions\"\n },\n 0\n ]\n },\n",
"text": "Hi,\nWith Node driver, inside an operation to be passed to a transaction, I try to run a condition in order to store a boolean as a result of the condition, but what is stored is the condition query itself.The part of the query:The result inside my collection:What am I missing here?",
"username": "Christophe"
},
{
"code": "",
"text": "It is hard to tell because you did not share where and how you use $set.But most likely you are missing [ ] around $set as documented in update with aggregation.",
"username": "steevej"
},
{
"code": "operationsconst operations = [\n\t{\n\t\tcollection: 'questionnaires',\n\t\toperation: 'updateMany',\n\t\tfiltres: {\n\t\t\t_id: filtres.questionnaire_id,\n\t\t},\n\t\tchamps: {\n\t\t\t$pull: {\n\t\t\t\t'questions.$[question].conditions': {\n\t\t\t\t\tparent_id: champs.question_id,\n\t\t\t\t},\n\t\t\t},\n\t\t\t$set: {\n\t\t\t\t'questions.$[question].conditionnelle': {\n\t\t\t\t\t$gt: [{ $size: '$questions.$[question].conditions' }, 0],\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\toptions: {\n\t\t\tarrayFilters: [\n\t\t\t\t{\n\t\t\t\t\t'question._id': {\n\t\t\t\t\t\t$in: questions_conditionnees,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t],\n\t\t},\n\t},\n\t{\n\t\tcollection: 'questionnaires',\n\t\toperation: 'updateOne',\n\t\tfiltres: { _id: filtres.questionnaire_id },\n\t\tchamps: { $pull: { questions: { _id: champs.question_id } } },\n\t},\n]\nfunction requete(operations) {\n\tconst session = client.startSession()\n\n\ttry {\n\t\tconst resultats = []\n\n\t\tawait session.withTransaction(async () => {\n\t\t\tfor (const operation of operations) {\n\t\t\t\tconst db = client.db(process.env.DB_NAME)\n\t\t\t\tconst col = db.collection(operation.collection)\n\t\t\t\tawait col[operation.operation](\n\t\t\t\t\toperation.filtres,\n\t\t\t\t\toperation.champs,\n\t\t\t\t\toperation.options\n\t\t\t\t).then(resultat => {\n\t\t\t\t\tresultats.push(resultat)\n\t\t\t\t})\n\t\t\t}\n\t\t})\n\n\t\treturn resultats\n\t} catch (error) {\n\t\tlogger.error(error, {\n\t\t\trequete: 'transaction.js',\n\t\t\toperations: operations,\n\t\t})\n\n\t\treturn error\n\t} finally {\n\t\tawait session.endSession()\n\t}\n}\nchamps: [\n\t{\n\t\t$pull: {\n\t\t\t'questions.$[question].conditions': {\n\t\t\t\tparent_id: champs.question_id,\n\t\t\t},\n\t\t},\n\t},\n\t{\n\t\t$set: {\n\t\t\t'questions.$[question].conditionnelle': {\n\t\t\t\t$gt: [\n\t\t\t\t\t{ $size: '$questions.$[question].conditions' },\n\t\t\t\t\t0,\n\t\t\t\t],\n\t\t\t},\n\t\t},\n\t},\n],\n",
"text": "Thank you @steevej for your answer.Here is the complete operations array building:This array is then passed to this function that makes use of a transaction:I have tried to change the following part, according to the doc you linked:But I then cannot use a $pull method, here is the error I get: “Unrecognized pipeline stage name: ‘$pull’”Is there any solution to combine?",
"username": "Christophe"
}
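For anyone hitting the same wall: one possible way to combine both operations is to switch the whole update to the aggregation-pipeline form and express the $pull as a $filter, since $pull and arrayFilters are not available in pipeline updates; the per-question selection is done with $map/$cond instead. The sketch below is untested and only reuses the field names from this thread; questionnaireId, questionId and questionsConditionnees stand for the variables in the original code, and the second operation (removing the question itself from questions) can stay as a separate classic update.

```js
await col.updateMany(
  { _id: questionnaireId },
  [
    {
      $set: {
        questions: {
          $map: {
            input: '$questions',
            as: 'q',
            in: {
              $cond: [
                { $in: ['$$q._id', questionsConditionnees] },
                {
                  $mergeObjects: [
                    '$$q',
                    {
                      // pipeline-update equivalent of the $pull
                      conditions: {
                        $filter: {
                          input: { $ifNull: ['$$q.conditions', []] },
                          as: 'c',
                          cond: { $ne: ['$$c.parent_id', questionId] },
                        },
                      },
                      // evaluated now, instead of being stored as a query document
                      conditionnelle: {
                        $gt: [
                          {
                            $size: {
                              $filter: {
                                input: { $ifNull: ['$$q.conditions', []] },
                                as: 'c',
                                cond: { $ne: ['$$c.parent_id', questionId] },
                              },
                            },
                          },
                          0,
                        ],
                      },
                    },
                  ],
                },
                '$$q',
              ],
            },
          },
        },
      },
    },
  ]
);
```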
] | Condition query is stored as an object instead of being run to store a boolean | 2023-07-01T12:50:16.735Z | Condition query is stored as an object instead of being run to store a boolean | 579 |
[
"queries",
"node-js",
"mongodb-shell"
] | [
{
"code": " let result = db.command({\n listCollections: 1,\n });\n \n result = Promise.resolve(result).then(result => console.log(result.firstBatch));\n",
"text": "So I have been stuck on this problem for a while. I have just started exploring mongodb and am trying to figure out loads of things. So I wanted to use the command “listCollections” to see things like the validators. But when I do this It doesn’t even show the firstBatch and only shows me that there is an Object. I was able to solve this problem in the shell (mongosh) with the “inspectDepth” configuration but I can’t find a solution in Node with the MongoDB driver.So I have this:And as a response I get this:\nI would like it to show the whole Object (and all nested Objects) instead of just telling me that there is an Object. Can someone help?",
"username": "WalkingWater21"
},
{
"code": "",
"text": "As opposed to console.log try JSON.stringify",
"username": "John_Sewell"
},
{
"code": "",
"text": "\nimage1946×151 20.1 KB\nThat seems to have done the Job. Does that imply that MongoDB provides the data and the terminal just happens to format it that way (i.e. leave out the actual Object)?",
"username": "WalkingWater21"
},
{
"code": "",
"text": "Console.log will not output a nested structure well, you can also format it a touch better withJSON.stringify(theVariable, null, 2)Where 2 is the indent level.Of course if you know the structure you can create logic to output in a cusom format.",
"username": "John_Sewell"
},
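Putting that together with the original command, a small sketch (assuming db is an already connected Db instance from the Node.js driver):

```js
// Assumes `db` is a connected Db instance from the MongoDB Node.js driver.
const result = await db.command({ listCollections: 1 });

// Pretty-print the whole nested response (including options such as validators),
// indented with 2 spaces, instead of Node's default shallow console.log output.
console.log(JSON.stringify(result, null, 2));
```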
{
"code": "console.dir(Object, {depth: null})",
"text": "Edit: I just found a second way. I don’t know if this i elegant but one could also use console.dir(Object, {depth: null}) .",
"username": "WalkingWater21"
},
{
"code": "",
"text": "Excellent, shall also try that next time I need to do that.",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Command response depth | 2023-07-01T20:14:44.979Z | Command response depth | 381 |
|
null | [] | [
{
"code": "",
"text": "I am following this tutorial : https://www.mongodb.com/docs/atlas/app-services/tutorial/backend/When GitHub tries to call the webhook I get an error due to an invalid secret. I can see the errors in the logs at both ends, and they both say that the secret is invalid.The steps I am using to set up the secret:Some things I have tried:Any ideas what else I can try?\nCheers!",
"username": "Paul_Wilkinson"
},
{
"code": "",
"text": "I have the same problem. how did you resolved it?",
"username": "Adan_Aguilar"
},
{
"code": "secret=tutorialhttps://yourAtlasRealmUrl?secret=mysecret",
"text": "I know this is old but may help other users.You may have missed this line from the tutorial:This requires all incoming requests to include the query parameter secret=tutorial in the request URL.So the actual secret that GH stores isn’t very important, the essential bit is to write your url as https://yourAtlasRealmUrl?secret=mysecret. Now from my understanding this isn’t a good idea. See this SO discussion.I don’t know of any other way to be honest. @Paul_Wilkinson @Adan_Aguilar",
"username": "santimir"
},
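As a sketch of the more robust approach hinted at by the linked discussion, the endpoint function could verify GitHub's HMAC signature instead of relying on a secret in the URL. This is illustrative only: it assumes utils.crypto.hmac is available in your App Services runtime, that the secret is stored as a Value named GITHUB_WEBHOOK_SECRET (a name invented for this example), and it omits a constant-time comparison.

```js
exports = async function ({ headers, body }) {
  const secret = context.values.get("GITHUB_WEBHOOK_SECRET"); // hypothetical stored value
  const payload = body.text();

  // GitHub sends "sha256=<hex digest>" in the X-Hub-Signature-256 header.
  const received = (headers["X-Hub-Signature-256"] || [])[0] || "";
  const expected = "sha256=" + utils.crypto.hmac(payload, secret, "sha256", "hex");

  if (received !== expected) {
    throw new Error("Invalid webhook signature");
  }

  // ...handle the verified GitHub event here...
  return { status: "ok" };
};
```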
{
"code": "",
"text": "Yes, this is what I also end up doing.But as you already mentioned this is not very secure.Don’t know why MongoDB did not corrected this still.",
"username": "Rahul_Pathak1"
}
] | Invalid Secret Errors | 2022-07-17T14:24:17.555Z | Invalid Secret Errors | 2,644 |
null | [] | [
{
"code": "",
"text": "I have installed Mongo db on pc in windows powershell run as admin Mongo code is not working but Mongod is working how can we solve it",
"username": "vitthal_pandit"
},
{
"code": "",
"text": "“It’s not working” is not a problem report! ",
"username": "Jack_Woehr"
}
] | Mongod installation | 2023-07-01T17:12:22.864Z | Mongod installation | 240 |