image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null |
[
"compass"
] |
[
{
"code": "",
"text": "Hello. I am currently trying to prevent duplicate data in one particular field from being entered into my database. My data is a comment feed from a web scraping application and I need to prevent duplicate entries from being added. I tried creating a unique index in MongoDB compass, but it seems that the data input stops upon finding the duplicate data. Is there a way in Compass to set up an Index so that duplicate entries are ignored and not added? Thanks!",
"username": "HD_Roofers"
},
{
"code": "",
"text": "data input stops upon finding the duplicate dataThat is the normal behavior. To let other inserts happen you have to set ordered to false.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for the clarification. Is this something that can be done through the Compass UI?",
"username": "HD_Roofers"
}
] |
Preventing/Removing duplicate entries in MongoDB Compass
|
2022-05-17T19:01:16.559Z
|
Preventing/Removing duplicate entries in MongoDB Compass
| 2,148 |
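A minimal mongosh sketch of the unordered-insert approach suggested in the thread above; the collection, field, and variable names are illustrative assumptions, not taken from the original posts.

```javascript
// A unique index rejects documents that duplicate the chosen field.
db.comments.createIndex({ commentText: 1 }, { unique: true })

// With ordered: false, a duplicate-key error on one document does not stop
// the rest of the batch from being inserted; the error can simply be caught.
try {
  db.comments.insertMany(scrapedComments, { ordered: false })
} catch (e) {
  print("Some documents were duplicates and were skipped: " + e.message)
}
```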
null |
[
"java",
"python",
"spark-connector"
] |
[
{
"code": "",
"text": "I am trying to connect mongo db from pyspark . My url starts with mongodb+srv , though it is throwing an error\njava.lang.IllegalArgumentException: requirement failed: Invalid uri: 'mongodb+srvI have used below jar files:\nbson-3.4.2 , mongo-java-driver-3.4.2, mongodb-driver-core-3.4.2",
"username": "Anurag_Mishra2"
},
{
"code": "",
"text": "I’m having the same issue, did you solve it?",
"username": "Jorge_Macos_Martos"
},
{
"code": "",
"text": "If you get the same error, that isInvalid uri: 'mongodb+srvYou have to set the URI correctly. If you are unsure how to do it, share the URI you are using so that we could help.",
"username": "steevej"
},
{
"code": " \"spark.mongodb.connection.uri\",\"mongodb+srv:// USERNAME:[email protected]/somedatabasename?retryWrites=true&w=majority\"",
"text": "Which version of the spark connector are you using?FWIW, we released a new version 10.0 Maven Central Repository Search.Announcement is here Introducing the Newest Version of the MongoDB Spark Connector | MongoDB BlogYour URU should resemble the something like , \"spark.mongodb.connection.uri\",\"mongodb+srv:// USERNAME:[email protected]/somedatabasename?retryWrites=true&w=majority\"",
"username": "Robert_Walters"
},
{
"code": "",
"text": "I am using spark - 2.4.3 , also can you please provide one sample code with the connector you have mentioned in the post here for reading data from mongodb using pyspark . where connection for mongodb is with srv .thanks",
"username": "Anurag_Mishra2"
},
{
"code": "",
"text": "my URL exactly resembles like this and I had tried to read data using pymongo and it had worked with same url . URL is not incorrect .",
"username": "Anurag_Mishra2"
},
{
"code": "",
"text": "not yet !! for work around i have used pymongo lib , but not in spark …i am exploring the option with spark .",
"username": "Anurag_Mishra2"
},
{
"code": "",
"text": "URL is correct …I have used pymongo python lib and able to pull the data from mongodb . but with spark issue is coming and it is due to mainly mongodb + srv . I need correct jar and one sample code for data read from mongodb using spark .",
"username": "Anurag_Mishra2"
}
] |
How to connect with mongodb+srv from pyspark?
|
2022-03-24T13:35:45.241Z
|
How to connect with mongodb+srv from pyspark?
| 5,597 |
null |
[
"monitoring",
"configuration"
] |
[
{
"code": "05015",
"text": "I’d like to disable NETWORK logs but can’t find a way. As I can see in the docs, without using quiet I am already at the least verbose level with 0.The verbosity level can range from 0 to 5 :Is there a way to remove NETWORK logs but keep useful logs such as slow queries and the likes ?",
"username": "Vincent_Fiset"
},
{
"code": "systemLog.quietmongosmongodsystemLog.quietfilequietsyslogrsyslogsyslog-ng",
"text": "Welcome to the MongoDB Community @Vincent_Fiset !I believe the option you are looking for is systemLog.quiet:Type : booleanDefault : falseRun mongos or mongod in a quiet mode that attempts to limit the amount of output. systemLog.quiet is not recommended for production systems as it may make tracking problems during particular connections much more difficult.As noted in the documentation, enabling this option is not recommended for production systems as it limits visibility and diagnostics for issues based on the originating connection. This option reduces but does not completely suppress all log messages.If you are using the default logging to a file destination, I recommend setting up Log Rotation to adjust log file size and retention options to be relevant for your requirements.If you want to limit log output more aggressively than the quiet outcome, you can send logs to a syslog destination and use a logging service supporting filtering conditions. For example, on Linux you could use logging services like rsyslog, syslog-ng or Fluentd.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X ! I am under the impression that quiet would not be a good fit for me since:Is there a way to remove NETWORK logs but keep useful logs such as slow queries and the likes ?I feel like quiet will remove slow queries too. Is it ? I will test it.",
"username": "Vincent_Fiset"
}
] |
Disable NETWORK logs
|
2022-05-13T19:01:08.996Z
|
Disable NETWORK logs
| 5,386 |
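As a rough mongosh sketch of the quiet option discussed above, assuming you have permission to change server parameters at runtime; whether this also hides the log lines you want to keep (such as slow queries) is exactly the open question in the thread.

```javascript
// Inspect the current value of the quiet setting.
db.adminCommand({ getParameter: 1, quiet: 1 })

// Enable quiet mode at runtime; the equivalent config-file option is
// systemLog.quiet: true. Not recommended for production systems.
db.adminCommand({ setParameter: 1, quiet: true })
```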
null |
[
"data-modeling"
] |
[
{
"code": "",
"text": "Hello,I’m designing a mongodb with a friend for a small project and I was wondering if there is any good habits concerning storing the phone number?\nShould I just put a String or divide the phone number into 2 the country code + phone number ?",
"username": "Yoni_Obia"
},
{
"code": "",
"text": "Hi @Yoni_40785,Typically phone numbers are stored as text and in my experience, the storage format depends on a couple of requirements/variables:So if you have answers to these questions you’ll be able to come up with a solution that meets those requirements. If you don’t have all the answers yet, you can begin with creating two fields; first field to store the original input string and the second string that strips out any non numeric characters (i.e. convert the first “+” to 00, remove hyphens, parenthesis, dots etc), and index the second. This way you can drop/refine one column if the requirements are a lot less.In terms of actual schema format, you could use a sub-document of arrays with labels for each phone type (mobile, tel 1, home, work etc). The Attribute Pattern can come in handy here too… depending on the requirements.Some of the other guys (@steevej-1495 and @Ramachandra_37567) may have their own ways of dealing with phone numbers.",
"username": "007_jb"
},
{
"code": "",
"text": "I have use string and the Attribute Pattern for phone numbers.",
"username": "steevej"
},
{
"code": "\"phones\":[\n{\n \"countryCode\": \"<String>\",\n \"phoneNumber\": \"<String>\"\n}]\n\"phones\":[\n {\"k\":\"countryCodePhone1\", \"v\": \"<String>\"},\n {\"k\":\"phoneNumber1\", \"v\":\"<String>\"}\n]\n",
"text": "And what have you done with the country code ?I would be storing 3 phones per userI was thinking of using :Could I use the attribute pattern this way:and then iterate over the name ?",
"username": "Yoni_Obia"
},
{
"code": "",
"text": "No special treatment for the country code as the phone numbers were for human to read only.example",
"username": "steevej"
},
{
"code": "",
"text": "Oh okay I see ! Thanks for your helpI intend to do a verification check via the phone number by sending a message to the phone and letting the user enter the code he just received, it’ll be use only once to verify the user.",
"username": "Yoni_Obia"
}
] |
Advice storing phone number
|
2020-01-06T22:15:58.147Z
|
Advice storing phone number
| 12,423 |
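A small sketch of the two-field idea described in the thread above (store the raw input plus a normalized, digits-only form and index the latter); the collection name, labels, and numbers are made up for illustration.

```javascript
db.users.insertOne({
  name: "Yoni",
  phones: [
    { label: "mobile", raw: "+33 6 12-34-56-78", normalized: "0033612345678" },
    { label: "work",   raw: "01.23.45.67.89",    normalized: "0123456789" }
  ]
})

// Index the normalized form so lookups by phone number stay fast.
db.users.createIndex({ "phones.normalized": 1 })
```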
null |
[
"queries",
"golang"
] |
[
{
"code": "quickstartDatabase := client.Database(\"quickstart\")\npodcastsCollection := quickstartDatabase.Collection(\"podcasts\")\n\n\tpodcastResult, err := podcastsCollection.InsertOne(ctx, bson.D{\n\t\t{\"title\", \"The Polyglot Developer Podcast\"},\n\t\t{\"author\", \"Nic Raboy\"},\n\t\t{\"tags\", bson.A{\"development\", \"programming\", \"coding\"}},\n\t\t{\"nested0\",\n\t\t\tbson.D{\n\t\t\t\t{\"nested1\", bson.D{\n\t\t\t\t\t{\"val1\", \"test12\"},\n\t\t\t\t\t{\"val2\", \"test212\"},\n\t\t\t\t},\n\t\t\t\t},\n\t\t\t}},\n\t})\n\n\tqueryWord := \"1$\"\n\tquery := bson.M{\"nested0.nested1.val1\": bson.M{\"$regex\": queryWord, \"$options\": \"im\"}}\n\tcursor, err := podcastsCollection.Find(ctx, query)\n\tif err != nil {\n\t\tlog.Println(err)\n\t}\n\n var sites []bson.M\n\tif err = cursor.All(ctx, &sites); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfor _, rec := range sites {\n\t\tfmt.Println()\n\t\tfmt.Println(rec)\n\t}\nquery := bson.A{\"/1$/\", \"2$\"}\n\nfilter := bson.D{\n\t\t{\n\t\t\tKey: \"nested0.nested1.val1\",\n\t\t\tValue: bson.E{\n\t\t\t\tKey: \"$in\",\n\t\t\t\tValue: query,\n\t\t\t},\n\t\t},\n\t}\n\n\tcursor, err := podcastsCollection.Find(ctx, filter)\n\n\tvar sites []bson.M\n\tif err = cursor.All(ctx, &sites); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfor _, rec := range sites {\n\t\tfmt.Println()\n\t\tfmt.Println(rec)\n\t}\n",
"text": "Morning everyone,I hoping that i can seek some advice from this comunity…Im trying to search accros fields in documents with an array of search terms to match values that im looking for using the $in model. But i cant seem to get this working…This is what my test data looks like, it connects ok and inserts the data to the atlas DB multiple times with differing values like “test1211” etc:I have got it working using the $regex model for a single search term to find records with the value endng with 1, from my current data set it would return 3 records:My issue is with this search against the same data when trying to do a search with $in and an array of terms to find records with values ending in 1 or 2:Is there something im missing here?Thanks in advance",
"username": "Stuart_Packham"
},
{
"code": "$regex|$in$in",
"text": "Hello @Stuart_Packham, welcome to the forum!You can perform the search using $regex and the regular expression boolean operator or, the | (See: Regular expression syntax cheat sheet - JavaScript | MDN). This you can use without specifying the $in.Also, see similar Stack Overflow post with an answer using $in:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "thanks @Prasad_Saya, i will take a look at that.But there seems to be an issue the the $in statement i have, does the golang implementation for it look correct? as i cant seem to find an exampleThanks",
"username": "Stuart_Packham"
},
{
"code": "$inqueryWord := \"^(Smith|John)\"\nquery := bson.M{\"name\": bson.M{\"$regex\": queryWord, \"$options\": \"i\"}}\n\"name\" : \"Johnson\" }\n\"name\" : \"John\" }\n\"name\" : \"Smithy\" }\n\"name\" : \"James\" }\n$inrgx1 := primitive.Regex{Pattern: \"^Smith\", Options: \"i\"}\nrgx2 := primitive.Regex{Pattern: \"^John\", Options: \"i\"}\nquery := bson.D{{\"name\", bson.D{{\"$in\", bson.A{rgx1, rgx2}}}}}\n",
"text": "@Stuart_Packham, I don’t have an example with using the $in , regex and golang driver. But, you can do something like this using the regex or operator:This will find 3 matching names from:EDIT ADD: The following query filter returns the same result as the above query - this one uses the $in operator with regex:",
"username": "Prasad_Saya"
}
] |
Searching for values in documents with multiple values with $in
|
2022-05-18T09:02:37.047Z
|
Searching for values in documents with multiple values with $in
| 9,440 |
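For comparison with the Go snippets above, the same $in-with-regex filter expressed in mongosh (the collection name is an assumption) looks like this; $in accepts regular-expression values directly.

```javascript
// Matches documents whose nested value ends in 1 or 2.
db.podcasts.find({
  "nested0.nested1.val1": { $in: [ /1$/, /2$/ ] }
})
```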
null |
[
"aggregation",
"queries",
"compass"
] |
[
{
"code": "",
"text": "Hi,I wanted to parse, filter, and visualize MongoDB log files using Mtools, but I haven’t figured out how to do that?",
"username": "Viswateja_b"
},
{
"code": "jq",
"text": "Welcome to the MongoDB Community @Viswateja_b!mtools, Compass, and Atlas all have different purposes:mtools is a set of Python scripts for standing up local test deployments and working with log files. The current version of mtools (1.7.0) doesn’t support the newer JSON structured log format in MongoDB 4.4+ and its log parsing options are possibly obviated by other tools which have more comprehensive insights (for example, Keyhole). mtools was created in era when MongoDB log files had to be parsed with regular expressions and many many assumptions. With modern JSON logging, specialised parsing tools are no longer a necessity. There are some helpful examples using jq in the MongoDB manual: Parsing Structured Log Messages.MongoDB Compass is an interactive GUI application for working with MongoDB deployments, and has features more focused around exploring and manipulating data rather than server diagnostics. There is a Performance Tab that provides some real-time performance metrics including identifying slow operations in a MongoDB cluster, but this isn’t a tool for working with log files.MongoDB Atlas is a multi-cloud application data platform with an integrated suite of cloud database and data services. Atlas has built-in charts, alerts, and integrations to help you Monitor Your Database Deployments. You can also View and Download MongoDB Logs if you prefer offline analysis using other tools.If you are using Atlas, I would start by learning the available integrated tools and documentation such as Deployment Metrics, Analyzing Slow Queries, and Improving Schema. These provide insight into the most common performance problems and are generally much faster (and less effort) than digging through log files.Are there specific types of diagnostic problem you are trying to solve?If you can share some more details about your common diagnostic challenges and environment (version of MongoDB; deployment type: standalone, replica set, sharded cluster, Atlas Serverless; O/S version) there may be some more relevant suggestions on tools or approach.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to integrate Mtools with mongodb compass/atlas
|
2022-05-18T10:52:33.510Z
|
How to integrate Mtools with mongodb compass/atlas
| 1,633 |
null |
[
"java",
"android"
] |
[
{
"code": " val config = SyncConfiguration.Builder(user!!, \"user=${user!!.id}\").build()\n Realm.getInstanceAsync(config, object : Realm.Callback() {\n override fun onSuccess(realm: Realm) {\n val result: RealmResults<Task> = realm.where(Task::class.java).findAll()\n Log.i(\"MongoDB\",\"Resulting... ${result.asJSON()}\")\n // Till this line everything is working we are getting results\n\n // But the below line is not even executing\n result.addChangeListener(RealmChangeListener<RealmResults<Task>> {\n Log.i(\"GoogleIO\",\"We are calling\")\n if (it.isNotEmpty()) {\n it.forEach { task ->\n realm.copyFromRealm(task).apply {\n Log.i(\"GoogleIO\",\"${task.name} ${task.owner}\")\n }\n }\n }\n })\n }\n\n })\nclass MainActivity : AppCompatActivity() {\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n val app = App(AppConfiguration.Builder(\"application-0-btwju\").build())\n var user = app.currentUser()\n if (user == null) {\n app.loginAsync(Credentials.anonymous()) {\n if (it.isSuccess) user = it.get()\n else return@loginAsync\n }\n }\n\n Log.i(\"GoogleIO\", \"LoggedIn\")\n val config = SyncConfiguration.Builder(user!!, \"user=${user!!.id}\").build()\n Realm.getInstanceAsync(config, object : Realm.Callback() {\n override fun onSuccess(realm: Realm) {\n val result: RealmResults<Task> = realm.where(Task::class.java).findAll()\n Log.i(\"MongoDB\",\"Resulting... ${result.asJSON()}\")\n\n result.addChangeListener(RealmChangeListener<RealmResults<Task>> {\n Log.i(\"MongoDB\",\"We are calling\")\n if (it.isNotEmpty()) {\n it.forEach { task ->\n realm.copyFromRealm(task).apply {\n Log.i(\"GoogleIO\",\"${task.name} ${task.owner}\")\n }\n }\n }\n })\n }\n\n })\n} }\n",
"text": "Hello, I am trying to attach RealmResults listener for observing realtime changes from database but this change Listener code is not even executingThe Whole Code look likesI am not even getting logs",
"username": "Neeraj_42037"
},
{
"code": "class MyActivity : Activity {\n\n private var person: Person?\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n person = realm.where<Person>().findFirst()\n person?.addChangeListener(RealmChangeListener { person ->\n // React to change\n })\n }\n}\n",
"text": "I self solved this problem the issue was registering a change listener will not prevent the underlying RealmObject from being garbage collected. If the RealmObject is garbage collected, the change listener will stop being triggered. To avoid this, keep a strong reference for as long as appropriate e.g. in a class variable.Thus making result to global variable will helpreference - addChangeListener - kotlin-extensions (mongodb.com)",
"username": "Neeraj_42037"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
AddChangeListener not listening for changes Android Java SDK
|
2022-05-16T12:45:32.364Z
|
AddChangeListener not listening for changes Android Java SDK
| 2,380 |
null |
[
"aggregation",
"queries",
"node-js",
"atlas-search"
] |
[
{
"code": "const { search } = req.params\nlet orders = await Orders.aggregate([\n {\n '$search': {\n 'index': 'ranges',\n 'text': {\n 'query': search,\n 'path': {\n 'wildcard': '*'\n }\n }\n }\n }\n ])\n orders = await Orders.populate(orders, { path: \"user productsBought.range\" })\n",
"text": "Hi there,I have two collections:The orders have a key user data type ObjectId referencing a user.I would like to do an aggregate search through the orders AND the users within.Something like this ( doesn’t work ):But like this I am populating after finding, which is not finding orders by user name.Any help is appreciated.Thanks!",
"username": "Toni_Enguix"
},
{
"code": "orders",
"text": "Hi @Toni_Enguix I never saw this one. Sorry for being so late. Also, welcome to the forum. We intend to be more timely in the future.I have a few questions to ensure I can give you the best answer.",
"username": "Marcus"
},
{
"code": "",
"text": "I ended up including the user in the order’s model so that’s sorted. But I’ll try to answer this! I think it’s worth a look. Thanks for answering, better late than never. I closed this project but I’ll look for it and try to answer (Y)",
"username": "Toni_Enguix"
}
] |
Populate before aggregate
|
2022-02-15T08:53:03.121Z
|
Populate before aggregate
| 2,713 |
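A hedged mongosh sketch of moving the populate step into the pipeline itself: run $search on the orders collection first, then $lookup the referenced user documents. The collection and field names ("orders", "users", "user") are assumptions based on the thread; note this still searches only order fields, so matching on user names would require denormalizing user data into the orders (as the poster eventually did) or a separate search on users.

```javascript
db.orders.aggregate([
  { $search: { index: "ranges", text: { query: "<search term>", path: { wildcard: "*" } } } },
  // Join the referenced user document instead of populating after the query.
  { $lookup: { from: "users", localField: "user", foreignField: "_id", as: "user" } },
  { $unwind: "$user" }
])
```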
null |
[
"dot-net"
] |
[
{
"code": "",
"text": "In order to use atlas search in C# application I must refer to the MongoDB.Labs.Search library which is currently in beta state. Any idea when will it be released?",
"username": "Prajakta_Sawant1"
},
{
"code": "",
"text": "Hi @Prajakta_Sawant1, You can use that library or the standard aggregation pipeline. Some people use a mixture of the two. AAre there particular features you would like to see?",
"username": "Marcus"
},
{
"code": "",
"text": "Thank you @Marcus for replying.I am not sure if I can use aggregation pipeline while querying through C# application. I have an e-commerce website with search functionality developed in C#. This search feature fetches the data from MongDB collection on which I have created text index using MongoDB Atlas. Now in order to use features like Score, Synonyms, fuzzy maxEdits, AutoComplete, etc. and query atlas search index using C# code I must refer to the MongoDB.Labs.Search library, which turns out to be in beta state. I need to know if this library extension will be released soon. Please refer to GitHub - mongodb-labs/mongo-csharp-search: C# driver extension providing support for Atlas SearchP. S. - Apologies for lengthy message.",
"username": "Prajakta_Sawant1"
},
{
"code": "",
"text": "I will check with the creator and report back.",
"username": "Marcus"
},
{
"code": "",
"text": "@Prajakta_Sawant1 I have some great news. It will be GA within the next month! ",
"username": "Marcus"
},
{
"code": "",
"text": "Awesome! Me and my team is really happy to hear this. Thank you so much @Marcus for checking on this. Appreciate it.",
"username": "Prajakta_Sawant1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Atlas Search in C# application using MongoDB Driver Extension
|
2022-05-05T09:31:53.412Z
|
Atlas Search in C# application using MongoDB Driver Extension
| 3,608 |
null |
[
"queries",
"node-js"
] |
[
{
"code": "Posts.find({_id: { \"$lt\": lastPageLastDocIdOFPrevReq }) .sort({ _id: -1 }).limit(15)",
"text": "i have a quey where we were using pagination with skip and sorting the data with _id, but now when i am adding pagination with _id then the results are not accurate\ne.gPosts.find({_id: { \"$lt\": lastPageLastDocIdOFPrevReq }) .sort({ _id: -1 }).limit(15)\nis there any problem with the query, or in case of sorting with _id: -1 we need to use skip overe here, which will be expensive i guess.",
"username": "sajan_kumar"
},
{
"code": "",
"text": "@sajan_kumar Please go through this link Mongodb Pagination",
"username": "Sudhesh_Gnanasekaran"
},
{
"code": "",
"text": "the results are not accurateHow are they inacurate? What are the issues you faces?",
"username": "steevej"
},
{
"code": "",
"text": "my bad, the results are fine but I am not confident with the query that, is it going to work properly or not.",
"username": "sajan_kumar"
}
] |
Pagination and sorting with _id
|
2022-05-13T08:06:16.646Z
|
Pagination and sorting with _id
| 6,054 |
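A brief mongosh sketch of the range-based pagination pattern being discussed (the collection name is assumed): the first page sorts descending and limits, and every later page filters on the last _id seen.

```javascript
// Page 1: newest 15 posts.
const page1 = db.posts.find().sort({ _id: -1 }).limit(15).toArray()

// Page 2: everything older than the last _id of the previous page.
const lastId = page1[page1.length - 1]._id
const page2 = db.posts.find({ _id: { $lt: lastId } }).sort({ _id: -1 }).limit(15).toArray()
```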
null |
[] |
[
{
"code": "{\"error\":\"invalid session: error finding user for incoming webhook\",\"error_code\":\"InvalidSession\",\"link\":\"https://realm.mongodb.com/groups/....\"}{\"error\":\"user not found\",\"link\":\"https://realm.mongodb.com/groups/...\"}",
"text": "HiI have converted over an application from using the legacy Webhooks to the new HTTPS endpoints. There were some issues documented in this post: HTTPS Endpoint can't access querystring - #3 - any news on resolution of this?Something else came up today, that a webhook would return something like this, if not correctly authenticated…\n{\"error\":\"invalid session: error finding user for incoming webhook\",\"error_code\":\"InvalidSession\",\"link\":\"https://realm.mongodb.com/groups/....\"}this is consistent with other error types where the calling application could use the error_code to take corrective action.With the new HTTPS endpoints, a failed authentication returns something like…\n{\"error\":\"user not found\",\"link\":\"https://realm.mongodb.com/groups/...\"}as the no error_code has gone missing, the calling application has not reacted correctly. Is it possible to get the error_code back for HTTPS endpoints? Relying on the text of the error doesn’t look robust to me.thanks.",
"username": "ConstantSphere"
},
{
"code": "",
"text": "Hi Simon,as the no error_code has gone missing, the calling application has not reacted correctly. Is it possible to get the error_code back for HTTPS endpoints?I’ve raised this with the team and it is reported and confirmed as a bug to be planned for a fix.\nI’ll keep track of it and will try to update this thread when there’s an update.There is also an improvement ticket to update the https endpoint form UI to show the function auth configuration that existed in webhooks, but I don’t have any ETA to provide.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Thanks for looking into this Manny. Much appreciated!",
"username": "ConstantSphere"
}
] |
HTTPS endpoints return different from Webhooks
|
2022-05-16T20:13:16.298Z
|
HTTPS endpoints return different from Webhooks
| 1,607 |
null |
[
"aggregation",
"queries",
"transactions"
] |
[
{
"code": "",
"text": "I have two documents (Party, Ledger) referencing via party_id like below.\nParty {\nparty_id: 1001\nparty_title: ‘abc’\n}Ledger {\nvocno, 1\ndate: 12-02-2022\ntransactions: [\n{\nparty_id: 1001\ndebit;: 1000\ncredit: null\n},\n{\nparty_id: 1002\ndebit;: null\ncredit: 1000\n}\n]\n}I want to query using aggregation framework using $lookup to get results like this.\n{\nvocno,\ndate,\ntransactions: [\n{\nparty_id: 1001,\nparty_title: ‘abc’,\ndebit;: 1000,\ncredit: null\n},\n{\nparty_id: 1002,\nparty_title: ‘xyz’,\ndebit;: null,\ncredit: 1000\n}\n]how to write query using $lookup",
"username": "Asif_Rehman"
},
{
"code": "localField:\"party_id\"foreignField:\"transactions.party_id\"",
"text": "There is nothing special to do.A normal $lookup with localField:\"party_id\" and foreignField:\"transactions.party_id\" should be good.",
"username": "steevej"
},
{
"code": "",
"text": "But I want results like this\nvocno,\ndate,\ntransactions: [\n{\nparty_id: 1001,\nparty_title: ‘abc’,\ndebit;: 1000,\ncredit: null\n},it gives result in separate array like this\nvocno,\ndate,\ntransactions: [\n{\nparty_id: 1001,\ndebit;: 1000,\ncredit: null\n},\nparties: [\nparty_title: 'abc\n]",
"username": "Asif_Rehman"
},
{
"code": "",
"text": "I misunderstood your requirements.You do not want to find the transactions of a party. You want to fill the transactions array with the field party_title from the party collection? Right?Please Formatting code and log snippets in posts and publish you sample documents. Include both parties referred in your sample ledger document.Also publish what you have tried and indicate how it fails so that we do not waste time and propose, like I just did, a solution that does not match your requirements.",
"username": "steevej"
}
] |
How to query with aggregate lookup where foreign key is in array of document
|
2022-05-17T20:38:27.056Z
|
How to query with aggregate lookup where foreign key is in array of document
| 3,691 |
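A possible shape for the $lookup the thread is after, assuming collections named "ledger" and "party": join on the party_id values inside the transactions array, then merge each party_title back into the matching transaction. This is only a sketch, not the poster's final solution.

```javascript
db.ledger.aggregate([
  { $lookup: { from: "party", localField: "transactions.party_id",
               foreignField: "party_id", as: "parties" } },
  { $set: {
      transactions: {
        $map: {
          input: "$transactions", as: "t",
          in: { $mergeObjects: [ "$$t", {
            // Pick the party_title of the party whose party_id matches this transaction.
            party_title: { $first: {
              $map: {
                input: { $filter: { input: "$parties", as: "p",
                                    cond: { $eq: [ "$$p.party_id", "$$t.party_id" ] } } },
                as: "p", in: "$$p.party_title"
              } } } } ] }
        } } } },
  { $unset: "parties" }
])
```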
null |
[
"dot-net",
"crud"
] |
[
{
"code": "using MongoDB.Bson;\nusing MongoDB.Driver;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class BeginFindOneAndUpdate : BaseClass\n{\n\n \n public class Task4\n {\n public ObjectId _id;\n public int state = 0;\n public DateTime start;\n }\n\n // Example about several node received a message about a new task with a certain id\n",
"text": "Hi,I try to use FindOneAndUpdate in C# with linq expression but I don’t find how to do it.public class Task4\n{\npublic ObjectId _id;\npublic int state = 0;\npublic DateTime start;\n}var t4 = await collection.FindOneAndUpdateAsync(\n_ => _._id == id && .state == searchState, <= Problem here\nBuilders.Update\n.Set( => .state, state)\n.CurrentDate( => _.start, UpdateDefinitionCurrentDateType.Date),\nnew FindOneAndUpdateOptions<Task4, Task4>() <= or here ?\n{\nReturnDocument = ReturnDocument.After\n},\ncancellationToken: cancel\n);But it’s impossible and I don’t find how to fix that.\nI get the errorerror CS0121: The call is ambiguous between the following methods or properties: ‘IMongoCollectionExtensions.FindOneAndUpdateAsync(IMongoCollection, Expression<Func<TDocument, bool>>, UpdateDefinition, FindOneAndUpdateOptions<TDocument, TDocument>, CancellationToken)’ and ‘IMongoCollectionExtensions.FindOneAndUpdateAsync<TDocument, TProjection>(IMongoCollection, Expression<Func<TDocument, bool>>, UpdateDefinition, FindOneAndUpdateOptions<TDocument, TProjection>, CancellationToken)’If you want the full project get it hereThanks for your help.",
"username": "Remi_Thomas"
},
{
"code": "FindOneAndUpdateAsyncTask<TDocument> FindOneAndUpdateAsync<TDocument>( ... params ... )\nTask<TProjection> FindOneAndUpdateAsync<TDocument, TProjection>( ... params ... )\nvar t4 = await collection.FindOneAndUpdateAsync<Task4>(\n_ => _._id == id && .state == searchState,\n... additional params ...\n",
"text": "Hi, @Remi_Thomas,Welcome back to the MongoDB Community Forums. I understand that you’re receiving an ambiguity between two FindOneAndUpdateAsync overloads based on the compiler warning. The problem is that there are two similar overloads, which the compiler cannot differentiate between. These two overloads are:Often the C# compiler can infer the type arguments based on the method parameters, but in this case the method parameters are identical. To resolve this ambiguity, you must explicitly specify the type parameter:Hope this helps.Sincerely,\nJames",
"username": "James_Kovacs"
}
] |
FindOneAndUpdateAsync C# usage with linq expression
|
2022-05-17T09:06:07.563Z
|
FindOneAndUpdateAsync C# usage with linq expression
| 4,851 |
null |
[
"queries",
"dot-net"
] |
[
{
"code": " var query = _repository.GetAll().Select(x => new CollectiveDataDto\n {\n Id = x.Id,\n Prop1 = xProp1,\n Prop2 = x.Prop2,\n AdditionalData = x.AdditionalData,\n });\n\nin repo i just return collection as queryable\nvar proje = Builders<SomeEntity>.Projection.Include(x => x.AdditionalData);\nvar test = await _repository.GetAllWithProjections(proje);\n\nin repo i have:\nreturn await _collection.Find(new BsonDocument()).Project<SomeEntity>(projection).ToListAsync();\n",
"text": "Hi, I have this problem that I need to pull only data from a property marked as ExtraElements, in this case it is “AdditionalData”. When I use .Select or Builder.Projection it always returns null, when it comes to other properties it returns fine. But when I don’t do any projection and I just do ToListAsync it returns AdditionalData correctly. How can I solve this?or",
"username": "thebaku_N_A"
},
{
"code": "SelectBsonExtraElementsAttributeSelect$projectSelecttest.coll.Aggregate([{ \"$project\" : { \"_id\" : \"$_id\", \"Prop1\" : \"$Prop1\", \"Prop2\" : \"$Prop2\", \"AdditionalData\" : \"$AdditionalData\" } }])\nAdditionalData$AdditionalDatanullBsonExtraElements$FieldNamevar query = _repository.GetAll().Where(predicate);\nvar results = query.ToList();\n\n// map returned data to DTOs using LINQ-to-Objects\nvar dtoResults = results.Select(x => new CollectiveDataDto {\n Id = x.Id,\n Prop1 = x.Prop1,\n Prop2 = x.Prop2,\n AdditionalData = x.AdditionalData\n});\n",
"text": "Hi, @thebaku_N_A,Welcome to the MongoDB Community Forums. I understand that you are unable to use Select with BsonExtraElementsAttribute.When you perform a Select operation, this maps to a $project in MQL. The MQL for your above Select will look something like this:There is no AdditionalData field and thus $AdditionalData returns null. While we probably shouldn’t render a field marked with BsonExtraElements as $FieldName, it is a moot point as there is no way to easily express “place all unknown fields in a name-value collection” in MQL. What you really want is to return all fields back to the client and let deserialization handle the unknown fields in whatever manner is configured in the BSON class mappings.The easiest way to solve this is to leverage LINQ-to-Objects. First query back the data of interest using the MongoDB .NET/C# driver and then use LINQ-to-Objects to map the returned data into your DTO classes.Hopefully this helps.Sincerely,\nJames",
"username": "James_Kovacs"
}
] |
Does not return property through manual projection
|
2022-05-16T09:54:27.699Z
|
Does not return property through manual projection
| 2,097 |
[
"100daysofcode"
] |
[
{
"code": "",
"text": "Hello Everyone, I had heard a lot on #100DaysOfCode and really wanted to work together with friends on this. I would be posting every day about my 100 days of progress in this post and I hope I can convince you to follow along with me… Below I am adding progress from previous days. but will now regularly update from today. If you are game, do join me on this adventure challenge Cheers \nHenna | Twitter | LinkedIn",
"username": "henna.s"
},
{
"code": "",
"text": "100DaysOfCode Log\nReading time: 1 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1490818405431185410",
"username": "henna.s"
},
{
"code": "",
"text": "I could not spend a lot of time on code as it was not one of my best days :-/ so I spent some time learning about UI/UX from the Frontend…\nReading time: 1 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1491203210945908736",
"username": "henna.s"
},
{
"code": "",
"text": "I am following the Frontend Track on Jetbrains Academy and continuing with concepts for the 2nd stage of the Flashcard project.\nReading time: 2 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1491550126430691339",
"username": "henna.s"
},
{
"code": "",
"text": "Today, I learned about Interface Components and what part color plays in the UI design of a web application\nReading time: 1 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1491926747113263105",
"username": "henna.s"
},
{
"code": "",
"text": "In CSS, colors can be defined in several different ways.\nReading time: 3 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1492265941694042114",
"username": "henna.s"
},
{
"code": "",
"text": "Images\nReading time: 3 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1492612762849136643",
"username": "henna.s"
},
{
"code": "",
"text": "Before Flexbox Layout Module, the layout of documents on an HTML page were controlled by position, float, display and clear CSS properties…\nReading time: 6 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1493009161621323783",
"username": "henna.s"
},
{
"code": "",
"text": "Ok, I confess Flexbox has not been an easy ride. The most difficult part has been figuring out the flex-basis, flex-grow, flex-shrink…\nReading time: 3 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1493363667252961281",
"username": "henna.s"
},
{
"code": "",
"text": "I took a break from Flexbox and did some Realm today. I will not say this was easy but I have been trying to understand this topic for a…\nReading time: 3 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1493731861771739136",
"username": "henna.s"
},
{
"code": "",
"text": "Ok, I am back on Flexbox Layout :D I have started to get the hang of it and I am liking it. Only If magically I can get some design skills…\nReading time: 4 min read\nShow on Twitter: https://twitter.com/henna_dev/status/1494083921256058881",
"username": "henna.s"
},
{
"code": "",
"text": "Flexbox Layout was a difficult layout when I was reading about it and it is maintaining consistency, still being a difficult one while I…\nReading time: 1 min read\nShow on Twitter: https://twitter.com/henna_dev/status/1494418156810612738?",
"username": "henna.s"
},
{
"code": "",
"text": "I very much believe in the power of working together, no matter what tech you are learning or want to learn, if you like to work with me…\nReading time: 3 min read\nShow on Twitter: https://twitter.com/henna_dev/status/1494822081162321924",
"username": "henna.s"
},
{
"code": "",
"text": "I am absolutely in love with 100Days and I am so glad I started this with my colleague Kushagra who inspires me every day and we both…\nReading time: 2 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1495170877184483328",
"username": "henna.s"
},
{
"code": "",
"text": "Today I spent time learning Navigation Architecture Library of Jetpack Navigation so that I can implement the same in my BookLog…\nReading time: 3 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1495548266922053633",
"username": "henna.s"
},
{
"code": "",
"text": "It was a tiring day today but I am happy I still made some progress and in turn discovered more errors… Today has been a slow and lazy day. My brain went on leave and I could not finish what I had originally planned. It is OK to be not OK…\nReading time: 2 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1495908425141735424Cheers ",
"username": "henna.s"
},
{
"code": "",
"text": "Today was a long day but it was fun. I am pretty much happy and satisfied with how the work is going oh GOSH!!, I was missing development soo much , so glad to be back at it…This app idea may turn into an article once I am finished with it and may write a full-fledged piece rather than broken into days as it is currently Today has been a long day and I realized I put my head into stuff where I can avoid :P Lol…. Although I am absolutely happy with where…\nReading time: 3 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1496235660516241416",
"username": "henna.s"
},
{
"code": "",
"text": "Today was a long day, the second half was full of meetings, something I have started to dislike lately… My head is zooming out in the zoom Today has been a looooooonnnnggggg day. I could not make a lot of progress on the BookLog App as much as I wanted to but I worked on a…\nReading time: 2 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1496631546715021315",
"username": "henna.s"
},
{
"code": "",
"text": "Today was a mix of Flexbox and Realm Sync. I am still struggling to make a 3 * 3 square matrix I wish I was good at maths but there is a new Realm Byte today…Share on Twitter: https://twitter.com/henna_dev/status/1496965241724600320",
"username": "henna.s"
},
{
"code": "",
"text": "Worked on my BookLog Application trying to resolve some more errors that lead to more… PhewI had a fun day today. It began with a morning walk, breakfast at my favorite place, and talking to my favorite person before the weekend…\nReading time: 2 min read\nShare on Twitter: https://twitter.com/henna_dev/status/1497358898625499147",
"username": "henna.s"
}
] |
The Journey of #100DaysOfCode (@henna_dev)
|
2022-02-15T05:51:56.703Z
|
The Journey of #100DaysOfCode (@henna_dev)
| 25,087 |
|
null |
[
"dot-net"
] |
[
{
"code": " return new CosResourceResponse<Student>(entity);\n }\n catch (Exception ex)\n {\n var message = $\"An error occurred when insert the Person: {ex.Message}\";\n _logger.Error(message, ex);\n return new CosResourceResponse<Student>(message);\n }\n }\n",
"text": "[BsonCollection(“Student”)]\npublic class Student\n{\npublic ObjectId Id {get;set;}\npublic string StudentId { get; set; }\npublic IList Courses{get;set;}\n}\n[BsonCollection(“Course”)]\npublic class Course\n{\npublic ObjectId Id {get;set;}\npublic string CourseName{get; set;}\n}public async Task<CosResourceResponse> InsertAsync(Student entity)\n{\ntry\n{\nawait _unitOfWork.Compounds.InsertOneAsync(entity);How i will make primary key and foreignkey between these table so pls give example through c# into the mongod databse code and how to insert into the database",
"username": "murugan_m"
},
{
"code": "ObjectIdObjectId.GenerateNewId()_id_idIIdGenerator_id",
"text": "Hi, @murugan_m,Welcome to the MongoDB Community Forums. I see that you are trying to model parent-child relationships with MongoDB. I would recommend reading our data modelling guide, especially the section Model Tree Structures for examples of how to design your object model.Depending on your needs, you may wish to model your parent-child relationship using nesting in which case the primary/foreign key is implied through the nesting. If you do decide to model the parent-child relationship using separate collections for the parent and child objects, you can simply create new ObjectId values client-side using ObjectId.GenerateNewId() and wire together the relationships yourself. The driver does the same thing when you try to insert a new object and the _id field has a default value. It generates a new _id using the configured IIdGenerator and then performs the insert with the client-side generated _id.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": " Thanks for your reply can you please give me any sample code for that to continue. I am inserting student collection inside courses as a list in mongodb it's like embedded but i need to store separately to see student data and courses data.\n",
"text": "Hi James,Thanks & Regards\nMurugansilvers",
"username": "murugan_m"
},
{
"code": "",
"text": "Hi, @murugan_m,I’m glad that you found my response helpful. The code required will be highly dependent on which solution you choose to pursue and the structure of your persistence code.If you wish to keep separate parent-child collections, then your persistence code would have to generate the ObjectIds and assign them to the appropriate properties on the parent and child objects. If you choose to nest the child objects as subdocuments, then the driver should take care of the serialization automatically.Sincerely,\nJames",
"username": "James_Kovacs"
}
] |
How to insert parent and child data like sql server or any difference?
|
2022-05-12T06:08:06.959Z
|
How to insert parent and child data like sql server or any difference?
| 4,165 |
null |
[
"java",
"compass",
"android"
] |
[
{
"code": "RealmEventStreamAsyncTask<RealmLogEntry> watcher3 = mongoCollection\n .watchWithFilterAsync(new Document(\"fullDocument.userId\", \"18\"));\n watcher3.get(result3 -> {\n try {\n if (result3.isSuccess()) {\n Log.v(\"EXAMPLE\", \"Event type watcher 3: \" +\n result3.get().getOperationType() + \" full document: \" +\n result3.get().getFullDocument());\n } else {\n Log.e(\"EXAMPLE\",\n \"failed to subscribe to filtered changes in the collection with : \",\n result3.getError());\n }\n } catch (NullPointerException npe) {\n Log.d(TAG, \"onCreate: potential delete by frig ???????\");\n }\n });\nreturn collection.findOneAndReplace(query, replacement, options)\n .then(replacedDocument => {\n if(replacedDocument) {\n console.log(`Successfully replaced the following document: ${replacedDocument}.`);\n response.setStatusCode(200); // Set an HTTP Status code like \"201 - created\"\n response.setBody(JSON.stringify({ \"code\": 200, \"message\": \"update ok\", \"timestamp\": dateBob }));\n } else {\n console.log(\"No document matches the provided query.\")\n response.setStatusCode(400); // Set an HTTP Status code like \"201 - created\"\n response.setBody(JSON.stringify({ \"code\": 400, \"message\": \"update error A\", \"timestamp\": dateBob }));\n }\n // return updatedDocument\n })\n .catch(err => {\n console.error(`Failed to find and replace document: ${err}`)\n response.setStatusCode(400); // Set an HTTP Status code like \"201 - created\"\n response.setBody(JSON.stringify({ \"code\": 400, \"message\": \"update error B\", \"timestamp\": dateBob }));\n }\n )\n\n",
"text": "This watcher that I coded (using Android app coded with java ) will detect an update using MongoDB compass but not when I run the same update from a function using a http endpoint from postman.I don’t know why. Any help would be appreciated, Thanks.the function that I am running, that is not being detected by the watcher is :",
"username": "Robert_Benson"
},
{
"code": "",
"text": "How are you setting up your watcher? Is it watching on all event types? And are you specifying the FullDocumentLookup field? See here: https://www.mongodb.com/docs/manual/changeStreams/#lookup-full-document-for-update-operationsThe difference is likely that using Data Explorer synthesizes “Replace” events but findOneAndUpdate() synthesizes an “Update” event which will not have the FullDocument field unless you ask for it",
"username": "Tyler_Kaye"
},
{
"code": "console.log(`Successfully replaced the following document: ${replacedDocument}.`);\n",
"text": "Please share replacedDocument.More or less the output line produced by",
"username": "steevej"
},
{
"code": "\"Successfully replaced the following document: [object Object].\"\n\n",
"text": "Sorry, but its unhelpful.",
"username": "Robert_Benson"
},
{
"code": "JSON.stringify",
"text": "You seem to already know aboutJSON.stringifytry with it.",
"username": "steevej"
},
{
"code": "\nexports = async function (request, response) {\n const bodyJson = JSON.parse(request.body.text());\n console.log(\"json body = \", bodyJson);\n const query = {_id: BSON.ObjectId(bodyJson._id)}\n\n console.log(\"query = \", bodyJson._id);\n dateBob = new Date();\n\n // Replace it with a new document\n // const replacement = {\n // \"expire_log\": dateBob,\n // \"locfrom\" : \"deleted\",\n // \"remarks\" : \"deleted\"\n // };\n \n \n const replacement = {\n \"locfrom\" : \"deleted\",\n \"remarks\" : \"deleted\"\n };\n \n\n try{\n response.addHeader(\"Content-Type\", \"application/json\"); // Modify the response headers\n const collection = context.services.get(\"mongodb-atlas\").db(\"logitxp\").collection(\"log-entries\");\n\n // Return the original document as it was before being replaced\n // const options = { \"returnNewDocument\": false };\n \n \n\n return collection.replaceOne(query, replacement)\n .then(replacedDocument => {\n if(replacedDocument) {\n console.log(\"Successfully replaced the following document: replacedDocument = \" + JSON.stringify(replacedDocument));\n response.setStatusCode(200); // Set an HTTP Status code like \"201 - created\"\n response.setBody(JSON.stringify({ \"code\": 200, \"message\": \"update ok\", \"timestamp\": dateBob }));\n } else {\n console.log(\"No document matches the provided query.\")\n response.setStatusCode(400); // Set an HTTP Status code like \"201 - created\"\n response.setBody(JSON.stringify({ \"code\": 400, \"message\": \"update error A\", \"timestamp\": dateBob }));\n }\n // return updatedDocument\n })\n .catch(err => {\n console.error(`Failed to find and replace document: ${err}`)\n response.setStatusCode(400); // Set an HTTP Status code like \"201 - created\"\n response.setBody(JSON.stringify({ \"code\": 400, \"message\": \"update error B\", \"timestamp\": dateBob }));\n }\n )\n }\n catch (e) {\n console.log(\"catch e: \",e);\n response.setBody(JSON.stringify({ \"code\": 400, \"message\": \"update error C\", \"timestamp\": dateBob }));\n }\n}\n\n\nLogs:\n[\n \"query = 6282099a7f1e25def6383b49\",\n \"Successfully replaced the following document: replacedDocument = {\\\"matchedCount\\\":1,\\\"modifiedCount\\\":1}\"\n]\n{\n \"name\": \"log_replace_expire\"\n}\n\n",
"text": "I’ve change the function to a replaceOne to simplify:one document was matched and updated, as expected.\nIt is not being detected by the watcher.",
"username": "Robert_Benson"
},
{
"code": "\"fullDocument.userId\", \"18\"const replacement = {\n \"locfrom\" : \"deleted\",\n \"remarks\" : \"deleted\"\n };\n",
"text": "You have a watcher that filters\"fullDocument.userId\", \"18\"and you do a replaceOne withI think that is why you watcher is not called. Your replacement document does not have a field named userId that contains the string value 18.",
"username": "steevej"
},
{
"code": "",
"text": "Bingo , that worked !!!you are a star, thank you so much I’m guessing the compass update was detected because the entire document was updated.",
"username": "Robert_Benson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
My Watcher detects replace when document is updated by mongoDB compass but not with findOneAndReplace
|
2022-05-17T15:16:07.656Z
|
My Watcher detects replace when document is updated by mongoDB compass but not with findOneAndReplace
| 3,231 |
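The same idea expressed as a plain mongosh change stream, which may make the behaviour easier to see: without fullDocument: "updateLookup", an update event carries no fullDocument, so a filter on "fullDocument.userId" only matches inserts and replaces whose body actually contains that field. The database and collection names follow the thread; the rest is illustrative.

```javascript
const cursor = db.getSiblingDB("logitxp").getCollection("log-entries").watch(
  [ { $match: { "fullDocument.userId": "18" } } ],
  { fullDocument: "updateLookup" }  // also populate fullDocument for update events
)

// Blocks until a matching change arrives, then prints it.
while (cursor.hasNext()) {
  printjson(cursor.next())
}
```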
null |
[
"aggregation",
"queries",
"data-modeling"
] |
[
{
"code": "productsproductratingsproductsproductsratingsratingsproductsproducts",
"text": "I have a collection that stores products.Each product can be rated by users.There will be another collection to hold the ratings of products by users.In the products collection, there will be a field to store the average of the last N rating.The ratings collection will be indexed by date added, of course.option 1 - when a new rating is added for a product, go to the ratings collection, retrieve the last N ratings, compute the average and then replace the current average in the products collection.option 2 - store an array of the last N ratings in a field of the products collection. When a new rating is added, push this into this array, pop out the oldest, and then compute the average from this array.",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Option three - don’t worry about this happening in real time/synchronously and just update the average periodically… It all depends on how many ratings are coming in, how fast, do you have other uses for most recent ratings inside products, etc.Asya",
"username": "Asya_Kamsky"
}
] |
Computed pattern to store average for most recent N documents
|
2022-05-17T18:29:53.998Z
|
Computed pattern to store average for most recent N documents
| 1,569 |
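Option 2 can be expressed as a single pipeline-style update (MongoDB 4.2+). This is only a sketch: the collection and field names, the window of 10 ratings, and the productId / newRating variables are placeholders, not taken from the thread.

```javascript
db.products.updateOne(
  { _id: productId },
  [
    // Append the new rating and keep only the most recent 10.
    { $set: { lastRatings: { $slice: [
        { $concatArrays: [ { $ifNull: [ "$lastRatings", [] ] }, [ newRating ] ] }, -10 ] } } },
    // Recompute the stored average from the updated window.
    { $set: { avgRating: { $avg: "$lastRatings" } } }
  ]
)
```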
[
"queries"
] |
[
{
"code": "db.collection(\"logs_t�est\").drop()",
"text": "I made a mistake when create a collection with name: logs_téstWhen list collections it show: “logs_t�est”\nScreen Shot 2022-05-17 at 14.53.261184×276 58.9 KB\nBut I can’t drop that collection. db.collection(\"logs_t�est\").drop() is not working.",
"username": "Company_HolaLabs"
},
{
"code": "db.getCollection('logs_tést').drop()",
"text": "db.collection(“logs_t�est”).drop()Try thisdb.getCollection('logs_tést').drop()",
"username": "Sudhesh_Gnanasekaran"
},
{
"code": "collection-4-4969674282599216290.wt",
"text": "It still not workI know that collection in mongodb storage is collection-4-4969674282599216290.wtCan I delete file wt in mongodb path?",
"username": "Company_HolaLabs"
},
{
"code": "",
"text": "Can I delete file wt in mongodb path?Never ever handle files in there.",
"username": "steevej"
},
{
"code": "mongosh> db.getCollectionNames()\n[\n 'text', 'col2',\n 'collectionA', 'player',\n 'c_b', 'push_new_field',\n 'col1', 'Tournament',\n 'computers', 'logs_t�est',\n 'dates', 'collectionB',\n 'c_a', 't',\n 'test', 'logs_tést'\n]\nmongosh> db.getCollection( \"logs_t�est\").insertOne( {})\n{ acknowledged: true,\n insertedId: ObjectId(\"6283b5c5d2acef6da4352078\") }\nmongosh> db.getCollection( \"logs_t�est\").find()\n{ _id: ObjectId(\"6283b5c5d2acef6da4352078\") }\nmongosh> db.getCollection( \"logs_t�est\").drop()\ntrue\nmongosh> db.getCollectionNames()\n[\n 'text',\n 'col2',\n 'collectionA',\n 'player',\n 'c_b',\n 'push_new_field',\n 'col1',\n 'Tournament',\n 'computers',\n 'dates',\n 'collectionB',\n 'c_a',\n 't',\n 'test',\n 'logs_tést'\n]\n",
"text": "Thislogs_t�estlooks like you have cut-n-paste and e-acute from a Mac to create the collection.The collection name will probably shows as logs_tést on a Mac but as logs_t�est everywhere else.As seen here you can have both but they are different.I don’t think you will be able to type in the � anywhere else other than on a Mac UI.So try again what has been recommended:db.getCollection(‘logs_tést’).drop()But do not type the collection name, cut-n-paste it. With cut-n-paste it seems to work:",
"username": "steevej"
},
{
"code": "mongosh> db.getCollection( \"logs_t�est\").insertOne( {})\n{ acknowledged: true,\n insertedId: ObjectId(\"6283b5c5d2acef6da4352078\") }\nmongosh> db.getCollection( \"logs_t�est\").find()\n{ _id: ObjectId(\"6283b5c5d2acef6da4352078\") }\nmongosh> db.getCollection( \"logs_t�est\").drop()\ntrue\nmongosh> db.getCollectionNames()\n[\n 'text',\n 'col2',\n 'collectionA',\n 'player',\n 'c_b',\n 'push_new_field',\n 'col1',\n 'Tournament',\n 'computers',\n 'dates',\n 'collectionB',\n 'c_a',\n 't',\n 'test',\n 'logs_tést'\n]\n",
"text": "I tried your way. But it still does not work\nScreen Shot 2022-05-17 at 22.39.24858×454 132 KB\n",
"username": "Company_HolaLabs"
},
{
"code": "",
"text": "It did work. You got true as the output of your drop().Can you share some of the documents from that collection.",
"username": "steevej"
},
{
"code": "",
"text": "It did work. You got true as the output of your drop().I think when insertOne it create a new collection same name. But it differentYou can see the log\nimage1284×547 65.6 KB\n",
"username": "Company_HolaLabs"
},
{
"code": "",
"text": "I think when insertOne it create a new collection same name. But it differentIndeed and very interesting. B-(Try deleting with Compass.",
"username": "steevej"
},
{
"code": "mongodumpdropDatabase()mongorestore",
"text": "Is this database reasonably small? If dropping the collection in Compass or another tool doesn’t work, you can use mongodump to dump it out, then dropDatabase(), then delete the physical file in the dump directory corresponding to this collection, and then mongorestore the dump (without the problematic collection)…Asya",
"username": "Asya_Kamsky"
},
{
"code": "db.getCollectionNames()BSONError: Invalid UTF-8 string in BSON document",
"text": "Try deleting with Compass.I tried use Compass in the first time. But Compass can read non UTF-8 stringIn Compass my database don’t show collections and size\nScreen Shot 2022-05-18 at 00.12.471310×1214 118 KB\nI used _MONGOSH tab in Compass and run the command\ndb.getCollectionNames()\nAnd it show:\nBSONError: Invalid UTF-8 string in BSON document",
"username": "Company_HolaLabs"
},
{
"code": "enableUtf8Validation: falseenableUtf8Validation: false",
"text": "Is this database reasonably small?My database have a collection with 3.2B documents (size: 1.3 TB). It’s not collection “logs_t�est” but this collection is in that databaseAlthough I still query normally in that database with mongoshell and Nodejs with option enableUtf8Validation: false. But I can’t use Compass to interacting with my dataIt annoys me.Do you know anyway to pass option enableUtf8Validation: false in Compass?",
"username": "Company_HolaLabs"
},
{
"code": " array = db.getCollectionNames();\n array[4] // check which one is the 'bad' one\n db.getCollection(array[4]).drop();\n",
"text": "I think I know how you can do this in the shell. At least the old shell …I think passing the name through without any cutting and pasting may successfully access the collection…Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to drop collection with name include character "�"?
|
2022-05-17T07:57:18.761Z
|
How to drop collection with name include character “�”?
| 4,615 |
|
[
"mdbw22-hackathon"
] |
[
{
"code": "Lead Developer AdvocateSenior Developer AdvocateStaff Developer Advocate",
"text": "So come, join in and ask questions. We will be sharing details and guidelines about the submission process and also the hackathon Prizes! We’d love for these sessions to be very participatory this week - so, if you have a demo to share, please reply here and we’ll send you an invite link. All participants get SWAG!!We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer AdvocateStaff Developer Advocate",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "We have moved the time of this event slightly - by 75 mins - hope that’s ok by all.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "You can re-watch it here -Looking forward to seeing all your projects!!",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Hackathon Office Hours & Demos! Final Week! US/EMEA Session 2
|
2022-05-17T12:00:10.028Z
|
Hackathon Office Hours & Demos! Final Week! US/EMEA Session 2
| 3,067 |
|
null |
[
"100daysofcode"
] |
[
{
"code": "",
"text": "Hello Everyone, I had seen a lot on #100DaysOfCode on various Social platforms and recently got inspired by @henna.s to work together on this. I would be posting every day about my 100 days of progress in this post and I hope I can convince you to follow along with me… Below I am adding progress from previous days. but will now regularly update from today.Cheers \nKushagra | Twitter | LinkedIn",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: About transitionEnd, Template Literals, Playing audio files in JavaScript, The difference between addEventListener!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3oY5e1x",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: Setting inline styles using Element.style, Transforming an element by default at the center, & Setting…\nReading time: 3 min read\nShare on Twitter: https://bit.ly/3HX8K3T",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: forEach, addEventListener — change, mousemove, CSS variable and handling it with JS, element.style.setProperty!\nReading time: 1 min read\n\nShare on Twitter: https://bit.ly/3rXDgoK",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: Arrow functions, map() and sort() will always return the same amount of items, accumulator is just a fancy word for total!\nReading time: 5 min read\nShare on Twitter: https://bit.ly/3oVd5Np",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: Little about Flexbox, toggle() method, Using includes() we can check certain word or character that we want and Logger!\nReading time: 4 min read\nShare on Twitter: https://bit.ly/3gUwPMC",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: fetch() method, regex, & pattern-matching!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3H9jSJP",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: 4 more array methods, More ES6 style code, & Some cool HTML tag!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3GWZCKY",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: getContext(), Mother-effing, & HTML Canvas!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3HWe3Ah",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: Various Dev Tools Tricks!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3BtIJ9Z",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: Tracking key-presses like shiftKey && checked-boxes & QuerySelectorAll!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3LFM0Yj",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: Create HTML5 video player and addEventListener() is better vs adding onclick=”myFunc()” to a DOM element!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3rXTWfF",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: How to Detect sequence of Key pressed, also what is Konami Code. It is nothing but Up, Up, Down, Down, Left, Right, Left..\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3LMiXCk",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: Debounce function, element.classList.add() & element.classList.remove()✨\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3I6PKzM",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: String, Number and Boolean, Objects and Arrays in JS, Difference b/w referencing and copying, About Embedded object!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3sT0Xh8",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I learned: About Local Storage of Browser Event Delegation!\nReading time: 3 min read\nShared on Twitter: https://bit.ly/3s6IGNX",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I Learned: About offsetWidth and offsetHeight, Dynamic Text Shadow using Javascript!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/36AifIn",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I learned: Sorting in JS in a really different way, Difference between toString() and join()!\nReading time: 3 min read\nShared on Twitter: https://bit.ly/355l0kn",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I learned: Array.prototype.reduce(), Nodelist vs Array!\nReading time: 2 min read\nShared on Twitter: https://bit.ly/3sekjhv",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Today I learned: Creating our own live photo booth, Tweaking the RGBA pixel in video using JavaScript & saw Image in text format! 😆\nReading time: 3 min read\nShared on Twitter: https://bit.ly/3M3QYOU",
"username": "Kushagra_Kesav"
}
] |
The Journey of #100DaysOfCode (@im_Kushagra)
|
2022-02-16T08:24:14.970Z
|
The Journey of #100DaysOfCode (@im_Kushagra)
| 19,097 |
null |
[
"aggregation",
"queries"
] |
[
{
"code": " },\n \n },\n \n {\n $group: {\n _id: {\n username: \"$username\",\n },\n videoSeconds: {\n $push: \"$videoSeconds\",\n },\n audioSeconds: {\n $push: \"$audioSeconds\",\n },\n breakSeconds: {\n $push: \"$breakSeconds\",\n },\n session: {\n $push: {\n $divide: [\n {\n $dateDiff: {\n startDate: \"$createdAt\",\n endDate: \"$exitAt\",\n unit: \"second\",\n },\n },\n 60,\n ],\n },\n },\n disengagedEmotionCount: {\n $push: \"$disengagedEmotionCount\",\n },\n engagedEmotionCount: {\n $push: \"$engagedEmotionCount\",\n },\n lunchSeconds: {\n $push: \"$lunchSeconds\",\n },\n otherEmotionsCount: {\n $push: \"$otherEmotionsCount\",\n },\n screenSeconds: {\n $push: \"$screenSeconds\",\n },\n screenTwoSeconds: {\n $push: \"$screenTwoSeconds\",\n },\n roomname: {\n $push: \"$roomname\",\n },\n //Sum of Seconds\n videoSecondsSum: {\n $sum: \"$videoSeconds\",\n },\n audioSecondsSum: {\n $sum: \"$audioSeconds\",\n },\n breakSecondsSum: {\n $sum: \"$breakSeconds\",\n },\n lunchSecondsSum: {\n $sum: \"$lunchSeconds\",\n },\n screenSecondsSum: {\n $sum: \"$screenSeconds\",\n },\n screenTwoSecSum: {\n $sum: \"$screenTwoSeconds\",\n },\n engagedEmotionCountSum: {\n $sum: \"$engagedEmotionCount\",\n },\n disengagedEmotionCountSum: {\n $sum: \"$disengagedEmotionCount\",\n },\n otherEmotionsCountSum: {\n $sum: \"$otherEmotionsCount\",\n },\n },\n \n },\n {$sort:{username:1}},\n //Seconds to Minutes\n {\n $project: {\n videoMint: { $divide: [\"$videoSecondsSum\", 60] },\n audioMint: { $divide: [\"$audioSecondsSum\", 60] },\n breakMint: { $divide: [\"$breakSecondsSum\", 60] },\n lunchMint: { $divide: [\"$lunchSecondsSum\", 60] },\n screenMint: { $divide: [\"$screenSecondsSum\", 60] },\n screenTwoMint: { $divide: [\"$screenTwoSecSum\", 60] },\n roomname: \"$roomname\",\n session: { $sum: [\"$session\"] },\n //multiply Count with 5\n engagedEmotionSeconds: {\n $multiply: [\"$engagedEmotionCountSum\", 5],\n },\n disengagedEmotionSeconds: {\n $multiply: [\"$disengagedEmotionCountSum\", 5],\n },\n otherEmotionsSeconds: {\n $multiply: [\"$otherEmotionsCountSum\", 5],\n },\n },\n },\n //Rounding Data\n {\n $project: {\n videoMintTotal: { $round: [\"$videoMint\", 0] },\n audioMintTotal: { $round: [\"$audioMint\", 0] },\n breakMintTotal: { $round: [\"$breakMint\", 0] },\n lunchMintTotal: { $round: [\"$lunchMint\", 0] },\n screenMintTotal: { $round: [\"$screenMint\", 0] },\n screenTwoMintTotal: { $round: [\"$screenTwoMint\", 0] },\n roomname: \"$roomname\",\n session: \"$session\",\n //convert count to minutes\n engagedEmotionMint: { $divide: [\"$engagedEmotionSeconds\", 60] },\n disengagedEmotionMint: {\n $divide: [\"$disengagedEmotionSeconds\", 60],\n },\n otherEmotionsMint: { $divide: [\"$otherEmotionsSeconds\", 60] },\n },\n },\n {\n $project: {\n videoMintTotal: \"$videoMintTotal\",\n audioMintTotal: \"$audioMintTotal\",\n breakMintTotal: \"$breakMintTotal\",\n lunchMintTotal: \"$lunchMintTotal\",\n screenMintTotal: \"$screenMintTotal\",\n screenTwoMintTotal: \"$screenTwoMintTotal\",\n roomname: \"$roomname\",\n session: { $round: [\"$session\", 0] },\n engagedEmotionMintTotal: { $round: [\"$engagedEmotionMint\", 0] },\n disengagedEmotionMintTotal: {\n $round: [\"$disengagedEmotionMint\", 2],\n },\n otherEmotionsMintTotal: { $round: [\"$otherEmotionsMint\", 0] },\n },\n },\n ]);",
"text": "I am trying to sort the data in al alphabectival order but the results keep changing everytime I run the query.\nNot sure if that some something to with aggregation or what is going one. Here is my query\ndb.getCollection(“reportings”).aggregate\n([\n{\n$match: {\ncreatedAt: {\n$gte: ISODate(“2022-05-01T22:58:18.987+0000”),\n$lte: ISODate(“2022-05-13T22:58:18.987+0000”)\n},",
"username": "Sameer_maini"
},
{
"code": "",
"text": "You sort with {username:1} but you do not have a field named username after the $group stage.The field name _id.username after the $group stage.",
"username": "steevej"
},
{
"code": "",
"text": "Make sense. what would be the syntax of the field in the sort.\nI tried {$sort:{_id.username:-1}}, but get and error",
"username": "Sameer_maini"
},
{
"code": "{ \"$sort\" : { \"_id.username\" : -1 } }\n",
"text": "When using dot notations you need quotes. So try",
"username": "steevej"
},
{
"code": "",
"text": "That did fix the syntax error , how ever the sory is still not working.\n\nHere is the results I am getting so not sure why it would sort it alphabetically.\nAlso I would amiss if I didnt thank you for helping out",
"username": "Sameer_maini"
},
{
"code": "",
"text": "try to sort in the last stage",
"username": "steevej"
},
{
"code": "_id.usernameusername",
"text": "The field you sorted by was called _id.username - it appears to be called username in your screenshot. Do you have other stages that you didn’t include in the original pipeline you posted?What are you using to run this aggregation and look at its results? Could the client be the actual problem?Asya",
"username": "Asya_Kamsky"
}
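An illustrative sketch of the suggestion above, assuming the grouped field really ends up as _id.username and that no later stage renames it (the stage contents are abbreviated placeholders, not copied from the original pipeline):

    db.getCollection("reportings").aggregate([
      { $match: { /* createdAt date range as before */ } },
      { $group: { _id: { username: "$username" } /* ...accumulators... */ } },
      // ...the existing $project stages, which keep _id by default...
      { $sort: { "_id.username": 1 } }  // quoted dot notation, placed as the last stage
    ])

Sorting at the very end also guarantees that no later stage reorders or reshapes the documents.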
] |
Aggregate Query and Sort OPerations
|
2022-05-13T16:35:32.979Z
|
Aggregate Query and Sort OPerations
| 1,470 |
null |
[
"aggregation",
"data-modeling",
"ruby",
"mongoid-odm"
] |
[
{
"code": "class Bill\n ...\n belongs_to :customer, index: true\n ...\nend\n\nclass Customer\n ....\n has_many :bills\n ...\nend\n[55] pry(main)> c_slow.class\n=> Customer\n[58] pry(main)> c_slow.bills.count\nMONGODB | pro-app-mongodb-05:27017 req:1030 conn:1:1 | db_api_production.aggregate | STARTED | {\"aggregate\"=>\"bills\", \"pipeline\"=>[{\"$match\"=>{\"deleted_at\"=>nil, \"customer_id\"=>BSON::ObjectId('60c76b9e21225c002044f6c5')}}, {\"$group\"=>{\"_id\"=>1, \"n\"=>{\"$sum\"=>1}}}], \"cursor\"=>{}, \"$db\"=>\"db_api_production\", \"$clusterTime\"=>{\"clusterTime\"=>#...\nMONGODB | pro-app-mongodb-05:27017 req:1030 | db_api_production.aggregate | SUCCEEDED | 0.008s\n=> 523\n[59] pry(main)> c_fast.bills.count\nMONGODB | pro-app-mongodb-05:27017 req:1031 conn:1:1 | db_api_production.aggregate | STARTED | {\"aggregate\"=>\"bills\", \"pipeline\"=>[{\"$match\"=>{\"deleted_at\"=>nil, \"customer_id\"=>BSON::ObjectId('571636f44a506256d6000003')}}, {\"$group\"=>{\"_id\"=>1, \"n\"=>{\"$sum\"=>1}}}], \"cursor\"=>{}, \"$db\"=>\"db_api_production\", \"$clusterTime\"=>{\"clusterTime\"=>#...\nMONGODB | pro-app-mongodb-05:27017 req:1031 | db_api_production.aggregate | SUCCEEDED | 0.135s\n=> 35913\n[60] pry(main)> c_slow.bills.excludes(_id: BSON::ObjectId('62753df4a54d7584e56ea829')).order(reference: :desc).limit(1000).pluck(:reference, :_id)\nMONGODB | pro-app-mongodb-05:27017 req:1083 conn:1:1 | db_api_production.find | STARTED | {\"find\"=>\"bills\", \"filter\"=>{\"deleted_at\"=>nil, \"customer_id\"=>BSON::ObjectId('60c76b9e21225c002044f6c5'), \"_id\"=>{\"$ne\"=>BSON::ObjectId('62753df4a54d7584e56ea829')}}, \"limit\"=>1000, \"sort\"=>{\"reference\"=>-1}, \"projection\"=>{\"reference\"=>1, \"_id\"=>1...\nMONGODB | pro-app-mongodb-05:27017 req:1083 | db_api_production.find | SUCCEEDED | 10.075s\nMONGODB | pro-app-mongodb-05:27017 req:1087 conn:1:1 | db_api_production.getMore | STARTED | {\"getMore\"=>#<BSON::Int64:0x0000558bcd7ba5f8 @value=165481790189>, \"collection\"=>\"bills\", \"$db\"=>\"db_api_production\", \"$clusterTime\"=>{\"clusterTime\"=>#<BSON::Timestamp:0x0000558bcd7a4b90 @seconds=1652511506, @increment=1>, \"signature\"=>{\"hash\"=><...\nMONGODB | pro-app-mongodb-05:27017 req:1087 | db_api_production.getMore | SUCCEEDED | 1.181s\n\n[61] pry(main)> c_fast.bills.excludes(_id: BSON::ObjectId('62753df4a54d7584e56ea829')).order(reference: :desc).limit(1000).pluck(:reference, :_id)\nMONGODB | pro-app-mongodb-05:27017 req:1091 conn:1:1 | db_api_production.find | STARTED | {\"find\"=>\"bills\", \"filter\"=>{\"deleted_at\"=>nil, \"customer_id\"=>BSON::ObjectId('571636f44a506256d6000003'), \"_id\"=>{\"$ne\"=>BSON::ObjectId('62753df4a54d7584e56ea829')}}, \"limit\"=>1000, \"sort\"=>{\"reference\"=>-1}, \"projection\"=>{\"reference\"=>1, \"_id\"=>1...\nMONGODB | pro-app-mongodb-05:27017 req:1091 | db_api_production.find | SUCCEEDED | 0.004s\nMONGODB | pro-app-mongodb-05:27017 req:1092 conn:1:1 | db_api_production.getMore | STARTED | {\"getMore\"=>#<BSON::Int64:0x0000558bcd89c4d0 @value=166614148534>, \"collection\"=>\"bills\", \"$db\"=>\"db_api_production\", \"$clusterTime\"=>{\"clusterTime\"=>#<BSON::Timestamp:0x0000558bcd88eab0 @seconds=1652511516, @increment=1>, \"signature\"=>{\"hash\"=><...\nMONGODB | pro-app-mongodb-05:27017 req:1092 | db_api_production.getMore | SUCCEEDED | 0.013s\n[1] pry(main)> Customer.all.collect do |c|\n[1] pry(main)* starting = Process.clock_gettime(Process::CLOCK_MONOTONIC)\n[1] pry(main)* c.bills.excludes(_id: BSON::ObjectId('62753df4a54d7584e56ea829')).order(reference: 
:desc).limit(1000).pluck(:reference_string, :id);nil\n[1] pry(main)* ending = Process.clock_gettime(Process::CLOCK_MONOTONIC)\n[1] pry(main)* [c.acronym, ending - starting]\n[1] pry(main)* end\n [23] pry(main)> h_slow[\"queryPlanner\"][\"parsedQuery\"]\n=> {\"$and\"=>\n [{\"customer_id\"=>{\"$eq\"=>BSON::ObjectId('60c76b9e21225c002044f6c5')}},\n {\"deleted_at\"=>{\"$eq\"=>nil}},\n {\"$nor\"=>[{\"_id\"=>{\"$eq\"=>BSON::ObjectId('62753df4a54d7584e56ea829')}}]}]}\n[24] pry(main)> h_fast[\"queryPlanner\"][\"parsedQuery\"]\n=> {\"$and\"=>\n [{\"customer_id\"=>{\"$eq\"=>BSON::ObjectId('571636f44a506256d6000003')}},\n {\"deleted_at\"=>{\"$eq\"=>nil}},\n {\"$nor\"=>[{\"_id\"=>{\"$eq\"=>BSON::ObjectId('62753df4a54d7584e56ea829')}}]}]}\n\n\"inputStage\": {\n \"advanced\": 1000,\n \"direction\": \"backward\",\n \"dupsDropped\": 0,\n \"dupsTested\": 0,\n \"executionTimeMillisEstimate\": 0,\n \"indexBounds\": {\n \"reference\": [\n \"[MaxKey, MinKey]\"\n ]\n },\n \"indexName\": \"reference_1\",\n \"indexVersion\": 2,\n \"invalidates\": 0,\n \"isEOF\": 0,\n \"isMultiKey\": false,\n \"isPartial\": false,\n \"isSparse\": false,\n \"isUnique\": false,\n \"keyPattern\": {\n \"reference\": 1\n },\n \"keysExamined\": 1000,\n \"multiKeyPaths\": {\n \"reference\": []\n },\n \"nReturned\": 1000,\n \"needTime\": 0,\n \"needYield\": 0,\n \"restoreState\": 10,\n \"saveState\": 10,\n \"seeks\": 1,\n \"seenInvalidated\": 0,\n \"stage\": \"IXSCAN\",\n \"works\": 1000\n },\n \"invalidates\": 0,\n \"isEOF\": 0,\n \"nReturned\": 1000,\n \"needTime\": 0,\n \"needYield\": 0,\n \"restoreState\": 10,\n \"saveState\": 10,\n \"stage\": \"FETCH\",\n \"works\": 1000\n },\n \"invalidates\": 0,\n \"isEOF\": 1,\n \"limitAmount\": 1000,\n \"nReturned\": 1000,\n \"needTime\": 0,\n \"needYield\": 0,\n \"restoreState\": 10,\n \"saveState\": 10,\n \"stage\": \"LIMIT\",\n \"works\": 1001\n },\n \"executionSuccess\": true,\n \"executionTimeMillis\": 7,\n \"nReturned\": 1000,\n \"totalDocsExamined\": 1000,\n \"totalKeysExamined\": 1000\n }\n\n \"inputStage\": {\n \"advanced\": 604411,\n \"direction\": \"backward\",\n \"dupsDropped\": 0,\n \"dupsTested\": 0,\n \"executionTimeMillisEstimate\": 320,\n \"indexBounds\": {\n \"reference\": [\n \"[MaxKey, MinKey]\"\n ]\n },\n \"indexName\": \"reference_1\",\n \"indexVersion\": 2,\n \"invalidates\": 0,\n \"isEOF\": 1,\n \"isMultiKey\": false,\n \"isPartial\": false,\n \"isSparse\": false,\n \"isUnique\": false,\n \"keyPattern\": {\n \"reference\": 1\n },\n \"keysExamined\": 604411,\n \"multiKeyPaths\": {\n \"reference\": []\n },\n \"nReturned\": 604411,\n \"needTime\": 0,\n \"needYield\": 0,\n \"restoreState\": 6138,\n \"saveState\": 6138,\n \"seeks\": 1,\n \"seenInvalidated\": 0,\n \"stage\": \"IXSCAN\",\n \"works\": 604412\n },\n \"invalidates\": 0,\n \"isEOF\": 1,\n \"nReturned\": 523,\n \"needTime\": 603888,\n \"needYield\": 0,\n \"restoreState\": 6138,\n \"saveState\": 6138,\n \"stage\": \"FETCH\",\n \"works\": 604412\n },\n \"invalidates\": 0,\n \"isEOF\": 1,\n \"limitAmount\": 1000,\n \"nReturned\": 523,\n \"needTime\": 603888,\n \"needYield\": 0,\n \"restoreState\": 6138,\n \"saveState\": 6138,\n \"stage\": \"LIMIT\",\n \"works\": 604412\n},\n\"executionSuccess\": true,\n\"executionTimeMillis\": 9472,\n\"nReturned\": 523,\n\"totalDocsExamined\": 604411,\n\"totalKeysExamined\": 604411\n}\n",
"text": "I’m using mongoid gem ’mongoid’, ’~> 7.2.4’ (mongoDB 3.6) with rails (5) and I have a database with customer collections and bills with this relation:then in a pry console I test with two clients:until this moment it seems correct but when I execute this query:The slow customer is taking 10 seconds and the fast one is taking 0.004s in the same query. and the slow customer has less than 600 documents and the fast client more than 35000. it has no sense for me.We did on the bills collection a Reindex, we take the query over all customers and it seems too work at the beginnign but in thre second query it went slow again but the same customers are always slow than the fastest oneI cannot apply explain on pluck query. I reviewd the index and it worked correctly placed in the collection\nbut doing explain it is slow on the same queryMONGODB | pro-app-mongodb-05:27017 req:1440 | dbapiproduction.explain | SUCCEEDED | 10.841s\nMONGODB | pro-app-mongodb-05:27017 req:2005 | dbapiproduction.explain | SUCCEEDED | 0.006sobviously time, but also docsExaminedthe query is the same, changing obyously de ids:Why happen this differences, and what I can do to correct this collection",
"username": "Antonio_Juan_Querol_Giner"
},
{
"code": "mongodmongod",
"text": "It’s hard to tell what’s going on but it may be easier if you can access the mongod log file and include the log lines generated when the slow query runs - it will show exactly what’s happening during the query which is different than when you run explain.Is it possible for you to access the mongod logs? Also MongoDB 3.6 is really old - are you able to upgrade to a more recent version?Asya",
"username": "Asya_Kamsky"
}
] |
Poor perfomance in mongoid rails queries depending on index
|
2022-05-15T09:43:54.998Z
|
Poor perfomance in mongoid rails queries depending on index
| 3,115 |
null |
[
"aggregation",
"queries",
"node-js",
"atlas-search"
] |
[
{
"code": "const searchOption = [\n {\n $search: {\n text: {\n query: \"hi\",\n path: \"english\",\n },\n },\n },\n { $project: { _id: 0, french: 1, english: 1, score: { $meta: \"searchScore\" } } },\n { $limit: 5 },\n];\n\nconst result = await Greetings.aggregate(searchOption, { cursor: { batchSize: 5 } }).toArray();\n\n[\n {\n english: \"it’s his\",\n french: \"c'est le sien\",\n score: 2.362138271331787,\n },\n {\n english: \"hi\",\n french: \"salut\",\n score: 2.362138271331787,\n },\n {\n english: \"his\",\n french: \"le sien\",\n score: 2.362138271331787,\n },\n {\n english: \"it’s his failure to arrange his\",\n french: \"c'est son incapacité à organiser son\",\n score: 2.2482824325561523,\n },\n {\n english: \"it’s his failure to arrange his time\",\n french: \"c'est son incapacité à organiser son temps\",\n score: 2.0995540618896484,\n },\n];\n\n",
"text": "I’m having issues getting the right/best translation for ‘hi’ from English to French. After some debugging I discovered that the first three(3) documents returned from my aggregation has the same score of ‘2.362138271331787’ each.I’m expecting ‘hi’ to have a higher score since it has an exact match with the same search query, but ‘it’s his’ and ‘his’ seems to have the same score too with ‘hi’.",
"username": "Chukwuemeka_Maduekwe"
},
{
"code": "Index AnalyserSearch AnalyserDynamic Mappings",
"text": "Hi @Chukwuemeka_Maduekwe,Thanks for sharing the pipeline details and the output you’re currently receiving.Can you confirm the following details:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"english\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"type\": \"string\"\n }\n ],\n \"french\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"analyzer\": \"lucene.french\",\n \"searchAnalyzer\": \"lucene.french\",\n \"type\": \"string\"\n }\n ],\n \"spanish\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"analyzer\": \"lucene.spanish\",\n \"searchAnalyzer\": \"lucene.spanish\",\n \"type\": \"string\"\n }\n ]\n }\n }\n}\n",
"text": "Index Analyzer: lucene.standard\nSearch Analyzer: lucene.standard\nDynamic Mapping On",
"username": "Chukwuemeka_Maduekwe"
}
] |
Multiple documents having equal search score in Atlas Search
|
2022-05-10T15:40:06.058Z
|
Multiple documents having equal search score in Atlas Search
| 2,356 |
[
"aggregation",
"queries",
"monitoring"
] |
[
{
"code": "match\nproject\nfacet -> {\n resA -[ group by A],\n resB -[ group by B],\n resC -[ group by C]\n }\n{uid: 1}$match$facet \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n ??\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": { \"uid\": 1 },\n \"indexName\": \"uid_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": { \"uid\": [] },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"uid\": [\n \"[\\\"cdc67cf2-0c23-4d32-b103-f78503824b18\\\", \\\"cdc67cf2-0c23-4d32-b103-f78503824b18\\\"]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\": [\n ??\n ]\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 8308,\n \"executionTimeMillis\": 1397,\n \"totalKeysExamined\": 8308,\n \"totalDocsExamined\": 8308,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 8308,\n \"executionTimeMillisEstimate\": 82,\n \"works\": 8309,\n \"advanced\": 8308,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 10,\n \"restoreState\": 10,\n \"isEOF\": 1,\n \"transformBy\": {\n ??\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 8308,\n \"executionTimeMillisEstimate\": 15,\n \"works\": 8309,\n \"advanced\": 8308,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 10,\n \"restoreState\": 10,\n \"isEOF\": 1,\n \"docsExamined\": 8308,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 8308,\n \"executionTimeMillisEstimate\": 4,\n \"works\": 8309,\n \"advanced\": 8308,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 10,\n \"restoreState\": 10,\n \"isEOF\": 1,\n \"keyPattern\": { \"uid\": 1 },\n \"indexName\": \"uid_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": { \"uid\": [] },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"uid\": [\n \"[\\\"cdc67cf2-0c23-4d32-b103-f78503824b18\\\", \\\"cdc67cf2-0c23-4d32-b103-f78503824b18\\\"]\"\n ]\n },\n \"keysExamined\": 8308,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n },\n \"allPlansExecution\": [\n {\n \"nReturned\": 101,\n \"executionTimeMillisEstimate\": 0,\n \"totalKeysExamined\": 101,\n \"totalDocsExamined\": 101,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 101,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 101,\n \"advanced\": 101,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"transformBy\": {\n ??\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 101,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 101,\n \"advanced\": 101,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"docsExamined\": 101,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 101,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 101,\n \"advanced\": 101,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"keyPattern\": { \"uid\": 1 },\n \"indexName\": \"uid_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": { \"uid\": [] },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"uid\": [\n \"[\\\"cdc67cf2-0c23-4d32-b103-f78503824b18\\\", 
\\\"cdc67cf2-0c23-4d32-b103-f78503824b18\\\"]\"\n ]\n },\n \"keysExamined\": 101,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n }\n } ,\n \n ]\n }\n },\n \"nReturned\": 8308,\n \"executionTimeMillisEstimate\": 94\n},\n{\n \"$facet\": {\n \"resA\": [\n {\n \"$teeConsumer\": {},\n \"nReturned\": 8308,\n \"executionTimeMillisEstimate\": 1229\n },\n {\n \"$match\": { \"resA\": { \"$not\": { \"$eq\": null } } },\n \"nReturned\": 8308,\n \"executionTimeMillisEstimate\": 1248\n },\n {\n \"$group\": {\n \"_id\": \"$resA\",\n \"count\": { \"$sum\": { \"$const\": 1 } }\n },\n \"nReturned\": 374,\n \"executionTimeMillisEstimate\": 1250\n },\n {\n \"$sort\": { \"sortKey\": { \"count\": -1 } },\n \"nReturned\": 374,\n \"executionTimeMillisEstimate\": 1250\n },\n {\n \"$project\": {\n \"_id\": true,\n \"label\": \"$_id\",\n \"count\": \"$count\",\n \"percent\": {\n \"$round\": [\n {\n \"$ifNull\": [\n {\n \"$multiply\": [\n { \"$divide\": [\"$count\", { \"$const\": 8308 }] },\n { \"$const\": 100 }\n ]\n },\n { \"$const\": 0 }\n ]\n }\n ]\n }\n },\n \"nReturned\": 374,\n \"executionTimeMillisEstimate\": 1250\n }\n ],\n.\n.\n.\n.\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 1388\n}\nexecutionStatsnreturned : 8308$facetnreturned : 1examined:returnednreturned : 1examined:returned$facetfacetexamined:returnedQuery Targeting: Scanned Objects / Returned has gone above 1000",
"text": "I have a query with following stages -The index i have used is {uid: 1} which is basically filtering out in $match stage with 8308 records below. I have to group on 8308 records in $facet with different fields in each result set(resA, resB…).When i do explain i get following results -My Question is how nreturned gets evaluated and i have gone through the Docs . But its not clear to me that which nreturned gets considered at the end. Suppose in the above executionStats the nreturned : 8308 but the one at the last below $facet shows nreturned : 1 . So to determine the examined:returned ratio. Which param is being considered. I have checked my mongo atlas profiler stats. It shows nreturned : 1 , that makes the examined:returned ratio to 8308.Is this because of $facet stage? Because i’d need processed results in facet stage, as i have mutiple grouping separate in facet as resA, resB … If the examined:returned is 8308. Is this problematic? My query needs to group on the 8308 records with multiple fields in each facet stage. Also, the atlas throws an alert Query Targeting: Scanned Objects / Returned has gone above 1000 . Is this the cause?",
"username": "pkp"
},
{
"code": "",
"text": "Looks like the alert is incorrect and you can safely ignore it - your query is as efficient as it can possibly be from what you post here.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How does "nreturned" work in mongo explain() in $facet/$group to determine "examined:returned" ratio?
|
2022-05-17T05:05:15.327Z
|
How does “nreturned” work in mongo explain() in $facet/$group to determine “examined:returned” ratio?
| 2,976 |
|
null |
[
"compass"
] |
[
{
"code": "",
"text": "After being able to start mongod server and mongo shell in WSL2, I wish to use MongoDB Compass installed on Windows to connect with the mongod server in WSL2. However I haven’t been able to do so. I tried to connect with URI “mongodb://localhost:27017” in Compass after starting mongod in WSL2 at port 27017, but the attempt was not successful. This is surprising because WSL2 supports local host forwarding (i.e., 127.0.0.1 on WSL2 is the same 127.0.0.1 on Windows).Any suggestion will be appreciated. Thanks!",
"username": "Tianzhi_Li"
},
{
"code": "",
"text": "Check these links.May help",
"username": "Ramachandra_Tummala"
},
{
"code": "python -m http.server --bind 127.0.0.1 8000",
"text": "Thanks Ramachandra. Unfortunately, all methods in the first link did not work (e.g. I tried wsl --shutdown and disabling fast startup), and the second link answered the opposite of my question (connect to Windows MongoDB from WSL).As a guess, is the issue related to how mongodb protocol is configured? For example, If I run python -m http.server --bind 127.0.0.1 8000 in WSL and visit “http://127.0.0.1:8000” in Firefox on Windows, everything works. I can say that localhost forwarding of WSL works for HTTP protocol. Is it possible that MongoDB network protocol does have the same support?",
"username": "Tianzhi_Li"
},
{
"code": "$ mongod --bind-ip-all\n$ ip addr | grep eth0 \n",
"text": "After much tweaking, I found a workaround but it may have security risk. I would appreciate if anyone could recognize the risk and recommend a better solution.Step 1: (in WSL2) Start mongod but bind to all IP, not just localhostStep 2: (in WSL2) Find the IP address of WSL2Copy the IP address right after “inet”. For example, if the result returns “inet 162.38.25.44/20 brd 162.38.30.255 scope global eth0”, then the IP address is 162.38.25.44.Step 3: (in Windows) Suppose my WSL2 IP address is 162.38.25.44, then run MongoDB compass and connect to the address “mongodb://162.38.25.44:27017”What I don’t understand is why the first step requires binding to all IP, while binding to just localhost or 127.0.0.1 will not work. Any suggestions?",
"username": "Tianzhi_Li"
},
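As a hedged alternative to binding on every interface, mongod can be limited to specific addresses; the eth0 address below is only a placeholder for whatever `ip addr` reports inside WSL2, and whether this removes the need for binding to all IPs in this WSL2 setup has not been verified here:

    # bind only to loopback plus the current WSL2 eth0 address (placeholder IP)
    mongod --bind_ip 127.0.0.1,172.20.144.2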
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Connect MongoDB Compass (Windows) to WSL2 mongod server
|
2022-05-03T18:10:16.923Z
|
Connect MongoDB Compass (Windows) to WSL2 mongod server
| 13,020 |
null |
[
"aggregation"
] |
[
{
"code": "db.messages.aggregate([\n\t{ \n\t$match: \n\t\t{\n\t\t$and: [\n {user_id: \"256f5280-fb49-4ad6-b7f5-65c4329d46e0\"},\n\t\t\t{time: {$gt: 0, $lt: 1652471890}}\n\t\t]\n\t\t}\n\t},\n\t{ \n\t$project: \n\t\t{\n\t\tamount: $count,\n\t\tmoreThanZero: \n\t\t\t{\n\t\t\t$cond: [ { $gt: [ \"$amount\", 0 ] }, 1, 0]\n\t\t\t}\n }\n\t}\n])\nuncaught exception: ReferenceError: $count is not defined : @(shell):6:35",
"text": "Hello everybody,I am trying to count how much messages exists on my db collection and return a custom key: value, If this greater than 1 countable it’s return true(1) else return false(0).I tried the shell below, but not working yet… I think I am near to correct aggregation, but I am don’t know.I am trying to get a $count as variable to compare it, if it’s greater than 1 my return will be 1 else 0. But it isnt working.uncaught exception: ReferenceError: $count is not defined : @(shell):6:35",
"username": "Rick_Dias"
},
{
"code": "",
"text": "You get$count is not definedbecause it is on the right side of a colon. It is a value. A value has to be in quote if it is not a symbol or a number. You quoted $amount correctly.But putting $count would not do what you want to do. This is not how you count documents. You may use a $group stage with the $sum accumulator or a $count stage.Another thing that would not work is using $amount in the same $project stage that creates it. A field created by $project or $set is only available in the next stage.",
"username": "steevej"
},
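A minimal sketch of that advice, reusing the filter from the original query; the output shape (amount plus a 0/1 flag) is an assumption about what the poster wants, and note that if nothing matches, $group emits no document at all:

    db.messages.aggregate([
      { $match: {
          user_id: "256f5280-fb49-4ad6-b7f5-65c4329d46e0",
          time: { $gt: 0, $lt: 1652471890 }
      } },
      { $group: { _id: null, amount: { $sum: 1 } } },   // count the matching documents
      { $project: {                                     // $amount is usable here, one stage later
          _id: 0,
          amount: 1,
          moreThanZero: { $cond: [ { $gt: [ "$amount", 0 ] }, 1, 0 ] }
      } }
    ])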
{
"code": "",
"text": "This answer was quite enlightening, I got help with my Mongo Query, but it also helped me a lot to understand some concepts! Thanks.",
"username": "Rick_Dias"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Count messages and return a custom field (key/value)
|
2022-05-16T19:58:43.522Z
|
Count messages and return a custom field (key/value)
| 2,586 |
null |
[
"java",
"android"
] |
[
{
"code": "{\n \"dateTime\": {\n \"$date\": {\n \"$numberLong\": \"1644483300000\"\n }\n },\n \npublic class RealmLogEntry extends RealmObject {\n//public class RealmLogEntry implements RealmModel {\n\n @PrimaryKey\n private String _id;\n\n String userId;\n\n String dateTime;\norg.bson.codecs.configuration.CodecConfigurationException: An exception occurred when decoding using the AutomaticPojoCodec.\n Decoding into a 'RealmLogEntry' failed with the following exception:\n \n Failed to decode 'RealmLogEntry'. Decoding 'dateTime' errored with: readString can only be called when CurrentBSONType is STRING, not when CurrentBSONType is DATE_TIME.\n",
"text": "Hi,\nI’m trying to use realm from an android app. I’m working my way through the tutorial. I’ve setup a simple watch. It appears that I need to convert a mongo date to a string , but the question is how do I do this ?\nThanksThis is the field in the mongo docThis is part of the RealmObject layout for this (java)I have setup a watch on the collection, but when I change the collection (via postman) , I get this error when running the android app using Java in android studio. I understand the error, I’m trying to figure out how to move past it. i.e. convert the dateTime on the mongoDB doc to a string, I’ve tried changing from a String to a Date type in the Realm Object layout but I can’t get that to work either.",
"username": "Robert_Benson"
},
{
"code": "",
"text": "I’m sorry I can’t help you. I did not progress with this field , I commented it out. I am using mongoDB as a beginner, to evaluate for further use.Good luck.",
"username": "Robert_Benson"
},
{
"code": "",
"text": "If your dates are stored using the date data type you should manipulate them as data object in your code.If your dates are not stored using the date data type you should migrate your data to store the date as date data type rather than string.You then format the date, using the user’s LOCALE, only when you present it to the user.",
"username": "steevej"
}
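If the second case applies (dates stored as strings), a server-side migration sketch in mongosh could look like this; the collection and field names are taken from this thread but are only assumptions, and $toDate with an update pipeline needs MongoDB 4.2+:

    // convert string timestamps into real BSON dates, in place (illustrative only)
    db.RealmLogEntry.updateMany(
      { dateTime: { $type: "string" } },
      [ { $set: { dateTime: { $toDate: "$dateTime" } } } ]
    )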
] |
How do I convert a mongo date to a string
|
2022-05-11T12:55:27.409Z
|
How do I convert a mongo date to a string
| 4,786 |
null |
[
"node-js",
"crud"
] |
[
{
"code": "",
"text": "Hi I have looked around and am currently going over the docs but wanted to ask.I’m trying to avoid duplicate entries into by db. My application crawls the web and returns arrays of objects which when I use the insertMany() function. updates my db with all the objects with new id’s generated as Id like, but the problem is when the crawl run again it’ll return a “new” array of objects which will likely be identical to the last one. Using insert as I have will create duplicate entries. I tried using updateMany with upsert: true but it wont accept my array of objects in the same way. I can do a single object which then has the array of objects but that’s not desirableHow can I pass in many objects at once using upsert like with insert([{},{}.{}]) creating new id’s for objects that don’t exist and updating existing ones?\nWondering am I better off storing the value in some variable to use as a filter to check new arrays coming in. and just use insertMany()?",
"username": "Billy_Best"
},
{
"code": "",
"text": "You will need bulkWrite filled with one updateOne: document per object from crawler.Each updateOne: will use upsert:true in order to insert the document if it does not exist and to update it or not if it does.The filter: will contains fields from you input object that determine the uniqueness of your documents. The update: will contains fields that you want to modify.",
"username": "steevej"
},
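A sketch of that bulkWrite shape with the Node.js driver; the url field and the items collection are placeholders for whatever actually defines uniqueness in the crawled data:

    // one upserting updateOne per crawled object (illustrative field names)
    const ops = crawledItems.map(item => ({
      updateOne: {
        filter: { url: item.url },                             // field(s) that define uniqueness
        update: { $set: { title: item.title, seenAt: new Date() } },
        upsert: true                                           // insert when missing, update otherwise
      }
    }));
    await db.collection("items").bulkWrite(ops, { ordered: false });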
{
"code": "",
"text": "Thanks for the reply. I kind of thought so just wanted to check. Often in these situations I gravitate to doing something more complicated and someone comes around and is like why don’t you just do this lol.",
"username": "Billy_Best"
},
{
"code": "on",
"text": "There is another option and that’s inserting new documents into a new (temp) collection, then doing an aggregation on that collection into your “real” collection with appropriate options for cases where the document already exists and nothing has changed. The on field(s) would be the immutable fields that determine that it’s “the same” document…Asya",
"username": "Asya_Kamsky"
},
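A rough mongosh sketch of that staging-collection idea; the collection names and the on field are assumptions, and $merge requires a unique index on the on field(s) in the target collection:

    // 1. load the freshly crawled batch into a scratch collection
    db.items_staging.insertMany(crawledItems);

    // 2. merge into the real collection, matching on an immutable key
    db.items_staging.aggregate([
      { $merge: {
          into: "items",
          on: "url",                   // needs a unique index on items.url
          whenMatched: "merge",        // update documents that already exist
          whenNotMatched: "insert"     // insert the new ones
      } }
    ]);

    // 3. drop the scratch collection when done
    db.items_staging.drop();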
{
"code": "",
"text": "Thank you for this incite. The initial conception of the idea I am working on had a tmp data storage stage but I have not implemented this yet and kind of forgot about it til now. Just a nooby using the mern stack for the first time haha.\nWould you make your tmp collection programatically then drop it after? Or would tmp be an actual collection in the db?",
"username": "Billy_Best"
},
{
"code": "",
"text": "I’m using the term “temp” figuratively, i.e. your application would create it, populate it and then after merging the results into the “regular” collection it would then be dropped by same code.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Gotcha. Thanks again.",
"username": "Billy_Best"
},
{
"code": "",
"text": "What would be the pros of this alternative?I feel it would be more resource intensive compared to bulkWrite.",
"username": "steevej"
},
{
"code": "",
"text": "From my perspective in this case bulkWrite is the more elegant solution and what I am using to ensure no duplicates are entered. I think Asya was pointing out it was possible. Which reminded me that you can run data through some pipeline before entering it. Not sure exactly when I would do this as for my usecase I have been formatting my data before entering it so its already clean I guess you can say and any new data I want to derive from that I would just do aggregation on my existing collections I presume. I haven’t got that far haha",
"username": "Billy_Best"
},
{
"code": "",
"text": "I am all for knowing the alternatives.At this point, I can see a pro for the temporary collection alternative. You have a log or history of the input, specially if you do not delete the documents inserted right after you processed them. You can maintained status information about when and how they are processed. And you can use a TTL index to eventually automatically delete them.I wanted to fire the discussion to see if there are more pros or cons that I do not see.",
"username": "steevej"
},
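For the history idea, a TTL index sketch on such a staging collection might look like the following; the field name and the 24-hour expiry are placeholders:

    // documents expire roughly 24 hours after their insertedAt timestamp
    db.items_staging.createIndex({ insertedAt: 1 }, { expireAfterSeconds: 86400 });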
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Batch insert/upsert. Avoiding duplicates
|
2022-05-15T00:55:05.672Z
|
Batch insert/upsert. Avoiding duplicates
| 8,429 |
null |
[
"aggregation"
] |
[
{
"code": "{\n\t\"_id\" : ObjectId(\"606b7031a0ccf722226a85ae\"),\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2021-04-05T20:16:49.893343Z\",\n\t\"isActive\" : true,\n\t\"staffId\" : [\n\t\t\"606b6b44a0ccf72222ce375a\"\n\t],\n\t\"subjectName\" : \"English\",\n\t\"teamId\" : ObjectId(\"6069a6a9a0ccf704e7f4b537\"),\n\t\"updatedAt\" : \"2021-04-05T20:16:49.893382Z\",\n\t\"syllabus\" : [\n\t\t{\n\t\t\t\"chapterId\" : \"627f4e05ae6cd20cefbe3bb1\",\n\t\t\t\"chapterName\" : \"chap1\",\n\t\t\t\"topicsList\" : [\n\t\t\t\t{\n\t\t\t\t\t\"topicId\" : \"627f4e05ae6cd20cefbe3bb2\",\n\t\t\t\t\t\"topicName\" : \"1.1\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"topicId\" : \"627f4e05ae6cd20cefbe3bb3\",\n\t\t\t\t\t\"topicName\" : \"2.5\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"chapterId\" : \"627f4e05ae6cd20cefbe3bb4\",\n\t\t\t\"chapterName\" : \"chap2\",\n\t\t\t\"topicsList\" : [\n\t\t\t\t{\n\t\t\t\t\t\"topicId\" : \"627f4e05ae6cd20cefbe3bb5\",\n\t\t\t\t\t\"topicName\" : \"1\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"topicId\" : \"627f4e05ae6cd20cefbe3bb6\",\n\t\t\t\t\t\"topicName\" : \"2\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t],\n\t\"updateAt\" : \"2022-05-14T06:36:53.981765Z\"\n}\n\ndb.subject_staff_database.aggregate({$project: { numberOfCourses: { $size: \"$syllabus\" }}})\nuncaught exception: Error: command failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"The argument to $size must be an array, but was of type: missing\",\n\t\"code\" : 17124,\n\t\"codeName\" : \"Location17124\"\n} : aggregate failed :\n\n",
"text": "mongo queryResult",
"username": "Prathamesh_N"
},
{
"code": "subject_staff_database",
"text": "subject_staff_databaseIn your case, the syllabus property might be missing in the document so only the issue is coming.db.subject_staff_database.aggregate([{$project: {\nnumberOfCourses :{$cond: [{$ne: [{$type:’$syllabus’}, ‘missing’]}, {$size :’$syllabus’}, 0 ]}\n}}])",
"username": "Sudhesh_Gnanasekaran"
},
{
"code": "",
"text": "Thanks a lot Sudhesh_Gnanasekaran the query is working fine",
"username": "Prathamesh_N"
},
{
"code": "{ _id: 0, syllabus: 369 }\n{ _id: 1, syllabus: [ 369 ] }\n{ _id: 2 }\ndb.subject_staff_database.aggregate([{$project: {\nnumberOfCourses :{$cond: [{$eq: [{$type:\"$syllabus\"}, \"array\"]}, {$size :\"$syllabus\"}, 0 ]}\n}}])\n{$eq: [{$type:\"$syllabus\"}, \"array\"]}\n{$ne: [{$type:\"$syllabus\"}, \"missing\"]}\n",
"text": "There is an issue with the query.db.subject_staff_database.aggregate([{$project: {\nnumberOfCourses :{$cond: [{$ne: [{$type:’$syllabus’}, ‘missing’]}, {$size :’$syllabus’}, 0 ]}\n}}])Consider the following documents:The query will work fine with _id:1 and _id:2 but will fail with _id:0. This case seems to be absent from your data sincethe query is working fineThe following variation will work in all the cases.The difference beingvs@Sudhesh_Gnanasekaran, please read Formatting code and log snippets in posts before posting your next code snippet. This will ensure we can cut-n-paste your query and use it directly without editing the fancy html quotes we get when it is not marked up correctly.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to get size of the syllabus array
|
2022-05-17T06:58:53.399Z
|
How to get size of the syllabus array
| 4,099 |
null |
[
"queries",
"dot-net",
"crud"
] |
[
{
"code": "*{*\n* \"_id\" : NUUID(\"5fbd7fe7-fe25-41d1-9152-0e49eba04d3b\"),*\n* \"Department\" : {*\n* \"DepartmentId\" : \"FS-11-140\",*\n* \"DepartmentName\" : \"IT\",*\n* \"DepartmentHead\" : \"XYZ\",*\n* },*\n* \"Location\" : \"India\",*\n* \"CollegeName\" : \"MMCOE\",*\n*}*\n *await _clg.UpdateOneAsync(*\n* new FilterDefinitionBuilder<ClgDbModel>()*\n* .Where(x => x.id == id),*\n* new UpdateDefinitionBuilder<ClgDbModel>()*\n* .Set(_ => _.studentCount, 100),*\n* new UpdateOptions()*\n* {*\n* IsUpsert = false*\n* });*\n* await _clg.UpdateOneAsync(*\n* new FilterDefinitionBuilder<ClgDbModel>()*\n* .Where(x => x.id == id),*\n* new UpdateDefinitionBuilder<ClgDbModel>()*\n* .Set(_ => _.Department.Title, \"ABCD\"),*\n* new UpdateOptions()*\n* {*\n* IsUpsert = false*\n* });*\n",
"text": "I have one object which is present inside another object. In parent object it is possible to set element if the element is not exist. If same thing if we try to do with child object mongo drivers throws error \"Unable to determine the serialization information for … \".Ex :\nmongo documentQuery 1: [ studentCount element will get added]Query 2 :[Mongo driver throws exception]",
"username": "Bhavana_Shrotri"
},
{
"code": "using MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Driver;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class BeginUpdate : BaseClass\n{\n public class Department\n { \n public string DepartmentId;\n public string DepartmentName;\n public string DepartmentHead;\n [BsonIgnoreIfNull]\n public string Title;\n }\n public record College(\n",
"text": "Hi,I can’t reproduce the problem.\nYour code works.",
"username": "Remi_Thomas"
}
] |
Unable to update object element inside another object
|
2022-05-02T13:45:17.667Z
|
Unable to update object element inside another object
| 2,310 |
null |
[
"queries"
] |
[
{
"code": "strawberry",
"text": "I have sucessfully implemented the atlas search feature to our application with fuzzy search enabled. But I am facing a minor issue.\nSuppose I am trying to search strawberry term in our collection. I have entered ‘straw’ in the query. Since it is fuzzy match. I am getting ‘Steak’, ‘Stew’ etc. But ‘strawberry’ is not found in the list. Seems like it is not scored if the query is more in length. Is there any better way to handle this situation",
"username": "Shanmuga_Sabareesh_Esaiselvam"
},
{
"code": "",
"text": "A picture for reference\n\nimage1487×380 30.9 KB\n",
"username": "Shanmuga_Sabareesh_Esaiselvam"
}
] |
Atlas Search Partial Match Feature
|
2022-05-17T11:22:40.435Z
|
Atlas Search Partial Match Feature
| 1,647 |
[
"mdbw22-hackathon"
] |
[
{
"code": "Lead Developer AdvocateSenior Developer Advocate",
"text": "So come, join in and ask questions. We will be sharing details and guidelines about the submission process and also the hackathon Prizes! We’d love for these sessions to be very participatory this week - so, if you have a demo to share, please reply here and we’ll send you an invite link. All participants get SWAG!!We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "…and here’s the recording if you didn’t get to tune in Live",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Hackathon Office Hours & Demos! Final Week! APAC/EMEA Session
|
2022-05-17T07:24:42.952Z
|
Hackathon Office Hours & Demos! Final Week! APAC/EMEA Session
| 2,916 |
|
null |
[
"aggregation",
"performance"
] |
[
{
"code": "db.getCollection(\"pairpercentages\").explain('executionStats').aggregate([\n {\n $lookup: {\n from: \"filtereditems\",\n let: {\n pair_src: \"$pair\",\n buy_src: \"$buy\",\n sell_src: \"$sell\",\n percentage_src: \"$percentage\",\n buycontractaddress_src: \"$buycontractaddress\",\n sellcontractaddress_src: \"$sellcontractaddress\",\n },\n pipeline: [\n {\n $match: {\n $and: [\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$pair_src\", \"$pair\"],\n },\n {\n $eq: [\"$pair\", \"ALL\"],\n },\n ],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$buy_src\", \"$buy\"],\n },\n {\n $eq: [\"$buy\", \"ALL\"],\n },\n ],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$sell_src\", \"$sell\"],\n },\n {\n $eq: [\"$sell\", \"ALL\"],\n },\n ],\n },\n },\n {\n $expr: {\n $lt: [\"$$percentage_src\", \"$percentage\"],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$contractaddress\", \"\"]\n },\n {\n $eq: [\"$$buycontractaddress_src\", \"$contractaddress\"],\n },\n {\n $eq: [\"$$sellcontractaddress_src\", \"$contractaddress\"],\n },\n {\n $eq: [\"$contractaddress\", \"ALL\"],\n },\n ],\n },\n },\n ],\n },\n },\n ],\n as: \"filtered\",\n },\n },\n {\n $match: {\n filtered: {\n $eq: [],\n },\n },\n },\n {\n $lookup: {\n from: \"alarmitems\",\n let: {\n pair_src: \"$pair\",\n buy_src: \"$buy\",\n sell_src: \"$sell\",\n percentage_src: \"$percentage\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n {\n $eq: [\"$$pair_src\", \"$pair\"],\n },\n {\n $eq: [\"$$buy_src\", \"$buy\"],\n },\n {\n $eq: [\"$$sell_src\", \"$sell\"],\n },\n ],\n },\n },\n },\n ],\n as: \"specialAlarmfilterCounter\",\n },\n },\n {\n $addFields: {\n specialAlarmFilterExists: {\n $cond: {\n if: { $gt: [{ $size: \"$specialAlarmfilterCounter\" }, 0] },\n then: true,\n else: false,\n },\n },\n },\n },\n {\n $lookup: {\n from: \"alarmitems\",\n let: {\n pair_src: \"$pair\",\n buy_src: \"$buy\",\n sell_src: \"$sell\",\n percentage_src: \"$percentage\",\n specialAlarmFilterExists: \"$specialAlarmFilterExists\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n {\n $eq: [\"$$pair_src\", \"$pair\"],\n },\n {\n $eq: [\"$$buy_src\", \"$buy\"],\n },\n {\n $eq: [\"$$sell_src\", \"$sell\"],\n },\n {\n $gt: [\"$$percentage_src\", \"$percentage\"],\n },\n ],\n },\n },\n },\n ],\n as: \"specialAlarmfilter\",\n },\n },\n {\n $addFields: {\n specialAlarm: {\n $cond: {\n if: {\n $and: [\n { $gt: [{ $size: \"$specialAlarmfilter\" }, 0] },\n { $eq: [\"$specialAlarmFilterExists\", true] },\n ],\n },\n then: 1,\n else: 0,\n },\n },\n },\n },\n {\n $lookup: {\n from: \"alarmitems\",\n let: {\n pair_src: \"$pair\",\n buy_src: \"$buy\",\n sell_src: \"$sell\",\n percentage_src: \"$percentage\",\n specialAlarmFilterExists: \"$specialAlarmFilterExists\",\n },\n pipeline: [\n {\n $match: {\n $and: [\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$specialAlarmFilterExists\", false],\n },\n ],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$pair_src\", \"$pair\"],\n },\n {\n $eq: [\"$pair\", \"ALL\"],\n },\n ],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$buy_src\", \"$buy\"],\n },\n {\n $eq: [\"$buy\", \"ALL\"],\n },\n ],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$sell_src\", \"$sell\"],\n },\n {\n $eq: [\"$sell\", \"ALL\"],\n },\n ],\n },\n },\n ],\n },\n },\n ],\n as: \"alarmfilterCounter\",\n },\n },\n {\n $addFields: {\n allAlarmFilterExists: {\n $cond: {\n if: { $gt: [{ $size: \"$alarmfilterCounter\" }, 0] },\n then: true,\n else: false,\n },\n },\n },\n },\n {\n $lookup: {\n from: \"alarmitems\",\n let: {\n pair_src: \"$pair\",\n buy_src: 
\"$buy\",\n sell_src: \"$sell\",\n percentage_src: \"$percentage\",\n specialAlarmFilterExists: \"$specialAlarmFilterExists\",\n allAlarmFilterExists: \"$allAlarmFilterExists\",\n },\n pipeline: [\n {\n $match: {\n $and: [\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$specialAlarmFilterExists\", false],\n },\n ],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$pair_src\", \"$pair\"],\n },\n {\n $eq: [\"$pair\", \"ALL\"],\n },\n ],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$buy_src\", \"$buy\"],\n },\n {\n $eq: [\"$buy\", \"ALL\"],\n },\n ],\n },\n },\n {\n $expr: {\n $or: [\n {\n $eq: [\"$$sell_src\", \"$sell\"],\n },\n {\n $eq: [\"$sell\", \"ALL\"],\n },\n ],\n },\n },\n {\n $expr: {\n $gt: [\"$$percentage_src\", \"$percentage\"],\n },\n },\n {\n $expr: {\n $eq: [\"$$allAlarmFilterExists\", true],\n },\n },\n ],\n },\n },\n ],\n as: \"alarmfilter\",\n },\n },\n {\n $addFields: {\n allAlarm: {\n $cond: {\n if: { $gt: [{ $size: \"$alarmfilter\" }, 0] },\n then: 1,\n else: 0,\n },\n },\n },\n },\n {\n $match: {\n $or: [\n {\n $and: [\n { percentage: { $gt: 0 } },\n { specialAlarmFilterExists: { $eq: false } },\n { allAlarmFilterExists: { $eq: false } },\n ],\n },\n { specialAlarm: { $gt: 0 } },\n { allAlarm: { $gt: 0 } },\n ],\n },\n },\n {\n $project: {\n alarmfilter: 0,\n filtered: 0,\n specialAlarmfilter: 0,\n specialAlarm: 0,\n specialAlarmFilterExists: 0,\n allAlarm: 0,\n alarmfilter: 0,\n allAlarmFilterExists: 0,\n specialAlarmfilterCounter: 0,\n alarmfilterCounter: 0,\n updatedate: 0,\n buydate: 0,\n selldate: 0,\n buycontractaddress: 0,\n sellcontractaddress: 0,\n buymultiple: 0,\n sellmultiple: 0\n },\n },\n { $sort : { percentage : 1} }\n ])\n",
"text": "I am trying to make an aggregate complex query. But the query takes 4-5 seconds. Actually there is not much data(5k record) but i have to use 5 lookups.My query is as follows;I created indexes for the pairpercentages, filtered and alarm documents as follows.db.pairpercentages.createIndex( { percentage: 1, updatedate: 1, pair: 1, buy: 1, sell: 1} );\ndb.pairpercentages.createIndex( { percentage: 1, pair: 1, buy: 1, sell: 1} );\ndb.pairpercentages.createIndex( { percentage: 1, updatedate: 1} );\ndb.filtereditems.createIndex( { buy: 1, sell: 1, pair: 1, user: 1, percentage: 1, contractaddress:1} );\ndb.alarmitems.createIndex( { buy: 1, sell: 1, pair: 1, user:1, percentage: 1, contractaddress:1} );But when I look at the execution stats, I see that it takes too much time as follows.“executionStats” : {\n“executionSuccess” : true,\n“nReturned” : 3298.0,\n“executionTimeMillis” : 4432.0,\n“totalKeysExamined” : 3298.0,\n“totalDocsExamined” : 3298.0,Where am i doing wrong? How can i solve it?",
"username": "orhan_gencer"
},
{
"code": "",
"text": "@orhan_gencer Can you try to filter out the records before the lookup stage if possible?",
"username": "Sudhesh_Gnanasekaran"
}
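Purely as an illustration of that suggestion: if some condition from the final $match does not depend on the lookup results, it could be hoisted to the front so every $lookup processes fewer documents. Whether that is semantically safe here depends on the alarm/filter logic, so this is only a sketch:

    db.getCollection("pairpercentages").aggregate([
      { $match: { percentage: { $gt: 0 } } },   // only if this filter is valid before the lookups
      // ...the existing $lookup / $addFields / $match stages follow unchanged...
    ])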
] |
Aggregation lookup poor query performance
|
2020-12-07T19:38:04.011Z
|
Aggregation lookup poor query performance
| 2,623 |
null |
[
"aggregation",
"performance"
] |
[
{
"code": "",
"text": "Hi Team,i am running aggregation pipelines ,it’s looks good and fetching data very fast within seconds but while using lookup to match data from another collection the results also very quick .when doing groupBy after lookup it’s take 14 seconds jus for 9k records.is there any solution for it ?i have tried many approach but did not success .",
"username": "Sunil_Yadav"
},
{
"code": "",
"text": "@Sunil_Yadav Can you share the sample schema and the query which you tried?",
"username": "Sudhesh_Gnanasekaran"
}
] |
GroupBy taking time After lookup Aggregation pipeline
|
2020-10-03T09:14:35.864Z
|
GroupBy taking time After lookup Aggregation pipeline
| 2,380 |
null |
[] |
[
{
"code": "",
"text": "Hi everyone,I saw the \" How MongoDB’s Journaling Works\" in this link: How MongoDB's Journaling Works | MongoDB BlogBut i have some questions i don’t understand:What is the purpose of remapped between shared view and private view in the sentence ?\n\" The last step is that mongod remaps the shared view to the private view. This prevents the private view from getting too “dirty” (having too many changes from the shared view it was mapped from).\"\n=> Can you show me example of this ? as detailed as possibleAfter shared view remapped to private view in the last step. With the next write operation, the same process still repeat ?\nmongod map to private view → private view write changes to journal log → journal log replay changes into shared view → shared view flush changes to data file with 60s default by OS → shared view remap to private viewHelp me,Thank you so much",
"username": "D_i_Nguy_n"
},
{
"code": "",
"text": "Hi @D_i_Nguy_nUnfortunately that article is waaay outdated as it was published in 2012 which is 10 years ago as of today.The method described in that post is for the MMAPv1 storage engine which was removed in recent versions of MongoDB (since version 4.2). MongoDB exclusively uses WiredTiger now, which is a very different storage engine. Notably, WiredTiger allows you to use compression, allows a lot more concurrency, and made possible the inclusion of multi-document transactions, among many. Those are features that MMAPv1 cannot ever add due to its underlying design.If you are still using MMAPv1, you are using MongoDB versions that is out of support. I would encourage you to migrate to WiredTiger and supported versions.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Dear kevinadi,Thanks for your respond, i using mongoDB version 5.0, i thought mongoDB version 5.0 has the same mechanism about journal log and how it work So, can you have some documents describe about how a statement work in mongoDB with WiredStorage engine ? it maybe like a document that i posted above or whatever Thank you so much !",
"username": "D_i_Nguy_n"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
},
{
"code": "",
"text": "Apologies for the delay in responding. The WiredTiger Storage Engine page should explain much of what WiredTiger does at the higher levels. Hopefully you’ll find it useful Best regards\nKevin",
"username": "kevinadi"
}
] |
What is the purpose of last step in journaling?
|
2022-04-11T09:32:06.985Z
|
What is the purpose of last step in journaling?
| 1,590 |
null |
[
"queries",
"replication",
"atlas-cluster",
"php"
] |
[
{
"code": "<?php \n require_once 'vendor/autoload.php';\n $con = new MongoDB\\Client(\"mongodb://user_name:[email protected]:27017,cluster0-shard-00-01.2twek.mongodb.net:27017,cluster0-shard-00-02.2twek.mongodb.net:27017/test?ssl=true&replicaSet=atlas-6wx29r-shard-0&authSource=admin&w=majority\");\n $db = $con->selectDatabase('test'); \n $col = $db->selectCollection('user');\n $sql = $col->find();\n foreach($sql as $cols)\n{\n var_dump($cols);\n}\n?> \n",
"text": "But it shows this error. Pls, help me to solve it.",
"username": "singa_raj"
},
{
"code": "mongosh",
"text": "Hi @singa_raj,Welcome to the community But it shows this error. Pls, help me to solve it.I cannot see that you have posted any error unless I have missed it. Can you provide the following information:Regards,\nJason",
"username": "Jason_Tran"
}
] |
Cluster mongo Db Not Connecting in php
|
2022-05-12T12:47:24.728Z
|
Cluster mongo Db Not Connecting in php
| 2,726 |
null |
[
"aggregation",
"python",
"compass"
] |
[
{
"code": "{\n \"some_key\": \"some_value\",\n \"categorization\": {\n \"edit_category\": \"edit_category\",\n \"category\": \"category\",\n }\n}\nmy_collection.update_many(\n filter={},\n update={\n \"$set\": {\n \"categorization.edit_category\": \"$categorization.category\",\n }\n },\n)\nmy_collection.aggregate(\n pipeline=[\n {\n \"$set\": {\n \"categorization.edit_category\": \"$categorization.category\",\n }\n },\n ]\n)\n{\n \"some_key\": \"some_value\",\n \"categorization\": {\n \"edit_category\": \"category\",\n \"category\": \"category\",\n }\n}\n{\n \"some_key\": \"some_value\",\n \"categorization\": {\n \"edit_category\": \"$categorization.category\",\n \"category\": \"category\",\n }\n}\n",
"text": "I have mongodb version 5+ and latest version of pymongo from pipI have a query it works fine in mongodb shell and compassBut when i use it in python pymongo in my code it does not work\nthe value is being put as “$categorization.category” as a literal string\ninstead it should copy the value from that field to edit fieldI tried using both update_many and pipelineHere is the document structureAfter the query expected resultBut actual result in pythonBut as is said the same pipeline query works in mongodb compassplease helprelated to\nhttps://www.mongodb.com/community/forums/t/how-can-we-assign-one-field-value-to-another/16396/4?u=rohit_krishnamoorthy",
"username": "Rohit_Krishnamoorthy"
},
{
"code": "my_collection.aggregate",
"text": "my_collection.aggregatewill not alter the collection my_collection unless you have a $merge stage.If I understand correctly, what you want to do is:The following page provides examples of updates with aggregation pipelines.",
"username": "steevej"
},
{
"code": " update={\n \"$set\": {\n \"categorization.edit_category\": \"$categorization.category\",\n }\n },\n[]$set",
"text": "This should be wrapped in an array [] to indicate this is aggregation syntax and not regular update modifier $set.Asya",
"username": "Asya_Kamsky"
},
{
"code": "my_collection.update_many(\n filter={},\n update=[\n {\n \"$set\": {\n \"categorization.edit_category\": \"$categorization.category\",\n }\n },\n ],\n)\n",
"text": "Wow this works\nthanks you so muchi saw the type for in the update_many functionupdate: Mapping[str, Any] | _Pipeline,i guess i missed it can also _Pipeline type",
"username": "Rohit_Krishnamoorthy"
},
{
"code": "my_collection.aggregate(\n pipeline=[\n {\n \"$set\": {\n \"categorization.edit_category\": \"$categorization.category\",\n }\n },\n {\n \"$merge\": \"my_collection_name\"\n },\n ]\n)\n",
"text": "Yes you are correct\nI did this and it works",
"username": "Rohit_Krishnamoorthy"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Update, aggregation pipeline in pymongo not recognising field_path and taking value as literal string
|
2022-05-16T16:56:23.046Z
|
Update, aggregation pipeline in pymongo not recognising field_path and taking value as literal string
| 3,202 |
null |
[
"atlas-online-archive"
] |
[
{
"code": "",
"text": "Hi, I want to use Atlas Online Archive custom query to remove all items which are older than a specific date as below{“createdDate”:{\"$lt\":ISODate(‘2021-01-01’)}}And because the query must be valid JSON, I cannot use any functions such as ISODate, Date, etc.Could you please give me some advice on this?Thank you",
"username": "Tuan_Pham_Minh"
},
{
"code": "",
"text": "Hi Tuan Pham Minh,Please try your custom archival rule with $expr. Also, we recommend to first check the query execution plan to make sure your query is going to be efficient with the $expr.Thanks,\nPrem",
"username": "Prem_PK_Krishna"
},
{
"code": "",
"text": "Try with extended JSON as specified at",
"username": "steevej"
},
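As an untested sketch, the extended JSON form of the same criterion might look like the snippet below; the createdDate field name is taken from the question and the fixed cutoff date is only an illustration:

{ "createdDate": { "$lt": { "$date": "2021-01-01T00:00:00.000Z" } } }

Because the archival rule must be plain JSON, the $date wrapper stands in for the ISODate() shell helper.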
{
"code": "{ \"$expr\" : {\n \"$lt\" : [\n \"$createdDate\" ,\n { \"$dateAdd\" : {\n \"startDate\" : \"$$NOW\" ,\n \"unit\": \"year\" ,\n \"amount\" : -1 } }\n ]\n}}\n",
"text": "You could also experiment with $dataAdd using $$NOW as the starting date.Something like (untested):",
"username": "steevej"
}
] |
[Online Archive] Custom Criteria with ISODate
|
2022-05-16T05:03:37.836Z
|
[Online Archive] Custom Criteria with ISODate
| 3,751 |
[
"atlas-device-sync"
] |
[
{
"code": "Credentials credentials = Credentials.emailPassword(email, password);\napp.loginAsync(credentials, result -> {\n SyncConfiguration config = new SyncConfiguration.Builder(app.currentUser(), PARTITION)\n\t.waitForInitialRemoteData().build();\n realm = Realm.getInstance(config);\n\n RealmResults<AppUser> users = realm.where(AppUser.class).findAll();\n AppUser user = null;\n\n // Find the AppUser with the email\n for (int i = 0; i < users.size(); i++) {\n if (users.get(i).getEmail().equals(email)) {\n user = users.get(i);\n }\n }\n if (user == null) Log.wtf(\"EXAMPLE\", \"This should NEVER happen.\"); // but it happens anyway, causing the exception below\n String type = user.getUserType(); // *** EXCEPTION here ***\n // ...\n});\nCredentials credentials = Credentials.emailPassword(email, password);\napp.loginAsync(credentials, result -> {\n MongoClient mongoClient = app.currentUser().getMongoClient(\"mongodb-atlas\");\n MongoDatabase mongoDatabase = mongoClient.getDatabase(\"MyDB\");\n CodecRegistry pojoCodecRegistry = fromRegistries(AppConfiguration.DEFAULT_BSON_CODEC_REGISTRY,\n fromProviders(PojoCodecProvider.builder().automatic(true).build()));\n MongoCollection<AppUser> mongoCollection = mongoDatabase.getCollection(\"AppUser\", AppUser.class)\n .withCodecRegistry(pojoCodecRegistry);\n Document queryFilter = new Document(\"email\", email);\n AppUser user = mongoCollection.findOne(queryFilter).get(); // *** EXCEPTION here ***\n String type = user.getUserType();\n // ...\n});\n at io.realm.internal.network.NetworkRequest.resultOrThrow(NetworkRequest.java:84)\n at io.realm.internal.objectstore.OsMongoCollection.findOneInternal(OsMongoCollection.java:274)\n at io.realm.internal.objectstore.OsMongoCollection.findOne(OsMongoCollection.java:224)\n at io.realm.mongodb.mongo.MongoCollection$6.run(MongoCollection.java:236)\n at io.realm.internal.async.RealmResultTaskImpl.get(RealmResultTaskImpl.java:92)\n at com.jchan.testing.MainActivity.lambda$onCreate$0$com-jchan-testing-MainActivity(MainActivity.java:126)\n at com.jchan.testing.MainActivity$$ExternalSyntheticLambda2.onResult(Unknown Source:6)\n at io.realm.internal.mongodb.Request$3.run(Request.java:90)\n at android.os.Handler.handleCallback(Handler.java:883)\n at android.os.Handler.dispatchMessage(Handler.java:100)\n at android.os.Looper.loop(Looper.java:214)\n at android.app.ActivityThread.main(ActivityThread.java:7356)\n at java.lang.reflect.Method.invoke(Native Method)\n at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492)\n at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:930)\n",
"text": "I am trying to query documents in an Atlas cluster, specifically from my AppUser collection. The intended functionality is that a user logs in and then their respective AppUser would be queried; I would be able to log in to any account on any device.\nhelp1430×1060 131 KB\nHowever, I was encountering many issues. See the code:E/AndroidRuntime: FATAL EXCEPTION: main\nProcess: com.jchan.testing, PID: 13568\njava.lang.NullPointerException: Attempt to invoke virtual method ‘java.lang.String com.jchan.testing.AppUser.getUserType()’ on a null object reference\nat com.jchan.testing.MainActivity.lambda$onCreate$0$com-jchan-testing-MainActivity(MainActivity.java:120)\nat com.jchan.testing.MainActivity$$ExternalSyntheticLambda2.onResult(Unknown Source:6)\nat io.realm.internal.mongodb.Request$3.run(Request.java:90)\nat android.os.Handler.handleCallback(Handler.java:883)\nat android.os.Handler.dispatchMessage(Handler.java:100)\nat android.os.Looper.loop(Looper.java:214)\nat android.app.ActivityThread.main(ActivityThread.java:7356)\nat java.lang.reflect.Method.invoke(Native Method)\nat com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492)\nat com.android.internal.os.ZygoteInit.main(ZygoteInit.java:930)Supposedly, the ‘jchan’ AppUser should have been queried after entering the credentials for the ‘jchan’ User. The code above does work; locally that is. However, after wiping my emulator, reinstalling, and attempting to log in with the same credentials, I get the above exception. I did not have problems registering a User and AppUser into the database using a synced realm though. That brings me to one question:Why isn’t the synced realm getting documents from my remote Atlas cluster?Maybe it can only upload documents and can’t download them, but I’m certain this isn’t the case. I have tried:Next, I tried to query the cluster directly by referring to this documentation. But of course, it still doesn’t work:E/AndroidRuntime: FATAL EXCEPTION: main\nProcess: com.jchan.testing, PID: 13389\nNETWORK_UNKNOWN(realm::app::CustomError:1002)\nandroid.os.NetworkOnMainThreadException\nat io.realm.internal.network.NetworkRequest.resultOrThrow(NetworkRequest.java:84)\nat io.realm.internal.objectstore.OsMongoCollection.findOneInternal(OsMongoCollection.java:274)\nat io.realm.internal.objectstore.OsMongoCollection.findOne(OsMongoCollection.java:224)\nat io.realm.mongodb.mongo.MongoCollection$6.run(MongoCollection.java:236)\nat io.realm.internal.async.RealmResultTaskImpl.get(RealmResultTaskImpl.java:92)\nat com.jchan.testing.MainActivity.lambda$onCreate$0$com-jchan-testing-MainActivity(MainActivity.java:126)\nat com.jchan.testing.MainActivity$$ExternalSyntheticLambda2.onResult(Unknown Source:6)\nat io.realm.internal.mongodb.Request$3.run(Request.java:90)\nat android.os.Handler.handleCallback(Handler.java:883)\nat android.os.Handler.dispatchMessage(Handler.java:100)\nat android.os.Looper.loop(Looper.java:214)\nat android.app.ActivityThread.main(ActivityThread.java:7356)\nat java.lang.reflect.Method.invoke(Native Method)\nat com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492)\nat com.android.internal.os.ZygoteInit.main(ZygoteInit.java:930)I’m not sure why this didn’t work. If anyone can help me with this, I would give my utmost appreciation.",
"username": "JChan"
},
{
"code": "",
"text": "I know its been a year for this post, but this answer might help someone. The first Error showing the user is null. Therefore, make sure that your not getting a null User Credentials. Second error is because your using get() instead of getAsync() then you can iterate the values.",
"username": "laith_ayyat"
},
{
"code": "",
"text": "Credentials was definitely not null back when I tested this. Email and password were proper Strings and the matching user was also in the database. As for get() instead of getAsync(), that might be a valid solution but the error states that the exception occurred on the main thread, even though it was under loginAsync(). Plus, it would seem unnecessary to use multiple threads for sequential tasks.",
"username": "JChan"
}
] |
Unable to query documents in Atlas cluster
|
2021-10-15T03:56:01.927Z
|
Unable to query documents in Atlas cluster
| 3,357 |
|
[
"aggregation",
"data-modeling",
"compass",
"connecting",
"mongodb-shell"
] |
[
{
"code": "{ \"_id\": { \"$oid\": \"627f925ffa5e617f51d5632e\" }, \"elevatorInfo\": { \"Institution_Characteristics\": { \"Unitid\": \"139384\", \"Name\": \"Georgia Northwestern Technical College\", \"City\": \"Rome\", \"State\": \"GA\", \"Web_Address\": \"www.gntc.edu/\", \"Distance_Learning\": \"Offers undergraduate courses and/or programs\" } }, \"studentCharges\": { \"Cost\": { \"Published_Tuition_And_Required_Fees\": \"\", \"In-state\": \"$3,062\", \"Out-of-state\": \"$5,462\", \"Books_And_Supplies\": \"$1,500\", \"Off-campus_(not_With_Family)_Room_And_Board\": \"$5,528\", \"Off-campus_(not_With_Family)_Other_Expenses\": \"$5,191\", \"Off-campus_(with_Family)_Other_Expenses\": \"$2,431\", \"Total_Cost\": \"\", \"Off-campus_(not_With_Family),_In-state\": \"$15,281\", \"Off-campus_(not_With_Family),_Out_Of_State\": \"$17,681\", \"Off-campus_(with_Family),_In-state\": \"$6,993\", \"Off-campus_(with_Family),_Out-of-state\": \"$9,393\" }, \"Level_of_student\": { \"Undergraduate\": { \"In-state\": \"$3,062\", \"Out-of-state\": \"$5,462\" }, \"Graduate\": { \"In-state\": \"\", \"Out-of-state\": \"\" } } }}\n{ \"_id\": { \"$oid\": \"622ce9ba5d72be4d703e972d\" }, \"financialAid\": { \"Student_Financial_Aid\": { \"All_Undergraduate_Students\": { \"Percent_receiving_aid\": \"\", \"Average_amount_of_aid_received\": \"\" }, \"Any_Grant_Or_Scholarship_Aid\": { \"Percent_receiving_aid\": \"90%\", \"Average_amount_of_aid_received\": \"$5,603\" }, \"Pell_Grants\": { \"Percent_receiving_aid\": \"69%\", \"Average_amount_of_aid_received\": \"$7,845\" }, \"Federal_Student_Loans\": { \"Percent_receiving_aid\": \"8%\", \"Average_amount_of_aid_received\": \"$3,371\" }, \"Full-time,_First-time,_Degree/certificate-seeking_Undergraduate_Students\": { \"Percent_receiving_aid\": \"\", \"Average_amount_of_aid_received\": \"\" } } }, \"retentionAndGraduation\": { \"Retention_And_Graduation\": { \"Overall_Graduation_Rates\": { \"Rate\": \" \" }, \"Total\": { \"Rate\": \"49%\" }, \"Men\": { \"Rate\": \"57%\" }, \"Women\": { \"Rate\": \"40%\" }, \"Nonresident_Alien\": { \"Rate\": \"100%\" }, \"Transfer_Out-rate\": { \"Rate\": \"7%\" } } }, \"unitId\": 139384, \"__v\": 0}\n db.aidretentionandgraduations([\n {\n '$lookup': {\n 'from': 'datas', \n 'localField': 'Unitid', \n 'foreignField': 'unitId', \n 'as': 'nice'\n }\n }, {\n '$unwind': {\n 'path': '$nice'\n }\n }, {\n '$match': {\n '$expr': {\n '$eq': [\n '$unitId', '$elevatorInfo.Institution_Characteristics.Unitid'\n ]\n }\n }\n }\n])\n",
"text": "I have a collection called “datas” and the other collection is named as “aidretentionandgraduations” in a database called “challenge”. Both collections have similar values stored in different field called unitId and Unitid, their values should be used to merge those two collections. So if unitId==elevatorInfo.Institution_Characteristics.Unitid display the documents else don’t dispaly is what I am trying to achieve. I used $lookup aggregattion but the as in $lookup aggregation inserts all documents in the collection.\nthe datacollection:the aidretentionandgraduations collection:below is what i tried:below is the final output I am tryin to achieve:\n\nfinalOUTPUT1072×882 64.1 KB\n",
"username": "taskAtHand"
},
{
"code": "foreignFieldelevatorInfo.Institution_Characteristics.UnitidforeignFieldlocalFieldunitIdUnitid$match$lookup",
"text": "Looks like you didn’t set your foreignField value correctly. Didn’t you say that it’s stored in field elevatorInfo.Institution_Characteristics.Unitid - you have to put the entire string not just the last part of the field path into foreignField. Also, localField has a capitalization issue, it should be unitId, not Unitid.You also don’t need any $match after unwind because the $lookup will only match documents where the two values are equal.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "elevatorInfo.Institution_Characteristics.UnitidI tried that multiple times and although what you said is the correct way to do it. It didn’t workout for me so\nI deliberately kept wrong foreignField as well as capitalized unitId to make it work. It makes no sense at all.\nbelow u can see the newly fomed array named nice has no values:\n\nnot working1601×753 99.8 KB\n\nif I use the wrong foreigneField and lowercase unitId it works but all the db.datas documents are combined with single document of db.aidretentionandgraduation which need to be filterd using values of unitId and Unitid to display matching values",
"username": "taskAtHand"
},
{
"code": "",
"text": "Your aggregation will not work.unitId in aidretentionandgraduations collection is a numberelevatorInfo.Institution_Characteristics.Unitid in datas is a stringmigrating datas to use number will be better that migrating the other to use string",
"username": "steevej"
},
{
"code": "",
"text": "please do forgive me but could you elaborate more on mentioned solutions… It’s just been two days since I started using mongodb as a part of internship challenge. I am assuming the problem lies trying to manipulate two different types without any type casting, but it’s difficult understanding solutions, any similar links or past question to the similar problem would be highly appreciated",
"username": "taskAtHand"
},
{
"code": "\"Unitid\": \"139384\"\"unitId\": 139384Atlas atlas-d7b9wu-shard-0 [primary] test> unit_id = \"369\"\n369\nAtlas atlas-d7b9wu-shard-0 [primary] test> Unit_Id = 369\n369\nAtlas atlas-d7b9wu-shard-0 [primary] test> unit_id === Unit_Id\nfalse\n// but\nAtlas atlas-d7b9wu-shard-0 [primary] test> unit_id === Unit_Id.toString()\ntrue\n// or\nAtlas atlas-d7b9wu-shard-0 [primary] test> parseInt(unit_id) === Unit_Id\ntrue\n",
"text": "The string\"Unitid\": \"139384\"inside objectelevatorInfo.Institution_Characteristicsfrom thedatacollectionIS NOT EQUAL TO the number\"unitId\": 139384fromthe aidretentionandgraduations collectionI am assuming the problem lies trying to manipulate two different types without any type casting,Yes this is exactly what I wrote.Just like JS:Inside aggregation you can use $convert.However sinceIt’s just been two days since I started using mongodb as a part of internship challenge.I recommend that you take the M001, M121 from https://university.mongodb.com/.",
"username": "steevej"
},
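For illustration only, a hedged mongosh sketch of a $lookup that converts the string field on the fly with $toInt before comparing; it reuses the collection and field names from this thread and has not been tested against the real data:

db.aidretentionandgraduations.aggregate([
  { $lookup: {
      from: "datas",
      let: { uid: "$unitId" },                 // number in this collection
      pipeline: [
        { $match: { $expr: { $eq: [
            { $toInt: "$elevatorInfo.Institution_Characteristics.Unitid" },  // string -> int
            "$$uid"
        ] } } }
      ],
      as: "nice"
  } },
  { $unwind: "$nice" }
])

Migrating the data so both fields share one type, as suggested above, remains the better long-term fix; the on-the-fly conversion prevents the join from using an index efficiently.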
{
"code": "",
"text": "thanks a lot … I have been searching for this solution all over google and stackoverflow.",
"username": "taskAtHand"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How do I merge two different collections of same database into a new collection having similar field values in MongoDB compass?
|
2022-05-16T07:07:00.283Z
|
How do I merge two different collections of same database into a new collection having similar field values in MongoDB compass?
| 4,171 |
|
null |
[
"aggregation"
] |
[
{
"code": "DELETE FROM HNG_BASKET_ITEM WHERE PRODUCT_ID NOT IN (\n SELECT DISTINCT PRODUCT_ID FROM HNG_PRODUCT_MASTER WHERE IS_VISIBLE = 'Y'\n); \n",
"text": "This is mysql query:Please help me to join these tablesI’m new to MongoDB please share some references on same.",
"username": "Varun_Shetty"
},
{
"code": "db.basket.remove(\n db.basket.aggregate({ \n $lookup: {\n from: \"product_master\",\n let:{\n productHighlights.isVisible :\"Y\",\n localField:\"items.skuid\",\n foreignField:\"_id\",as:\"Deleted items\" \n }\n })\n)\n",
"text": "I have tried something like this please verify is this valid",
"username": "Varun_Shetty"
},
{
"code": "var deleted_list = db.product_master.aggregate([{$match : { \"productHighlights.isVisible\" :\"Y\"}},\n{$group : { _id : null , list : { $addToSet : \"$product_id\"}}}])\n\ndb.basket.remove({product_id : { $nin : deleted_list.list}})\n",
"text": "Hi @Varun_Shetty ,Doing this shell syntax with 2 inner queries doesn’t sound right and the returned data from inner aggregation needs to be turned into an array and passed to a $nin for the removal.I would suggest coding this in 2 commands:PLEASE NOTE THAT THIS CODE WAS NEVER TESTED AND IT MAY NOT FIT YOUR EXACT FIELD NAMES OR COLLECTIONS , SO PLEASE TEST IT CAREFULLY IN A DEV ENVIRONMENT. ITS MAINLY A PSEDOCODEThanks",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny Thank you for the suggestion will try this .",
"username": "Varun_Shetty"
},
{
"code": "",
"text": "@Pavel_Duchovny There is one scenario we need to create a trigger and when i insert a data into one collection and i need to insert same matching columns data to one more collection .can you share some sample functions on same .",
"username": "Varun_Shetty"
},
{
"code": "",
"text": "Hi @Varun_Shetty ,I have no.clue on what exactly is your requestYou have to.be more specific, what you insert where and where do you expect that data to ve written to…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "– ADD BELOW TRIGGER ON ORDER_PAYMENT_DETAILS TABLE FOR AFTER INSERT/UPDATE EVENTS– USAGE: INSERTS CURRENT PAYMENT STATUS IN ORDER_PAYMENT_HISTORY TABLE WHEN ON CHANGE OF ROW IN ORDER_PAYMENT_DETAILS TABLEBEGININSERT INTO ORDER_PAYMENT_HISTORY (ORDER_ID, STATUS_ID, PAYMENT_METHOD_TYPE_ID, AMOUNT, GATEWAY_NAME, GATEWAY_STATUS, GATEWAY_MESSAGE, GATEWAY_ISSUER_CODE, REFERENCE_NUM, MERCHANT_TXN_ID)VALUES (NEW.ORDER_ID, NEW.STATUS_ID, NEW.PAYMENT_METHOD_TYPE_ID, NEW.AMOUNT, NEW.GATEWAY_NAME, NEW.GATEWAY_STATUS, NEW.GATEWAY_MESSAGE, NEW.GATEWAY_ISSUER_CODE, NEW.REFERENCE_NUM, NEW.MERCHANT_TXN_ID);ENDSo this is the query and i need to replicate same query into mongodb triggers and i have understood how to add triggers to collection and add document.\nSo here on change of order payment details data needs to be inserted to order payment history how we can add same inserted document to one more collection and sql we do have New concept.This is the concern i have",
"username": "Varun_Shetty"
},
{
"code": "",
"text": "Hi @Varun_Shetty ,So basically you need to set “on insert” trigger with fullDocument flag enabled.You get the full document extracted from the event variable.Then you take it and insert it into the target collection.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
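A minimal sketch of such a database trigger function, assuming the trigger is configured on insert with Full Document enabled; the service, database, collection, and field names below are placeholders rather than the actual schema from this thread:

exports = async function (changeEvent) {
  const doc = changeEvent.fullDocument;            // the newly inserted payment-details document

  const history = context.services
    .get("mongodb-atlas")                          // default linked data source name; adjust if yours differs
    .db("shop")                                    // placeholder database name
    .collection("order_payment_history");          // placeholder history collection

  // Copy the fields you care about into the history record.
  await history.insertOne({
    sourceId: doc._id,
    orderId: doc.orderId,                          // placeholder field names
    statusId: doc.statusId,
    amount: doc.amount,
    recordedAt: new Date()
  });
};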
{
"code": "",
"text": "@Pavel_Duchovny yes correct",
"username": "Varun_Shetty"
}
] |
I Need to convert mysql script to mongodb scheduled events
|
2022-05-10T04:04:17.403Z
|
I Need to convert mysql script to mongodb scheduled events
| 2,266 |
null |
[
"aggregation",
"compass",
"vscode"
] |
[
{
"code": "",
"text": "Using the Mongodb compass aggregation (latest version) the documentation shows that you can do CRUD operations on a collection.I have tried to do exactly this, and I can see the $match and then $set stage , which should add data to the collection… I can see the preview correctly, BUT the data isn’t infact updated. Does this mean that COMPASS aggregation is only a playground and doesn’t actually mutate data ?Is this also true for VScode mongodb extension? it seems so as my $set pipeline returns correct results, but when i query the database, i see no changes being made?where in the docs does it state that i cant do updates in COMPASS? Thank you.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Hello @Rishi_uttam … I think you may have another mistake somewhere in your code, what you are trying should work.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "It is indeed confusing , but as I read the docs for $set it says that it isn’t the same as the normal query $set.Is it possible that $set when used In a aggregation pipeline does not mutate the document but rather is like projection , it only shows the new result in a projection and not actually updating the document.I am not sure as the docs doesn’t state this clear enough.I also see there are two other operators called $ merge and $ out which write to the database.But as you said , $set should also write to the database but in my case it does not and no errors are shown.So my question now is , is my understanding of $set when used in an aggregation incorrect ? I. E it does not write but rather project?",
"username": "Rishi_uttam"
},
{
"code": "$setupsertpublic function upsert_pot(object $doc): bool {\n $doc['price'] = new MongoDB\\BSON\\Decimal128($doc['price']);\n $uresult = $this->mongodb_db->pottery->updateOne(\n ['potnum' => $doc['potnum']],\n ['$set' => $doc],\n [\n 'upsert' => true,\n 'writeConcern' => new MongoDB\\Driver\\WriteConcern(MongoDB\\Driver\\WriteConcern::MAJORITY)\n ]\n );\n return $uresult->getModifiedCount() > 0 || $uresult->getUpsertedCount() > 0;\n }\n",
"text": "I misunderstood what you said.\n$set in an upsert sets the document to be upserted.\nHere it is in PHP where I add a pot to a collection of pottery:",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This is my agg query :db.users.aggregate([\n{\n‘$match’: {\n‘lastLogin’: {\n‘$gte’: []\n}\n}\n},\n{\n‘$set’: {\n‘lastLogin’: {\n‘$first’: ‘$lastLogin’\n}\n}\n}\n] )which results in a document showing the output i expect, but when querying the database in Atlas, the data isn’t mutated. So looking at your code, i need to use $set inside a update… does Aggregations have a update property?",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "No, you need to do something like updateOne() … Aggregate outputs documents, it does not update the database.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thank you, that is a revelation for me… I always though the Agg framework can also write to database, but it does not have a update operator. For this i will need to go back to using normal query sytax as you mentioned. So to summarize i cant write to database using aggregation framework?\nthe docs do say : “$set appends new fields to existing documents.” My interpretation of “documents” is perhaps wrong, i thought this meant documents in the database, but it means output documents. but i guess this is only projection, they need to be used in conjunction with a update, and since the Agg framework does not have an update, i need to use the former query syntax… Thank you.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Here is my playground, hopefully this helpsand my query using $set.\nThe problem now is that $set with $first does not work, it outs the string literal instead of running the $first operator.Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Solved, my mistake was -",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Glad you solved your problem, @Rishi_uttam … Perhaps you are right about Aggregation.\nJust in case there was confusion, I am not a MongoDB representative. I’m just a user like yourself ",
"username": "Jack_Woehr"
},
{
"code": "aggregate$out$mergeupdateupdateOneupdateMany",
"text": "The aggregate command reads from a collection and it can optionally write to a different (or same) collection via special output stages $out and $merge. This is different than update command (known as updateOne or updateMany driver methods) which can use some aggregation stages to specify how to transform the document being updated.Hope this helps,\nAsya",
"username": "Asya_Kamsky"
},
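As a hedged mongosh illustration of that difference, reusing the users/lastLogin example from earlier in this thread (untested, and $first in this form requires MongoDB 4.4+):

// Update command with aggregation syntax: modifies the documents in place.
db.users.updateMany(
  { lastLogin: { $type: "array" } },
  [ { $set: { lastLogin: { $first: "$lastLogin" } } } ]
)

// Aggregate: read-only unless it ends with a write stage such as $merge or $out.
db.users.aggregate([
  { $set: { lastLogin: { $first: "$lastLogin" } } },
  { $merge: { into: "users" } }
])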
{
"code": "",
"text": "oh yeah i knew that, not to worry thats what these forums are for.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Thanks, it does help, I wondered why there isn’t a $update and $updateMany \nFor newbs it is tad bit confusing as the docs do say the $set (agg):Adds new fields to documents.But i guess they mean only as a projection, but not written to the database.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Yeah, I hear you. When we were adding write stages to aggregation, we considered this, but without removing regular update commands, we thought it would be even more confusing (and removing existing commands is “hard” because we don’t want to break any existing applications).Asya",
"username": "Asya_Kamsky"
}
] |
MongoDB Compass - Are Aggregation queries $set executed or only previewed?
|
2022-05-15T14:42:03.705Z
|
MongoDB Compass - Are Aggregation queries $set executed or only previewed?
| 5,496 |
null |
[] |
[
{
"code": "",
"text": "I have chosen MongoDB for the database layer of my APP.Even after implementing indexes and sharding, the queries that are generated by the users of this app will require multiple trips to the Mongo Server, multiple collections, etc.Can this app still scale to 100s of millions or even billions of users?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "I cannot imagine a large successful app that doesn’t have to make multiple trips to the database - this is true for all databases I’m familiar with, so I think your concern is maybe slightly misplaced - the key is not making any unnecessary extra calls to the database, and most importantly making sure that all the operations in the database are as efficient as possible (meaning, well indexed, etc).Asya",
"username": "Asya_Kamsky"
}
] |
If an app cannot avoid complex queries, can it still scale?
|
2022-05-14T13:42:48.478Z
|
If an app cannot avoid complex queries, can it still scale?
| 1,332 |
[
"connector-for-bi"
] |
[
{
"code": "",
"text": "I am currently trying to connect powerbi to a MongoDb database using a bi connector. I have configured my sqld.exe file to point at the MongoDb database on the cloud. While using PowerShell I am able to establish a connection but when I try to create the system dsn I get this error, Does anyone know a fix for this issue.\n\nimage420×722 93.5 KB\n",
"username": "Abhishek_Menon"
},
{
"code": "",
"text": "Did you read through MongoDB BI Connector ODBC Driver",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Yes I have followed the steps and have even downloaded the authentication plug in but no luck. ",
"username": "Abhishek_Menon"
},
{
"code": "",
"text": "Maybe @Stennie_X can help you.",
"username": "Jack_Woehr"
}
] |
ODBC Connector System DSN issue
|
2022-05-13T10:37:36.117Z
|
ODBC Connector System DSN issue
| 2,855 |
|
null |
[
"app-services-user-auth"
] |
[
{
"code": "",
"text": "I am curious if there is a timeline (or even pipeline) for tutorials on the types of authentication for Realm applications, ideally React Native. As far as I am aware all the existing tutorials use anonymous user authentication and in the limited event there is an email/password authentication tutorial it is not a production level use case (i.e., without email verification, password reset, etc.).I think these would be extremely valuable for the community. If there are resources out there I am somehow missing, beyond what is in the current documentation, direction would be greatly appreciated.Cheers!",
"username": "Jason_Tulloch"
},
{
"code": "",
"text": "I will share other tutorials I find over time, but wanted to make sure everyone in the community knew that a tutorial is available for Facebook Authentication. I still think it would be extremely helpful if more detail was added to each authentication method in the MongoDB Realm documentation, but I hope this helps others who keep trying to figure out how to integrate authentication methods.Also, just to be clear, I followed this tutorial and was able to get FB Auth working in my application.Facebook Authentication - Implementing Facebook Authentication into Your React Native App using Atlas App Services and Realm | by Daphne Levy | Realm Blog | MediumHope this helps someone!",
"username": "Jason_Tulloch"
}
] |
Email/Password, Facebook, Google, etc. Authentication Resources
|
2021-12-25T03:10:21.541Z
|
Email/Password, Facebook, Google, etc. Authentication Resources
| 2,667 |
null |
[
"app-services-user-auth"
] |
[
{
"code": "",
"text": "I have a React Native mobile app and a React web app both using Realm authentication to log my users in.I would like my users to be logged out automatically when their password is reset or in cases where I disable or revoke all sessions for that user from the Realm UI.I vaguely understand that refresh tokens may have something to do with that?How could I go about enforcing user logout?",
"username": "Laekipia"
},
{
"code": "",
"text": "Hi Natacha,How could I go about enforcing user logout?Please see documentation on User.logOut() for the React SDK.As mentioned in the User Sessions article it will do the following:You can also revoke the user session from the UI or CLI as you mentioned which will require them to log in again.Hope that helps.Regards",
"username": "Mansoor_Omar"
},
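For reference, a small sketch of calling logOut from the JavaScript/React SDK; the app variable is assumed to be an already-initialised Realm.App instance:

async function forceLogOut(app) {
  const user = app.currentUser;
  if (user) {
    await user.logOut();   // removes local session information and invalidates the refresh token
  }
}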
{
"code": "sharedUsersharedUsersharedUser",
"text": "Hi @Mansoor_Omar, thanks for your reply. Apologies if my initial query was a bit vague. I’ll try to be more specific.In my system, a company is given a realm user (called sharedUser) with credentials that all their staff can use to log in to a mobile app. Company admins can access a web app portal and change the password for this sharedUser. I would like this to automatically log out every sharedUser on their mobile devices. They’ll then have to enter the newly changed password to log back in.I understand that when an admin changes the password for sharedUser, this does something in MongoDB Realm backend to changes the refresh token for this user.On my Realm Sync mobile app, how can I listen to changes in the refresh token (after the sharedUser password has been changed) so I can trigger the User.logOut()?Apologies if I’m missing something obvious here.",
"username": "Laekipia"
}
] |
Automatically log out a user after password reset or all sessions revoked
|
2022-05-11T11:28:54.846Z
|
Automatically log out a user after password reset or all sessions revoked
| 3,151 |
null |
[] |
[
{
"code": "",
"text": "Hi. Who knows how to check the current setting of TTL?",
"username": "Steven_Yu"
},
{
"code": "expireAfterSecondsdb.collection.getIndexes()test> db.eventlog.createIndex(\n { \"lastModifiedDate\": 1 },\n { expireAfterSeconds: 7200 }\n)\nlastModifiedDate_1\n\ntest> db.eventlog.getIndexes()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { lastModifiedDate: 1 },\n name: 'lastModifiedDate_1',\n expireAfterSeconds: 7200\n }\n]\ncollModtest> db.runCommand({\n 'collMod': \"eventlog\", \n index: {\n keyPattern: { \"lastModifiedDate\": 1 },\n expireAfterSeconds: 3600\n }\n})\n{\n expireAfterSeconds_old: Long(\"7200\"),\n expireAfterSeconds_new: Long(\"3600\"),\n ok: 1\n}\n",
"text": "Welcome to the MongoDB Community @Steven_Yu !Time-to-Live (TTL) indexes are defined per-collection based on a provided expireAfterSeconds value. By default collections do not have a TTL index.You can use the db.collection.getIndexes() helper in the MongoDB Shell to get index information, for example:You can modify the expiry for a TTL index using the collMod command, for example:If that isn’t the information you are looking for, please provide some further details on your environment:Thanks,\nStennie",
"username": "Stennie_X"
}
] |
How to check current setting of TTL
|
2022-05-16T08:25:26.530Z
|
How to check current setting of TTL
| 5,661 |
null |
[
"kafka-connector"
] |
[
{
"code": "{\n\"name\":\"mongo-DB\",\n\"connector.class\":\"com.mongodb.kafka.connect.MongoSourceConnector\",\n\"tasks.max\":\"1\",\n\"connection.uri\":\"\",\n\"database\":\"DB\",\n\"copy.existing\":\"true\",\n\"copy.existing.namespace.regex\":\"DB.coll1$|DB.coll2$|DB.coll3$|DB.coll4\",\n\"topic.namespace.map\":\"{\\\"DB.coll1\\\\\\\" : \\\"topic1\\\",\\\"DB.coll2\\\\\\\" : \\\"topic2\\\",\\\"DB.coll3\\\\\\\" : \\\"topic3\\\",\\\"DB.coll4\\\\\\\" : \\\"topic4\\\"}\",\n\"poll.max.batch.size\":\"1000\",\n\"poll.await.time.ms\":\"5000\",\n\"pipeline\":\"[{\\\"$match\\\":{\\\"ns.coll\\\": {\\\"$regex\\\": \\\"\\/^(DB.coll1|DB.coll2|DB.coll3|DB.coll4)$\\/\\\"}}}]\",\n\"batch.size\":\"1\",\n\"change.stream.full.document\":\"updateLookup\",\n\"key.converter\":\"org.apache.kafka.connect.storage.StringConverter\",\n\"value.converter\":\"org.apache.kafka.connect.storage.StringConverter\",\n\"key.converter.schemas.enable\":\"false\",\n\"value.converter.schemas.enable\":\"false\",\n\"publish.full.document.only\":\"true\"\n} \nINFO Opened connection [connectionId{localValue:7, serverValue:283461}] to DB (org.mongodb.driver.connection:71)\n[2021-12-05 10:59:14,616] INFO Opened connection [connectionId{localValue:8, serverValue:283462}] to DB (org.mongodb.driver.connection:71)\n[2021-12-05 10:59:16,021] INFO Copying existing data on the following namespaces: [ecaf-staging.augmentPlanRelationship, ecaf-staging.augmentPlan, ecaf-staging.device, ecaf-staging.location] (com.mongodb.kafka.connect.source.MongoCopyDataManager:104)\n[2021-12-05 10:59:16,035] INFO Started MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:203)\n[2021-12-05 10:59:16,036] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:225)\n[2021-12-05 10:59:16,386] INFO Opened connection [connectionId{localValue:9, serverValue:283463}] to DB (org.mongodb.driver.connection:71)\n[2021-12-05 10:59:16,394] INFO Opened connection [connectionId{localValue:10, serverValue:283464}] to DB (org.mongodb.driver.connection:71)\n[2021-12-05 10:59:24,042] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n[2021-12-05 10:59:31,037] INFO Shutting down executors (com.mongodb.kafka.connect.source.MongoSourceTask:604)\n[2021-12-05 10:59:31,037] INFO Finished copying existing data from the collection(s). 
(com.mongodb.kafka.connect.source.MongoSourceTask:611)\n[2021-12-05 10:59:31,038] INFO Watching for database changes on 'ecaf-staging' (com.mongodb.kafka.connect.source.MongoSourceTask:677)\n[2021-12-05 10:59:31,066] INFO Resuming the change stream after the previous offset: {\"_data\": \"8261AC9B83000023282B0229296E04\"} (com.mongodb.kafka.connect.source.MongoSourceTask:415)\n[2021-12-05 10:59:34,043] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n[2021-12-05 10:59:44,044] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n[2021-12-05 10:59:54,045] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n[2021-12-05 11:00:04,045] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n[2021-12-05 11:00:14,053] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n[2021-12-05 11:00:24,053] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n[2021-12-05 11:00:34,054] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n[2021-12-05 11:00:44,055] INFO WorkerSourceTask{id=mongo-ecaf-staging-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)\n",
"text": "Am using mongodb kafka source connector v1.6… kafka connect is running in distributed mode The problem is message from mongo db is not published to respective kafka topic…\nAm using topic.namespace.map config… in logs also I see no error… below is the config filelogs:",
"username": "karthick_raina"
},
{
"code": "",
"text": "Did you find any solution? I am having very similar issue. I also see messages in the logs like:\nINFO Opened connection [connectionId{localValue:21, serverValue:3339}] to\nmyserver.host.name:1025 (org.mongodb.driver.connection:71)\nCopying existing data on the following namespaces: [myDb.myCollection]\nfollowed a short time later by:\nFinished copying existing data from the collection(s).Yet, no topic is created in Kafka / no messages. Any suggestions/thoughts?",
"username": "KURT_SCHWANZ"
},
{
"code": "{\n \"name\": \"mdbsrc\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"tasks.max\": \"1\",\n \"connection.uri\": \"mongodb://mongodb:27017\",\n \"database\": \"DB\",\n \"copy.existing\": \"true\",\n \"copy.existing.namespace.regex\": \"DB.coll[1-4]$\",\n \"topic.namespace.map\": \"{\\\"DB.coll1\\\" : \\\"topic1\\\",\\\"DB.coll2\\\" : \\\"topic2\\\",\\\"DB.coll3\\\" : \\\"topic3\\\",\\\"DB.coll4\\\" : \\\"topic4\\\"}\",\n \"pipeline\": \"[ { $match: { \\\"ns.coll\\\": { $regex: /^coll[1-4]$/ } } } ]\",\n \"poll.max.batch.size\": \"1000\",\n \"poll.await.time.ms\": \"5000\",\n \"batch.size\": \"1\",\n \"change.stream.full.document\": \"updateLookup\",\n \"key.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"key.converter.schemas.enable\": \"false\",\n \"value.converter.schemas.enable\": \"false\",\n \"publish.full.document.only\": \"true\"\n }\n}\n$matchns.collns.db\\\\",
"text": "I think there are just some small issues with the connector configuration of yours.\nGive this a try which should hopefully work at your end too.Most important difference to your example is that if you use $match for ns.coll this refers to the collection name only NOT the combination of db+coll name. If you want to also match db name you have to add a match against ns.db as well. Also your topic mapping config contains additional \\\\s which would not match and thus lead to default topic namings on kafka side. Also I simplified the regexp a bit because I’m lazy with typing ",
"username": "hpgrahsl"
},
{
"code": "",
"text": "Solution to my specific issue was, to create the topic manually. I was assuming it would auto-create but, well… Anyway, once I created the topic, my connector worked.",
"username": "KURT_SCHWANZ"
},
{
"code": "",
"text": "I put in the following lines and topic auto-created when data flows in:topic.creation.enable: “true”\ntopic.creation.default.replication.factor: “-1”\ntopic.creation.default.partitions: “-1”",
"username": "Raymond_Lai"
},
{
"code": "",
"text": "Oh - I will have to try that! Thanks for the info.",
"username": "KURT_SCHWANZ"
}
] |
Mongo-kafka source connector not shipping data to kafka topic
|
2021-12-19T07:34:33.829Z
|
Mongo-kafka source connector not shipping data to kafka topic
| 5,586 |
[
"aggregation",
"compass",
"mdbw22-hackathon",
"mdbw-hackhelp"
] |
[
{
"code": "",
"text": "I am watching the video Hackathon APAC & EMEA Session - GDELT Data - Geofencing & Creating notifications MongoDBin it they went to http://geojson.io to create a JSON with the coordinates of Scotland to use in an Aggregation Pipeline in MONGOBB COMPASS.I created a Polygon with the Spain and Portugal countries.When I create the $match Stage in MONGOBB COMPASS an error message is shows saying:longitude/latitude is out of bounds, lng: -716.638 lat: 42.391\nmapSpainAndPortugal890×922 120 KB\n\nlongitude latitude is out of bounds385×711 47.5 KB\nI also tried create a polygon of Scotland, but the coordinates that I get are very different to what they get in the video.I get values around -720 and they get values between -1 and -6 for a similar region.\nMyScotlandMap1211×963 232 KB\n\nVideoScotlandMap1920×908 92.5 KB\nWhat is wrong in http://geojson.io?Thanks",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "In MongoDB a GeoJSON point is recorded in Lat/Long order. Your polygons are correct you just need to reverse the coordinates in in GeoJSON point for MongoDB.",
"username": "Joe_Drumgoole"
},
{
"code": "\"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [\n 3.8891601562499996,\n 42.47209690919285\n ],\n [\n -3.0322265625,\n 43.691707903073805\n ],\n [\n -10.70068359375,\n 44.37098696297173\n ],\n [\n -10.56884765625,\n 35.871246850027966\n ],\n [\n -1.1865234375,\n 36.13787471840729\n ],\n [\n 4.81201171875,\n 39.842286020743394\n ],\n [\n 3.8891601562499996,\n 42.47209690919285\n ]\n ]\n ]\n }\n",
"text": "Hi @Manuel_Martin,I’m afraid I have no idea why geojson.io is giving you those numbers! I’ve just created a similar polygon and got the following numbers:\nScreenshot 2022-05-16 at 08.41.411194×936 99.5 KB\nHope this helps!Mark",
"username": "Mark_Smith"
},
{
"code": "-180180-9090",
"text": "In MongoDB GeoJSON points are Longitude then latitude, same as in geojson.io - so the pairs are in the correct order.The problem here is that -720 isn’t a valid longitude or latitude value - it’s just nonsense.",
"username": "Mark_Smith"
},
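For illustration, a hedged mongosh sketch of a $match stage using such a polygon; the location field name is a placeholder and the rounded coordinates come from the polygon shown above:

db.events.aggregate([
  { $match: {
      location: {
        $geoWithin: {
          $geometry: {
            type: "Polygon",
            coordinates: [[
              [ 3.889, 42.472 ],     // [ longitude, latitude ]
              [ -3.032, 43.692 ],
              [ -10.701, 44.371 ],
              [ -10.569, 35.871 ],
              [ 3.889, 42.472 ]      // the ring must close on the first point
            ]]
          }
        }
      }
  } }
])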
{
"code": "",
"text": "Thank you very much , I appreciate it",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Aggregation Pipeline in MongoDB Compass longitude/latitude is out of bounds (http://geojson.io)
|
2022-05-15T21:50:59.101Z
|
Aggregation Pipeline in MongoDB Compass longitude/latitude is out of bounds (http://geojson.io)
| 4,148 |
|
[
"java",
"sharding",
"performance"
] |
[
{
"code": "",
"text": "Hello,We have 10 shards of 3 members (pri+sec+sec hidden) hosted on GCP’s instances. The first shard is on SSD, the other 9 on regular disks.We’ve been running mongod 4.0 for quite some time without any issues but then decided to upgrade to mongodb 5.0. And ever since, we have big performance issues.Randomly, but quite often, a shard will slow down and the slow queries logged on this instance will go in the thousands for 5/10/30mn/1h.\nimage1813×304 100 KB\nWe managed to establish a relation between these slow downs and the balancer. Usually the slow down period correspond to a chunk being moved from/to the server. These transfers usually take a few seconds, but sometime last for 40mn.When we stopped the balancer, the issue was no longer occurring.But as it’s not a long term solution, we decided to rollback to mongodb 4.4 but ended up having the same behavior.We then rolled back to mongodb 4.2 and noticed an amelioration. We still had some slow queries, but not as often and not for as long. However last week, we’ve had nearly a full day of slow queries on all shards, and once again, stopping the balancer stopped the issue.\nimage1818×311 109 KB\nWe also noticed an increase of logs regarding the WiredTiger’s checkpoints with an increase in delay going up to 200/250s\nimage1903×664 165 KB\nBut we’re now kind of stuck and don’t know what to do / what to monitor.In parallel to all these upgrades/downgrades we tried some upgrades/downgrades of the java driver as we upgraded from 3.9 to 4.5, but then noticed 4.3+ versions of the driver had known performance issues we rolled back to the version 4.2 of the driver, but with no luck.Our only option at the moment is to rollback to the 4.0 to see if we manage to fallback to our initial calm state, but because of the 4.4 upgrade, downgrade to 4.0 does not seem possible unless we rebuild all our nodes from scratch…Is there anything we can provide to help find & fix the issue?Random notes:",
"username": "Michael_Longo"
},
{
"code": "",
"text": "Some other thing we noticed (but it might be a consequence), as far as we can see, every time a node struggle, there is an increase in the number of active processes and tcp connections\nimage1075×265 26.4 KB\n\n\nimage2163×922 201 KB\n",
"username": "Michael_Longo"
}
] |
Performance issues on mongo 4.2, 4.4 and 5.0
|
2022-05-16T08:03:10.249Z
|
Performance issues on mongo 4.2, 4.4 and 5.0
| 3,847 |
|
null |
[] |
[
{
"code": "",
"text": "My issue concerns Realm Sync on iOS - SDK v10.20.0 on a free cluster M0 (which is my dev env, my prod env is on a premium tier).When I update the server’s schema by adding a new field or a new collection, I have noticed an undocumented behavior :I have tried to do this schema update using the “Enter Dev sync mode” and by using Realm Dashboard “Data Access > Rules > Add new collection”.",
"username": "Jerome_Pasquier"
},
{
"code": "",
"text": "Hi @Jerome_Pasquier – have you taken a look at this article… Migrating Your iOS App's Synced Realm Schema in Production | MongoDB ?If that doesn’t fix things in, could you provide a before and after schema / Object definition that triggers this, and I’ll try to reproduce it.Best Regards, Andrew.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thank you @Andrew_Morgan, I didn’t know there was such an article. The destructive part is very interesting.However, my concern is related to additive changes. According to your article, additive changes should not pause or stop the current users from syncing.What I have realized is that it does it.I have added a field to a document while having my device connected to Realm Sync - the same way as your article. Then I do save new values on the device, I check the cloud DB with MongoCompass => The changes are not synced.I have to wait for 2-3 minutes to see the changes finally on the cloud (or restart the client that will create a new connection).I have cross-posted this question on Github Upon updating Realm Sync rules/schema, mobile clients don't sync anymore · Issue #7580 · realm/realm-swift · GitHub and apparently someone observe the same behaviour on their end.I didn’t try calling a Realm function during this “down time” yet, so I don’t know if this will work.",
"username": "Jerome_Pasquier"
}
] |
Updating Realm Sync Schema cuts client's connection
|
2021-12-20T12:00:34.043Z
|
Updating Realm Sync Schema cuts client’s connection
| 2,822 |
null |
[
"connecting",
"mongodb-shell"
] |
[
{
"code": "",
"text": "Hi!\nToday when i tryed to connect to my cluster by mongoshell, i received the error below:connecting to: mongodb://cluster0-shard-00-00.cxmga.mongodb.net:27017,cluster0-shard-00-01.cxmga.mongodb.net:27017,cluster0-shard-00-02.cxmga.mongodb.net:27017/productions?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=Cluster0-shard-0&ssl=true\n{“t”:{“$date”:“2021-03-31T14:21:12.044Z”},“s”:“I”, “c”:“NETWORK”, “id”:4333208, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM host selection timeout”,“attr”:{“replicaSet”:“Cluster0-shard-0”,“error”:“FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \"nearest\" } for set Cluster0-shard-0”}}*** You have failed to connect to a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.Error: connect failed to replica set Cluster0-shard-0/cluster0-shard-00-00.cxmga.mongodb.net:27017,cluster0-shard-00-01.cxmga.mongodb.net:27017,cluster0-shard-00-02.cxmga.mongodb.net:27017 :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1I’ve unninstall and reinstall, but without success.\nThe error persists…could you help meThanks",
"username": "Joaquim_Pedro_27648"
},
{
"code": "",
"text": "Welcome to the Community!What is the status of your cluster\nAre all nodes up and running\nWhat type of connect string are you using?SRV or a different\nHave you whitelisted your IP",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Ramachandra_Tummala\nThanks for answer, i’ve found the problem…it was my ISP that was blocking the port 27017, today is working fine.The issue is solved Thanks a lot!",
"username": "Joaquim_Pedro_27648"
},
{
"code": "",
"text": "thanks , I also have same issue, I changed network and its working fine",
"username": "Tohsib_Qureshi"
}
] |
Failed to connect to a MongoDB Atlas cluster
|
2021-03-31T14:50:35.282Z
|
Failed to connect to a MongoDB Atlas cluster
| 7,276 |
null |
[
"aggregation",
"queries",
"crud"
] |
[
{
"code": "db.collection.insertMany([\n { '_id': 1, 'items': ['abc', 'ab']},\n { '_id': 2, 'items': ['abc', 'bc']},\n]) \ndb.collection.find({\n \"items\":{ $regex : /^A/}}\n})\n$all$and",
"text": "Hello,Apologies because I didn’t know what category this topics belongs to.Here’s the context, I have elements in database:I want to retrieve elements with ALL items matching my regex, in this case, I want it to match if first letter is an ‘A’I tried:But it seems that it matches the second element in our example also, because of the item matches the regex, and I need both to match.I tried other operator such as $all and $and but I couldn’t make it.Thanks in advance for your help",
"username": "Alexandre_Baron"
},
{
"code": "$regexAitemsaa{ <field>: <value> }<value>{ ‘_id’: 1, ‘items’: [‘abc’, ‘ab’]}\nitemsa/// Sample documents in the database\narraydb> db.collection.find()\n[\n { _id: 1, items: [ 'abc', 'ab' ] },\n { _id: 2, items: [ 'abc', 'bc' ] }\n]\n\n/// Aggregation operation to get all documents where all elements in the `items` array match the regex\ndb.collection.aggregate([\n {$match: {items: /^a/}}, \n {$addFields: { \n xx: {$subtract: [\n {$size: '$items'},\n {$size: {$filter: {\n input: '$items',\n cond: {$regexMatch: {input: '$$this', regex:/^a/}}\n }}}\n ]}\n }},\n {$match: {xx: 0}},\n {$unset: 'xx'}\n])\n\n/// Output\n[ { _id: 1, items: [ 'abc', 'ab' ] } ]\n$match/^a/$addFieldsxxitemsitemsxxitemsa$match{xx: 0}xx$unsetxx$match",
"text": "Hi @Alexandre_Baron - Welcome to the community!Thanks for providing the sample documents and what you’ve tried so far I tried:\ndb.collection.find({\n“items”:{ $regex : /^A/}}\n})\nBut it seems that it matches the second element in our example also, because of the item matches the regex, and I need both to match.This is slightly surprising as the $regex value is an uppercase A whilst the sample documents contain items elements beginning with lowercase a which I would expect none of the documents to be returned. However, this could be a typo. In the case that this was a typo (perhaps a lowercase a in the query you’ve provided) when writing the post, the reason for both documents being returned is described in the Query an Array for an element documentation:To query if the array field contains at least one element with the specified value, use the filter { <field>: <value> } where <value> is the element value.In saying so, based off the description, I would assume your expected / desired output / returned document(s) would be:I.e, the only document that contains all elements of the items array that begin with an a. Correct me if I am wrong here.One method that may possibly work for you is to use an aggregation operation with the following stages shown in the example below:For your reference regarding the above pipeline, here is a brief description of what is occuring at each stage:For the above example aggregation, the first $match should be able to make use of indexes.Please note that this was just performed in my test environment with the sample documents provided. Please ensure you test this to see if it suits your use case and environment.If further assistance is required, please provide the following details:Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi,\nThanks a lot for the solution, and more importantly the explanation.\nYour assumptions were right regarding my request ",
"username": "Alexandre_Baron"
},
{
"code": "",
"text": "Glad to hear it and thanks for confirming the solution! ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to retrieve element only if the whole list of string matches a regex
|
2022-05-09T13:57:57.777Z
|
How to retrieve element only if the whole list of string matches a regex
| 5,172 |
[
"queries",
"data-modeling"
] |
[
{
"code": "",
"text": "Good day,is it possible in MongoDB to have default values in nested arrays or objects?I use the following schema for a nested object:\nnested_011110×444 53.5 KB\nHowever, when I retrieve this schema on the frontend, with “findOne”, “used” does not have a default value at all:I would like to prepopulate this array with a default value, if it is empty, to access it with the JS frontend without always checking if this array exists.Do you have any idea how I could achieve this, or if this would even be a good practice concerning data modelling?Thank you!Best regards",
"username": "Malte_Hoch"
},
{
"code": "",
"text": "Hi @Malte_Hochis it possible in MongoDB to have default values in nested arrays or objects?No, since MongoDB is a flexible schema database, it cannot assume that the structure of one document will be mirrored by another document, even in the same collection.Do you have any idea how I could achieve this, or if this would even be a good practice concerning data modelling?Having said that, some ODMs like Mongoose do allow you to define schemas and default values for a collection. See Mongoose defaults for example.Regarding data modeling, generally you design your schema according to how the data would be used instead of how they will be stored, so you would want your schema to be easy to query. If your nested array structure proves to be difficult to query, then you might want to revisit the schema design.You might also might to check out the MongoDB University course M320: Data modeling, but please note that this course assumes you’re somewhat familiar with basic MongoDB concepts already.Best regards\nKevin",
"username": "kevinadi"
},
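For example, a minimal Mongoose sketch of a schema-level default; the schema and field names here are placeholders rather than the schema from the screenshot:

const mongoose = require('mongoose');

const probeSchema = new mongoose.Schema({
  name: { type: String, required: true },
  used: { type: [Number], default: [] }   // "used" always comes back as an array, even when not supplied
});

const Probe = mongoose.model('Probe', probeSchema);

// new Probe({ name: 'p1' }).used  -->  []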
{
"code": "",
"text": "Hi @kevinadiThank you for your prompt and detailed reply. That makes definitely sense to me according your explanation!JavaScript in the frontend cannot do anything with an empty nested array. You could, of course, in the frontend either use ternary operators for each array, which blows up the code a bit, or send an empty nested array from the backend by checking if this nested array has any values.As you said: In the schema you only specify how the data set can and may look.The latter would be my current solution. Still not sure if this approach is a good style.",
"username": "Malte_Hoch"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Default value for nested arrays
|
2022-05-15T13:27:31.510Z
|
Default value for nested arrays
| 6,012 |
|
null |
[
"aggregation",
"java",
"views"
] |
[
{
"code": "",
"text": "hi All,\nI want to create ciew in Mongo from Java and also i want to ensure none of my vview stage gives memory exceeded error.So how can i create a view in Mongo from java with {“allowDiskUse”:true}",
"username": "Sanjay_Naik"
},
{
"code": "",
"text": "Hi there,Once SERVER-63208 is completed, allowDiskUse will be enabled by default, and that will apply to queries on views as well. Until then, you will have to specify allowDiskUse:true whenever you run a query (find or aggregate) against the view.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
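Until then, the option has to be passed per query. A mongosh sketch for the shape of the option (the view and field names are placeholders); the Java driver exposes an equivalent allowDiskUse option on its aggregate and find iterables:

// Aggregation against the view
db.myView.aggregate(
  [ { $sort: { total: -1 } } ],
  { allowDiskUse: true }
)

// Find against the view (MongoDB 4.4+)
db.myView.find({ total: { $gt: 0 } }).allowDiskUse()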
{
"code": "",
"text": "thanks a lot for the reply @Jeffrey_Yemin .Any idea when it will be completed?",
"username": "Sanjay_Naik"
},
{
"code": "",
"text": "@Jeffrey_Yemin so after https://jira.mongodb.org/browse/SERVER-63208 is completed then in view creation this issue of memory exceeded limit will not come right?",
"username": "Sanjay_Naik"
},
{
"code": "",
"text": "We’re hopeful that it will be included in the 6.0 release in June. Please watch the ticket for updates to the fix version.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "@Jeffrey_Yemin is the update done?",
"username": "Sanjay_Naik"
}
] |
Create a mongo View using Java with {"allowDiskUse":true}
|
2022-04-08T14:42:58.329Z
|
Create a mongo View using Java with {“allowDiskUse”:true}
| 3,769 |
[] |
[
{
"code": "",
"text": "Mongo Object Model reports that I have some InvalidSchemaError. So I think the problem regarding to my relationships between documentsHere is my documents\nimage1627×137 8.75 KB\nI tried to delete the One-to-One relationship in Events but it still generates the same error.I also tried to delete the relationships One-to-many in Customers and events field at the same time, and it also generates the errorI don’t know whether I delete the relationship correctly. I opened the “Expand relationships”, neither just clears the box nor changes to “{}” works",
"username": "Phu_An_Chau"
},
{
"code": "Failed to convert MongoDB document with configured schema during initial syncrealm-cli pullrealm-cli push",
"text": "Hi Phu,Is this the error you’re referring to?Failed to convert MongoDB document with configured schema during initial syncThis means that a document exists which did not comply with your current Sync Schema - the namespace and the document id will be specified in the error log within your Realm Application Logs.The error details will show you what the problem was, for instance if your sync configuration has specified the partition key to be a field named “user_id” and a document is created without a value for this field, it will throw the error above and said document will not be syncable until it is corrected.To correct this document and make it syncable you will need to issue a replace() command from MongoShell, or update it using Atlas Data Explorer in order to bring it in compliance with the sync schema.I don’t know whether I delete the relationship correctly. I opened the “Expand relationships”, neither just clears the box nor changes to “{}” worksRegarding this issue, please try first creating Rules for your collections and then try making the changes to the relationships.If you’re still getting an error try making the change using the latest realm-cli version:Regards",
"username": "Mansoor_Omar"
}
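As a rough illustration of that correction (collection name, partition key, and values below are hypothetical), a non-compliant document can be patched from mongosh so that it matches the sync schema again:

// Hypothetical: sync expects a string "user_id" partition key that this document is missing.
db.tasks.updateOne(
  { _id: ObjectId("5f5f5f5f5f5f5f5f5f5f5f5f") },
  { $set: { user_id: "some-partition-value" } }
);

A replaceOne() with a fully compliant document works just as well, as mentioned above.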
] |
Error when trying to remove relationship of schemas
|
2022-05-15T02:30:22.401Z
|
Error when trying to remove relationship of schemas
| 1,876 |
|
null |
[] |
[
{
"code": "",
"text": "Hi,I am trying to explore if it is possible for a Realm Function to include app = New Realm.App(“appid”) ?Can I manually upload an external dependency based on the realm npm package or is it already accessible.I’m looking to hide the app id from the client side, and one way I can think of (if possible) is to have the client side fetch a webhook on Realm which then triggers a long function including a Realm Instance.",
"username": "5ff25d3440814e198ead77c273f7525"
},
{
"code": "client_app_idNew Realm.App()exports = async function() {\n \n const response = await context.http.get({ \n url: \"https://realm.mongodb.com/api/admin/v3.0/groups/<project_id>/apps/<realm_app_hexid>\",\n headers:{\n \"Content-Type\": [ \"application/json\" ],\n \"Authorization\": [ \"Bearer <admin_api_token>\" ]\n}\n });\n return EJSON.parse(response.body.text());\n};\n",
"text": "Hi,Creating a Realm instance in the function is not possible but you could use the Realm Admin API to retrieve the application definition for the app in question to send to the client. The response will contain the client_app_id you need for New Realm.App().The function can use context.http.get() to make the request to the API.Example of function:Regards",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Can a Realm Instance be created in a Realm Function?
|
2021-11-01T10:01:39.041Z
|
Can a Realm Instance be created in a Realm Function?
| 2,008 |
null |
[
"aggregation",
"swift"
] |
[
{
"code": "",
"text": "Is there any way similar to $lookup (aggregation) with RealmDB on client side?",
"username": "Basavaraj_KM1"
},
{
"code": "",
"text": "Hi Basavaraj,Please see article below on Aggregation Stage availability in Realm:Hope that answers the question.Regards",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "This will not work on local realmDB with Realm Database SDK(Swift) right? Works only with MongoDB Realm.",
"username": "Basavaraj_KM1"
},
{
"code": "",
"text": "The documentation I linked is relevant to performing queries on MongoDB Cloud which can be initiated from the client e.g. calling a function from the SDK.This will not work on local realmDB with Realm Database SDK(Swift) right? Works only with MongoDB Realm.It sounds like you want to filter the data in the local realm?\nCould you provide an example of what you want to achieve using lookup with regard to this?Regards",
"username": "Mansoor_Omar"
}
] |
Is there any way similar to $lookup (aggregation) with RealmDB on client side?
|
2022-05-12T13:32:46.972Z
|
Is there any way similar to $lookup (aggregation) with RealmDB on client side?
| 1,961 |
null |
[
"swift"
] |
[
{
"code": "\n\n/// NOTE: Code shown is not actual code, rather recreated for this post.\n\t\n//\n// Object+Ext(DeepUnManageRealm).swift\n// SAMPLE CODE\n//\n// Created by Reveel on 5/12/22.\n// Copyright © 2022 Reveel® All rights reserved.\n//\n\n\nimport Foundation\nimport RealmSwift\n\n\n/// A Protocol for Unmanaging Realm Objects (Deep)\nprotocol UnmanageRealmObject: AnyObject {\n func unmanageDeep(_ newPartitionValue: String?, _ furtherPartitioning: Bool) -> Self\n// END\n// END\n} // END of Protocol for 'UnmanageRealmObject'\n\n// NOTE: Extension for Unmanaging Realm Objects (Deep) - For 'Object'\nextension Object: UnmanageRealmObject {\n \n\t/// This method will unmanage a Realm object deep and update partitioning value WITHOUT forcing to further partitioning.\n\t/// - Parameters:\n\t/// \t- newPartitionValue: pass 'nil' to unmanage deep without changing partition value.\n\tinternal func unmanageAndUpdatePartitionsDeep(_ newPartitionValue: String?) -> Self {\n\t\tif let haveSourcePartition = self.realm?.configuration.syncConfiguration?.partitionValue {\n\t\t\tif haveSourcePartition.stringValue != newPartitionValue {\n\t\t\t\treturn unmanageDeep(newPartitionValue, false)\n\t\t\t}\n\t\t\telse {\n\t\t\t\treturn unmanageDeep(haveSourcePartition.stringValue, false)\n\t\t\t}\n\t\t}\n\t\telse {\n\t\t\tfatalError(\"During DEV - crashing bc NO Realm or Partition on source object. Check method: '\\(#function)'\", file: #file, line: #line)\n\t\t}\n\t} // End of 'unmanageAndUpdatePartitionsDeep' method\n\n\t/// This method will unmanage a Realm object deep and update partitioning value -AND- will force apply Further Partition practice.\n\t/// - Parameters:\n\t/// \t- newPartitionValue: pass 'nil' to unmanage deep without changing partition value.\n\tinternal func unmanageAndUpdateFurtherPartitioningDeep(_ newPartitionValue: String?) -> Self {\n\t\t\n\t\t// Custom Code for needs to further partition and do it auto/conditionally\n\t\t\n\t\treturn unmanageDeep(newPartitionValue, true)\n\t} // End of 'unmanageAndUpdateFurtherPartitioningDeep' method\n\n\n \n\t/// This method will unmanage a Realm object deep\n\t/// - Parameters:\n\t/// \t- newPartitionValue: pass 'nil' to unmanage deep without changing partition value.\n\t/// \t- furtherPartitioning: pass 'true' to force apply Further Partition practice.\n\tinternal func unmanageDeep(_ newPartitionValue: String?, _ furtherPartitioning: Bool = false) -> Self {\n\t \n\t\tlet unmanaged = type(of: self).init()\n\t\tlet partitioningKey = \"partitioningKey\"\n\t\tlet furtherPartitionPropertyName = \"YOUR-NAME\"\n\n\t\tfor property in objectSchema.properties {\n\t\t \n\t\t guard var propertyValue = value(forKey: property.name) else { continue; }\n\t\t\t\n\t\t\tvar processedFurtherPartitioning: Bool = false\n\t\t\t\n\t\t\tif property.isArray {\n\t\t\t\t// NOTE: For Realm's List<>.\n\t\t\t\tlet doUnmanage = propertyValue as? UnmanageRealmObject\n\t\t\t\tunmanaged.setValue(doUnmanage?.unmanageDeep(newPartitionValue, furtherPartitioning), forKey: property.name)\n\t\t\t}\n\t\t\telse if property.isMap || property.isSet {\n\t\t\t\t// NOTE: For Realm's Map (aka: Dictionary) -OR- Set.\n\t\t\t\tunmanaged.setValue(propertyValue, forKey: property.name)\n\t\t\t}\n\t\t\telse if property.type == .object {\n\t\t\t\t // TODO: Test if this handles EmbeddedObjects by Reference (i.e. 
'related' by 'foreign_key') when not fully embedded.\n\t\t\t\t // NOTE: For Realm's Object and assuming it handles EmbeddedObject (when such is a reference via foreign_key in Schema).\n\t\t\t\tlet doUnmanage = propertyValue as? UnmanageRealmObject\n\t\t\t\tunmanaged.setValue(doUnmanage?.unmanageDeep(newPartitionValue, furtherPartitioning), forKey: property.name)\n\t\t\t}\n\t\t\telse {\n\t\t\t \n\t\t\t\t if property.name == partitioningKey, let haveNewPartition = newPartitionValue, partitioningKey != haveNewPartition {\n\t\t\t\t\t let oldPartition = propertyValue\n\t\t\t\t\t propertyValue = haveNewPartition\n\n\t\t\t\t\t processedFurtherPartitioning = RealmPartitionServices.shared.checkToApplyFurtherPartition(property: property)\n\t\t\t\t } // End of IF-\"have New Partition Value\"\n\n\t\t\t\t \n\t\t\t\t if furtherPartitioning, property.name == furtherPartitionPropertyName {\n\t\t\t\t\t let oldValue = propertyValue\n\t\t\t\t\t \n\t\t\t\t\t if oldValue is String {\n\t\t\t\t\t\t if !processedFurtherPartitioning {\n\t\t\t\t\t\t\t propertyValue = RealmPartitionServices.shared.createFurtherPartitioning(value: propertyValue as! String)\n\t\t\t\t\t\t }\n\t\t\t\t\t }\n\t\t\t\t\t else {\n\t\t\t\t\t\t // means non-standard type for further-partitioning for Reveel, which for DEV we crash but for PROD we Error-Log\n\t\t\t\t\t\t fatalError(\"During DEV - crashing bc UN-KNOWN OLD-Type for Further-Partitioning. Check method: '\\(#function)'\", file: #file, line: #line)\n\t\t\t\t\t }\n\t\t\t\t } // End of IF-\"Update for Further Partitioning\"\n\t\t\t \n\t\t\t\tunmanaged.setValue(propertyValue, forKey: property.name)\n\t\t\t}\n\t\t} // End of for-in-LOOP\n\n\t\treturn unmanaged\n } // End of internal-method 'unmanageDeep', For: overrided method\n\t\n// END\n// END\n} // END of 'Object' EXTENSION', For: 'UnmanageRealmObject'\n\n",
"text": "I have been hitting an issue when copying managed objects to a new Realm. I have a legacy Realm app being converted to MongoDb Realm. I know to use ‘.create()’ for copying to a new Realm. In legacy Realm there were no partition values for Realms, so when a copy was made the nested referenced Objects or List<>s (aka: relations via the ‘foreign_key’) were not a problem. I am finding that unless I update the partition values (deep) the local data will not sync to the cloud. Of course, the data writes fine locally but does not sync to the cloud.My issue is regarding Objects that are nested with Objects or List<>s - that would be related (by use of the foreign_key relationship). Also, this is for Object type not EmbeddedObjects. Since each related Object or Array of Objects would have a partition key and value. Upon the copy (i.e., the target) such related objects do not have the new partition value, rather it holds the old (source) value.To solve this, I have created some services and an Extension to Realm’s Object in order to unmanage them from Realm before writing the copy. This seems to work so far, though I have not completed all thorough testing yet. I want to make sure that I am not missing something (i.e., a better way to handle this) or do I need build more for this idea to work -please advise. If I am on the right path, then this code can help others.I am including the code for my Swift Extension. The code is somewhat re-created for this post (so not exact) and I am not including the related services I built to complete what I need (since it doesn’t matter for the issue). FYI - I have a design to “further partition” (due to the data denormalization) when I need to create duplicate objects across Realms that would cause a primary-key duplication issue (within same Collection) for MongoDB.",
"username": "Reveel"
},
{
"code": "@Persisted(primaryKey: true) var _id: ObjectIdlet someEmbddedObjectProperty = parentObject.embeddedObject.propertyclass MyOldSchoolPersonClass: Object {\n @Persisted var name = \"\"\n}\nclass MyCoolSyncedPersonClass: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var _partitionKey = \"\" //showing this for clarity\n @Persisted var name = \"\"\n\n convenience init(oldPerson: MyCoolSyncedPersonClass, partition: String) {\n self.init()\n self._partitionKey = partition\n self.name = oldPerson.name\n}\n\n",
"text": "I kinda get the question but some of it is unclear.My issue is regarding Objects that are nested with Objects or List<>sThose aren’t really nested. Lists (in the case) are references to other managed objects - all of those objects will need to have objectId properties@Persisted(primaryKey: true) var _id: ObjectIdsuch related objects do not have the new partition value, rather it holds the old (source) valueIf the object doesn’t have the new partition value, it won’t be part of that partition. Can you clarify? Do you want it to be in that partition or perhaps you want those objects in their own discreet partition?From your code:// TODO: Test if this handles EmbeddedObjects by Reference (i.e. ‘related’ by ‘foreign_key’) when not fully embedded.An embedded object is either embedded or it doesn’t exist; there are no other options there. Also, embedded objects can’t be referenced per se - they only exist within their parent object, so you access them through their parent object with dot notationlet someEmbddedObjectProperty = parentObject.embeddedObject.propertyAs you know, an object can only be managed by one Realm - so making an unmanaged copy of an object is one path you can go down but a copy of an object will still be that object.It sounds more like you want to migrate your objects from an old-style object to a new MongoDB Realm Managed object. Perhaps that’s the point to your code?Since there is no migration in Sync, what we did was pass the old object (via init) to a different new synced object and populate it based on the old objects properties.the old object above is passed to the new managed objectNo idea if any of that will help but I think we need more clarity on what the issue is (and that’s a lot of code in your question for us to parse through - does it work for your use case?). It might be good to add your models to the question so we understand what you’re looking at as well. (if you add them, please keep them small and only include relevant properties)",
"username": "Jay"
},
{
"code": ".create().create()",
"text": "Hi there Jay - thanks for offering to help!Let me start off by clarifying some things and then I will answer some of the questions you posed.Clarifications:\n• This is not for a data migration - rather normal usage by users.\n• I had mentioned these nested are through relations (aka references) not EmbeddedObjects, so yes related, which is through the primary-key (foreign_key). Additionally as an FYI, those related Objects & Array of Objects can have any allowed type of primary-key and does not have to only be an ObjectId.\n• Data needs to be denormalized for this use case in-hand. This is because it is not possible to use Sync Permissions at an object/document level via partition-based. Additionally, this case will not work with current (or even coming soon) version of Flexible-Sync either.\n• The issue is that Realm (on MongoDB) does not offer a way (that I’m aware of) to copy an Object to a new Realm that is managed by another Realm - when there are nested relations to Objects or Array of Objects. Since using ‘.create()’ only updates the top level and not deep. That’s the problem.\n• The goal is to be able copy across Realms and update the partition values for all the relationships (deep), so that it will sync to the cloud properly. Therefore, I am asking to know if there is an existing process (that I don’t know about) -OR- is this process (i.e, my Extension) to unmanage first in-order to update partition values and then copy to new Realm the only way for this to work? -This was not an issue in legacy Realm, hence thought it may already be solved [].Responses:My “TODO” in supplied codeIf the object doesn’t have the new partition value, it won’t be part of that partition. Can you clarify?RE code posted: does it work for your use case?As you know, an object can only be managed by one Realm - so making an unmanaged copy of an object is one path you can go down but a copy of an object will still be that object.`… so I am hoping you or anyone (e.g., Realm team) can please advise (quickly) on what to do - Thank you in advance!",
"username": "Reveel"
},
{
"code": "class Person: Object {\n var dogList = List<String> //you store the dogs primary key here?\n}\nclass Person: Object {\n var dogList = List<Dog> //or do you store the Dog object here?\n}\n.create",
"text": "That added a lot of clarity - a question remains though because we don’t know what the objects look.For simplicity, I will refer to Person and Dog where the Person has some kind of List relationship to Dogswhich is through the primary-key (foreign_key)…which can have any allowed type of primary-keydoes that mean you are creating “manual” relationships via a primary key (ObjectId, String, etc)?instead ofI ask as it can impact how the data is copied.If it’s the prior, the manual relationship maintains by just copying the Person object to a new person object. e.g. no deep copy needed.Whereas if it’s the latter, using .create is not a deep copy so that has be addressed separately.The other question is; for simplicity, are there existing “dog” objects that have no relationship back to a “person”? Or are all the dogs spoken for (e.g. no stray dogs; lol)",
"username": "Jay"
},
{
"code": "List<Dog>String.create()",
"text": "Hello Jay,To you use your example it would be the second one with ‘List<Dog>’, since ‘String’ is not Object type. I have been pretty specific on my posts that these are the type Object (cap on the ‘O’ for Realm type), or an Array of Objects.\n– Of course the models can be deeper - e.g., Dog could have an Object of ‘Breed’ within it - etc., etc.does that mean you are creating “manual” relationships via a primary key (ObjectId, String, etc)?– All Objects must have a primary-key and a partition value (if using partitions). The primary-key is the relationship for the references that are nested (Object or List). There is no “manual” relationship, these relationships are setup in the Schema. Basically whenever you are not using an EmbeddedObject this is how you have relationships for nested Objects or Array of Objects.Before we get too lost on tangents, I’d like to stay on focus here on the issue for a solution. Since ‘.create()’ will NOT update partition values deep on related Objects…\n• is there another Realm supported way to handle copying across Realms on MongoDB?\n• am I missing something?\n• shall I proceed as I first posted?Thank you",
"username": "Reveel"
},
{
"code": "var dogList = List<Dog>",
"text": "Thanks for the clarification. I was asking due to thisby use of the foreign_key relationshipbecause thisvar dogList = List<Dog>is not a foreign key relationship, it’s an object relationship.Hopefully someone will be able to provide an answer.",
"username": "Jay"
},
{
"code": "foreign_keyvar dogList = List<Dog>List<Dog>_idforeign_key",
"text": "Jay - Yeah let’s see if someone from Realm Team can chime in!I’d like to correct your recent comment. From an object view point it is a “To-Many” type relationship and which for the Schema (as mentioned) it is through the the ‘foreign_key’:var dogList = List<Dog>\nis not a foreign key relationship, it’s an object relationship.– in that example you provided ‘List<Dog>’ is a relationship in the Schema. It is for an Array of Objects (aka: “To-Many”) as you can see more on this Realm docs link. For an Object (aka: “To-One”) you can see this link. Both are setup in Realm’s Schema where the primary-key for an Object (i.e., ‘_id’) makes the relationship connection; this is done via the ‘foreign_key’ which makes that connection for such relations. Hope that helps.",
"username": "Reveel"
},
{
"code": "foreign_key",
"text": "As you know Realm objects are not required to have keys in all cases. e.g. local only realm objects do not need keys to have forward or even inverse relationships. Therefore that relationship is to the object, not the key (it’s an object database after all)We have no idea how your objects were structured before and after since they were not included in the question: Since you’re going from local to sync, the questions I presented are trying to get details on what those deep objects look like (and how the relationship was/is set up) instead of trying to decipher it from the code.I will close with this from the Realm Swift documentation which is what prompted the question about how the objects relationships were/are implemented:A relationship is when an object property references another objectIs there foreign_key in your actual Realm Swift object or is that the primary_key property? Perhaps including your models would clarify it for someone on the Realm Team.",
"username": "Jay"
},
{
"code": "foreign_keyforeign_key_id",
"text": "@Jay - This whole post is about syncing; please look at the links I provided in my last post (“To-Many” & “To-One”). The ‘foreign_key’ is part of the Schema and is used with the primary-key to connect to the related Objects. Therefore, the models by way of an SDK (i.e., iOS in this case) would not have a property of ‘foreign_key’, rather depend on ‘_id’ (aka: primary-key). This is needed for Realm to work with MongoDB and sync.Jay let’s please leave it here - as these posts will make it hard for someone to follow what is going here since this is all off topic. Thank you for trying though - I do appreciate that.",
"username": "Reveel"
}
] |
Sync Issues Copying Object Across Realms - Partition values not copying on deep relations
|
2022-05-12T23:34:15.380Z
|
Sync Issues Copying Object Across Realms - Partition values not copying on deep relations
| 2,896 |
null |
[
"mdbw22-hackathon",
"charts",
"mdbw-hackhelp"
] |
[
{
"code": "Add Dashboards > My Dashboard > Add Chart > Data Source > Choose a Data Source\nThere was an error when loading the data source fields\n",
"text": "I’m now watching Getting started with MongoDB & GDELT - APAC-EMEA Session MongoDB.Before I started the Hackathon I got a cluster in MongoDB and I deleted it and create a new one for the Hackathon.Now in the new cluster I want to create a chartBut the options to choose for a Data Source are the Databases of the cluster that I deleted and not from the database that I created for the HackathonIf I select one Database for the deleted cluster the following error is show:Why I can’t select the Data Source for the new cluster and why are the Data Sources of the deleted cluster still as an option for the Data Source?I also deleted the current Dashboard and create a new one but happens the same.",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "I tried again yesterday and today but the problem persists.I contacted with support using the Chat and they help me to solve the issue.If someone has the same issue, to fix it go to DashboardsDATA SOURCES > Add Data Source >And select your Data Source and the Collections you would like to show in the Graph.",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "Glad you got sorted and thanks for sharing the fix",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Thanks, I’m learning a lot from the videos of this Hackathon too, and discovering many features that MongoDb offers that are amazing in my opinion.",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Can not choose a Data Source for a Chart from my new Cluster
|
2022-05-10T20:52:52.113Z
|
Can not choose a Data Source for a Chart from my new Cluster
| 3,871 |
null |
[
"aggregation",
"node-js",
"compass",
"mdbw22-hackathon",
"mdbw-hackhelp"
] |
[
{
"code": "",
"text": "I watched the video Getting Started with GDELT MongoDB, in it they create in MongoDB Compass an Aggregation called EventsGroupedBySource, I also do the same but using the data from my own cluster and works ok.I tried to replicate in my code Node.js the same:const newsWithAggregationBySourceUrl = await collection.aggregate([{$group:{\n_id: “$SOURCEURL”,\ncount: {\n$sum: 1\n},\nids: {\n$push: { id: “$_id”, code: “$EventCode”}\n}\n}}, {\n“count”: { $gt: 1}\n}]).toArray();But I got the following error:MongoServerError: count is not allowed in this atlas tierWhy I am able to create the Aggregation in MongoDB Compass but not in my code?Is there an alternative to perform this query?Thanks",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "Your stage count:{$gt:1} is not a valid stage.If you want to filter documents, you must use a $match stage.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for your answer, I don’t want filter documents, what I would like is to group documents with the same SOURCEURL., like in the image below:\ngroup documents with the same SOURCEURL.1143×511 77.3 KB\n",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "That image is a $match stage with you count query.The box you circled in red reads Output after $match stage.",
"username": "steevej"
},
{
"code": "",
"text": "All right, I understand you know, the following code works ok:const newsWithAggregationBySourceUrl = await collection.aggregate([{$group:{\n_id: “$SOURCEURL”,\ncount: {\n$sum: 1\n},\nids: {\n$push: { id: “$_id”, code: “$EventCode”}\n}\n}}, {$match:{\n“count”: { $gt: 1}\n}}]).toArray();Thank you very much for your help.",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "Thanks for the help @steevejGlad you got sorted @Manuel_Martin",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Thanks, I also discovered a bit later and I would like to share because ca help others that MongoDB Compass has the functionality to Export Pipeline to several languages that is a great help to build the queries in code.",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
MongoServerError: count is not allowed in this atlas tier
|
2022-05-15T15:48:01.295Z
|
MongoServerError: count is not allowed in this atlas tier
| 3,873 |
null |
[
"node-js",
"python",
"mongoose-odm",
"cxx",
"mdbw22-hackathon"
] |
[
{
"code": "",
"text": "Hi , I am Satyam and I am looking for a Project .\nI am 2nd year undergrad loves to apply my knowledge of Machine Learning and Development to exciting Projects.\nI just completed my first ML internship so eagerly waiting for new challenges.I know Python (for ML & Data Science) , used MongoDB , Javascript , HTML ,CSS , SQL ,Node , C++ .\nAlso good in Competetive programming.India Standard Time",
"username": "satyam_mishra1"
},
{
"code": "",
"text": "Great set of skills and thanks for listing. Hopefully some team will snap you up!!Have you looked in the Projects looking for Hackers category ?",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Satyam_mishra1 is looking for a project!
|
2022-05-15T12:03:25.574Z
|
Satyam_mishra1 is looking for a project!
| 2,737 |
null |
[
"compass",
"atlas-cluster",
"mdbw22-hackathon",
"gdelt"
] |
[
{
"code": "",
"text": "For this hackathon you will be working with the GDELT Project Dataset . The GDELT ( Global Database of Events, Language, and Tone ) Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, themes, sources, emotions, counts, quotes, images and events driving our global society every second of every day.Over the next few weeks we’re going to be publishing blog posts, hosting live streams and AMA (ask me anything) sessions to help you with your GDELT and MongoDB journey. In the meantime, you have a couple of options: You can work with our existing GDELT data cluster (containing the entirety of last year’s GDELT data), or you can load a subset of the GDELT data into your own cluster.We currently host the past year’s GDELT data in a cluster called GDELT2. Once you have an Atlas account set-up, you can access it read-only using Compass, or any of the MongoDB drivers, with the following connection string:mongodb+srv://readonly:[email protected]/GDELT?retryWrites=true&w=majorityThe raw data is contained in a collection called “eventsCSV”, and a slightly massaged copy of the data (with Actors and Actions broken down into subdocuments) is contained in a collection called “recentEvents”.We’re still making changes to this cluster, and plan to load more data in as time goes on (as well as keeping up-to-date with the 15-minute updates to GDELT!), so keep an eye out for the updates!There’s a high likelihood that you can’t work with the data in its raw form. For one reason or another you need the data in a different format, or filtered in some way to work with it efficiently. In that case, I highly recommend you follow Adrienne’s advice in her GDELT Primer README.In the next few days we’ll be publishing a tool to efficiently load the data you want into a MongoDB cluster - bear with us. In the meantime, read up on GDELT, have a look at the sample data, and find some teammates to build with!The following documents contain most of the official documentation you’ll need for working with GDELT. We’ve summarized much of it here, but it’s always good to check the source, and you’ll need the CAMEO encoding listing!Please reply below with any questions you may have regarding GDELT and we’ll endeavour to answer them as quickly as we can.",
"username": "Shane_McAllister"
},
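If you prefer a driver over Compass, a minimal Node.js sketch using the read-only connection string above might look like this (untested, for illustration only):

const { MongoClient } = require('mongodb');

const uri = 'mongodb+srv://readonly:[email protected]/GDELT?retryWrites=true&w=majority';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  // "eventsCSV" holds the raw rows; "recentEvents" is the massaged copy.
  const sample = await client
    .db('GDELT')
    .collection('recentEvents')
    .find({})
    .limit(5)
    .toArray();
  console.log(sample);
  await client.close();
}

main().catch(console.error);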
{
"code": "",
"text": "",
"username": "Shane_McAllister"
},
{
"code": "gdeltloaderpip install gdelttools\nmongoimport.shgdeltloader% gdeltloader -h\nusage: gdeltloader [-h] [--host HOST] [--database DATABASE]\n [--collection COLLECTION] [--master] [--update]\n [--local LOCAL] [--overwrite] [--download] [--importdata]\n [--metadata] [--filter {all,gkg,mentions,export}]\n [--last LAST] [--version]\n\noptional arguments:\n -h, --help show this help message and exit\n --host HOST MongoDB URI\n --database DATABASE Default database for loading [GDELT2]\n --collection COLLECTION\n Default collection for loading [eventscsv]\n --master GDELT master file [False]\n --update GDELT update file [False]\n --local LOCAL load data from local list of zips\n --overwrite Overwrite files when they exist already\n --download download zip files from master or local file\n --importdata Import files into MongoDB\n --metadata grab meta data files\n --filter {all,gkg,mentions,export}\n download a subset of the data, the default is all data\n [export, mentions gkg, all]\n --last LAST how many recent files to download default : [0]\n implies all files\n --version show program's version number and exit\n\nVersion: 0.07b2 More info : https://github.com/jdrumgoole/gdelttools\nmongoimport.sh",
"text": "We are continuing to improve the gdelttools python package. This is a package which mainly creates a command line program (which should be on your path after install) called gdeltloader.To install this package run:To get the real value out of the package you should also clone the repo that this package is generated from. This contains a script mongoimport.sh that @Mark_Smith has done some sterling work on to improve how it loads large collections of input files.You should definitely check both items out.Here is the help for gdeltloader:This version also have latent support for loading the downloaded files directly, but right now that approach is much slower than the mongoimport.sh script.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "tool to efficiently load the data you want into a MongoDB clusterHello @Shane_McAllister , which tool were you referring we use for efficiently loading the data we want into a MongoDB cluster?",
"username": "Ayo_Exbizy"
},
{
"code": "",
"text": "This one located here - gdelttools · PyPI - the GDELT Tools Python Package",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
The GDELT dataset Primer (for use in hackathon)
|
2022-04-11T16:46:33.795Z
|
The GDELT dataset Primer (for use in hackathon)
| 4,742 |
null |
[
"queries",
"charts"
] |
[
{
"code": "",
"text": "I am having an issue with the Filtering feature not working when working with ISODate types.When viewing the public link to my dashboard for example, applying a filter of Period: Previous 162 Days, half of the data is removed as expected.However, when applying the exact same filter while editing the chart directly all of the data is returned. This happens both while editing the chart, and when sending a date filter query through the Charts Embedding SDK.It seems to have the following behavior specifically:Filters or queries for any other field (numeric, string) work as expected.\nThe chart is a Grid/Heatmap type if that makes a difference.Any help would be greatly appreciated.",
"username": "Mitch_Palmer"
},
{
"code": "",
"text": "Hi @Mitch_Palmer -This definitely isn’t expected, and I’ve not been able to reproduce the behaviour you are describing. Is it possible for you to send some screenshots showing the issue?Tom",
"username": "tomhollander"
}
] |
Atlas Charts: Issue with Date Filtering
|
2022-05-12T20:42:39.974Z
|
Atlas Charts: Issue with Date Filtering
| 2,916 |
null |
[
"aggregation",
"python"
] |
[
{
"code": "[\n {\n \"_id\":332,\n \"vendors\":[\n {\n \"count\":6,\n \"products\":[\n {\n \"count\":6\n }\n ]\n }\n ]\n },\n {\n \"_id\":464,\n \"vendors\":[\n {\n \"count\":10,\n \"products\":[\n {\n \"count\":10\n }\n ]\n }\n ]\n },\n {\n \"_id\":538,\n \"vendors\":[\n {\n \"count\":9,\n \"products\":[\n {\n \"count\":9\n }\n ]\n }\n ]\n },\n {\n \"_id\":437,\n \"vendors\":[\n {\n \"count\":13,\n \"products\":[\n {\n \"count\":13\n }\n ]\n }\n ]\n },\n {\n \"_id\":352,\n \"vendors\":[\n {\n \"count\":8,\n \"products\":[\n {\n \"count\":8\n }\n ]\n }\n ]\n },\n {\n \"_id\":498,\n \"vendors\":[\n {\n \"count\":9,\n \"products\":[\n {\n \"count\":9\n }\n ]\n }\n ]\n },\n {\n \"_id\":329,\n \"vendors\":[\n {\n \"count\":8,\n \"products\":[\n {\n \"count\":8\n }\n ]\n }\n ]\n },\n {\n \"_id\":467,\n \"vendors\":[\n {\n \"count\":9,\n \"products\":[\n {\n \"count\":9\n }\n ]\n }\n ]\n },\n {\n \"_id\":430,\n \"vendors\":[\n {\n \"count\":13,\n \"products\":[\n {\n \"count\":13\n }\n ]\n }\n ]\n },\n {\n \"_id\":291,\n \"vendors\":[\n {\n \"count\":5,\n \"products\":[\n {\n \"count\":5\n }\n ]\n }\n ]\n },\n {\n \"_id\":192,\n \"vendors\":[\n {\n \"count\":2,\n \"products\":[\n {\n \"count\":2\n }\n ]\n }\n ]\n },\n {\n \"_id\":441,\n \"vendors\":[\n {\n \"count\":13,\n \"products\":[\n {\n \"count\":13\n }\n ]\n }\n ]\n },\n {\n \"_id\":466,\n \"vendors\":[\n {\n \"count\":10,\n \"products\":[\n {\n \"count\":10\n }\n ]\n }\n ]\n },\n {\n \"_id\":445,\n \"vendors\":[\n {\n \"count\":13,\n \"products\":[\n {\n \"count\":13\n }\n ]\n }\n ]\n },\n {\n \"_id\":465,\n \"vendors\":[\n {\n \"count\":9,\n \"products\":[\n {\n \"count\":9\n }\n ]\n }\n ]\n }\n]\n{\n \"192\":{\n \"count\":2,\n \"products\":{\n \"count\":2\n }\n },\n \"329\":{\n \"count\":8,\n \"products\":{\n \"count\":8\n }\n },\n \"498\":{\n \"count\":9,\n \"products\":{\n \"count\":9\n }\n },\n \"291\":{\n \"count\":5,\n \"products\":{\n \"count\":5\n }\n },\n \"332\":{\n \"count\":6,\n \"products\":{\n \"count\":6\n }\n },\n \"437\":{\n \"count\":13,\n \"products\":{\n \"count\":13\n }\n },\n \"352\":{\n \"count\":8,\n \"products\":{\n \"count\":8\n }\n },\n \"445\":{\n \"count\":13,\n \"products\":{\n \"count\":13\n }\n },\n \"538\":{\n \"count\":9,\n \"products\":{\n \"count\":9\n }\n },\n \"466\":{\n \"count\":10,\n \"products\":{\n \"count\":10\n }\n },\n \"464\":{\n \"count\":10,\n \"products\":{\n \"count\":10\n }\n },\n \"465\":{\n \"count\":9,\n \"products\":{\n \"count\":9\n }\n },\n \"430\":{\n \"count\":13,\n \"products\":{\n \"count\":13\n }\n },\n \"467\":{\n \"count\":9,\n \"products\":{\n \"count\":9\n }\n },\n \"441\":{\n \"count\":13,\n \"products\":{\n \"count\":13\n }\n }\n}\nresult=collection.aggregate([\n { \"$project\": { \"object.definition.metadata.affected.@family\":1,\"object.definition.metadata.affected.product\":1,\"job_id\":1,\"_id\":0} }, #\"object.definition.metadata.affected.product\":1,\n { \"$group\": {\n \"_id\": {\n \"job_id\":\"$job_id\",\n \"family\": \"$object.definition.metadata.affected.@family\",\n \"product\": \"$object.definition.metadata.affected.product\"\n \n },\n \"count\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"job_id\": \"$_id.job_id\",\n \"vendor\": \"$_id.family\",\n # \"product\":\"$_id.product\"\n },\n \"count\": {\n \"$sum\": \"$count\"\n },\n \"products\": {\n \"$push\": {\n \"product\": \"$_id.product\",\n \"count\": \"$count\",\n \"products\": \"$products\"\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$_id.job_id\",\n \n \"vendors\": {\n 
\"$push\": {\n \"vendor\": \"$_id.vendor\",\n \"count\": \"$count\",\n \"products\": \"$products\"\n }\n }\n }\n },\n",
"text": "Hey after i did an aggregation by group aggregation i found in the last stage the data below i want to replace array by object how could I do it (by adding another stage in the pipeline) Thanks\ndata in the final stage :data i want :PS: i did a group in the pipeline to group by id then vendor then product using push so i got the array\nusing the following codeso i want to save values not in an array (push) but in a nested object thanks",
"username": "Mohamed_Habib"
},
{
"code": "$fieldaggregate(\n[{\"$replaceRoot\": \n {\"newRoot\": \n {\"$arrayToObject\": \n [[{\"k\": {\"$toString\": \"$_id\"},\n \"v\": {\"count\": {\"$arrayElemAt\": [\"$vendors.count\", 0]},\n \"products\": \n {\"count\": {\"$arrayElemAt\": [\"$vendors.count\", 0]}}}}]]}}}])\n",
"text": "QueryTest code here",
"username": "Takis"
},
{
"code": "[{\"$project\": \n {\"_id\": 0,\n \"root-array\": \n {\"$map\": \n {\"input\": {\"$objectToArray\": \"$ROOT\"},\n \"in\": [\"$m.k\", \"$m.v\"],\n \"as\": \"m\"}}}},\n {\"$set\": \n {\"root-array\": \n [{\"$map\": \n {\"input\": \"$root-array\",\n \"in\": \n {\"$cond\": \n [{\"$eq\": [{\"$arrayElemAt\": [\"$this\", 0]}, \"_id\"]},\n {\"$toString\": {\"$arrayElemAt\": [\"$this\", 1]}},\n {\"$let\": \n {\"vars\": \n {\"c_value\": \n {\"$getField\": \n {\"field\": \"count\",\n \"input\": \n {\"$arrayElemAt\": [{\"$arrayElemAt\": [\"$this\", 1]}, 0]}}}},\n \"in\": \n {\"count\": \"$c_value\",\n \"products\": {\"count\": \"$c_value\"}}}}]}}}]}},\n {\"$replaceRoot\": {\"newRoot\": {\"$arrayToObject\": \"$root-array\"}}}])\n",
"text": "Thank you that is exactly what i need but when i try to update the value of the key based on the key the following this format (update_one based with “job_id” which is the key of the last result )\nit update only one document and insert the other document\nthe desired format is below\ni used field=date +\".job_id\" field to filter\nvalue_to_update=date+.$.+field_to_update (i tried .$[]. and also .)\nand a loop in the list of job to update the document thanks{\"_id\":{\"$oid\":“61a4b0cb5b60664cba5ef11d”},“29-11-2021”:[{“job_id”:467,“overlap”:{“430”:10,“437”:60,“441”:6,“445”:25,“464”:3,“467”:9,“498”:9},“non_exclusive”:0,“exclusive”:0,“No_of_object”:9,“activity”:{},“job_context”:{},“vendor_product_version”:{}},{“job_id”:498,“overlap”:{“430”:10,“437”:60,“441”:6,“445”:25,“464”:3,“467”:9,“498”:9},“non_exclusive”:0,“exclusive”:0,“No_of_object”:9,“activity”:{},“job_context”:{},“vendor_product_version”:{}},{“job_id”:430,“overlap”:{“430”:16,“437”:290,“441”:30,“445”:128,“464”:6,“467”:10,“498”:10},“non_exclusive”:0,“exclusive”:0,“No_of_object”:13,“activity”:{},“job_context”:{},“vendor_product_version”:{}},{“job_id”:437,“overlap”:{“430”:290,“437”:13156,“441”:1374,“445”:5902,“464”:156,“467”:60,“498”:60},“non_exclusive”:0,“exclusive”:0,“No_of_object”:13,“activity”:{},“job_context”:{},“vendor_product_version”:{}},{“job_id”:441,“overlap”:{“430”:30,“437”:1374,“441”:144,“445”:618,“464”:18,“467”:6,“498”:6},“non_exclusive”:0,“exclusive”:0,“No_of_object”:13,“activity”:{},“job_context”:{},“vendor_product_version”:{}},{“job_id”:445,“overlap”:{“430”:128,“437”:5902,“441”:618,“445”:2653,“464”:75,“467”:25,“498”:25},“non_exclusive”:0,“exclusive”:0,“No_of_object”:13,“activity”:{},“job_context”:{},“vendor_product_version”:{}},{“job_id”:464,“overlap”:{“430”:6,“437”:156,“441”:18,“445”:75,“464”:9,“467”:3,“498”:3},“non_exclusive”:0,“exclusive”:0,“No_of_object”:10,“activity”:{},“job_context”:{},“vendor_product_version”:{}},{“job_id”:9000,“overlap”:{“9000”:1},“non_exclusive”:0,“exclusive”:0,“No_of_object”:0,“activity”:{},“job_context”:{},“vendor_product_version”:{}}]}",
"username": "Mohamed_Habib"
},
{
"code": "",
"text": "i updated the previous answer and i made it simpler, but i dont understand the second question, if possible write it like the previous question, with sample data and expected output.",
"username": "Takis"
},
{
"code": "[\n {\n \"_id\":332,\n \"vendors\":[\n {\n \"count\":6,\n \"products\":[\n {\n \"count\":6\n }\n ]\n }\n ]\n },\n {\n \"_id\":464,\n \"vendors\":[\n {\n \"count\":10,\n \"products\":[\n {\n \"count\":10\n }\n ]\n }\n ]\n },\n {\n \"_id\":538,\n \"vendors\":[\n {\n \"count\":9,\n \"products\":[\n {\n \"count\":9\n }\n ]\n }\n ]\n },\n {\n \"_id\":437,\n \"vendors\":[\n {\n \"count\":13,\n \"products\":[\n {\n \"count\":13\n }\n ]\n }\n ]\n },\n {\n \"_id\":352,\n \"vendors\":[\n {\n \"count\":8,\n \"products\":[\n {\n \"count\":8\n }\n ]\n }\n ]\n },\n {\n \"_id\":498,\n \"vendors\":[\n {\n \"count\":9,\n \"products\":[\n {\n \"count\":9\n }\n ]\n }\n ]\n },\n {\n \"_id\":329,\n \"vendors\":[\n {\n \"count\":8,\n \"products\":[\n {\n \"count\":8\n }\n ]\n }\n ]\n },\n {\n \"_id\":467,\n \"vendors\":[\n {\n \"count\":9,\n \"products\":[\n {\n \"count\":9\n }\n ]\n }\n ]\n },\n {\n \"_id\":430,\n \"vendors\":[\n {\n \"count\":13,\n \"products\":[\n {\n \"count\":13\n }\n ]\n }\n ]\n },\n {\n \"_id\":291,\n \"vendors\":[\n {\n \"count\":5,\n \"products\":[\n {\n \"count\":5\n }\n ]\n }\n ]\n },\n {\n \"_id\":192,\n \"vendors\":[\n {\n \"count\":2,\n \"products\":[\n {\n \"count\":2\n }\n ]\n }\n ]\n },\n {\n \"_id\":441,\n \"vendors\":[\n {\n \"count\":13,\n \"products\":[\n {\n \"count\":13\n }\n ]\n }\n ]\n },\n {\n \"_id\":466,\n \"vendors\":[\n {\n \"count\":10,\n \"products\":[\n {\n \"count\":10\n }\n ]\n }\n ]\n },\n {\n \"_id\":445,\n \"vendors\":[\n {\n \"count\":13,\n \"products\":[\n {\n \"count\":13\n }\n ]\n }\n ]\n },\n {\n \"_id\":465,\n \"vendors\":[\n {\n \"count\":9,\n \"products\":[\n {\n \"count\":9\n }\n ]\n }\n ]\n }\n]\n",
"text": "i have the following data obtained after aggregation which contain nested object and array\nl2={185: [{‘vendors’: ‘monal’,\n‘count’: 1,\n‘products’: [{‘product’: ‘monal’,\n‘count’: 1,\n‘versions’: [{‘version’: ‘’, ‘count’: 1}]}]},\n{‘vendors’: ‘nibbleblog’,\n‘count’: 1,\n‘products’: [{‘product’: ‘nibbleblog’,\n‘count’: 1,\n‘versions’: [{‘version’: ‘3.7.1c’, ‘count’: 1}]}]},\n{‘vendors’: ‘ftpd_project’,\n‘count’: 1,\n‘products’: [{‘product’: ‘ftpd’,\n‘count’: 1,\n‘versions’: [{‘version’: ‘0.2.1’, ‘count’: 1}]}]}],\n190:[ {‘vendors’: ‘oracle’,\n‘count’: 11,\n‘products’: [{‘product’: ‘identity_manager_connector’,\n‘count’: 1,\n‘versions’: [{‘version’: ‘9.0’, ‘count’: 1}]},\n{‘product’: ‘enterprise_repository’,\n‘count’: 1,\n‘versions’: [{‘version’: ‘12.1.3.0.0’, ‘count’: 1}]},\n{‘product’: ‘georaster’,\n‘count’: 1,\n‘versions’: [{‘version’: ‘18c’, ‘count’: 1}]},\n{‘product’: ‘communications_diameter_signaling_router’,\n‘count’: 4,\n‘versions’: [{‘version’: ‘8.2.1’, ‘count’: 1},\n{‘version’: ‘8.2’, ‘count’: 1},\n{‘version’: ‘8.0.0’, ‘count’: 1},\n{‘version’: ‘8.1’, ‘count’: 1}]},\n{‘product’: ‘enterprise_manager_base_platform’,\n‘count’: 3,\n‘versions’: [{‘version’: ‘13.2.0.0.0’, ‘count’: 1},\n{‘version’: ‘13.3.0.0.0’, ‘count’: 1},\n{‘version’: ‘12.1.0.5.0’, ‘count’: 1}]},\n{‘product’: ‘goldengate_stream_analytics’,\n‘count’: 1,\n‘versions’: [{‘version’: '’, ‘count’: 1}]}]},\n{‘vendors’: ‘opendesign’,\n‘count’: 1,\n‘products’: [{‘product’: ‘drawings_software_development_kit’,\n‘count’: 1,\n‘versions’: [{‘version’: ‘*’, ‘count’: 1}]}]}]}i want to update it in the following document which already exists (so using update_one and filter by job_id which are the keys of my list of dictionaries above 185 et 190\nthe collection before update is :\n{\n“24-11-2021”: [{\n“job_id”: 185,\n“overlap”: {\n“467”: 9,\n“498”: 9,\n“430”: 10,\n“437”: 61,\n“441”: 6,\n“445”: 25,\n“464”: 3\n},\n“non_exclusive”: 0,\n“exclusive”: 0,\n“No_of_object”: 0,\n“activity”: {},\n“job_context”: {},\n“vendor_product_version”: {}\n}, {\n“job_id”: 190,\n“overlap”: {\n“467”: 9,\n“498”: 9,\n“430”: 10,\n“437”: 61,\n“441”: 6,\n“445”: 25,\n“464”: 3\n},\n“non_exclusive”: 0,\n“exclusive”: 0,\n“No_of_object”: 0,\n“activity”: {},\n“job_context”: {},\n“vendor_product_version”: {}\n}, {\n“job_id”: 430,\n“overlap”: {\n“467”: 10,\n“498”: 10,\n“430”: 16,\n“437”: “295”,\n“441”: 30,\n“445”: 128,\n“464”: 6\n},\n“non_exclusive”: 0,\n“exclusive”: 0,\n“No_of_object”: 0,\n“activity”: {},\n“job_context”: {},\n“vendor_product_version”: {}\n}, {\n“job_id”: 437,\n“overlap”: {\n“467”: 61,\n“498”: 61,\n“430”: 295,\n“437”: 13618,\n“441”: 1398,\n“445”: 6005,\n“464”: 159\n},\n“non_exclusive”: 0,\n“exclusive”: 0,\n“No_of_object”: 0,\n“activity”: {},\n“job_context”: {},\n“vendor_product_version”: {}\n}, {\n“job_id”: 441,\n“overlap”: {\n“467”: 6,\n“498”: 6,\n“430”: 30,\n“441”: 144,\n“445”: 618,\n“464”: 18,\n“437”: 1398\n},\n“non_exclusive”: 0,\n“exclusive”: 0,\n“No_of_object”: 0,\n“activity”: {},\n“job_context”: {},\n“vendor_product_version”: {}\n}, {\n“job_id”: 445,\n“overlap”: {\n“467”: 25,\n“498”: 25,\n“430”: 128,\n“441”: 618,\n“445”: 2653,\n“464”: 75,\n“437”: 6005\n},\n“non_exclusive”: 0,\n“exclusive”: 0,\n“No_of_object”: 0,\n“activity”: {},\n“job_context”: {},\n“vendor_product_version”: {}\n}, {\n“job_id”: 464,\n“overlap”: {\n“467”: 3,\n“498”: 3,\n“430”: 6,\n“441”: 18,\n“445”: 75,\n“464”: 9,\n“437”: 159\n},\n“non_exclusive”: 0,\n“exclusive”: 0,\n“No_of_object”: 0,\n“activity”: {},\n“job_context”: {},\n“vendor_product_version”: {}\n}, {\n“job_id”: 
9000,\n“overlap”: {\n“9000”: 1\n},\n“non_exclusive”: 0,\n“exclusive”: 0,\n“No_of_object”: 0,\n“activity”: {},\n“job_context”: {},\n“vendor_product_version”: {}\n}]\n}i want to update the field ‘vendor_product_version’ following its job_id which is the key of my previous list of dictionary thanks",
"username": "Mohamed_Habib"
},
{
"code": "",
"text": "Hello Mohamed, I want to achieve an array of Objects the way you did it above, How can I do that",
"username": "Seghosimhe_David"
}
] |
Converting nested array into nested object
|
2021-11-29T21:56:15.197Z
|
Converting nested array into nested object
| 4,462 |
null |
[
"app-services-user-auth"
] |
[
{
"code": "",
"text": "I would like for my ios and android clients to be able to delete their account through the mobile app, meaning delete all their data and also delete the authorized user. Deleting all their data is easily done, but how do I programmatically delete the user? Can it be done through a function which can be called from the app if the user id is passed to the function?",
"username": "Deji_Apps"
},
{
"code": "",
"text": "Hi @Deji_Apps ,I suggested a function approach on deleting anonymous users with apiI believe the same approach can be done hereThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/appsBearer ${cloud_auth_body.access_token}https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps/${internalAppId}/usersBearer ${cloud_auth_body.access_token} if (user._id == userId)\n {\n usersToDelete.push(user._id);\n}\n\n}\n );\n\nconsole.log(JSON.stringify(usersToDelete));\n\n// Delete the users on the list\n usersToDelete.map(function(id){ \n const respone_realm_users_delete = context.http.delete({\nurl : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps/${internalAppId}/users/${id}`,\nheaders : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`]\n}\n\n });\n });\n",
"text": "@Pavel_Duchovny\nYou are right about your approach. I tried modifying your anonymous user delete to delete a specific user that matches a specific user id. However I keep getting this error:{“message”:“‘map’ is not a function”,“name”:“TypeError”}\n{\n“arguments”: [\n“60d4e453482d128a8c0ac15c”\n],\n“name”: “deleteUser”\n}Here is the what I tried:exports = async function(userId) {// Get Atlas Parameters and application id\nconst AtlasPrivateKey = context.values.get(“AtlasPrivateKey”);\nconst AtlasPublicKey = context.values.get(“AtlasPublicKey”);\nconst AtlasGroupId = context.values.get(“AtlasGroupId”);\nconst appId = ‘my_app_id’;// Authenticate to Realm API\nconst respone_cloud_auth = await context.http.post({\nurl : “https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login”,\nheaders : { “Content-Type” : [“application/json”],\n“Accept” : [“application/json”]},\nbody : {“username”: AtlasPublicKey, “apiKey”: AtlasPrivateKey},\nencodeBodyAsJSON: true});const cloud_auth_body = JSON.parse(respone_cloud_auth.body.text());// Get the internal appId\nconst respone_realm_apps = await context.http.get({\nurl : https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps,\nheaders : { “Content-Type” : [“application/json”],\n“Accept” : [“application/json”],\n“Authorization” : [Bearer ${cloud_auth_body.access_token}]\n}});const realm_apps = JSON.parse(respone_realm_apps.body.text());var internalAppId = “”;realm_apps.map(function(app){\nif (app.client_app_id == appId)\n{\nconsole.log(JSON.stringify(appId));\ninternalAppId = app._id;\n}\n});// Get all realm users\nconst respone_realm_users = await context.http.get({\nurl : https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps/${internalAppId}/users,\nheaders : { “Content-Type” : [“application/json”],\n“Accept” : [“application/json”],\n“Authorization” : [Bearer ${cloud_auth_body.access_token}]\n}});const realm_users = JSON.parse(respone_realm_users.body.text());// Filter only anon-users\nvar usersToDelete = ;realm_users.map(function(user){};",
"username": "Deji_Apps"
},
{
"code": "",
"text": "Hi @Deji_Apps ,Please add some prints before and after every map operation so we can narrrow down the function issue.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": " const AtlasPrivateKey = context.values.get(\"PrivateKey\");\n const AtlasPublicKey = context.values.get(\"PublicKey\");\n const AtlasGroupId = context.values.get(\"GroupId\");",
"text": "For those coming here and who have the same problem as @Deji_Apps and me: {“message”:“‘map’ is not a function”,“name”:“TypeError”}Check here:You cannot directly read the value of a Secret after defining it. Instead, you link to the Secret by name in authentication provider and service configurations. If you need to access the Secret from a Function or Rule, you can link the Secret to a Value.You will end up with something like this:\nScreenshot 2021-09-30 at 12.32.471860×912 90.1 KB\nThen in your code:",
"username": "Mike_Notta"
},
{
"code": "",
"text": "@Deji_Apps in addition to @Mike_Notta last reply, you should create API key with Owner Access (Admin Access is not enough) to delete user from Realm App. Also, don’t forget to make corresponding changes to your Secrets in Values before executing the function.",
"username": "sumeet_N_A"
}
] |
Remove Authenticated users
|
2021-06-26T21:41:19.806Z
|
Remove Authenticated users
| 5,232 |
null |
[
"mongodb-shell",
"mdbw22-hackathon",
"mdbw-hackhelp"
] |
[
{
"code": "",
"text": "I have seen in the Parsing GDELT Web Data - MongoDB World Hackathon Series MongoDB video that integrated MongoDB shell (_MONGOSH) is used in MongoDB Dashboard but I have not been able to find how to access it.Could someone provide me with the steps to use it?Thanks",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "I mean this:\nScreenshot 2022-05-14 at 01.50.57869×392 103 KB\n",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "I just realized that in the video they are using MongoDB Compass (I thought they were using https://cloud.mongodb.com/), after installing MongoDB Compass I see that I can use from there MongoDB shell (_MONGOSH).",
"username": "Manuel_Martin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Can not find MongoDB shell (_MONGOSH) in MongoDB Dashboard
|
2022-05-13T21:25:35.529Z
|
Can not find MongoDB shell (_MONGOSH) in MongoDB Dashboard
| 2,963 |
null |
[
"queries",
"node-js",
"next-js"
] |
[
{
"code": "pages/api/moviesimport { connectToDatabase } from \"../../util/mongodb\";\n\nexport default async (req, res) => {\n const { db } = await connectToDatabase();\n\n const movies = await db\n .collection(\"movies\")\n .find({})\n .sort({ metacritic: -1 })\n .limit(20)\n .toArray();\n\n res.json(movies);\n};\npages/api/movies/[id]import { connectToDatabase } from '../../../util/mongodb';\n\nexport default async (req, res) => {\n const { db } = await connectToDatabase();\n const { id } = req.query;\n\n const movies = await db\n .collection(\"movies\")\n .find({\"_id\": `${id}` })\n .limit(20)\n .toArray();\n\n res.json(movies);\n};\nhttp://localhost:3000/api/movies/573a1394f29313caabcdfa3e\"_id\": \"573a1394f29313caabcdfa3e\",\n\"fullplot\": \"Some text\"\n",
"text": "Hello!\nI’m following along in this tutorial (https://www.mongodb.com/how-to/nextjs-with-mongodb/), and am on the section about creating dynamic routes with Next.js. (Link to Next.js dynamic routes guide)I have the initial route to get 20 movie objects from the db, and that works perfectly. The route lives at pages/api/movies and here’s the code I’m using for that:Trouble is when I try to return a single movie object using a dynamic route in Next. The route is pages/api/movies/[id], and the code I have looks like this:But when I query the route with http://localhost:3000/api/movies/573a1394f29313caabcdfa3e I get back an empty array, even though it should correspond with this object:Any help sorting this out is appreciated! Thanks in advance.",
"username": "Zac_Alexander"
},
{
"code": "_idObjectIdObjectIdObjectId",
"text": "Hello @Zac_Alexander, welcome to the MongoDB Community forum!In the route you are specifying a string “573a1394f29313caabcdfa3e”. But, you are querying the collection by its _id field - which I think is of type ObjectId. So, build the ObjectId using the input string and then use it in the query filter.You can use the NodeJS Driver APIs ObjectID class for constructing the ObjectId using the string.",
"username": "Prasad_Saya"
},
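A minimal sketch of that suggestion applied to the dynamic route (assuming the ObjectId export from the mongodb package used elsewhere in the tutorial):

import { ObjectId } from 'mongodb';
import { connectToDatabase } from '../../../util/mongodb';

export default async (req, res) => {
  const { db } = await connectToDatabase();
  const { id } = req.query;

  // Convert the route string into an ObjectId before matching on _id.
  const movie = await db
    .collection('movies')
    .findOne({ _id: new ObjectId(id) });

  res.json(movie);
};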
{
"code": "idhttp://localhost:3000/api/movies/573a1394f29313caabcdfa3e_idsample_mflix",
"text": "To give you some pointers, you’ll use Next.js Dynamic API Routes to capture the id . So if a user calls http://localhost:3000/api/movies/573a1394f29313caabcdfa3e the movie that should be returned is Seven Samurai . Another tip, the _id property for the sample_mflix database in MongoDB is stored as an ObjectID, so you’ll have to convert the string to an ObjectID. If you get stuck, create a thread on the MongoDB Community forums and we’ll solve it together! Next, we’ll take a look at how to access our MongoDB data within our Next.js pages.MongoDB shouldn’t redirect people to something they didn’t even have a docu for.it’s a line a mine on a perfect road.",
"username": "Gui_Chi"
}
] |
Next.js Dynamic Routes Returning Empty Array
|
2021-07-07T02:26:39.548Z
|
Next.js Dynamic Routes Returning Empty Array
| 6,251 |
null |
[
"react-native"
] |
[
{
"code": "Person(name: “Greg”, phone: “415-867-5309”)\nthis.setState({ phone: “4158675309” })\nrealm.objects(‘Person’).filtered(‘phone CONTAINS $0’, this.state.phone)\n",
"text": "I am hoping to do a filter query with something along the lines of LIKE, but not sure how to include the question marks or wildcards.I have a text field where the user is typing in their phone number.I have a phone string property in the object I’m querying. I want to find the user by their phone number, but if there we ( ) or - or . included in the string then it doesn’t match.For instance, if I have this object and state variable:then I want this query to include this object:Is this possible?",
"username": "Kurt_Libby1"
},
{
"code": "*",
"text": "Hi @Kurt_Libby1\nI would approach this problem by normalizing both sides of the search so that you can do a simple equality match. This would involve transforming all stored phone numbers into a standard format that removes “(,),-” or some known format that you choose. Then you can do the same transformation for the input that is coming in from the user text field. At that point you can query using ==, or CONTAINS directly.If that doesn’t work for your app, you can use LIKE, but you will have to inject the wildcards into the input string. Maybe inject * between every number or some stricter match that you can come up with. If you are dealing with a large dataset, it is much more efficient to use direct matching, or even CONTAINS rather than LIKE, so that’s why I’d suggest normalizing your input if possible.",
"username": "James_Stone"
},
{
"code": "111-222-3333\n(111)222-3333\n(\"phone LIKE %@\", \"*111*222*3333\") //Swift but the format is the same for the LIKE",
"text": "First see the classic NSPredicate Cheat Sheet It’s just a reference piece but good content.I would suggest changing your model to force consistency. e.g. don’t store this “415-867-5309”, but store this “4158675309”. You can always format it with the dashes in the UI. That makes the queries a snap.Inconsistent data usually ends up being a nightmare to work with so the above is suggested - but to more specifically answer the question if you have two objects stored in Realm with the following phone numbers:and you want to return them both, here’s the query(\"phone LIKE %@\", \"*111*222*3333\") //Swift but the format is the same for the LIKE",
"username": "Jay"
},
{
"code": "",
"text": "I have a more user-friendly approach. This is informed by being an Australian where we have different phone number structures to the USA, as does a lot of the rest of the world. It’s always annoying to have an app group our numbers weirdly.Store two fields. Save a string version with whatever formatting the user wants to use.\nThe field used for searching and sorting is an Int which you update from the string, just pulling out the digits.Int64 has the range 9,223,372,036,854,775,807 which is more digits than any legal phone number.If you want to save an international flag, one cheat is to use the negative sign as the flag for local numbers (which actually matches the + used on entering an international number).",
"username": "Andy_Dent"
},
{
"code": "let filteredPhone = this.state.phone.replace(/(.{1})/g,\"$1*\");\nconst filteredPeople = realm.objects('Person').filtered('phone LIKE $0', filteredPhone);\n",
"text": "Thanks @Andy_Dent.This is what I ultimately went with. I do have users all over the world and none of their databases would be querying more than a few thousand records and most of them would only be hundreds at the most.For reference in case anyone else finds this post, this is what it ended up looking like:",
"username": "Kurt_Libby1"
},
{
"code": "Person(name: “Greg”, phone: “415-867-5309”)\nthis.setState({ phone: “4158675309” })\n",
"text": "@Kurt_Libby1For future readers - can you elaborate a bit on how your final solution answers the question in the original post? Just want to tie the two together for anyone that’s just getting started. Sometimes code is challenging to read unless the intent is stated.For instance, if I have this object and state variable:then I want this query to include this object:",
"username": "Jay"
},
{
"code": "this.setState({ phone: “4158675309” });\nlet filteredPhone = this.state.phone.replace(/(.{1})/g,\"$1*\");\n.replace()4*1*5*8*6*7*5*3*0*9*",
"text": "Yeah @Jay.I do have people with phone numbers all over the world in all kinds of formats and some that even deal with people on opposite sides of international borders, so there ends up being extra symbols lie +, -, ( ), etc in the string.Rather than forcing a second variable, stripping out all of that formatting for everyone, figuring out the use case in 100+ different countries, etc, I am just adding the wildcard character between every character in the string that is being searched for:The .replace() function converts the string to 4*1*5*8*6*7*5*3*0*9*Now, when the filter runs it includes Person objects that may have additional characters in their phone string.",
"username": "Kurt_Libby1"
}
] |
Query a String with LIKE
|
2022-05-12T16:13:03.581Z
|
Query a String with LIKE
| 4,097 |
null |
[
"aggregation",
"indexes"
] |
[
{
"code": "{ \"from\" : \"Books\", \n\"let\" : { \n\"code\" : \"$Code\"}, \n\"pipeline\" : [{ \"$match\" : \n{ \"$expr\" : { \"$and\" : [{ \"$eq\" : \n[\"$Code\", \"$$code\"] }, \n{ \"$eq\" : [\"$Category\", \"LPR\"] }, \n{ \"$eq\" : [\"$MonthProcessed\", 202202] }] } } }],\n\"as\" : \"Books\" } \n",
"text": "I have the look up code as belowThis stage is not using the index on Category. Is there any way to achieve that?Thanks",
"username": "Sangeetha_Vinusabari"
},
{
"code": "",
"text": "Hi @Sangeetha_Vinusabari,Can you share getIndexes from both collection and the execution plan?Please note that the $expr lookups were greatly improved with association to indexes in 5.0+ version. What version you use and will it be possible to upgrade?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{\n \"queryPlanner\":{\n \"plannerVersion\":1,\n \"namespace\":\"DEV-T01-DR.Library\",\n \"indexFilterSet\":false,\n \"parsedQuery\":{\n \n },\n \"winningPlan\":{\n \"stage\":\"COLLSCAN\",\n \"direction\":\"forward\"\n },\n \"rejectedPlans\":[\n \n ]\n },\n \"executionStats\":{\n \"executionSuccess\":true,\n \"nReturned\":1,\n \"executionTimeMillis\":0,\n \"totalKeysExamined\":0,\n \"totalDocsExamined\":1,\n \"executionStages\":{\n \"stage\":\"COLLSCAN\",\n \"nReturned\":1,\n \"executionTimeMillisEstimate\":0,\n \"works\":3,\n \"advanced\":1,\n \"needTime\":1,\n \"needYield\":0,\n \"saveState\":0,\n \"restoreState\":0,\n \"isEOF\":1,\n \"direction\":\"forward\",\n \"docsExamined\":1\n },\n \"allPlansExecution\":[\n \n ]\n },\n \"serverInfo\":{\n \"host\":\"GWT-5CG0356LF0\",\n \"port\":27017,\n \"version\":\"4.2.15\",\n \"gitVersion\":\"d7fd78dead621a539c20791a93abec34bb1be385\"\n },\n \"ok\":1\n}\".\"\n",
"text": "We are using Mongo db server 4.4.\n\nimage888×317 14.9 KB\nExplain PlanThere is no possibility in the near future to upgrade to 5.0version",
"username": "Sangeetha_Vinusabari"
},
{
"code": "totalKeysExamined”:0\n",
"text": "@Sangeetha_Vinusabari ,First it looks like there is no data to scan so why would an index be preferred?Second you should create an index on all three equality fields combine to answer the prediction…Having said that the $expr option is not optimized in 4.4 unfortunately… According to the execution plan it actually looks like the server is 4.2.15 …Can you find or change the data model to use localField and foreignField syntax as it will be better for indexing…Pavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "First it looks like there is no data to scan so why would an index be preferred?\nI just created a sample collection to get the data u have requested with 1 document",
"username": "Sangeetha_Vinusabari"
},
{
"code": "",
"text": "Ok… In order to test performance and index behaviour I suggest to have a data set with a reasonable amount of test data.Its hard to predict any execution outcomes with 1 or 2 documents involved…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "db.foo.drop(); db.foo.insert({ Code: 1 });\ndb.bar.drop(); db.bar.insert({ Code: 1, Category: \"LPR\", MonthProcessed: 202202 });\ndb.foo.explain(\"executionStats\").aggregate([\n{ $lookup: {\n \"from\": \"bar\",\n \"let\": { \"code\": \"$Code\" },\n \"pipeline\": [{\n \"$match\": {\n \"$expr\": {\n \"$and\": [\n { \"$eq\": [\"$Code\", \"$$code\"] },\n { \"$eq\": [\"$Category\", \"LPR\"] },\n { \"$eq\": [\"$MonthProcessed\", 202202] }\n ]\n }\n }\n }],\n \"as\": \"Books\"\n}}]);\nCOLLSCAN\"executionStats\" : {\n \"executionSuccess\" : true, \n \"nReturned\" : 1.0, \n \"executionTimeMillis\" : 1.0, \n \"totalKeysExamined\" : 0.0, \n \"totalDocsExamined\" : 1.0, \n \"executionStages\" : {\n \"stage\" : \"COLLSCAN\", \n \"nReturned\" : 1.0, \n \"executionTimeMillisEstimate\" : 0.0, \n \"works\" : 3.0, \n \"advanced\" : 1.0, \n \"needTime\" : 1.0, \n \"needYield\" : 0.0, \n \"saveState\" : 1.0, \n \"restoreState\" : 1.0, \n \"isEOF\" : 1.0, \n \"direction\" : \"forward\", \n \"docsExamined\" : 1.0\n }\n}\n$match$sort$lookupCOLLSCANfoo$lookup$lookupbar{ Category: 1 }function MEASURE_INDEX_USAGE(block) {\n var statsCmd = [{ $indexStats: {} }];\n // measure index usage\n var statsBefore = {};\n db.getCollectionNames().forEach(function (c) {\n statsBefore[c] = {};\n db.getCollection(c).aggregate(statsCmd).forEach(function (d) {\n statsBefore[c][d.name] = d.accesses.ops * 1.0;\n });\n });\n\n block();\n\n // measure index usage again\n var stats = {};\n db.getCollectionNames().forEach(function (c) {\n stats[c] = {};\n db.getCollection(c).aggregate(statsCmd).forEach(function (d) {\n if (!statsBefore[c].hasOwnProperty(d.name)) {\n stats[c][d.name] = d.accesses.ops;\n } else if (statsBefore[c][d.name] != d.accesses.ops) {\n stats[c][d.name] = (d.accesses.ops - statsBefore[c][d.name]) + \" (\" + d.accesses.ops + \" total)\";\n }\n });\n });\n\n printjson(stats);\n}\ndb.bar.createIndex({ Category: 1 });\nMEASURE_INDEX_USAGE(function () {\ndb.foo.aggregate([\n{ $lookup: {\n \"from\": \"bar\",\n \"let\": { \"code\": \"$Code\" },\n \"pipeline\": [{\n \"$match\": {\n \"$expr\": {\n \"$and\": [\n { \"$eq\": [\"$Code\", \"$$code\"] },\n { \"$eq\": [\"$Category\", \"LPR\"] },\n { \"$eq\": [\"$MonthProcessed\", 202202] }\n ]\n }\n }\n }],\n \"as\": \"Books\"\n}}]);\n});\nfoobar{ \n \"bar\" : {\n \"Category_1\" : \"1 (1 total)\"\n }, \n \"foo\" : {\n\n }\n}\n$lookup",
"text": "@Sangeetha_Vinusabari there may be some confusion as to what the explain results are showing.For example, if we try the following in MongoDB 4.2.15:the output would show that the operation is performing a COLLSCAN (as you’ve witnessed as well):This is due to pipelines only being able to use indexes for specific initial stages (ex: $match or $sort).In our sample above, the first stage is a $lookup, which results in a full COLLSCAN as every document in the foo collection must be scanned and passed to the $lookup stage.In the $lookup stage (targeting the bar collection in our example) if an index exists on { Category: 1 } it is actually being used!Though the explain output doesn’t show it directly, we can wrap the operation in a custom function to measure the index usage:The output to the above shows (as we expect) no indexes touched on foo, but 1 index hit on bar:Explain output was improved in MongoDB 5.0 (see SERVER-53762) which makes it easier to determine what indexes were used by the $lookup stage.",
"username": "alexbevi"
},
{
"code": "{ \n \"_id\" : ObjectId(\"622f573cb9fd75a8f988cdb6\"), \n \"branchId\" : ObjectId(\"6212f2fa0615b313e2eb83f5\"), \n \"groupId\" : ObjectId(\"622f573cb9fd75a8f988cdb4\"), \n \"teacherId\" : ObjectId(\"622f4f70475460a853fd8fa1\"), \n \"date\" : ISODate(\"2022-03-15T00:00:00.000+0000\"), \n \"state\" : \"created\", \n \"createdAt\" : ISODate(\"2022-03-14T14:54:52.850+0000\"), \n \"updatedAt\" : ISODate(\"2022-03-14T14:54:52.850+0000\"), \n \"deletedAt\" : ISODate(\"2022-03-16T09:04:15.740+0000\")\n}\n[\n {\n \"v\" : 2.0, \n \"key\" : {\n \"_id\" : 1.0\n }, \n \"name\" : \"_id_\"\n }, \n {\n \"v\" : 2.0, \n \"key\" : {\n \"date\" : 1.0, \n \"groupId\" : 1.0\n }, \n \"name\" : \"UniqueGruopLesson\", \n \"background\" : true, \n \"unique\" : true, \n \"partialFilterExpression\" : {\n \"deletedAt\" : {\n \"$eq\" : null\n }\n }\n }\n]\n\ndb = db.getSiblingDB(\"TEST_DB\");\ndb.getCollection(\"LESSONS\").explain(\"executionStats\")\n.aggregate(\n [\n {\n $match: {\n groupId: ObjectId(\"627f7821c5e9de1b328ea918\"),\n date: ISODate(\"2022-05-30T00:00:00.000+0000\"),\n deletedAt:null\n }\n }\n ]\n);\n{ \n \"explainVersion\" : \"1\", \n \"queryPlanner\" : {\n \"namespace\" : \"TEST_DB.LESSONS\", \n \"indexFilterSet\" : false, \n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"date\" : {\n \"$eq\" : ISODate(\"2022-05-30T00:00:00.000+0000\")\n }\n }, \n {\n \"deletedAt\" : {\n \"$eq\" : null\n }\n }, \n {\n \"groupId\" : {\n \"$eq\" : ObjectId(\"627f7821c5e9de1b328ea918\")\n }\n }\n ]\n }, \n \"optimizedPipeline\" : true, \n \"maxIndexedOrSolutionsReached\" : false, \n \"maxIndexedAndSolutionsReached\" : false, \n \"maxScansToExplodeReached\" : false, \n \"winningPlan\" : {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deletedAt\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"date\" : 1.0, \n \"groupId\" : 1.0\n }, \n \"indexName\" : \"UniqueGruopLesson\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"date\" : [\n\n ], \n \"groupId\" : [\n\n ]\n }, \n \"isUnique\" : true, \n \"isSparse\" : false, \n \"isPartial\" : true, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"date\" : [\n \"[new Date(1653868800000), new Date(1653868800000)]\"\n ], \n \"groupId\" : [\n \"[ObjectId('627f7821c5e9de1b328ea918'), ObjectId('627f7821c5e9de1b328ea918')]\"\n ]\n }\n }\n }, \n \"rejectedPlans\" : [\n\n ]\n }, \n \"executionStats\" : {\n \"executionSuccess\" : true, \n \"nReturned\" : 1.0, \n \"executionTimeMillis\" : 0.0, \n \"totalKeysExamined\" : 1.0, \n \"totalDocsExamined\" : 1.0, \n \"executionStages\" : {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deletedAt\" : {\n \"$eq\" : null\n }\n }, \n \"nReturned\" : 1.0, \n \"executionTimeMillisEstimate\" : 0.0, \n \"works\" : 2.0, \n \"advanced\" : 1.0, \n \"needTime\" : 0.0, \n \"needYield\" : 0.0, \n \"saveState\" : 0.0, \n \"restoreState\" : 0.0, \n \"isEOF\" : 1.0, \n \"docsExamined\" : 1.0, \n \"alreadyHasObj\" : 0.0, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"nReturned\" : 1.0, \n \"executionTimeMillisEstimate\" : 0.0, \n \"works\" : 2.0, \n \"advanced\" : 1.0, \n \"needTime\" : 0.0, \n \"needYield\" : 0.0, \n \"saveState\" : 0.0, \n \"restoreState\" : 0.0, \n \"isEOF\" : 1.0, \n \"keyPattern\" : {\n \"date\" : 1.0, \n \"groupId\" : 1.0\n }, \n \"indexName\" : \"UniqueGruopLesson\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"date\" : [\n\n ], \n \"groupId\" : [\n\n ]\n }, \n \"isUnique\" : true, \n \"isSparse\" : false, \n 
\"isPartial\" : true, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"date\" : [\n \"[new Date(1653868800000), new Date(1653868800000)]\"\n ], \n \"groupId\" : [\n \"[ObjectId('627f7821c5e9de1b328ea918'), ObjectId('627f7821c5e9de1b328ea918')]\"\n ]\n }, \n \"keysExamined\" : 1.0, \n \"seeks\" : 1.0, \n \"dupsTested\" : 0.0, \n \"dupsDropped\" : 0.0\n }\n }\n }, \n \"command\" : {\n \"aggregate\" : \"LESSONS\", \n \"pipeline\" : [\n {\n \"$match\" : {\n \"groupId\" : ObjectId(\"627f7821c5e9de1b328ea918\"), \n \"date\" : ISODate(\"2022-05-30T00:00:00.000+0000\"), \n \"deletedAt\" : null\n }\n }\n ], \n \"cursor\" : {\n\n }, \n \"$db\" : \"TEST_DB\"\n }, \n \"serverInfo\" : {\n \"host\" : \"nitro\", \n \"port\" : 2717.0, \n \"version\" : \"5.0.8\", \n \"gitVersion\" : \"c87e1c23421bf79614baf500fda6622bd90f674e\"\n }, \n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : 104857600.0, \n \"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600.0, \n \"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600.0, \n \"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600.0, \n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600.0, \n \"internalQueryProhibitBlockingMergeOnMongoS\" : 0.0, \n \"internalQueryMaxAddToSetBytes\" : 104857600.0, \n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600.0\n }, \n \"ok\" : 1.0, \n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1652524686, 1), \n \"signature\" : {\n \"hash\" : BinData(0, \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"), \n \"keyId\" : NumberLong(0)\n }\n }, \n \"operationTime\" : Timestamp(1652524686, 1)\n}\n\n$expr$expr$lookupdb = db.getSiblingDB(\"TEST_DB\");\ndb.getCollection(\"LESSONS\").explain(\"executionStats\")\n.aggregate(\n [\n {\n $match: {\n $expr: {\n $and: [\n {\n $eq: [\n \"$groupId\",\n ObjectId(\"627f7821c5e9de1b328ea918\"), \n ]\n },\n {\n \"$eq\": [\n \"$deletedAt\",\n null\n ]\n },\n {\n $eq:[\n \"$date\",\n ISODate(\"2022-05-30T00:00:00.000+0000\")\n ]\n }\n ]\n }\n }\n }\n ]\n);\n\n{ \n \"explainVersion\" : \"1\", \n \"queryPlanner\" : {\n \"namespace\" : \"TEST_DB.LESSONS\", \n \"indexFilterSet\" : false, \n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"$expr\" : {\n \"$and\" : [\n {\n \"$eq\" : [\n \"$groupId\", \n {\n \"$const\" : ObjectId(\"627f7821c5e9de1b328ea918\")\n }\n ]\n }, \n {\n \"$eq\" : [\n \"$deletedAt\", \n {\n \"$const\" : null\n }\n ]\n }, \n {\n \"$eq\" : [\n \"$date\", \n {\n \"$const\" : ISODate(\"2022-05-30T00:00:00.000+0000\")\n }\n ]\n }\n ]\n }\n }, \n {\n \"date\" : {\n \"$_internalExprEq\" : ISODate(\"2022-05-30T00:00:00.000+0000\")\n }\n }, \n {\n \"deletedAt\" : {\n \"$_internalExprEq\" : null\n }\n }, \n {\n \"groupId\" : {\n \"$_internalExprEq\" : ObjectId(\"627f7821c5e9de1b328ea918\")\n }\n }\n ]\n }, \n \"optimizedPipeline\" : true, \n \"maxIndexedOrSolutionsReached\" : false, \n \"maxIndexedAndSolutionsReached\" : false, \n \"maxScansToExplodeReached\" : false, \n \"winningPlan\" : {\n \"stage\" : \"COLLSCAN\", \n \"filter\" : {\n \"$and\" : [\n {\n \"$expr\" : {\n \"$and\" : [\n {\n \"$eq\" : [\n \"$groupId\", \n {\n \"$const\" : ObjectId(\"627f7821c5e9de1b328ea918\")\n }\n ]\n }, \n {\n \"$eq\" : [\n \"$deletedAt\", \n {\n \"$const\" : null\n }\n ]\n }, \n {\n \"$eq\" : [\n \"$date\", \n {\n \"$const\" : ISODate(\"2022-05-30T00:00:00.000+0000\")\n }\n ]\n }\n ]\n }\n }, \n {\n \"date\" : {\n \"$_internalExprEq\" : ISODate(\"2022-05-30T00:00:00.000+0000\")\n }\n }, \n {\n \"deletedAt\" : {\n \"$_internalExprEq\" : 
null\n }\n }, \n {\n \"groupId\" : {\n \"$_internalExprEq\" : ObjectId(\"627f7821c5e9de1b328ea918\")\n }\n }\n ]\n }, \n \"direction\" : \"forward\"\n }, \n \"rejectedPlans\" : [\n\n ]\n }, \n \"executionStats\" : {\n \"executionSuccess\" : true, \n \"nReturned\" : 1.0, \n \"executionTimeMillis\" : 25.0, \n \"totalKeysExamined\" : 0.0, \n \"totalDocsExamined\" : 28851.0, \n \"executionStages\" : {\n \"stage\" : \"COLLSCAN\", \n \"filter\" : {\n \"$and\" : [\n {\n \"$expr\" : {\n \"$and\" : [\n {\n \"$eq\" : [\n \"$groupId\", \n {\n \"$const\" : ObjectId(\"627f7821c5e9de1b328ea918\")\n }\n ]\n }, \n {\n \"$eq\" : [\n \"$deletedAt\", \n {\n \"$const\" : null\n }\n ]\n }, \n {\n \"$eq\" : [\n \"$date\", \n {\n \"$const\" : ISODate(\"2022-05-30T00:00:00.000+0000\")\n }\n ]\n }\n ]\n }\n }, \n {\n \"date\" : {\n \"$_internalExprEq\" : ISODate(\"2022-05-30T00:00:00.000+0000\")\n }\n }, \n {\n \"deletedAt\" : {\n \"$_internalExprEq\" : null\n }\n }, \n {\n \"groupId\" : {\n \"$_internalExprEq\" : ObjectId(\"627f7821c5e9de1b328ea918\")\n }\n }\n ]\n }, \n \"nReturned\" : 1.0, \n \"executionTimeMillisEstimate\" : 3.0, \n \"works\" : 28853.0, \n \"advanced\" : 1.0, \n \"needTime\" : 28851.0, \n \"needYield\" : 0.0, \n \"saveState\" : 28.0, \n \"restoreState\" : 28.0, \n \"isEOF\" : 1.0, \n \"direction\" : \"forward\", \n \"docsExamined\" : 28851.0\n }\n }, \n \"command\" : {\n \"aggregate\" : \"LESSONS\", \n \"pipeline\" : [\n {\n \"$match\" : {\n \"$expr\" : {\n \"$and\" : [\n {\n \"$eq\" : [\n \"$groupId\", \n ObjectId(\"627f7821c5e9de1b328ea918\")\n ]\n }, \n {\n \"$eq\" : [\n \"$deletedAt\", \n null\n ]\n }, \n {\n \"$eq\" : [\n \"$date\", \n ISODate(\"2022-05-30T00:00:00.000+0000\")\n ]\n }\n ]\n }\n }\n }\n ], \n \"cursor\" : {\n\n }, \n \"$db\" : \"TEST_DB\"\n }, \n \"serverInfo\" : {\n \"host\" : \"nitro\", \n \"port\" : 2717.0, \n \"version\" : \"5.0.8\", \n \"gitVersion\" : \"c87e1c23421bf79614baf500fda6622bd90f674e\"\n }, \n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : 104857600.0, \n \"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600.0, \n \"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600.0, \n \"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600.0, \n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600.0, \n \"internalQueryProhibitBlockingMergeOnMongoS\" : 0.0, \n \"internalQueryMaxAddToSetBytes\" : 104857600.0, \n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600.0\n }, \n \"ok\" : 1.0, \n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1652524646, 1), \n \"signature\" : {\n \"hash\" : BinData(0, \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"), \n \"keyId\" : NumberLong(0)\n }\n }, \n \"operationTime\" : Timestamp(1652524646, 1)\n}\n\n",
"text": "What about $expr in $matchMy Model exampleIndexes of my collectionSituation 1.Result 1Situation 1 working fine butSituation 2.\nwith $expr scanning all documents in collection\nwhy I need it to be $expr because then I need it to be inside $lookupResult 2",
"username": "Yusufjon_Nazarov"
},
{
"code": "",
"text": "Thanks in advance @Pavel_Duchovny",
"username": "Yusufjon_Nazarov"
}
] |
Index not used in $lookup
|
2022-03-07T08:13:12.545Z
|
Index not used in $lookup
| 11,300 |
null |
[
"queries",
"indexes"
] |
[
{
"code": "",
"text": "Hi all\nwill findOne({_id: ObjectId(id), userId: userName}) be using default _id index?\nThanks\nVladimir",
"username": "Vladimir_Vorobiev1"
},
{
"code": "explainexplainfindOnefind",
"text": "Hello @Vladimir_Vorobiev1, welcome to the forum!You can know about how and what index is being used with your query by generating a Query Plan on a query. Use the explain method for that. But, not all collection methods support the explain, and findOne is one that doesn’t allow it. So, as an alternative you can generate the query plan by using the find method for your query.Useful resources:",
"username": "Prasad_Saya"
}
] |
_id default index
|
2022-05-13T18:49:40.169Z
|
_id default index
| 1,769 |
null |
[
"node-js"
] |
[
{
"code": "{\n \"roles\": [\n {\n \"name\": \"default\",\n \"apply_when\": {},\n \"insert\": true,\n \"delete\": true,\n \"search\": true,\n \"write\": true,\n \"fields\": {},\n \"additional_fields\": {}\n }\n ],\n \"filters\": []\n}\n",
"text": "Trying to get a node.js app to talk to my Realm App services. I am able to use the anonymous login and instantiate a currentUser but when I try to do a find query against a collection I get the following error.Error: {“message”:“no rule exists for namespace ‘MyDB.myCollection’”,“code”:-1}here are the rules for this collectionwhat am I missing?",
"username": "Jesse_Beckton"
},
{
"code": "",
"text": "Hi Jesse,Could you please provide more detail on how you are performing the query e.g. calling a Realm function etc.Please accompany this with any relevant documentation you’re following.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "const realmClientService = ((): RealmClientService => {\n\tconst APP_ID: string = process.env.REALM_APP_ID || \"UNDEFINED\";\n\tconst REALM_GRAPHQL_ENDPOINT: string = `https://us-west-2.aws.realm.mongodb.com/api/client/v2.0/app/My-App-Id/graphql`;\n\n\tlet app: App;\n\n\tconst init = (): void => {\n\t\ttry {\n\t\t\tapp = new App({ id: APP_ID });\n\t\t} catch (error) {\n\t\t\tthrow error;\n\t\t}\n\t};\n\n\tconst handleLogin = async (): Promise<User> => {\n\t\ttry {\n\t\t\tif (!app) {\n\t\t\t\tinit();\n\t\t\t}\n\n\t\t\tconst credentials = Credentials.anonymous();\n\n\t\t\tawait app.logIn(credentials).catch((error) => {\n\t\t\t\treturn Promise.reject(error);\n\t\t\t});\n\n\t\t\treturn Promise.resolve(app.currentUser);\n\t\t} catch (error) {\n\t\t\treturn Promise.reject(error);\n\t\t}\n\t};\n\n\tconst generateAuthHeader = async (): Promise<{ Authorization: string }> => {\n\t\ttry {\n\t\t\tif (!app.currentUser) {\n\t\t\t\tawait handleLogin();\n\t\t\t} else {\n\t\t\t\tawait app.currentUser.refreshCustomData();\n\t\t\t}\n\n\t\t\tconst { accessToken } = app.currentUser as User;\n\n\t\t\treturn { Authorization: `Bearer ${accessToken}` };\n\t\t} catch (error) {\n\t\t\treturn Promise.reject(error);\n\t\t}\n\t};\n\n\tconst getMongoClient = (): Services.MongoDB => {\n\t\treturn app.currentUser?.mongoClient(\n\t\t\t\"mongodb-atlas\"\n\t\t) as Services.MongoDB;\n\t};\n\n\tconst getDb = (databaseName?: string): Services.MongoDBDatabase => {\n\t\tconst dbName = databaseName\n\t\t\t? databaseName\n\t\t\t: (process.env.MONGO_DATABASE as string);\n\n\t\tconst database = getMongoClient()?.db(dbName);\n\t\tif (!database) {\n\t\t\tthrow new Error(`Could not find a database with the name ${dbName}`);\n\t\t}\n\t\treturn database;\n\t};\n\n\treturn {\n\t\tlogin: handleLogin,\n\t\tgenerateAuthHeader: generateAuthHeader,\n\t\tget gqlEndPoint() {\n\t\t\treturn REALM_GRAPHQL_ENDPOINT;\n\t\t},\n\t\tget mongoClient() {\n\t\t\treturn getMongoClient();\n\t\t},\n\t\tdb: getDb,\n\t};\n})();\n\nexport default realmClientService;\nconst data = await realmClientService\n\t\t.db()\n\t\t.collection(\"myCollection\")\n\t .find();\n",
"text": "Here is the code that sets up my realm client instance.Here is the query",
"username": "Jesse_Beckton"
},
{
"code": "",
"text": "I discovered the problem. The database name I was using in my code was using the wrong case. My actual database name was like ‘myDatabase’ but I was using ‘MyDatabase’.",
"username": "Jesse_Beckton"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Error: {"message":"no rule exists for namespace 'MyDB.myCollection'","code":-1}
|
2022-05-13T03:56:20.387Z
|
Error: {“message”:”no rule exists for namespace ‘MyDB.myCollection’”,”code”:-1}
| 2,735 |
null |
[
"mdbw22-hackathon"
] |
[
{
"code": "",
"text": "I am an Database developer, usually I work with RDBMS, but I’m interested in Mongo.\nI like the idea of @michael_hoeller “Can news indicate a stock price change”SQLGMT+2",
"username": "Crist"
},
{
"code": "",
"text": "Welcome @Crist - glad to have you.I love @michael_hoeller 's idea too - hopefully so do others and you can collaborate? Why respond to that in the Projects looking for Hackers category and see who bites?",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Hello Crist,Can we team-up and work on the project together?Regards,",
"username": "Ayo_Exbizy"
},
{
"code": "",
"text": "Hello @Ayo_Exbizy ,\nYes , we can team-up to work on the project together.RegardsThanks @Shane_McAllister for the support",
"username": "Crist"
},
{
"code": "",
"text": "Hello Crist,Thanks for the feedback, that’s great.Lets schedule a session to meet and discuss the project.Is there any platform you propose for us to have discussions?Thank you",
"username": "Ayo_Exbizy"
},
{
"code": "",
"text": "Hallo @Ayo_Exbizy and @Crist\nas mentioned I am limited due to some sudden high workload, in case you like I can offer to support when it comes to MongoDB questions. I like this idea, this would have been my choice. Please do not feel pushed, this is your project. I mentioned to support if asked with the idea, so I stand by it.\nRegards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hello @michael_hoeller @Ayo_Exbizy\nI was looking at the GDELT data and the CAMEO trying to figure out what could be usefull for the project.\nIn this first attempt to find news related to the crypto and maybe with Actor type GOV that could be the class of Actor with more impact in this world.Do you have any suggestions about the platform?Thanks",
"username": "Crist"
},
{
"code": "",
"text": "Hello @michael_hoeller @Crist ,\nCan we schedule a google meet to discuss the options for this project in detail?Let me know which time is okay by the two of you, I am in GMT+1 time zone.Thank you",
"username": "Ayo_Exbizy"
},
{
"code": "",
"text": "Hello\nI can be available on Friday between 16:00 CET (14:00 UTC) and 19:00 CET (17:00 UTC).\nCheers,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hello\nfor me it’s ok on Friday between 16:00 CET (14:00 UTC) and 19:00 CET (17:00 UTC)Thanks",
"username": "Crist"
},
{
"code": "",
"text": "Thanks for the feedback @Crist @michael_hoellerThe time is okay for me too.Looking forward to meeting you guys, meanwhile @michael_hoeller @Shane_McAllister Can you help confirm the links to the reference documents for this project?",
"username": "Ayo_Exbizy"
},
{
"code": "",
"text": "Hi the GDELT DB does not contain the summary of news, so searching thru this is not an option without extra work. I am not aware of a CAMEO Code coming close to “crypto” or bitcoin.\nFiltering on “title” and “sourceURL” can be an option to get the tone from the remaining documents, maybe this can correlate the stock price?\nIn case it comes to the point that the filter on title and sourceURL turns out to be a weak filter, it also could be an option not to filter but to try to find a correlation from the Goldstein score + tone to the bitcoin price.Can you send me your eMail Addresses via DM to schedule a meeting on Friday 16:00 - 17:00 CET",
"username": "michael_hoeller"
},
{
"code": "",
"text": "@nraboy has been streaming a project that includes a metadata scraper - maybe you’d also find this useful!main/scripts/website-meta-parserHacking with the GDELT Dataset. Contribute to mongodb-developer/mongodb-world-2022-hackathon development by creating an account on GitHub.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "Hello @Crist @michael_hoeller,Have you shared the meeting link?",
"username": "Ayo_Exbizy"
},
{
"code": "",
"text": "Yes I did\nLink: https://meet.google.com/bib-ajgq-xgn",
"username": "michael_hoeller"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Crist is looking for a project!
|
2022-05-10T13:40:20.561Z
|
Crist is looking for a project!
| 4,341 |
null |
[
"aggregation"
] |
[
{
"code": "",
"text": "In a $projet stage, I use a $reduce aggregation.\nThe same $cond aggregation is used twice.I would like to create a variable to avoid repeating the $cond and don’t succeed to do so ?Here is my pipeline{\noptionId: 1,\nerrors: {\n$reduce: {\ninput: ‘$sizes’,\ninitialValue: {count:0,find:[]},\n‘in’: {\ncount:{\n$add: [’$$value.count’, {\n$cond: [{\n$ne: [{\n$substr: [’$$this.sku’, 0, 11]\n}, ‘$optionId’]\n}, 1, 0]\n}]\n},\nfind:{\n$concatArrays:[’$$value.find’, {\n$cond: [{\n$ne: [{\n$substr: [’$$this.sku’, 0, 11]\n}, ‘$optionId’]\n}, [’$$this.sku’],[]]\n}]\n}\n}\n}\n}\n}Best regards",
"username": "emmanuel_bernard"
},
{
"code": "substr_var = { \"$substr\" : [ \"$$this.sku\" , 0 , 11 ] }\ntest_var = { \"$ne\" : [ substr_var , \"$optionId\" ] }\ncount = { \"$add\" : [\n \"$$value.count\" ,\n { \"$cond\" : [ test_var , 1 , 0 ] }\n] }\nfind : { \"$concatArrays\" : [\n \"$$value.find\" ,\n { \"$cond\" : [ test_var , [\"$$this.sku\"] , [] ] }\n} ]\n\n// here is my pipeline\n{\n \"optionId\" : 1,\n \"errors\" : {\n \"$reduce\" : {\n \"input\" : \"$sizes\" ,\n \"initialValue\" : { \"count\" : 0 , \"find\": [] } ,\n \"in\": { count , find }\n }\n }\n}\n",
"text": "A query is simply a JSON document.You can use any variable you want, just like you do with any other json document. But in your case your $cond is not the same in both case. The test inside the $cond is the same but the returned values are different.",
"username": "steevej"
},
{
"code": "",
"text": "Hi,Thank for your answer.\nMy query is running !",
"username": "emmanuel_bernard"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to create a variable to avoid repeating the same condition twice?
|
2022-05-12T09:48:20.696Z
|
How to create a variable to avoid repeating the same condition twice?
| 935 |
[
"aggregation",
"nairobi-mug"
] |
[
{
"code": "Entrepreneur and Developer Community Expert",
"text": "\nMUG Nairobi1250×697 102 KB\nJoin us at Chuka University for an afternoon mini-workshop filled with exploration of MongoDB Technology. You will get to set up Atlas, Cloud DBaaS for MongoDB. Opportunities are available on MongoDB Technology, network, and cheat sheets.Get your laptop ready and join the experts in these afternoon filled with fun and learning.AgendaKey BenefitsThis community is for MongoDB users, data scientists/analysts, AI, IoT, backend developers, ML developers, cloud computing, and anyone interested in emerging data technologies. MongoDB community is where the world’s fastest-growing data community comes to connect, explore, and learn.Welcome to the MongoDB community, Come to learn, and stay to connect.We will have swag, pizza, wine, juice, beer, and great conversations! We are excited to see you all soon!Event Type: In-Person\n Location - Chuka University\nRSVP here: https://live.mongodb.com/events/details/mongodb-nairobi-kenya-presents-mongodb-community-mini-workshop-in-chuka-1/\nMichael Kimathi Photo600×600 100 KB\nEntrepreneur and Developer Community ExpertHe connects people to solutions that empower them to realize their highest potential and as a result, this has produced professionals and scalable businesses. In his free time, you will find him leading the developer community. His name is Michael Kimathi and this is his life purpose. Entrepreneur in Music and Technology. Driven by the purpose and love of people. Experienced Leader, Founder, and Co-Founder with a demonstrated history of working in the computer software industry. Skilled in agile, product manager, research, developer community, agile frameworks, team and collaboration tools, strategy, ecosystem builder, databases, mobile applications, web design, and web apps. A strong business development professional. He is an experienced leader with a demonstrated history of working in the computer software industry as a consulting entrepreneur. He is a professional Agile expert with a track record in product management, strategy, and agile frameworks. He has developed a strong interest in the Entertainment industry which as a result has helped his journey in understanding music and appreciating the larger entertainment ecosystem. This has given him an edge in team development and management while focused on all aspects of solution building. He has learned and continues to learn the tools and strategies that are required to produce results in the computing and entertainment industry.",
"username": "Michael_Kimathi"
},
{
"code": "",
"text": "Hey Micheal. I, unfortunately, will not be able to attend this workshop in Chuka, due to logistics constraints. When are you guys planning on having an event in Nairobi? I’d love to attend.",
"username": "Ian_Mugenya"
}
] |
MUG Nairobi: MongoDB Mini Workshop in Chuka University
|
2022-05-12T12:11:57.068Z
|
MUG Nairobi: MongoDB Mini Workshop in Chuka University
| 3,700 |
|
null |
[
"aggregation",
"node-js",
"crud",
"production",
"transactions"
] |
[
{
"code": "'change'$replaceRoot$projectoperationTypeChangeStreamDocumenttolsidtxnNumberclusterTime.watch<any, X>()X",
"text": "The MongoDB Node.js team is pleased to announce version 4.6.0 of the mongodb package!Our change stream document type and watch API have undergone some improvements! You can now define your own custom type for the top level document returned in a 'change' event. This is very useful when using a pipeline that significantly changes the shape of the change document (ex. $replaceRoot, $project operators). Additionally, we’ve improved the type information of the default change stream document to default to union of the possible events from MongoDB. This works well with typescript’s ability to narrow a Discriminated Union based on the operationType key in the default change stream document.Prior to this change the ChangeStreamDocument inaccurately reflected the runtime shape of the change document. Now, using the union, we correctly indicate that some properties do not exist at all on certain events (as opposed to being optional). With this typescript fix we have added the properties to for rename events, as well as lsid, txnNumber, and clusterTime if the change is from within a transaction.NOTE: Updating to this version may require fixing typescript issues. Those looking to adopt this version but defer any type corrections can use the watch API like so: .watch<any, X>(). Where X controls the type of the change document for your use case.Check out the examples and documentation here.Operations will now be directed towards servers that have fewer in progress operations, distributing the load more evenly across servers.This release includes some experimental features that are not yet ready for use. As a reminder, anything marked experimental is not a part of the official driver API and is subject to change without notice.We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "Bailey_Pearson"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] |
MongoDB Node.js Driver 4.6.0 Released
|
2022-05-11T19:31:38.657Z
|
MongoDB Node.js Driver 4.6.0 Released
| 2,524 |
null |
[] |
[
{
"code": "",
"text": "Whether Realm have similar feature like this? https://docs.amplify.aws/lib/auth/mfa/q/platform/js/",
"username": "es_degan"
},
{
"code": "",
"text": "Hi es_degan,Thanks for your question and welcome to the community.To my knowledge we don’t have this feature as of now for our built in authentication providers, however I did find this has been requested in our feedback forum below by other customers. Please vote on it and/or provide your business use case if you wish.It would be great to have an option for two factor authentication to increase security. Ideally this would be via authentication apps such as Microsoft Authenticator, email or SMS authentication would be viable too.If you need extra validation then you may want to use a custom authentication provider such as Custom JWT or Custom Function.Hope that helps.Regards",
"username": "Mansoor_Omar"
}
] |
Realm 2FA/MFA by TOTP
|
2022-05-08T23:42:08.635Z
|
Realm 2FA/MFA by TOTP
| 1,477 |
null |
[
"database-tools"
] |
[
{
"code": "mongoimport --host IP --port 27017 -u myuser -p mypass --authenticationDatabase admin --db dbname --collection collection --drop --type json --file absolutejsonfilepath --jsonArray\nserver returned error on SASL authentication step: BSON field 'saslContinue.mechanism' is an unknown field.\n2022-05-12T12:29:22.261+0000 checking options\n2022-05-12T12:29:22.262+0000 dumping with object check disabled\n2022-05-12T12:29:22.262+0000 will listen for SIGTERM, SIGINT, and SIGKILL\n2022-05-12T12:29:22.283+0000 got error from options parsing: error connecting to db server: server returned error on SASL authentication step: BSON field 'saslContinue.mechanism' is an unknown field.\n2022-05-12T12:29:22.283+0000 Failed: error connecting to db server: server returned error on SASL authentication step: BSON field 'saslContinue.mechanism' is an unknown field.\n",
"text": "I am trying to import some json files to mongodb collections using following command on a server other than mongodb server:It is throwing following error:How ever If I hit the same command on mongodb server itself then it works. I have checked all credentials & other details. All details are correct.updateif I run the command using -vvvv parameter (for verbose output) then I got this:",
"username": "Aman_Kaur"
},
{
"code": "",
"text": "What is your mongodb version and mongoimport version?\nIt could be due to version related issues\nTry to use latest mongoimport from mongodb tools",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Mongodb version: 5.0.8\nMongoimport version: 100.5.2I have latest version of Mongoimport 100.5.2 on my machine.",
"username": "Aman_Kaur"
},
{
"code": "",
"text": "Can you connect to your db from the host where you are trying to run mongoimport?\nAre you using any special characters in your password?\nIf yes try to escape it\nor\nDoes it work with lower version of mongoimport?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Can you connect to your db from the host where you are trying to run mongoimport? YES\nAre you using any special characters in your password? NO\nDoes it work with lower version of mongoimport?\nI AM NOT ABLE TO DOWNGRADE TO VERSION 100.5.1. THE INSTALLATION STEPS MENTIONED IN THE BELOW LINK GIVES ERROR:\nERROR: dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)",
"username": "Aman_Kaur"
},
{
"code": "sudo apt-get --purge remove mongodb-org\nsudo apt purge mongodb*\nsudo dpkg -i --force-all mongodb-database-tools-ubuntu2004-x86_64-100.5.1.deb\n",
"text": "It worked with the mongoimport version 100.5.1. For downgrade issue I removed all db tools from the host using command:After that I force install mongodb tools on the host:After that I tried mongoimport command on the host then it worked.",
"username": "Aman_Kaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Mongoimport error: server returned error on SASL authentication step: BSON field 'saslContinue.mechanism' is an unknown field
|
2022-05-12T12:50:58.893Z
|
Mongoimport error: server returned error on SASL authentication step: BSON field ‘saslContinue.mechanism’ is an unknown field
| 11,559 |
null |
[
"aggregation",
"data-modeling",
"time-series"
] |
[
{
"code": "{\n\t_id: \"ProductId\",\n\tsalesTotal: 14352.00, // just $inc every time we sell\n\ttotalSoldUnits: 435, // just $inc every time we sell\n\taverageMarginLast48Months: 0.614, // Can be recalculated from \"months.$.averageProfitMargin.margin\"\n\tmonths: [\n\t\t// cap: 48\n\t\t{\n\t\t\tkey: { year: 2022, month: 1 },\n\t\t\tsalesTotal: 14352.00,\n\t\t\taverageProfitMargin: {\n\t\t\t\tmargin: 61.08,\n\t\t\t\t/* In order to recalculate this margin whenever an order is shipped or returned, we'll\n\t\t\t\t need to keep all the numbers needed to compute a new average. Hence this \"basedOn\" array.\n\t\t\t\t So for example when a new order is shipped, we'll just push a new document to the\n\t\t\t\t \"basedOn\" subarray and then update the margin to (sum of basedOn.$.marginAmount / sum of basedOn.$.total).\n\t\t\t\t The \"basedOn\" array should hopefully never grow too large since it'll only contain a months worth of orders. */\n\t\t\t\tbasedOn: [\n\t\t\t\t\t{ orderLine: \"{ordernumber}_{lineIdentifier}\", quantity: 2, total: 234.34, marginAmount: 143.00 },\n\t\t\t\t\t...\n\t\t\t\t]\n\t\t\t}\n\t\t},\n\t\t...\n\t],\n\tdays: [\n\t\t// cap: 365\n\t\t{\n\t\t\tdate: \"2022-01-01\",\n\t\t\tsoldUnits: 23,\n\t\t\treturnedUnits: 0\n\t\t},\n\t\t...\n\t]\n}\nmonthsdays$pushSlicemonthsdays",
"text": "The application is an e-commerce statistics product where we want to track a couple of metrics for every product, such as:We want these numbers aggregated on a daily, monthly and overall level.On a daily level, we want to retain data for the last 365 days.\nOn a monthly level we want to retain data for the last 48 months.\nThis is just to make sure that the collections don’t get too big over time.Calculating these numbers is a solved problem, so that’s not the challenge. The challenge is how to store it in an efficient and scalable way.Expected write frequency would be up to a few hundred times a week per product and a few hundred times per day for the whole collection.\nExpected read frequency of the whole collection would be in the tens-hundreds per minute when usage spikes.My first idea was to have a plain old collection (i.e. not a time-series collection) where each document looks like this:And then implement the capping of months and days subarrays using $pushSlice.This design would lead to fairly large documents. Is there a chance we might hit the max doc size or the max collection size? Would it be better if I moved the months and days subarrays into their own collections? Then I wouldn’t risk hitting the max document size, but I’m not sure how to ensure the cap of 48 (or 365) docs per product. Is there a way to achieve this (in an elegent way without a bunch of db roundtrips)?A third option would be to use the new time series collection feature. Then I wouldn’t have to store these consolidated numbers but could calculate it on the fly every time using window functions. But would that put a big load on the cluster? Would read times scale poorly with the number of products?I could do some experimenting by generating a ton of dummy data and try out all three options, but before I do that, I thought it would be nice to hear some learnings from people who have already walked this road.",
"username": "John_Knoop"
},
{
"code": "dailyData{ date: ISODate(\"2022-01-01T00:00:00Z\"), totalSales: 0, totalUnits: 0 },\n{ date: ISODate(\"2022-01-02T00:00:00Z\"), totalSales: 0, totalUnits: 0 },\n// ...\nmonthlyData{\n _id: \"ProductId\",\n yearlySalesTotal: 4352.00,\n yearlyTotalSoldUnits: 689,\n year: 2022,\n dailyData: [\n { date: ISODate(\"2022-01-01T00:00:00Z\"), totalSales: 123.45, totalUnits: 39 },\n { date: ISODate(\"2022-01-02T00:00:00Z\"), totalSales: 900.45, totalUnits: 297 },\n //...\n { date: ISODate(\"2022-05-10T00:00:00Z\"), totalSales: 12.45, totalUnits: 2 }\n // ... maximum 366 elements\n ],\n monthlyData: [\n { month: 1, totalSales: 123.45, totalUnits: 39 },\n { month: 2, totalSales: 456.70, totalUnits: 111 },\n // ... maximum 12 elements\n ]\n}\n{\n _id: \"ProductId\",\n salesTotal: 1435.00,\n totalSoldUnits: 221,\n date: ISODate(\"2022-05-11T10:00:00Z\")\n}\n",
"text": "Hello @John_Knoop, here are some thoughts.My first idea was to have a plain old collection…It looks like you are storing calculated data (the totals, etc.). You can store just the raw data. And, calculate the totals, averages, etc., at the time of retrieval/querying. Depending upon your use case (performance requirements, size of data, available resources, etc.) you can divide the load of computation between the write and read operations.My first suggestion is that you store data for one year only in one document per product. This will allow 12 months and 365 days data in the document. In this case, the document size per product is not much (my guess is few 100 kilo bytes of the maximum possible 16 mega bytes).An option is you create a document initially for a product, and initialize all the fields, including the array fields. For example, the dailyData array field will have 365/366 elements with:Similarly, for the monthlyData with 12 elements initialized.The document per product would look like this:The second option is storing raw data for a product and per day in a document.This allows minimum computation at the time of writing. When you read the data you aggregate as per your output needs.The third option is to store data per product per month in a document.Finally, using some of these ideas you can create a model that suits your needs best.NOTE: As for maximum collection size, there is no set limit by the server. I believe this is mostly limited by the file size limits of the server hosting the database server.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "HI @Prasad_SayaIt sounds like we see pretty much the same options.What would tip the scale in favor of option 1 instead of putting every individual transaction in a time-series collection (every single one, not even aggregated by day) and calculate all the numbers on the fly using window functions? Obviously that would shift some compute load from the write side to the read side, and I know that I will have lots more reads than writes, but are time-series collections so performant that it’s not a problem?",
"username": "John_Knoop"
},
{
"code": "",
"text": "@John_Knoop, you are in a better position to figure what is best based upon your work factors. I suggest you go over these two articles:And, as you think about Timeseries collection be aware of this: Time Series Collection Limitations.",
"username": "Prasad_Saya"
}
] |
Advice needed for how to best model product sales stats. Timeseries collections?
|
2022-05-11T09:55:34.800Z
|
Advice needed for how to best model product sales stats. Timeseries collections?
| 3,244 |
null |
[] |
[
{
"code": "",
"text": "Hello,I’m trying to use atlas trigger.Whenever there is a trigger, I want to add a field to document “i.e” current timestamp in required format,(MM-DD-YY seconds) seconds will be number of seconds in that particular day.I tried using Date() and Date.Now() and tried referring the documentation as wellDate but I didn’t got the expected output.Can you please help me out.Thanks,\nArun.",
"username": "Arun_Varma_Penmatsa"
},
{
"code": "",
"text": "Do not worry about how your timestamp is stored in your data.If you want to present it in a particular format do that at the application layer.You may use $dateToParts to do than just before your application get the data.You may also use the locale API supplied by your programming language of choice directly in your application. And to me it is better there than using $dateToParts. It is easier to scale data formatting (and user customization) at the application layer versus data access layer.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej Thank you for your reply.I want to add a field , business name concatenated timestamp (with above mentioned format) on insertion trigger.It’s a business use case where I need to represent the field in the expected format.So I need to add field on an insertion itself in to collection.Thus. $dateToParts \nworks on triggers?Thank you for your time.Thanks ,\nArun.",
"username": "Arun_Varma_Penmatsa"
},
{
"code": "",
"text": "As already mentioned:Do not worry about how your timestamp is stored in your data.If you want to present it in a particular format do that at the application layer.@Tyler_Kaye, also confirms it is better to store timestamp in the native format and to apply the business use-case formatting at the presentation layer in your other thread Timestamp in require format - #4 by Tyler_KayeAs he wrote:just store the field as a Date which is supported in MongoDB and is most often the better thing to do when storing a date in the database. Your client-side application can be in charge of how it wants to convert a date into a string",
"username": "steevej"
}
] |
Current Timestamp in required format
|
2022-05-10T16:26:11.316Z
|
Current Timestamp in required format
| 2,737 |
null |
[] |
[
{
"code": "",
"text": "Hello,Hope you’re doing good.I’m trying using current time stamp as an _id in a collection on insertion trigger.\nI’m trying to fetch current timestamp in required format,(MM-DD-YY seconds) seconds will be number of seconds in that particular day.I tried using Date() and Date.Now() and tried referring the documentation as wellDateBut I didn’t got the desire output. My end goal is to use (MM-DD-YY seconds) seconds will be number of seconds in that particular day , as an _id on insertion trigger.Thank you.",
"username": "Arun_Varma_Penmatsa"
},
{
"code": "",
"text": "Using timestamps as an _id is definitely an anti-pattern in MongoDB. These are unique across a collection and it would mean that you cant insert more than two documents in the same second. I would highly recommend just using the MongoDB ObjectID. In fact, you can technically get the timestamp of an insertion from the object id since it is a part of the encoding: https://www.mongodb.com/docs/manual/reference/method/ObjectId/",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thank you for the reply. If I need to add a new attribute to collection with value (MM-DD-YY seconds) seconds will be number of seconds in that particular day , how can I do that on Atlas Triggers.",
"username": "Arun_Varma_Penmatsa"
},
{
"code": "",
"text": "Triggers is just using a function editor with javascript code. So if you want to add a date, you should be able to create one with “new Date()” and use the javascript methods here if you want to store the date as a string: Date Methods ReferenceYou can also just store the field as a Date which is supported in MongoDB and is most often the better thing to do when storing a date in the database. Your client-side application can be in charge of how it wants to convert a date into a string",
"username": "Tyler_Kaye"
}
] |
Timestamp in require format
|
2022-05-04T19:32:20.208Z
|
Timestamp in require format
| 2,616 |
null |
[
"aggregation"
] |
[
{
"code": "mainIdconst interval = moment.utc().add(-5, \"days\"); // number of days may vary\nreturn this.aggregate([\n {\n $lookup: {\n from: 'collectionB',\n localField: 'mainId',\n foreignField: 'mainId',\n as: 'validd',\n },\n },\n {\n $match: {\n $or: [\n { \"validd\": [] },\n { \"validd.createdAt\": { $lt: new Date(interval) } }\n ]\n }\n },\n]);\nvidreturn this.aggregate([\n {\n $lookup: {\n from: \"CollectionB\",\n let: {\n mainIdField: \"$mainId\",\n vidField: \"$vid\"\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n {\n $eq: [\"$mainId\", \"$$mainIdField\"]\n },\n {\n $eq: [ \"$vid\", \"$$vidField\"]\n }\n ]\n }\n }\n }\n ],\n as: \"joined_results\"\n }\n },\n {\n $unwind: { path: \"$joined_results\" }\n }\n])\ncreatedAt",
"text": "So I have the following query, where I join 2 collections by a single field mainId:it works perfectly! But now I need to join those collection by another field named vid .\nI could make this join with two fields work like this:But I can’t make both cases work toghether. I need to use both filters.1 - Make the join with two fields;\n2 - Apply the first example, where I only return values if:\n2.1 - Itens from collectionA doesnt exists in collectionB\n2.2 - row from collectionA exists in collectionB, but the field createdAt is greather than X.Is it possible?",
"username": "Alan"
},
{
"code": " from: 'collectionB',from: \"CollectionB\",createdAt",
"text": "Post sample documents of your collections.The documents shared in your other thread do not have a field named vid. You have typos errors between from: 'collectionB',andfrom: \"CollectionB\",May be they are different collection but it is not obvious.What about vid vs validd? As a human I can see some resemblance but the $lookup will surely failed using vid if it is validd like your other pipeline.Is this a pipeline that is applied after the original pipeline? Or is the original pipeline, the solution that was provided in your other thread, the solution of an incomplete requirement?1 - Make the join with two fields;YES2 - Apply the first example, where I only return values if:\n2.1 - Itens from collectionA doesnt exists in collectionB\n2.2 - row from collectionA exists in collectionB, but the field createdAt is greather than X.Same solution as first example once you got your joined_results.You have to state your requirement completely right at the beginning so that we do not work and provide partial solutions that requires extra work with additional questions.",
"username": "steevej"
},
{
"code": "name: \"first\"\nmainId: a2345e87-a388-4b72-ae2a-1cd69b7e1330\nvid: \"abc\"\n\nname: \"second\"\nmainId: b2345e87-a388-4b72-ae2a-1cd69b7e1330\nvid: \"def\"\n\nname: \"third\"\nmainId: c2345e87-a388-4b72-ae2a-1cd69b7e1330\nvid: \"ghi\"\nname: \"first\"\nmainId: a2345e87-a388-4b72-ae2a-1cd69b7e1330\nvid: \"abc\"\ncreatedAt:2022-05-11T20:44:14.885+00:00\n\nname: \"third\"\nmainId: c2345e87-a388-4b72-ae2a-1cd69b7e1330\nvid: \"ghi\"\ncreatedAt:2022-05-08T20:44:14.885+00:00\n3 days agoname = secondCollectionBname = firstname = thirdreturn this.aggregate([\n {\n $lookup: {\n from: 'collectionB',\n localField: 'mainId',\n foreignField: 'mainId',\n let: { vidField: '$vid', mainIdField: '$mainId'},\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $eq: [\"$mainId\", \"$$mainIdField\"] },\n { $eq: [\"$vid\", \"$$vidField\"] }\n ]\n }\n }\n }\n ],\n as: 'validd',\n },\n },\n {\n $match: {\n $or: [\n { \"validd\": [] },\n { \"validd.createdAt\": { $lt: new Date(interval) } }\n ]\n }\n },\n]);\ninterval",
"text": "Hello again, Steeve \nSorry for the confusion and thanks again for taking some type to help.1 - The miss typo was a mistake I made while typing here instead of copying and pasting.\n2 - vidd is a string, different from valid.So, if I run this query correctly and considering that I want records from 3 days ago what I want as a output is:1- name = second should be returned, because it does not exists on CollectionB.\n2 - name = first should be ignored, because it was created only one day ago.\n3 - name = third should be returned, because it was created 4 days ago.My last failed attempt of querying this solution:The variable interval is the “age” of the records that I want in days.",
"username": "Alan"
},
{
"code": "// We get second with an empty array because it is not in collectionB.\n{ _id: 'second',\n mainId: 'b2345e87-a388-4b72-ae2a-1cd69b7e1330',\n vid: 'def',\n validd: [] }\n// We also get third, with an element dated before the wanted date.\n{ _id: 'third',\n mainId: 'c2345e87-a388-4b72-ae2a-1cd69b7e1330',\n vid: 'ghi',\n validd: \n [ { _id: 'third',\n mainId: 'c2345e87-a388-4b72-ae2a-1cd69b7e1330',\n vid: 'ghi',\n createdAt: 2022-05-08T04:00:00.000Z } ] }\nsecondCollectionBfrom: 'collectionB'",
"text": "That pipeline works for me.But I do not use moment. Somehow you must be using the wrong date.With your data I get:But note again. You writesecond should be returned, because it does not exists on CollectionBbut then you do $lookupfrom: 'collectionB'If you mistype the collection name, then the lookup will return an empty array for all and all documents will match the empty array test of your $or.",
"username": "steevej"
}
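A note on the wrong-date point above: if moment is suspected of producing the wrong cutoff, the cutoff can be built with a plain JavaScript Date instead. This is only a sketch and not part of the original thread; the 5-day window is an assumed value.

```javascript
// Sketch: build a cutoff Date N days in the past without moment (N is an assumption).
const DAYS = 5;
const cutoff = new Date(Date.now() - DAYS * 24 * 60 * 60 * 1000);

// Used in the final $match stage of the pipeline shown above:
// { $match: { $or: [ { validd: [] }, { "validd.createdAt": { $lt: cutoff } } ] } }
```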
] |
How to join multiple fields and filter data by date
|
2022-05-10T21:37:25.613Z
|
How to join multiple fields and filter data by date
| 8,476 |
[
"mdbw22-hackathon"
] |
[
{
"code": "Staff Developer AdvocateSenior Developer Advocate",
"text": "In this session, Staff Developer Advocate Nic Raboy shares the progress of his News Browser Web App that he is building alongside all our hackathon participants.Join us, it will be fun and you will learn too! What’s not to like!!We will be live on MongoDB Youtube and MongoDB TwitchStaff Developer AdvocateSenior Developer Advocate",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "We’re live - jump on MongoDB Youtube and MongoDB Twitchor watch below",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Livestreaming! Geospatial Queries with the GDELT Data Set - MongoDB World Hackathon Series
|
2022-05-12T16:59:11.351Z
|
Livestreaming! Geospatial Queries with the GDELT Data Set - MongoDB World Hackathon Series
| 2,930 |
|
null |
[
"queries",
"graphql"
] |
[
{
"code": "",
"text": "Hello I was wondering if it was possible to set fields to null with a graphql mutation?I thought if the field is not marked as required in the realm schema it would be possible but I keep getting this error.{\n“data”: null,\n“errors”: [\n{\n“message”: “Syntax Error GraphQL request (35:24) Unexpected Name “null”\\n\\n34: _id: “111118e14e40ae75111b0005”,\\n35: classifications: null,\\n ^\\n36: createDateTimeStamp: “2022-02-01T20:32:33.255Z”,\\n”,\n“locations”: [\n{\n“line”: 35,\n“column”: 24\n}\n]\n}\n]\n}",
"username": "Tam_Nguyen1"
},
{
"code": "",
"text": "Hey there, can you provide the mutation that you’re running and the associated schema? I think what you’re looking for is https://www.mongodb.com/docs/realm/schemas/enforce-a-schema/#validate-null-types which will allow you to explicitly pass in null for optional types but please let me know if your question is different",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hi Sumedha,My graphql query is:mutation {\nupdateOneIncidentType(query:\n{\nSQLServerId: “AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA”},\nset:{name: null}\n) {\nSQLServerId\nname\n}\n}And the schema is:{\n“properties”: {\n“SQLServerId”: {\n“bsonType”: “string”\n},\n“_id”: {\n“bsonType”: “objectId”\n},\n“createDateTimeStamp”: {\n“bsonType”: “date”\n},\n“dataHubVersion”: {\n“bsonType”: “long”\n},\n“isActive”: {\n“bsonType”: “bool”\n},\n“isDeleted”: {\n“bsonType”: “bool”\n},\n“name”: {\n“bsonType”: “string”\n},\n“partition”: {\n“bsonType”: “string”\n}\n},\n“required”: [\n“createDateTimeStamp”,\n“isActive”,\n“isDeleted”,\n“partition”\n],\n“title”: “IncidentType”\n}I set Null Type Schema Validation to ON but I still get this error:{\n“data”: null,\n“errors”: [\n{\n“message”: “Syntax Error GraphQL request (36:17) Unexpected Name “null”\\n\\n35: SQLServerId: “AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA”},\\n36: set:{name: null} \\n ^\\n37: ) {\\n”,\n“locations”: [\n{\n“line”: 36,\n“column”: 17\n}\n]\n}\n]\n}",
"username": "Tam_Nguyen1"
},
{
"code": "set:{name_unset:true}",
"text": "I believe you actually have to use set:{name_unset:true} rather than set explicitly to null",
"username": "Sumedha_Mehta1"
},
{
"code": "set:{name_unset:true}",
"text": "set:{name_unset:true}Oh I see name_unset works. That makes it undefined though, although I think that might work for me.Just double checking there’s no way to make it actually null through graphql? I can do it with a realm function although I prefer graphql.Thanks",
"username": "Tam_Nguyen1"
},
{
"code": " mutation {\n \n aliasOrgIncident: updateOneOrgIncident(query: { SQLServerId: \"AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA\" },\n set: { location: \n {longitude_unset: true \n latitude_unset: true \n description: \"Test\"} \n \n }\n )\n {\n _id\n }}\n",
"text": "I encountered another problem using _unsetIf I doI get this error{\n“data”: {\n“aliasOrgIncident”: null\n},\n“errors”: [\n{\n“message”: “(ConflictingUpdateOperators) Updating the path ‘location’ would create a conflict at ‘location’”,\n“locations”: [\n{\n“line”: 4,\n“column”: 17\n}\n],\n“path”: [\n“aliasOrgIncident”\n]\n}\n]\n}But if I update the description without the longitude_unset/latitude_unset it works. It also works if I update\nlongitude_unset/latitude_unset without the description. It’s just when it’s together that it gives me the error. Do you know why?This post has the same question but there’s no replies:",
"username": "Tam_Nguyen1"
}
] |
Null input for graphql mutation
|
2022-05-11T17:32:34.117Z
|
Null input for graphql mutation
| 5,361 |
null |
[
"python",
"pymodm-odm",
"announcement"
] |
[
{
"code": "",
"text": "Hello, Pythonistas! Unfortunately, we have some news that will affect some of you - MongoDB is pausing our development of PyMODM. If you’re a user of PyMODM, you’ve probably noticed long before this point that our commit/improvements here have been sporadic at best.We came to the decision to halt our development begrudgingly. The PyMODM project has relatively low usage, and there have been less than 1k users who visited the pymodm documentation year to date, but those who do use pyMODM have advocated for its improvement. Many of our top users for PyMODM are internal - that is, they are teams at MongoDB - so we are acutely aware that ceasing our development efforts here could be painful for our users.The codebase will remain available to fork. If your team would like a replacement ODM, many Python users use mongoengine. One of our own developers, Ross Lawley, was previously heavily involved, and we know that this is an awesome open source project for the community.\nAs part of this announcement, we’d like to ask if there are any users who want to take over and maintain PyMODM, or if you just have questions, please reach out. Anything we can do to make this smoother for our users, we’re happy to do.Thank you for using MongoDB, and for building great things with us.",
"username": "Rachelle"
},
{
"code": "",
"text": "Thank you for PyMODM!I hope this is the right place to ask. I’ve tried switching to mongoengine in the past and, even though the API is almost the same, mongoengine was way slower than PyMODM. Does anyone have a similar experience? Anyone switching to mongoengine or others?",
"username": "Martin_Basterrechea"
},
{
"code": "",
"text": "We found the same thing too. We ran some tests with a lot of referenced fields and checked the load/dump times. It was surprisingly slower, and this was like a year ago.",
"username": "Tony_Thomas"
},
{
"code": "",
"text": "Same here, mongoengine is much slower than PyMODM for my application (required loading large nested documents).",
"username": "Ben_Kantor"
},
{
"code": "",
"text": "couple years ago, I also switched to PyMODM due to this issue.\ncould you let me know, which version of mongoengine that you used ?",
"username": "Jensen_W"
},
{
"code": "",
"text": "The lastest version, 0.20.0",
"username": "Ben_Kantor"
},
{
"code": "",
"text": "I hope this is the right place to ask. I’ve tried switching to mongoengine in the past and, even though the API is almost the same, mongoengine was way slower than PyMODM.PyMODM has an optimization built in such that Model fields are lazily decoded when they’re first accessed. This optimization means applications with large or deeply nested Models do not pay the full cost of deserialization when accessing a few fields. I believe that mongoengine has worse performance in these cases because it fully decodes the entire Model, including all embedded Models within. It may be possible to get this lazy decoding feature implemented in mongoengine.",
"username": "Shane"
},
{
"code": "",
"text": "Thanks for the info .if we follow these guides, will it solve our performance issue on mongoengine ?",
"username": "Jensen_W"
},
{
"code": "",
"text": "Was this pausing covid related?And are you any plans or conversations to unpause?",
"username": "m_stemmons"
},
{
"code": "",
"text": "Welcome to the community @m_stemmons!The decision to pause development on PyMODM was not COVID-related. The decision was driven mostly by lack of usage as @Rachelle pointed out in the original announcement. We are still looking for an interested member(s) of the community to take over ownership of the project and maintain PyMODM.Hope this helps!",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "Hi @Prashant_Mital,If still available, I would like to volunteer to maintain PyMODM.I am on GitHub at vladdoster.",
"username": "Vlad_N_A"
},
{
"code": "",
"text": "@Prashant_Mital Any update on this?",
"username": "Vlad_N_A"
},
{
"code": "",
"text": "Hi @Vlad_N_A , can you please send me an email to my firstname @ mongodb.com\nThank you!",
"username": "Rachelle"
},
{
"code": "",
"text": "",
"username": "system"
}
] |
Updates on PyMODM
|
2020-09-18T19:18:31.552Z
|
Updates on PyMODM
| 8,043 |
null |
[
"react-native"
] |
[
{
"code": "const handleRealmSyncError: Realm.ErrorCallback = useCallback(\n (_session, error) => {\n if (realmRef.current) {\n if (error.name === 'ClientReset') {\n realmRef.current.close();\n Realm.App.Sync.initiateClientReset(realmApp, realmPath);\n realmRef.current = null;\n }\n }\n },\n [realmPath],\n );\nsync: {\n user: currentRealmUser,\n error: handleRealmSyncError,\n},\n@realm/reactuseRealmRealmProvider",
"text": "Hello,Thank you for this amazing service. When opening a Realm, one could handle client resets with the following code:and opening a Realm with the following sync config:But with @realm/react, we cannot use the useRealm hook in the same component where RealmProvider is declared, and thus I have no idea where how to handle any sync errors. We are paying customers and have a couple of deadlines to rush. Please advise, thank you very much!",
"username": "Chee_Kit_C"
},
{
"code": "const realm = useRealm()\n...\nrealm.close();\n",
"text": "The reason why we need these lines:is because we need to close the realm before performing a client reset. Any MongoDB employees can help us out or is there any other info we can provide to help find a solution for this?",
"username": "Chee_Kit_C"
},
{
"code": "refRealmProvider",
"text": "@Chee_Kit_C excuse the late reply, the forums can be a bit noisy. This is indeed a use case we hadn’t considered and should be implemented. I believe I could provide a ref prop to the RealmProvider and set the value when it’s rendered. That should make what you are trying to do possible.\nCould you please open an issue on our github repository? This way we can get this planned and track progress.",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "@Andrew_Meyer thank you for the quick response! I’ve opened up a new issue: Expose realm ref when using RealmProvider in @realm/react · Issue #4571 · realm/realm-js (github.com)",
"username": "Chee_Kit_C"
}
] |
How to do client reset with @realm/react?
|
2022-05-11T16:25:09.998Z
|
How to do client reset with @realm/react?
| 2,381 |
null |
[
"mdbw22-hackathon"
] |
[
{
"code": "",
"text": "We’ve now racked up over 900 minutes of Hackathon livestreaming! And don’t worry, if you missed them - you can catch them all on our Hackathon Playlist \n",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
View 900+ minutes of Hackathon live-streaming!
|
2022-05-12T13:21:14.496Z
|
View 900+ minutes of Hackathon live-streaming!
| 2,603 |
null |
[] |
[
{
"code": "",
"text": "I currently have a mobile app that is built connected to MongoDB Atlas. I’m trying to understand what is the benefit of using Realm and how difficult it would be to change my mobile app to connect to Realm. Can someone explain the benefits?",
"username": "Matthew_Harris"
},
{
"code": "",
"text": "Hi @Matthew_Harris - welcome to the community forum!Where to start on the advantages of using Realm? This video gives a good intro that might help. For a mobile app, the biggest single benefit is that the app can read and write data even when offline - Realm Sync will then make sure that all copies in the mobile app(s) and Atlas match once you reconect.Moving databases is never a trivial task, but working with an OODBMS (Realm) is much closer to working with an object database (MongoDB) and so it’s one of the simpler migrations you could attempt.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Andrew, thanks for your reply. Are there any resources on this? Blog post etc? If MongoDB has invested so much in Realm, what is the use if I can’t understand specific use cases and have a resource on how to do a transfer where Realm and MongoDb work together for a better mobile experience. Thanks!",
"username": "Matthew_Harris"
},
{
"code": "",
"text": "I’m not aware of any guides for migrating a mobile app from a MongoDB driver to realm, but I can look into it. Is your app using a mobile database such as core data or SQLite?In terms of advantages of making the move, for me these are some of the most significant:In many cases, it removes the need for an application server as MongoDB Realm + Atlas handles all of the backend work (your mobile app just works with the Realm SDK).The docs have a lot more details.We have posts on building mobile apps with Realm - you can find a bunch here.If you have specific questions then this is a good forum to ask.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thanks for the response. We are using MongoDB atlas as the core data. A few things I like that you are talking about - the serverless functions - we are considering moving our API to lambda, but this could be another option. Are there easy ways to create these functions somewhere then deploy them into the realm environment (ssh or some cmd)? I guess essentially CI/CD. Is there someone you can put me in touch with at Mongo that can go deeper into discussions around this? Thanks, Matt",
"username": "Matthew_Harris"
}
] |
Migrate mobile MERN stack app to Realm
|
2021-04-19T16:29:57.776Z
|
Migrate mobile MERN stack app to Realm
| 1,845 |
null |
[
"node-js",
"mdbw22-hackathon",
"react-js"
] |
[
{
"code": "",
"text": "Hi, Thanks for checking in.I am good at the frontend part and can do a MERN stack project. as a beginner, I am looking for opportunities where I can expand my knowledge. I am ready to take on new challenges and learn from each step.I will give my best if you are ready to give me a chance.\nThanks again",
"username": "Junaid_Ahmed"
},
{
"code": "",
"text": "Hey @Junaid_Ahmed Welcome & Thanks for joining the Twitter Spaces this morning - great to have you.The MERN stack experience is great - very valuable.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "I moved your post into the Hackers looking for Projects category so hopefully it will get picked up here",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Looking for a project
|
2022-05-12T09:57:01.901Z
|
Looking for a project
| 2,854 |
[] |
[
{
"code": "",
"text": "I am new to mongodb and I am unsure if I am even posting in the right place.I have the following schema for a collection of products\nScreenshot 2022-05-04 at 3.10.04 PM1106×309 33.4 KB\nOn looking at this I am concerned about the dimensions (Length, width, height, and weight). I think it would be better to store them as key value pairs under a Dimensions heading. Should I add the Dimensions field as an object or array? And how is the key value entered in to the record?",
"username": "David_Thompson"
},
{
"code": "dimension: {\n length: 10.2,\n width: 18.0,\n height: 5.77,\n weight: 8.0\n}\ndimensions: [\n { k: \"length\", v: 12.7 },\n { k: \"width\", v: 15.0 },\n // other k-v pairs\n]\n",
"text": "Hello @David_Thompson, you can store the “dimension” data as an object or an array depending upon the use case (that is what kind of data you are storing and how you are querying it).In case the dimension’s properties are the same for all the products, you can store like this as an object to group the related data:In case of having different dimension properties for the products then you can apply the Attribute Pattern to store the data in an array field of objects. Each object is a key-value pair as below:",
"username": "Prasad_Saya"
},
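To make the two shapes above concrete, here is a small mongosh sketch of one product document that uses both ideas — a fixed dimension sub-object plus an attribute-pattern array. The collection name and field names are illustrative assumptions, not taken from the original schema.

```javascript
// Sketch only: collection and field names are assumptions.
db.products.insertOne({
  name: "example product",
  // properties shared by every product -> one embedded object
  dimension: { length: 10.2, width: 18.0, height: 5.77, weight: 8.0 },
  // properties that vary per product -> attribute pattern (array of k/v pairs)
  attributes: [
    { k: "length", v: 12.7 },
    { k: "width", v: 15.0 }
  ]
})

// The attribute pattern is then easy to index and query by key/value:
db.products.createIndex({ "attributes.k": 1, "attributes.v": 1 })
db.products.find({ attributes: { $elemMatch: { k: "width", v: 15.0 } } })
```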
{
"code": "",
"text": "@Prasad_Saya , Thanks for your response… Sorry it took me so long to reply, I am doing this project as a side project to my current job.I understand your examples, however, am using atlas to establish my schema and I cannot edit the array values… when I make the field dimensions and assign it the array datatype, I am immediately confronted with the 0,1,2 field values in the array (item at place 0 is length) but I am unsure whether to make length another array with the value (basically making it a 2 dimensional array) or not.I made it an object because I can define the data fields with it being an object… Am I on the right thinking?On another note… I have been thinking about how I am doing my product collection and I have a question for you.I am building a program which will allow businesses to sell products online… I currently have the product collection and I am embedding objects into when needed… For example I have embedded the dimension object for the length, height, width, and weight.Where I am getting stuck is I need to allow for the differentiation between textiles and non textile products… Textiles will have the attributes colour size and quantity. I currently have the size as an object with the attributes of the sizes (large, medium, small) and the quantity. I believe I should redo this to be colour with the attribute of size and quantity.below is my current way of doing this. Am I on the right track?\nScreenshot 2022-05-11 at 2.22.07 PM1343×390 45.2 KB\nAm I on the right track or am I making this harder than it needs to be?",
"username": "David_Thompson"
},
{
"code": "[\n { color: \"red\", size: \"L\", qty: 4 },\n { color: \"red\", size: \"M\", qty: 1 },\n { color: \"white\", size: \"XL\", qty: 3 }\n]\n",
"text": "Am I on the right track or am I making this harder than it needs to be?Hello @David_Thompson. Most of the document design depends upon the usage - how you want to query (includes CRUD) it. In other words how you want to use in your application and access it efficiently.Creating some data samples and important queries you might use in your app - can help.For example:I currently have the size as an object with the attributes of the sizes (large, medium, small) and the quantity.\nI believe I should redo this to be colour with the attribute of size and quantity.One way to store this data is as shown below. Will it suit your application needs, I cannot tell for sure as I don’t know how the data is to be used.",
"username": "Prasad_Saya"
},
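Assuming that array is stored in a field named variants (the field and collection names here are assumptions), a short mongosh sketch of how it could be queried and updated:

```javascript
// Sketch; assumes the color/size/qty array above is stored under "variants".
// Find products that still have red size-L stock:
db.products.find({
  variants: { $elemMatch: { color: "red", size: "L", qty: { $gt: 0 } } }
})

// Decrement the quantity of that specific variant after a sale
// (someProductId is a placeholder for the document's _id):
db.products.updateOne(
  { _id: someProductId, variants: { $elemMatch: { color: "red", size: "L" } } },
  { $inc: { "variants.$.qty": -1 } }
)
```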
{
"code": "",
"text": "@Prasad_Saya ,Thanks… My question is what does this look like in Atlas? If I were to look at the record in atlas, would it be nested arrays?[ color [red [size [qty] ] ] ]or would this be nested objects?I am trying to see what this would look like at the record level.The issue I run into is if I create an array I can’t rename the array index… If I make it an object, I can. So this would have to be stored as an object right?In atlas, I can make an array for colour, but then I only have 0, 1, 2, etc to place the values in… If I make 0 another array, I cannot name that nested array to size.colour [\n0: red ( if I use the string red, I cannot say it is another array. Or I don’t know how to stipulate that it is an array “red” or the array “colour”.Am I explaining this right?",
"username": "David_Thompson"
},
{
"code": "",
"text": "@Prasad_Saya ,\nI apologise. I struggle with schema with mongodb. I understand what I want to do… I just get confused with the embedded document idea. To embed a document, it is an array or is it an object… This is where I get confused and in searching for this definition or distinction, there isn’t any real good explanations in mongodb documentation.I looked for embedded nested arrays… The mongodb documentation talks about querying them, but not in defining them.",
"username": "David_Thompson"
},
{
"code": "",
"text": "Hi @David_Thompson welcome to the community!In this case, I would recommend you to browse the free MongoDB University courses. We have all sorts of courses from beginner to advanced that may help clear up your confusion. Specifically, please have a look at M001 MongoDB basics and M100 MongoDB for SQL pros to start with. Then M320 data modeling might be a good next step.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thank you Kevin… I have taken the Developer courses… But I will look at retaking the M001 MongoDB Basics Again (Although it will only teach me compass and the basics of querying). To my knowledge (from the last time I took it (It has been updated since I took it last) it doesn’t go into how the data is stored. Nesting arrays within arrays (basically a 2d array)M100 MongoDB for SQL pros is new to me… I will look at it and hopefully it will help me to gain a better understanding where I have questions.I will also look at the data modeling course again… I believe I have already taken this one, but I may need to revisit it.ThanksDave",
"username": "David_Thompson"
},
{
"code": "mongosh",
"text": "@David_Thompson, MongoDB data can be of various types and includes arrays and objects. These can be combined and nested. Here is some additional info:The data stored in the MongoDB Atlas cluster or on your local computer is similar. You can work with the data using various tools like mongosh, Compass, etc.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "@Prasad_Saya ,\nTHANKS!!! The link to the mongoDB manual helped a TON!!Regards,Dave",
"username": "David_Thompson"
}
] |
Key Value pairs and how they are listed in the Database
|
2022-05-04T07:13:36.649Z
|
Key Value pairs and how they are listed in the Database
| 8,070 |
|
null |
[
"queries"
] |
[
{
"code": "inventory{\n _id: 1,\n item: \"abc\",\n stock: [\n { size: \"S\", color: \"red\", quantity: 25 },\n { size: \"M\", color: \"blue\", quantity: 50 }\n ]\n}\n{\n _id: 2,\n item: \"def\",\n stock: [\n { size: \"S\", color: \"blue\", quantity: 20 },\n { size: \"M\", color: \"black\", quantity: 10 },\n { size: \"L\", color: \"red\", quantity: 2 }\n ]\n}\n{\n _id: 3,\n item: \"ijk\",\n stock: [\n { size: \"M\", color: \"blue\", quantity: 15 },\n { size: \"L\", color: \"blue\", quantity: 100 }\n ]\n}\n...\ndb.inventory.createIndex( { \"stock.size\": 1, \"stock.quantity\": 1 } )stock.size: \"M\"db.inventory.find( { \"stock.size\": \"M\" } ).sort( { \"stock.quantity\": 1 } )",
"text": "I have a inventory collection with documents as shown below:I have created compound multikey index on:\ndb.inventory.createIndex( { \"stock.size\": 1, \"stock.quantity\": 1 } )I want to query for docs where stock.size: \"M\" and all docs are sorted by it’s quantity in increasing order.\nExpected result: [doc_2, doc_3, doc_1]I am using query: db.inventory.find( { \"stock.size\": \"M\" } ).sort( { \"stock.quantity\": 1 } ), but it’s not returning results in sorted order as expected above.Can someone please help here, how I can achieve same.\nThanks!!",
"username": "Upendra_Kumar"
},
{
"code": " db.inventory.aggregate(\n [\n {\n $match: \n { \"stock.size\": \"M\" \n }\n },\n { \n $project: \n { item: \n { $filter: \n { input: \"$stock\", \n as: \"item\", \n cond: { $eq: [ \"$$item.size\", \"M\" ] }\n } \n } \n } \n } , \n { \n $sort: {\"item.quantity\": 1} \n }\n ]\n )\n\n[\n { _id: 2, item: [ { size: 'M', color: 'black', quantity: 10 } ] },\n { _id: 3, item: [ { size: 'M', color: 'blue', quantity: 15 } ] },\n { _id: 1, item: [ { size: 'M', color: 'white', quantity: 67 } ] }\n]\ndb.inventory.aggregate( {$unwind: \"$stock\" }, {$match: {\"stock.size\": \"M\"}}, { $sort: {\"stock.quantity\": 1}})\n[\n {\n _id: 2,\n item: 'def',\n stock: { size: 'M', color: 'black', quantity: 10 }\n },\n {\n _id: 3,\n item: 'ijk',\n stock: { size: 'M', color: 'blue', quantity: 15 }\n },\n {\n _id: 1,\n item: 'abc',\n stock: { size: 'M', color: 'white', quantity: 67 }\n }\n]\n$groupstockdb.inventory.aggregate( {$unwind: \"$stock\" }, {$match: {\"stock.size\": \"M\"}}, { $sort: {\"stock.quantity\": 1}}, {$group: {_id: \"$stock\" } } )\n[\n { _id: { size: 'M', color: 'black', quantity: 10 } },\n { _id: { size: 'M', color: 'blue', quantity: 15 } },\n { _id: { size: 'M', color: 'white', quantity: 67 } }\n]\n",
"text": "Hi @Upendra_Kumar\nWelcome to the community forum!!As per the query mentioned in the above topic, find().sort() would work on per-document basis and hence this would not mutate inside the document.However, the desired output can be achieved using aggregation or by using aggregation operator $sortArrayHowever, here are two queries which would give the desired output:Output:Adding a $group to group the documents based on stock field:Please Note: The second aggregation query, the index would not be usable for the other stages other than $match.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
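For reference, here is a hedged sketch of the $sortArray operator mentioned above (available from MongoDB 5.2). Note that it sorts elements inside each document's array; ordering the documents themselves by the size-M quantity still needs a $filter plus $sort as in the first pipeline.

```javascript
// Sketch; requires MongoDB 5.2+. Sorts the stock array inside each document by quantity.
db.inventory.aggregate([
  { $match: { "stock.size": "M" } },
  { $set: { stock: { $sortArray: { input: "$stock", sortBy: { quantity: 1 } } } } }
])
```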
] |
Sort operations on compound multikey index
|
2022-05-10T02:49:57.616Z
|
Sort operations on compound multikey index
| 1,569 |
null |
[
"replication",
"storage"
] |
[
{
"code": "{\"t\":{\"$date\":\"2022-05-10T12:01:43.871+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.871+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.871+02:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.871+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.872+02:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.872+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.872+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.872+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.873+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":2945870,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"xxx\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.873+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.6\",\"gitVersion\":\"212a8dbb47f07427dae194a9c75baec1d81d9259\",\"openSSLVersion\":\"OpenSSL 1.1.1o 3 May 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.873+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"NAME=\\\"Arch Linux\\\"\",\"version\":\"Kernel 5.17.5-arch1-1\"}}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.873+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/etc/mongodb.conf\",\"net\":{\"bindIp\":\"0.0.0.0\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"wiredTiger\":{\"engineConfig\":{\"cacheSizeGB\":8}}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\",\"quiet\":true}}}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.873+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:43.874+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=8192M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:44.717+02:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":4671205, \"ctx\":\"initandlisten\",\"msg\":\"This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.\"}\n{\"t\":{\"$date\":\"2022-05-10T12:01:44.717+02:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":4671205,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":653}}\n{\"t\":{\"$date\":\"2022-05-10T12:01:44.717+02:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n--rapair{\"t\":{\"$date\":\"2022-05-10T12:05:00.006+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.007+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.007+02:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.007+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.008+02:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.008+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.008+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.008+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.009+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":2945898,\"port\":27017,\"dbPath\":\"/var/lib/mongodb/\",\"architecture\":\"64-bit\",\"host\":\"epyc\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.009+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.6\",\"gitVersion\":\"212a8dbb47f07427dae194a9c75baec1d81d9259\",\"openSSLVersion\":\"OpenSSL 1.1.1o 3 May 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.009+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"NAME=\\\"Arch Linux\\\"\",\"version\":\"Kernel 5.17.5-arch1-1\"}}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.009+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"repair\":true,\"storage\":{\"dbPath\":\"/var/lib/mongodb/\"}}}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.075+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.075+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=128314M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.990+02:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":4671205, \"ctx\":\"initandlisten\",\"msg\":\"This version of MongoDB is too recent to start up on the existing data files. 
Try MongoDB 4.2 or earlier.\"}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.990+02:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":4671205,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":653}}\n{\"t\":{\"$date\":\"2022-05-10T12:05:00.990+02:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n",
"text": "MongoDB version: 5.0.6\nSystem: Arch Linux kernel: 5.17.5-arch1-1I have 400GB mongodb database on single host with a few collections. Before last weekend my friend started to transfer one of the collections from this db to another replica set. It seems all data was copied, but might be still running some background mongodb operation while the host was shut down . Because I did a system upgrade on Sunday. I use Arch Linux, so mongodb wasn’t upgraded as it is an external package there.Right now mongo refuses to start.I tried to make --rapair, but it fails:It seems to me this ticket should not apply to our use case as mongodb was not upgraded.Will appreciate any kind of help.",
"username": "kriestof"
},
{
"code": "2021-12-29T18:43:36.040+0100 I CONTROL [initandlisten] db version v4.0.6\n2021-12-29T18:43:36.040+0100 I CONTROL [initandlisten] git version: caa42a1f75a56c7643d0b68d3880444375ec42e3\nThis version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.",
"text": "Ok, it seems a bit worse. Probably mongodb was unexpectedly updated a few months back, but it was not reloaded. This is why right now this bug occurs.I went in my logs back to previous mongodb restart and it seems it was on db version 4.0.6Then after upgrading to 5.0.6 I guess ticket might be relevant (This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.). The thing is, it was stated the bug is resolved for 5.0.6. So what are your recommendations right now?",
"username": "kriestof"
},
{
"code": "",
"text": "Seems to be fixed. Downgraded mongodb to 4.0.6. No data loss. So now I can prepare data for safe migration.",
"username": "kriestof"
},
{
"code": "This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.mongodumpmongorestore",
"text": "Hi @kriestof welcome to the community!I’m glad that you’re able to get the deployment going again. However I wanted to add a couple of things:Best regards\nKevin",
"username": "kevinadi"
},
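For anyone hitting the same assertion, the two usual ways forward are either stepping through each intermediate release (4.0 → 4.2 → 4.4 → 5.0, restarting mongod and raising featureCompatibilityVersion at each step) or dumping with the old binaries and restoring into a fresh 5.0 deployment, along the lines of the mongodump/mongorestore route mentioned above. A rough sketch of the latter — ports and paths are placeholders:

```sh
# Sketch only; ports and paths are placeholders.
# Dump while still running the old (4.0) binaries:
mongodump --port 27017 --out /backup/dump-4.0

# Restore into a freshly initialized 5.0 deployment:
mongorestore --port 27018 /backup/dump-4.0
```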
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Database fails to start
|
2022-05-10T10:10:17.805Z
|
Database fails to start
| 3,931 |
null |
[] |
[
{
"code": "",
"text": "Is it the RAM on the MongoDB server side? Or the RAM on the application side?Also, how is it determined that certain documents should be in RAM while others should be on disk?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Hi @Big_Cat_Public_Safety_ActI’m not sure I fully understand the question. Are you talking about the MongoDB server or an application that’s connected to it?I cannot say about any application, but for the MongoDB server itself, a document primarily resides on disk. If requested (e.g. by a query), the selected documents will be loaded from disk into the WiredTiger cache to be processed and returned to the client. The WiredTiger cache itself would contain the most recent requested documents, and the oldest content would be replaced by newer content according to the workload.However please note that the explanation above is an extremely high level description of what’s going on in the server. Details may vary, and this is a subject of much technical details.Please see FAQ: MongoDB Storage for more information.Best regards\nKevin",
"username": "kevinadi"
}
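The cache behaviour described above can be observed on a running server from mongosh. A small sketch follows; the statistic names come from serverStatus output and may differ slightly between versions:

```javascript
// Sketch: inspect WiredTiger cache usage on the server.
const cache = db.serverStatus().wiredTiger.cache;
print("configured cache size (bytes): " + cache["maximum bytes configured"]);
print("bytes currently in cache:      " + cache["bytes currently in the cache"]);
```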
] |
Subset Pattern - What RAM are the documents located in?
|
2022-05-01T15:51:40.109Z
|
Subset Pattern - What RAM are the documents located in?
| 1,405 |
null |
[
"node-js"
] |
[
{
"code": "Failed to install dependencies\nfailed to transpile node_modules/rest-facade/tests/module.tests.js. \"rest-facade\" is likely not supported yet. unknown: Unexpected reserved word 'package' (4:4)\n",
"text": "Greetings,I was attempting to create an auth0 function, and endpoint to access Auth0 Management API via Realm. When attempting to add the dependency I got this error:Are there any workarounds when there are dependency issues like this? Thanks in advance.",
"username": "ajedgarcraft"
},
{
"code": "",
"text": "Hi Andrew,Thanks for posting your question.It seems to only error if you try installing rest-facade 1.16.2 and above.\nI tried installing version 1.16.1 which worked.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "Failed to install dependencies\n\nfailed to transpile node_modules/rest-facade/tests/module.tests.js. \"rest-facade\" is likely not supported yet. unknown: Unexpected reserved word 'package' (4:4)\n",
"text": "@Mansoor_Omar Thank you for the response. The issue happens when attempting to install auth0. Unfortunately this is what happens when I try to install earlier versions that use previous versions of rest-facade:Is there anything I can do to work around this?",
"username": "ajedgarcraft"
}
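One workaround sometimes used while an npm package fails to transpile in Realm functions is to skip the SDK and call the Auth0 Management API directly over HTTPS with the built-in context.http client. This is only a sketch and not from the thread; the tenant domain, the stored token value name, and the endpoint are placeholders.

```javascript
// Realm function sketch; domain, value name, and endpoint are assumptions.
exports = async function (userId) {
  const token = context.values.get("auth0ManagementToken"); // assumed value/secret name
  const response = await context.http.get({
    url: `https://YOUR_TENANT.auth0.com/api/v2/users/${userId}`, // placeholder tenant
    headers: { Authorization: [`Bearer ${token}`] } // header values are arrays of strings
  });
  return JSON.parse(response.body.text());
};
```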
] |
Function Dependencies Issue [node-auth0]
|
2022-04-19T00:33:28.142Z
|
Function Dependencies Issue [node-auth0]
| 1,951 |
null |
[
"java"
] |
[
{
"code": "",
"text": "How to implement bulkWrite async in Java?",
"username": "AARZOO_MANOOSI"
},
{
"code": "// 1. Ordered bulk operation - order is guaranteed\ncollection.bulkWrite(\n Arrays.asList(new InsertOneModel<>(new Document(\"_id\", 4)),\n new InsertOneModel<>(new Document(\"_id\", 5)),\n new InsertOneModel<>(new Document(\"_id\", 6)),\n new UpdateOneModel<>(new Document(\"_id\", 1),\n new Document(\"$set\", new Document(\"x\", 2))),\n new DeleteOneModel<>(new Document(\"_id\", 2)),\n new ReplaceOneModel<>(new Document(\"_id\", 3),\n new Document(\"_id\", 3).append(\"x\", 4))))\n.subscribe(new ObservableSubscriber<BulkWriteResult>());\n",
"text": "Hi @AARZOO_MANOOSI,How to implement bulkWrite async in Java?For async, please see MongoDB Java Driver: Reactive Stream. An example of an ordered BulkWrite operations:Please see also Reactive Stream Tutorials: Bulk Writes for more informationRegards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks and appreciated !!",
"username": "AARZOO_MANOOSI"
}
] |
bulkWrite async in Java
|
2022-04-28T18:56:04.860Z
|
bulkWrite async in Java
| 2,597 |
null |
[] |
[
{
"code": "npx realm-cli pull --remote=msshop2-nwtia\n",
"text": "when i ran this:i get outdated function, not the same function in the realm.\nive already asked for bug report with chat on mongodb dashboard, but its been more than 1 week with reply.when i export the app in mongodb realm, iam getting the latest update.",
"username": "James_Tan1"
},
{
"code": "",
"text": "i mean no reply from chat",
"username": "James_Tan1"
},
{
"code": "",
"text": "Hi James,I’m not sure if you’ve solved this already since this was posted a while ago.If you’re not getting a pull of concurrent app configurations, it’s possible that you have drafts enabled and have not deployed the latest draft changes for your function before doing the pull request.Regards",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "hi Mansoor,\nit is live working function, and working as updated.\nonly pull is not giving updated one",
"username": "James_Tan1"
},
{
"code": "",
"text": "Hi James,Which realm-cli version do you have installed?What time/date was the last time this problem happened?Regards",
"username": "Mansoor_Omar"
}
] |
Realm-cli pull is not up to date
|
2021-10-05T15:30:09.164Z
|
Realm-cli pull is not up to date
| 1,470 |
[
"mdbw22-hackathon"
] |
[
{
"code": "",
"text": "Hello Hackers…We hope you are all deep into your projects now and the finish line is in sight!We are livestreaming again tomorrow and we’d love for some brave souls to join us to share their progress so far? Anyone up for it? If so, you will earn this fine item of clothing -\nScreenshot 2022-04-25 at 17.49.52957×1089 117 KB\nand of course, our eternal kudos and gratitude for your bravery and sense of community!C’mon - don’t be shy! Just reply to this post and we’ll swing an invite your way…and in time (postage/shipping delays notwithstanding), you’ll be wearing exclusive hacakthon swag!",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Hi, I do have participated in world hackathon!",
"username": "SHIV_THE_GR8"
},
{
"code": "",
"text": "Thanks for the reply. Do you have something to demo with us on the Livestream? If so, let me know, and I’ll pop you an invite.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Show your project - get swag!
|
2022-05-11T15:30:58.820Z
|
Show your project - get swag!
| 3,115 |
|
null |
[
"aggregation",
"queries",
"atlas-search"
] |
[
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"mainColor\": [\n {\n \"type\": \"stringFacet\"\n },\n {\n \"store\": false,\n \"type\": \"string\"\n }\n ],\n \"seats\": [\n {\n \"type\": \"stringFacet\"\n },\n {\n \"store\": false,\n \"type\": \"string\"\n }\n ]\n }\n }\n}\n[\n {\n \"$search\": {\n \"facet\": {\n \"operator\": {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"path\": \"mainColor\",\n \"query\": \"Beige\"\n }\n },\n {\n \"text\": {\n \"path\": \"seats\",\n \"query\": \"2\"\n }\n }\n ]\n }\n },\n \"facets\": {\n \"mainColor\": {\n \"type\": \"string\",\n \"path\": \"mainColor\"\n },\n \"seats\": {\n \"type\": \"string\",\n \"path\": \"seats\"\n }\n }\n },\n \"count\": {\n \"type\": \"total\"\n }\n }\n },\n {\n \"$facet\": {\n \"data\": [\n {\n \"$project\": {\n \"mainColor\": 1,\n \"seats\": 1\n }\n }\n ],\n \"meta\": [\n {\n \"$replaceWith\": \"$$SEARCH_META\"\n },\n {\n \"$limit\": 1\n }\n ]\n }\n },\n {\n \"$set\": {\n \"meta\": {\n \"$arrayElemAt\": [\n \"$meta\",\n 0\n ]\n }\n }\n }\n]\n",
"text": "We have a product catalog and would like to build a facet search functionality for it, similar to the one in most web shops.Take the following simplified search index:Now I would like to build a query that returns the result set, as well as the total count and the facet buckets.\nThis is my best attempt so far:In the result now the buckets contain only one entry.\nBut what I need is that each facet is not effected by its own filter field, but only by the other filter fields.\nOtherwise, the user has never the possibility to switch the filter to another color, for the same seats.Is it possible to achieve this?\nIt would be fine if I have to split it in one query for the data and one query for the facets.\nBut the only way I see now is to make one query for each facet, which is not feasible, especially if there are 10+ facets.",
"username": "Mathias_Mahlknecht"
},
{
"code": "operatormainColorseats",
"text": "Hi @Mathias_Mahlknecht , welcome to our forums! I believe what you’re experiencing is the current, intended behavior. Facets return counts over the search results returned from the given operator. Since the operator specifies that mainColor and seats must match specific values, the facets only return buckets for those specified values.",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "Hi @Elle_Shwer, that’s what I thought. But is there currently a way how I can achieve what I want, without querying each facet individually? And if not, is there any feature in the pipeline, that could help me with this in a future release?",
"username": "Mathias_Mahlknecht"
},
{
"code": "",
"text": "Not that I am aware of, will pose this question to the team though. Feel free to also channel feature request here: https://feedback.mongodb.com/\nWe heavily monitor this site for new ideas.",
"username": "Elle_Shwer"
},
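Until something like that exists, the workaround hinted at in the question is to split the work: one $search query for the result set, plus one $searchMeta query per facet that applies every active filter except that facet's own. A hedged sketch for the mainColor facet — the collection and index names are assumptions:

```javascript
// Sketch: mainColor facet counts filtered by everything EXCEPT mainColor.
db.cars.aggregate([
  {
    $searchMeta: {
      index: "default", // assumed index name
      facet: {
        operator: { text: { path: "seats", query: "2" } }, // the other active filters
        facets: {
          mainColor: { type: "string", path: "mainColor" }
        }
      }
    }
  }
])
```

With 10+ facets this does mean several $searchMeta calls per page load, but each one is metadata-only and avoids re-fetching documents.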
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Get correct atlas search facet results
|
2022-05-05T07:50:08.453Z
|
Get correct atlas search facet results
| 2,328 |
null |
[] |
[
{
"code": "",
"text": "Hi! anyone can tell me what is that number after the client ip?2022-05-10T21:09:18.455+0000 I NETWORK [thread1] connection accepted from 127.0.0.1:42969 #5 (3 connections now open)\n2022-05-10T21:09:18.455+0000 I NETWORK [thread1] connection accepted from 127.0.0.1:42970 #6 (4 connections now open)I tried to find which file handle the log output on the source code to figure out but with no luck.\nAnyone know?",
"username": "andres_cozme"
},
{
"code": "\n \n if (!quiet) {\n log() << \"connection refused because too many open connections: \" << connectionCount;\n }\n return;\n } else if (usingMaxConnOverride && _adminInternalPool) {\n ssm->setServiceExecutor(_adminInternalPool.get());\n }\n \n if (!quiet) {\n const auto word = (connectionCount == 1 ? \" connection\"_sd : \" connections\"_sd);\n log() << \"connection accepted from \" << session->remote() << \" #\" << session->id() << \" (\"\n << connectionCount << word << \" now open)\";\n }\n \n ssm->setCleanupHook([this, ssmIt, quiet, session = std::move(session)] {\n size_t connectionCount;\n auto remote = session->remote();\n {\n stdx::lock_guard<decltype(_sessionsMutex)> lk(_sessionsMutex);\n _sessions.erase(ssmIt);\n connectionCount = _sessions.size();\n \n 42969",
"text": "Hi @andres_cozme,You can search the relevant version of server source code for a unique string like “connection accepted from” to find relevant log lines. Based on the log format in your output, you’re working with a version older than MongoDB 4.4 (which switched to structured logging using JSON).For example, matching this line in the MongoDB v4.2 source:The value after the IP address (eg 42969) is the ephemeral port associated with the TCP/IP connection.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie! thanks for the quick reply.The first thing I tried was searching for the string \"connection accepted from \" using the github searcher on the mongo repository but I didn’t found anything. Maybe I’m using it wrong.\nimage1625×506 27.3 KB\nbest regards\nAndres",
"username": "andres_cozme"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Unknown number after IP from client
|
2022-05-11T11:58:27.339Z
|
Unknown number after IP from client
| 2,871 |