Dataset columns: image_url (string, 113–131 chars), tags (list), discussion (list), title (string, 8–254 chars), created_at (string, 24 chars), fancy_title (string, 8–396 chars), views (int64, 73–422k)
null
[ "node-js", "atlas-device-sync", "react-native", "react-js" ]
[ { "code": " const result = useQuery(MyModel);\n const items = result.sorted('name');\n\n console.log(items.length); //This code works fine and returns the number of items.\n console.log(items.isValid()); //This code works fine and returns true.\n\n console.log(items); //This code will crash the App.\n items.map(item => console.log(item)); // Another example that crashes app.\n\"expo\": \"~45.0.0\",\n\"react\": \"17.0.2\",\n\"react-native\": \"0.68.2\",\n\"realm\": \"^10.17.0\",\n\"react-native-reanimated\": \"~2.8.0\",\n\"@realm/react\": \"^0.3.0\",\n\"styled-components\": \"^5.3.5\",\n\"typescript\": \"~4.3.5\"\n<GestureHandlerRootView style={{ flex: 1 }}>\n <ThemeProvider theme={theme}> // <-- from Styled Components\n <AppProvider id={SYNC_CONFIG.appId}> // < -- from @realm/react\n <PortalProvider>\n <UserProvider fallback={SignIn}> // < -- from @realm/react\n <AppSyncWrapper> // <-- This is a wrapper to RealmProvider. Code is below.\n <Routes /> // <-- from @react-navigation/native\n </AppSyncWrapper>\n </UserProvider>\n </PortalProvider>\n </AppProvider>\n </ThemeProvider>\n </GestureHandlerRootView>\nimport React, { ReactNode } from 'react';\nimport { useUser } from '@realm/react';\n\nimport { MainRealmContext } from './realm/RealmContext';\n\ninterface IProps {\n children: ReactNode;\n}\n\nexport function AppSyncWrapper({ children }: IProps) {\n const user = useUser();\n const { RealmProvider } = MainRealmContext;\n\n return (\n <RealmProvider sync={{ user, partitionValue: 'portal' }}>\n {children}\n </RealmProvider>\n );\n}\nimport { createRealmContext } from '@realm/react';\n\nimport { AnswerOption } from './models/Form/AnswerOption';\nimport { Author } from './models/Form/Author';\nimport { Form } from './models/Form/Form';\nimport { Question } from './models/Form/Question';\nimport { Section } from './models/Form/Section';\n\nconst MainRealmContext = createRealmContext({\n schema: [\n Form,\n Author,\n Section,\n Question,\n AnswerOption,\n ]\n});\n\nexport { MainRealmContext };\nimport Realm from “realm”;\nimport { Author } from “./Author”;\nimport { Section } from “./Section”;\n\ntype IForm = {\n_id?: Realm.BSON.ObjectId;\n_partition: string;\nactive?: boolean;\nauthor?: Author;\nname: string;\nsections: Realm.List;\n}\n\ntype IFormObject = IForm & Realm.Object;\n\nclass Form {\n_id?: Realm.BSON.ObjectId;\n_partition: string;\nactive?: boolean;\nauthor?: Author;\nname: string;\nsections: Realm.List;\n\nstatic schema: Realm.ObjectSchema = {\nname: ‘Form’,\nproperties: {\n_id: ‘objectId?’,\n_partition: ‘string’,\nactive: ‘bool?’,\nauthor: ‘Author’,\nname: ‘string’,\nsections: ‘Section[]’,\n},\nprimaryKey: ‘_id’,\n}\n}\n\nexport { Form, IFormObject }```", "text": "Hello friends I come to ask for help because I’m developing an application using React Native, Expo and Realm and I got stuck at one point because my app is crashing instantly after I try to see the data from a query.\nI really wish someone could help as I’m stuck on this problem and can’t move forward.How to describe the problem:\nWhen I perform a query to list the data that are in the database through the useQuery(MyModel) command, the data is apparently loaded correctly, because I can count how many records there are in the database.\nHowever, if I try to interact with these records, the app closes instantly. Even if it’s a simple console.log(result). 
A few examples below:To add more context, I’d like to share some of the libraries I’m using in the project.I’m using partition-based strategy for syncing.Authentication is working normally and the App does not display any logs or error messages, including in the Realm UI logs panel.I made the integration using the @realm/react lib, according to the most recent examples that are in the Github repository.This is my App structure:AppSyncWrapper code:And this is how i have created the Context:And all Models have an structure like this:", "username": "Diego_Jose_Goulart" }, { "code": "realm-jsreact-native-reanimated v2realmreact-native-reanimatednpm install [email protected]", "text": "realm-js and react-native-reanimated v2 do not play well together. We have a beta version of realm that does work with react-native-reanimated. Can you try to npm install [email protected]?", "username": "Andrew_Meyer" }, { "code": "", "text": "Hello @Andrew_Meyer , first of all thanks for trying to help!Unfortunately Installing version 10.20.0-beta.5 led me to the Realm Missing Constructor error, but I’m already using expo-dev-client. Same as this thread.I couldn’t solve this problem either.", "username": "Diego_Jose_Goulart" }, { "code": "", "text": "@Diego_Jose_Goulart Can you try doing a clean rebuild of your app?", "username": "Andrew_Meyer" }, { "code": "expo start --clear --dev-client", "text": "@Andrew_Meyer es! But how can I do a clean rebuild? Would be expo start --clear --dev-client ?", "username": "Diego_Jose_Goulart" }, { "code": "", "text": "@Andrew_Meyer I did some testing and even on a clean install using expo and expo-dev-cli I was not able to work around the above error. Could this be a bug?", "username": "Diego_Jose_Goulart" }, { "code": "", "text": "@Diego_Jose_Goulart Possibly. File an issue on our github repo and i’ll investigate it after the weekend.", "username": "Andrew_Meyer" }, { "code": "", "text": "@Andrew_Meyer issue opened: 4621 on github repo.\nThanks for your help.", "username": "Diego_Jose_Goulart" }, { "code": "", "text": "hi,\nyou can use either realm@hermes or [email protected] version", "username": "amit_singh" }, { "code": "", "text": "my app is still crashing @amit_singh“styled-components”: “^5.3.5”,\n“realm”: “^11.0.0-rc.2”,\n“react-native-reanimated”: “^2.8.0”,\n“react-native”: “^0.66.2”,", "username": "Usman_Saleem" } ]
App crashes after using useQuery or Realm.objects()
2022-06-02T23:01:58.478Z
App crashes after using useQuery or Realm.objects()
6,288
null
[]
[ { "code": " public class RepairDiagnosticLevel : EmbeddedObject\n {\n public int DiagnosticLevelID { get; set; }\n public string DisplayName { get; set; }\n public IList<RepairDiagnosticLevel> DiagnosticLevel { get; }\n }\n", "text": "I have an object that contains this (simplified) objectAs you can see the list is recursive. When I log in to my app, it attempts to data sync this new model (and its parent) but I get this errorCycles containing embedded objects are not currently supported: ‘RepairDiagnosticLevel.diagnosticLevel’can anyone confirm that Realm sync should allow recursion like this and if so, what might cause this exception when logging in?The collection does not currently exist in MongoDB. I’m in dev mode so expecting my new model to create the new collection.\nThank you.", "username": "John_Atkins" }, { "code": "IList<RepairDiagnosticLevel>", "text": "Unfortunately we do not currently support cyclic lists containing embedded objects (which would be your nested recursive/cyclic list IList<RepairDiagnosticLevel> in this case) so you may want to consider restructuring your data model to either avoid recursion/cycles in the list or not use embedded objects as the list’s elements.", "username": "Gagik_Amaryan" }, { "code": "", "text": "Thank you Gagik for the quick reply.Understood.", "username": "John_Atkins" } ]
Realm Sync of nested recursive list
2022-11-16T10:16:02.878Z
Realm Sync of nested recursive list
1,099
null
[ "indexes" ]
[ { "code": "COLLSCANIXSCANFETCHSHARD_MERGESHARDING_FILTER", "text": "Is there some list with all the possible names for stages on explain result?On documentation https://docs.mongodb.com/v4.4/reference/explain-results/#explain-results just appears:What about PROJECTION stage or SORTING stage?", "username": "Jonathan_Alcantara" }, { "code": "", "text": "Welcome to the MongoDB Community Forums @Jonathan_Alcantara !The list of possible stages will vary by MongoDB server version and type of query you are explaining, but the most complete list (including some internal stages you may never see in explain output) would be in the source code, eg: mongo/stage_types.h at master · mongodb/mongo · GitHubThere are also a few stages specific to sharded explain: mongo/cluster_explain.cpp at master · mongodb/mongo · GitHubThe documentation mentions some of the common stages but does not attempt to be an exhaustive list as the notion is that the stages should be “human-readable”.If there are specific stages you would like to learn more about in the context of explain results, please share your explain output.Regards\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for your reply.\nI did that post because on the certification exam I had a question where I had select from a list of responses all the correct stages of the explain output.My surprise was that I didn’t found a list of all that stages on the documentation.Is a bit strange that the only place with a complete list is on source code.Thanks to that I guess that I failed that question on the exam.", "username": "Jonathan_Alcantara" }, { "code": "", "text": "Hi @Jonathan_Alcantara,Thank you for providing additional context.The certification exam questions should only cover information that is included in the documentation and University courses, however I will pass your feedback on to the team so they can review any relevant cert questions about aggregation stages in the explain output.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi, I have the same question in the November certification exam. I believe the SORT stage should be available in the output of explain if the query plan can not use an index for sorting. However, I’m not sure if PROJECTION is a correct answer to that exam question. I selected that option of PROJECTION in the exam because I found this stage in the next question showing the output of a explain.", "username": "Zhen_Qu" }, { "code": "", "text": "@Stennie_X For mongo 5 version what are complete number of stages, certification wise? Still it is not settled.", "username": "Mugurel_Frumuselu" } ]
Complete list of stage names in explain results
2021-11-21T17:55:42.389Z
Complete list of stage names in explain results
2,925
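For readers of the thread above who want to see the stage names for their own queries rather than hunt for a documentation list, the winning plan in explain output can be walked directly. A minimal mongosh sketch; the collection, field names and the exact stages printed are assumptions, not an exhaustive list:

```javascript
// Hypothetical "orders" collection: print the stage names MongoDB reports for one query.
db.orders.createIndex({ status: 1 });

const plan = db.orders
  .find({ status: "shipped" }, { status: 1, _id: 0 })
  .sort({ total: 1 })
  .explain("executionStats");

// The winning plan is a tree; every node carries a "stage" field.
function listStages(node, acc = []) {
  if (!node) return acc;
  acc.push(node.stage);
  const children = node.inputStages || (node.inputStage ? [node.inputStage] : []);
  children.forEach(child => listStages(child, acc));
  return acc;
}

// Typically something like [ 'PROJECTION_SIMPLE', 'SORT', 'FETCH', 'IXSCAN' ],
// or [ 'COLLSCAN' ] when no index applies; exact names vary by server version.
print(listStages(plan.queryPlanner.winningPlan));
```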
null
[ "atlas-search" ]
[ { "code": "[{\n \"$search\": {\n \"index\": \"bookSearchIndex\",\n \"compound\": {\n \"must\": [\n {\n \"moreLikeThis\": {\n \"like\": [\n {\n authors: [\"Frank\"]\n }\n ]\n }\n }\n ]\n }\n }\n }\n]\n\"explain\": {\n \"path\": \"compound.must\",\n \"type\": \"DefaultQuery\",\n \"args\": {\n \"queryType\": \"MatchNoDocsQuery\"\n }\n }\n[{\n \"$search\": {\n \"index\": \"bookSearchIndex\",\n \"compound\": {\n \"must\": [\n {\n \"moreLikeThis\": {\n \"like\": [\n {\n authors: [\"Donna\"]\n }\n ]\n }\n }\n ]\n }\n }\n }\n]\n\"explain\": {\n \"path\": \"compound.must\",\n \"type\": \"TermQuery\",\n \"args\": {\n \"path\": \"authors\",\n \"value\": \"donna\"\n }\n }\n", "text": "Hi everyone,I am trying to use the new moreLikeThis operator and encountered some behaviour that I cannot explain. I have an Atlas search index on a books collection that includes title, authors, genres, etc. I am using Atlas search for recommendations whenever an exact match cannot be found (given other criteria such as postcode, distance, city, etc.). I noticed that whenever the search term is encountered only once in the relevant field in my collection, I get 0 moreLikeThis results back. If it is more than once, then I get results. For example, for a search like this I get 0 results back since I have only 1 book by Frank Tallis.The explain() for above is:When I search for a term that is encountered at least twice in the collection, I get results. For example:I have two books, each by a different author with the first name Donna. The explain() for above is:This behaviour repeats across other fields (like title). I have two books: “Prisoner of Azkaban” and “Prisoners of Geogrpahy”. If I search under “prisoner”, I get recommendations, but 0 recommendations if I search under ‘geography’. Is it that the algorithm does not index/take into account terms that are encountered only once in the collection? If that is the case, I am better off using the “should” compound operator.Many thanks for any explanations/advice.Kind regards,\nGueorgui", "username": "Gueorgui_58194" }, { "code": "", "text": "Hi @Gueorgui_58194,Your experience sounds accurate to me. The purpose of MoreLikeThis is to take some large text (100+ words) and extract 25 good words to query on, presenting similar records. As I understand it, there are some hardcoded limitations in there to not select words that appear only once. For your use case, I would recommend not to use MLT if the text is not long, and just use a normal text query OR supply a larger document for Atlas Search to find similar records.I’m not sure using should/must would make a difference here, did you test that out?", "username": "Elle_Shwer" }, { "code": "db.movies.aggregate([\n {\n \"$search\": {\n moreLikeThis: {\n like:\n {\n \"title\": \"The Godfather\",\n \"genres\": \"action\"\n }\n }\n }\n },\n { \"$limit\": 5},\n {\n $project: {\n \"_id\": 0,\n \"title\": 1,\n \"released\": 1,\n \"genres\": 1\n }\n }\n])\n", "text": "Hi Elle,Thanks very much for the prompt response and sorry for my late reply. The example given in the documentation for moreLikeThis (example 1) uses moreLikeThis for two short terms:In the context of that example, using moreLikeThis would mean that if there were a single movie with ‘Godfather’ in the title, there would be 0 results returned back (the example returns several results because ‘Godfather’ appears more than once). I personally found it counterintuitive that moreLikeThis would not return a single document with an exact match. 
In my use case, I was hoping that where other search criteria such as distance, postcode, city, etc, did not yield search results, I could relax those search criteria and use moreLikeThis for the text fields to expland the search. However, in view of the way moreLikeThis operates, I ultimately implemented my own recommendation algorithm with text/phrase queries and the compound operators.Many thanks for clarifying how moreLikeThis operates.Kind regards,\nGueorgui", "username": "Gueorgui_58194" } ]
Atlas moreLikeThis with single similar document
2022-11-07T11:36:19.092Z
Atlas moreLikeThis with single similar document
1,402
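To make the workaround mentioned at the end of the thread concrete (plain text queries with compound operators instead of moreLikeThis for short fields), here is a rough sketch; the index name comes from the thread, while the query terms and paths are placeholders:

```javascript
// Sketch: optional "should" clauses match even when a term occurs only once
// in the collection, which is exactly where moreLikeThis returned nothing.
db.books.aggregate([
  {
    $search: {
      index: "bookSearchIndex",
      compound: {
        should: [
          { text: { query: "Frank", path: "authors" } },
          { text: { query: "geography", path: "title" } }
        ],
        minimumShouldMatch: 1
      }
    }
  },
  { $limit: 5 },
  { $project: { _id: 0, title: 1, authors: 1 } }
]);
```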
null
[ "aggregation", "queries", "data-modeling" ]
[ { "code": "profiles{\n name: String,\n locations: [ Location{ adm1_id, adm3_id } ],\n product: enum.PRO|BASIC\n}\nadm1_idadm1_id{\n name: String,\n locations: [Location{adm1_id, adm3_id, product}]\n}\n{\n $match // query\n $unwind // separate docs based on `locations`\n $filter // filter based on...?\n ...?\n}\n", "text": "I have a data modelling problem and would greatly appreciate any help.I’ll try to lay down the scenario via a simplified example:Our data model has profiles for businesses:We need to query them with and without a location (based on either adm1_id, adm3_id or both), sorting the results so that PRO-product profiles appear first, non-PRO profiles come after that sorted by a specific, separate sort field. This is now done and it works.Now, approximately 2 days before product launch, the specs changed We now need to enable multiple locations per profile, but, so that any PRO-products would be location specific, meaning:Number two is where my brain snaps.How would you model, index & query this? I want to avoid resorting to multiple queries if at all possible, since we also have to support pagination via skip, which already makes things very slow, but the best I can come up with is:…but, per my understanding, this requires two queries, one two fetch profiles which have PRO-product, and another to fetch the rest. I’m not very experienced with aggregates, maybe there’s a solution hidden there. From the little I could gather, maybe something like……could work?Thanks for reading this far.", "username": "ilari" }, { "code": "db.profiles.aggregate([\n // Match only profiles with given location\n {\n \"$match\": {\n \"locations.adm1_id\": \"new-york\"\n }\n },\n // Unwind locations\n {\n \"$unwind\": \"$locations\" \n },\n // The unwinded working set might have profiles with non-relevant locations.\n // Match only unwinded profiles with given location.\n {\n \"$match\": {\n \"locations.adm1_id\": \"new-york\" \n }\n },\n\n // Sort profiles with highlighted location first, second order by\n // orderKey-field, so that the next stage picks up the highlighted profile.\n {\n \"$sort\": { \n \"locations.highlight\": -1,\n \"orderKey\": 1\n }\n },\n // Now we might have duplicates if a profile had two locations with same\n // adm1_id, so group by _id, grabbing the `$first` values\n {\n \"$group\": {\n \"_id\": \"$_id\",\n \"name\": {\n \"$first\": \"$name\"\n },\n \"locations\": {\n \"$first\": \"$locations\"\n },\n \"orderKey\": {\n \"$first\": \"$orderKey\"\n }\n }\n },\n // $group messes up the sort ordering, so we sort again!\n {\n \"$sort\": {\n \"locations.highlight\": -1,\n \"orderKey\": 1\n }\n },\n // We want to skip & limit our results, but are also interested in the total amount of results.\n {\n \"$facet\": {\n \"paginatedResults\": [\n {\n \"$skip\": 24 // skip whatever is needed\n },\n {\n \"$limit\": 24 // limit to page size\n },\n // use project to make sure we only return safe fields,\n // $group stage is optional and used only if adm1_id was provided.\n {\n \"$project\": {\n \"name\": 1,\n \"locations\": 1,\n \"orderKey\": 1\n }\n }\n ],\n // Get total count of result documents\n \"totalCount\": [\n {\n \"$count\": \"count\"\n }\n ]\n }\n }\n ]);\n.find()", "text": "In case someone else has a similar problem, I’ve come up with at least a solution that works, not necessarily the solution:It is obviously slower than a plain .find() on the index, but a typical query within a collection of 5000 documents takes approx. 
~12-20ms, or ~20-30ms, measured from the node client.Any ideas how to achieve the same, but better and faster, are very much welcome. My aggregate-knowledge is very limited.", "username": "ilari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
"Conditional sort", a data modelling problem
2022-11-15T17:02:11.071Z
"Conditional sort", a data modelling problem
1,947
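One incremental improvement to the pipeline shared above that does not change its logic: make sure the leading $match can use an index, since everything after $unwind has to be computed anyway. A hedged sketch using the field names from the thread:

```javascript
// Compound (multikey) index so the first $match on locations.adm1_id avoids a
// collection scan; the later $sort/$group stages still run in memory.
db.profiles.createIndex({ "locations.adm1_id": 1, orderKey: 1 });

// Verify the first stage is an index scan rather than COLLSCAN:
db.profiles.explain("executionStats").aggregate([
  { $match: { "locations.adm1_id": "new-york" } }
]);
```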
null
[ "crud", "compass", "mongodb-shell", "golang" ]
[ { "code": "", "text": "Hi, everyone\nI tried to use golang to insert about 700 billions of documents into one collection on my computer with 2.2 GHz 4 Core Intel Core i7, 16Gb RAMmy program:\nI perform concurrent method which have 50 gorotines, every gorotines will get one documents from the unbuffered channel and insert one document to the database using InsertOne() funcmy result:\nAfter let me computer running for a whole night, I wake up and find about 400 millions was inserted on mongodb, while in the mid night, I woke up and find the numbers of documents is about 200 miilions.\nI am a new-bie and want to tell you guys about more things i discover and confused.my confusion:\nI think it’s because the number is to big, the number of collection showed in the compass is N/A, so I go to mongosh, use db.collection.countcollections() to get the number, however, it usually takes mins to respond, so that I can not directly know what’s going on in the database.my question:\nIs there any better way to insert the millions of data quickly? use other func like buld.insert() or InsertMany()?\nwhy the performance seems to be lower when there’re more documents?\nthe perfromance of mongodb is slow when it counting millions of documents, is it normal?", "username": "inokiyo" }, { "code": "", "text": "Sorry, it’s 700 millions, not billions", "username": "inokiyo" }, { "code": "", "text": "Hi @inokiyo welcome to the community!Is there any better way to insert the millions of data quickly?I would say that the insert speed depends primarily on the hardware. Although you can create a highly parallel script to do the insertion, there’s only so many cores that process the work. A larger server with more processing power will definitely be able to do this faster.If applicable, using an official tool such as mongoimport may be beneficial, since it was written with the best insert performance in mind (given the hardware).the perfromance of mongodb is slow when it counting millions of documents, is it normal?If it still does a heavy insertion work when you execute this command, then you’re asking the server to do even more work, so yes it will be slow If you don’t need high precision, you might be able to use db.collection.estimatedDocumentCount() that returns the number using the collection’s metadata instead, which will be less precise but would not impose extra work on the server.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "I think it’s because the number is to big, the number of collection showed in the compass is N/A, so I go to mongosh, use db.collection.countcollections() to get the number, however, it usually takes mins to respond, so that I can not directly know what’s going on in the database.That’s odd that it takes so long to count the number of collections. Is it possible that you’re inserting many documents in many collections instead of inserting many documents into one collection? Creating more than 10,000 collections is known to cause serious performance problems.As far as performance when there are many documents in a collection, you may need to add indexes to the collection to prevent having to scan many documents. What indexes you need to add depends on what operations you’re trying to do.", "username": "Matt_Dale" } ]
Insert billions of data into mongodb
2022-10-15T15:55:18.396Z
Insert billions of data into mongodb
3,944
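To illustrate the batching idea raised in the thread (the question mentions Go, but the pattern is the same in every driver): group documents into unordered insertMany calls instead of one InsertOne per document. A minimal Node.js sketch; the URI, names and batch size are assumptions:

```javascript
// Batched, unordered inserts; ordered:false lets the server continue past
// individual failures, and the batch size is a tunable guess, not a rule.
const { MongoClient } = require("mongodb");

async function bulkLoad(docs) {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const coll = client.db("test").collection("items");
  const BATCH = 1000;
  try {
    for (let i = 0; i < docs.length; i += BATCH) {
      await coll.insertMany(docs.slice(i, i + BATCH), { ordered: false });
    }
    // Cheap, metadata-based count while a heavy load is still running.
    console.log(await coll.estimatedDocumentCount());
  } finally {
    await client.close();
  }
}
```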
null
[ "dot-net", "crud" ]
[ { "code": "", "text": "Hello,\nI am trying the build application for mongodb on c#. I have a multiple questions about that.1-)In SQL, i can migrate (seed) datas with Entity Framework Core(ModelBuilder.HasData). I have a lot of Data(documents) and i do not want to seed my MongoDb databases with MongoDbCompass. Should I write a scipt in C# with db.InsertMany options? What is the best approach for MongoDb?\n2-)I wrote a services for CRUD operations. In APIController, i am using this service. I can feed my database with this service. However, my api is directly interacting with database. I want to create repositories but my service and repositories looks the same. What should i do?\n3-) In my service, i am using db.InsertOneAsync, ReplaceOneAsync, DeleteOneAsync vs… Should i save the changes after that operation with collection.Save commands or After Insert, replace and delete command database automatically save the changes?", "username": "Emre_Polat" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Are collection.Save and migrations necessary?
2022-11-15T18:22:46.344Z
Are collection.Save and migrations necessary?
1,204
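On the seeding question in the thread above: a one-off script that upserts the seed documents is the usual substitute for EF Core's HasData, and no separate Save step exists or is needed, because InsertOne/ReplaceOne/DeleteOne are persisted once they are acknowledged. A small mongosh sketch with hypothetical data (the C# driver's InsertManyAsync/UpdateOneAsync are the direct equivalents):

```javascript
// Hypothetical seed data; upsert-style writes make the script safe to re-run.
const seed = [
  { _id: 1, name: "Basic plan",   price: 0 },
  { _id: 2, name: "Premium plan", price: 19.99 }
];

seed.forEach(({ _id, ...fields }) =>
  db.plans.updateOne({ _id }, { $set: fields }, { upsert: true })
);
```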
null
[]
[ { "code": "", "text": "Hi i want to add a date range for a student enrollment ie. if someone enroles in sep22 and enrolment ends23. i also need to set a date for when a course starts and finishes. Im new to Mongo and really not sure on how to add this…\ni have three collections. student, course and lecturer.", "username": "Jean_Smith" }, { "code": "", "text": "Hi @Jean_SmithI’m not sure what you really need here. Do you want to set a parameter so that someone cannot enroll beyond a certain date? That sounds like something that’s easier to enforce in the application side rather than the database side. It’s better for testing too, since you can write tests for your code that ensures that it behaves as expected with no corner cases.However if you need help with MongoDB in particular (queries, design, etc.), could you post some example documents, the desired outcome, and your attempts so far?Best regards\nKevin", "username": "kevinadi" } ]
How do I create a Date Range
2022-11-13T02:07:18.708Z
How do I create a Date Range
964
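To give the question above a concrete starting point, one common shape is to store the enrolment and course windows as real Date values and query them with range operators. A hedged mongosh sketch; all field names are assumptions:

```javascript
// Store start/end as Date values rather than strings.
db.enrolments.insertOne({
  studentId: "S001",
  courseId: "C101",
  enrolStart: ISODate("2022-09-01"),
  enrolEnd: ISODate("2023-09-01")
});

db.courses.insertOne({
  _id: "C101",
  name: "Databases",
  courseStart: ISODate("2022-09-12"),
  courseEnd: ISODate("2023-06-30")
});

// Enrolments active on a given day:
const day = ISODate("2023-01-15");
db.enrolments.find({ enrolStart: { $lte: day }, enrolEnd: { $gte: day } });
```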
null
[]
[ { "code": "", "text": "When I call new Date(“11/15/2022, 21:51:00”) in a Realm function, it returns an Invalid Date string; however, when I run it in a JS environment, it works fine.", "username": "Prince_Shrestha1" }, { "code": "new Date(“11/15/2022, 21:51:00”)new Date('2022-11-15T21:51:00')", "text": "Hi @Prince_Shrestha1 welcome to the community!I don’t think presently Atlas functions recognize the form new Date(“11/15/2022, 21:51:00”).Is it possible to use the ISO-8610 format instead? e.g. new Date('2022-11-15T21:51:00') ?Best regards\nKevin", "username": "kevinadi" } ]
Calling new Date in a Realm function returns an invalid date
2022-11-14T08:42:25.379Z
Calling new Date in a Realm function returns an invalid date
1,163
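If the "MM/DD/YYYY, HH:mm:ss" string cannot be changed at its source, a tiny conversion before the Date constructor avoids the unsupported format. A hedged Atlas Function sketch; it assumes the input layout is fixed and the month and day are already zero-padded:

```javascript
// Convert "11/15/2022, 21:51:00" to ISO-8601 before constructing the Date.
exports = function (raw) {
  const [datePart, timePart] = raw.split(",").map(s => s.trim());
  const [month, day, year] = datePart.split("/");
  const iso = year + "-" + month + "-" + day + "T" + timePart;
  return new Date(iso); // e.g. new Date("2022-11-15T21:51:00")
};
```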
null
[ "python", "atlas-cluster", "transactions" ]
[ { "code": "", "text": "Hi!! I was running a python application on shared M0 cluster and google cloud and was able to insert 60000 data via insert_many in pymongo without write conflict, as soon as I upgraded the cluster to dedicated using Azure the exact same script started returning Writer Conflict, I tried to use AWS Cloud, upgrading again to M20 , but with no success. How does the shared cluster not return Writer Conflict Error whilst the dedicated, supposedly more potent, does?", "username": "Marina_De_Souza_Ripper" }, { "code": "", "text": "Hi @Marina_De_Souza_Ripper - Welcome to the community.Could you provide the following information:Additionally, can you confirm if the below steps are somewhat correct in terms of producing this issue?Additionally, the write conflict mentioned is unlikely to be caused by the underlying cloud provider since it is generally the result of two or more processes trying to update the same document.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Write Conflict when using paid cluster and Azure
2022-11-15T00:52:46.545Z
Write Conflict when using paid cluster and Azure
1,134
null
[ "golang" ]
[ { "code": "ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\n\tdefer cancel()\n\n\tclientOptions := options.Client().ApplyURI(URL)\n", "text": "First I gonna apologize with my bad English.So Im using golang with mongo db ( mongo atlas service )for my connection my code is look like thisSo now we face issue about query and not timeout for 700k ms", "username": "management_zabtech" }, { "code": "", "text": "\nf8106fa8-ec7f-42c9-a18a-1ad273969b1e1280×617 64.3 KB\n", "username": "management_zabtech" }, { "code": "ContextContextmongo.ConnectInsertfunc main() {\n\topts := options.Client().ApplyURI(URL)\n\tclient, err := mongo.Connect(context.Background(), opts)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tcoll := client.Database(\"test\").Collection(\"test\")\n\n\tctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\n\tdefer cancel()\n\t_, err = coll.InsertOne(ctx, bson.D{{\"key\", \"value\"}})\n\tif err != nil {\n\t\tpanic(err)\n\t}\n}\nContextmongo.Connect", "text": "Hey @management_zabtech, thanks for the question! Operation timeout is controlled by the Context passed to an operation function. The Context passed to mongo.Connect has no effect on operation timeout.Here’s an example that times out an Insert operation after 10 seconds:Where are you using the Context you create in your example code? Is it in mongo.Connect or in an operation function?", "username": "Matt_Dale" }, { "code": "", "text": "Thx sir , I got advise by Create index in performance advisor menu , after created everything is fine", "username": "management_zabtech" } ]
I need help with MongoDB Atlas service
2022-11-14T06:30:25.876Z
I need help with MongoDB Atlas service
1,605
null
[ "node-js", "mongoose-odm", "connecting" ]
[ { "code": "", "text": "Hi, I hope someone helps me, I’m not new to mongodb, I’m over 2 years old, everything was going great, I use mongoose with nodejs, until now all my projects that used mongodb stopped, I thought it was a problem with replit where I’m hosted everything, but then I realized that mongoose can’t make the connection and gives the following error: “MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted. Make sure your current IP address is on your Atlas cluster’s IP whitelist”, I have enabled all IPs as I have always done in these last 2 years, I have not even changed the code, the mongoose package was in the version v5 with the code to connect from that version that I have always used, try now to update it and use the updated mongoose code but it still doesn’t work, I need help please.", "username": "DeathAbyss" }, { "code": "", "text": "It is possible your connection string is changed, especially if you are on the free tier. I suggest starting to check that first.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I check it all time, i also reset the password", "username": "DeathAbyss" }, { "code": "", "text": "Then, there might be a breaking change or a breaking bug in mongoose. have you tried to downgrade to what were you using before, if you remember the settings?Since you haven’t shared any configuration, we can only point to possible causes.“could not connect” or “connection timed out” are mostly due to IP restriction, hostname/IP change, or driver update incompatibility. since you say you checked the first two, please focus on the mongoose version for a while.", "username": "Yilmaz_Durmaz" }, { "code": " log(chalk.green(`${settings.line}`));\n log(chalk.blue(\"[INFO]: Loading Mondodb....\"));\n log(chalk.green(`${settings.line}`));\n\n try {\n const options = {\n autoIndex: false, // Don't build indexes\n maxPoolSize: 10, // Maintain up to 10 socket connections\n serverSelectionTimeoutMS: 5000, // Keep trying to send operations for 5 seconds\n socketTimeoutMS: 45000, // Close sockets after 45 seconds of inactivity\n family: 4 // Use IPv4, skip trying IPv6\n };\n await connect(uri, options).then(() => {\n log(chalk.green(`${settings.line}`));\n log(chalk.blue(\"[INFO]: Ready MongoDB ✅\"));\n log(chalk.green(`${settings.line}`));\n }).catch((e) => {\n console.log(e);\n })\n } catch (e) {\n console.log(e);\n }\n", "text": "“could not connect” or “connection timed out” are mostly due to IP restriction, hostname/IP change, or driver update incompatibility. since you say you checked the first two, please focus on the mongoose version for a while.i fork and now is working, but i will notify here if it broke again, my mongoose version is the last with the best code to connect this:", "username": "DeathAbyss" }, { "code": "", "text": "i fork and now is workingit is nice to hear. maybe it was a temporary connection issue from replit servers.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "it is nice to hear. maybe it was a temporary connection issue from replit servers.the code is on replit ar hosting it good, it just say it cant connect to the database, and now one of them broke again.", "username": "DeathAbyss" }, { "code": "", "text": "the code is on replitis it private? or we may fork and try to investigate. if you haven’t done so yet, you may set up “secrets” to hold you server/password details. 
Can’t speak for the code itself.", "username": "Yilmaz_Durmaz" }, { "code": "db.ts", "text": "I can’t give u the project because I don’t have permissions, but I can give u the db.ts file that run the mongoose code: db.ts - Pastebin.com", "username": "DeathAbyss" }, { "code": "import db from \"./DB/db\";\ndb();\nimport db from \"./DB/db\";\n(async () =>{\nawait db();\n})()\n", "text": "and i import that like this:let me try it with await", "username": "DeathAbyss" }, { "code": "\"type\":\"module\"loadmodules", "text": "“await” might do some tricks. and you may not need to wrap depending on project type (\"type\":\"module\")Anyways, I guess I could wrap your above code correctly. I just had to comment out loadmodules and IIFE block at the end.took me a while to adapt missing parts and some “import” issues, but it does connect to my cluster with no problem.", "username": "Yilmaz_Durmaz" }, { "code": "\"type\": \"module\"", "text": "It works with await, is working rn, i use typescript for that i need (\"type\": \"module\") and the loadmodules is my system to make my own functions in typescript to mongoose, thanks for help.", "username": "DeathAbyss" }, { "code": "", "text": "nice to hear it is working ", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoose fails to connect and MongoDB Atlas asks to whitelist the IP even though all IPs are allowed
2022-11-14T18:00:17.943Z
Mongoose fails to connect and MongoDB Atlas asks to whitelist the IP even though all IPs are allowed
3,957
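For anyone landing on this thread with the same symptom, the change that resolved it (awaiting the connection inside an async entry point) looks roughly like this; the URI source, option values and log wording are placeholders:

```javascript
// Await mongoose.connect so connection failures surface at startup instead of
// appearing later as "could not connect to any servers" on the first query.
import mongoose from "mongoose";

async function bootstrap() {
  try {
    await mongoose.connect(process.env.MONGODB_URI, {
      serverSelectionTimeoutMS: 5000 // fail fast if Atlas is unreachable
    });
    console.log("MongoDB ready");
  } catch (err) {
    console.error("MongoDB connection failed:", err);
    process.exit(1);
  }
}

bootstrap();
```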
null
[ "aggregation" ]
[ { "code": "[\n {\n \"_id\": ObjectId(\"5a934e000102030405000001\"),\n \"classId\": ObjectId(\"635d2f796804b95ce6a9e5d1\"),\n \"email\": \"[email protected]\",\n \"kelas\": {\n \"classes\": {\n \"classCode\": \"VII B\",\n \"classId\": ObjectId(\"635d2f796804b95ce6a9e5d1\"),\n \"className\": \"B\"\n },\n \"mainClass\": \"VII\"\n },\n \"name\": \"STUDENT 1\",\n \"phone\": \"08712345678\",\n \"regDate\": \"29/10/2022\",\n \"tags\": []\n },\n \n {\n \"_id\": ObjectId(\"5a934e000102030405000002\"),\n \"classId\": ObjectId(\"635d2f796804b95ce6a9e5d1\"),\n \"email\": \"[email protected]\",\n \"kelas\": {\n \"classes\": {\n \"classCode\": \"VII C\",\n \"classId\": ObjectId(\"635d2f836804b95ce6a9e5d2\"),\n \"className\": \"C\"\n },\n \"mainClass\": \"VII\"\n },\n \"name\": \"STUDENT 2\",\n \"phone\": \"08712345678\",\n \"regDate\": \"29/10/2022\",\n \"tags\": []\n },\n \n {\n \"_id\": ObjectId(\"5a934e000102030405000003\"),\n \"classId\": ObjectId(\"635d2f796804b95ce6a9e5d1\"),\n \"email\": \"[email protected]\",\n \"kelas\": {\n \"classes\": {\n \"classCode\": \"VII D\",\n \"classId\": ObjectId(\"635d2f8e6804b95ce6a9e5d3\"),\n \"className\": \"D\"\n },\n \"mainClass\": \"VII\"\n },\n \"name\": \"STUDENT 3\",\n \"phone\": \"08712345678\",\n \"regDate\": \"29/10/2022\",\n \"tags\": []\n },\n \n {\n \"_id\": ObjectId(\"5a934e000102030405000004\"),\n \"classId\": ObjectId(\"635d2f796804b95ce6a9e5d1\"),\n \"email\": \"[email protected]\",\n \"kelas\": {\n \"classes\": {\n \"classCode\": \"VII E\",\n \"classId\": ObjectId(\"635d2f966804b95ce6a9e5d4\"),\n \"className\": \"E\"\n },\n \"mainClass\": \"VII\"\n },\n \"name\": \"STUDENT 4\",\n \"phone\": \"08712345678\",\n \"regDate\": \"29/10/2022\",\n \"tags\": []\n }\n ]\n", "text": "hello friends, I have a document that contains students and classes (2 collections students and classes).each student has a mainclassid and classid.\nthis is an example document and the result of the aggregation that I made.\nthis code return 20 document from 5 documentMongo playground: a simple sandbox to test and share MongoDB queries onlinethe results shown do not match my expectations,\nI hope that every student will have a different classid like the example below", "username": "Nuur_zakki_Zamani" }, { "code": "\"kelas\"\"filteredClasses\"db.users.aggregate([\n {\n \"$match\": {\n \"companyId\": ObjectId(\"635c70892e8cfaf4a7d49a3f\")\n }\n },\n {\n \"$lookup\": {\n \"from\": \"tbl_classes\",\n \"localField\": \"mainClassId\",\n \"foreignField\": \"_id\",\n \"as\": \"kelas\"\n }\n },\n {\n \"$unwind\": \"$kelas\"\n },\n {\n \"$project\": {\n \"name\": 1,\n \"classId\": 1,\n \"email\": 1,\n \"phone\": 1,\n \"regDate\": {\n \"$dateToString\": {\n \"date\": \"$regDate\",\n \"format\": \"%d/%m/%Y\"\n }\n },\n \"tags\": 1,\n \"filteredClasses\": {\n \"$filter\": {\n \"input\": \"$kelas.classes\",\n \"as\": \"classes\",\n \"cond\": {\n \"$eq\": [\n \"$classId\",\n \"$$classes.classId\"\n ]\n }\n }\n }\n }\n }\n])\n$filter\"classId\"\"kelas.classes\"", "text": "Hi @Nuur_zakki_Zamani,Would the following aggregation get your desired output? Please note: instead of \"kelas\" from your expected output, the corresponding would be \"filteredClasses\" in the playground link:The main difference is I had used a $filter to get the matching \"classId\" values from the \"kelas.classes\" array.If you still require further assistance, please let me know which particular fields or certain details are incorrect. 
Otherwise if you believe this works for you, please alter it accordingly and test thoroughly to ensure it suits all your use case(s) and requirements.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "alhamdulillah… thank you very much bro @Jason_Tran … exactly what I wanted. ", "username": "Nuur_zakki_Zamani" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lookup 2 collections
2022-11-14T08:24:10.599Z
Lookup 2 collections
1,217
null
[]
[ { "code": "", "text": "Configured the promethus. All necessary steps followed Still getting below error\n{“t”:{\"$date\":“2022-11-14T23:56:12.296+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20249, “ctx”:“conn56901”,“msg”:“Authentication failed”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:“mms-monitoring-agent”,“authenticationDatabase”:“admin”,“remote”:“40.85.156.35:47758”,“extraInfo”:{},“error”:“BadValue: SCRAM-SHA-256 authentication is disabled”}}", "username": "Pannu" }, { "code": "", "text": "SCRAM-SHA-256 authentication is disabledit seems your app is set to use SCRAM-SHA-256, but this method is disabled on the server.please refer to this page: Configure MongoDB Authentication and Authorization — MongoDB Cloud Manager", "username": "Yilmaz_Durmaz" } ]
Need Help for Error: SCRAM-SHA-256 authentication is disabled
2022-11-16T00:26:18.875Z
Need Help for Error: SCRAM-SHA-256 authentication is disabled
2,171
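A quick follow-up check for the thread above: confirm which mechanisms the server actually accepts before reconfiguring the monitoring agent. A small mongosh sketch (the reply shown is illustrative only):

```javascript
// List the authentication mechanisms this mongod allows. If SCRAM-SHA-256 is
// absent, either point the agent at SCRAM-SHA-1 or re-enable SCRAM-SHA-256 in
// the deployment's authentication settings.
db.adminCommand({ getParameter: 1, authenticationMechanisms: 1 });
// Example shape of the reply (values will differ per deployment):
// { authenticationMechanisms: [ 'SCRAM-SHA-1' ], ok: 1 }
```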
https://www.mongodb.com/…_2_1024x708.jpeg
[ "performance" ]
[ { "code": "show dbsdb.adminCommand( { listDatabases: 1}){\"t\":{\"$date\":\"2021-05-18T16:51:38.567+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn839\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"appName\":\"MongoDB Shell\",\"comma\nnd\":{\"listDatabases\":1.0,\"nameOnly\":false,\"lsid\":{\"id\":{\"$uuid\":\"eed01c17-7afa-4207-9144-b7aec97721fa\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1621326575,\"i\":1}},\"signature\":{\"hash\":{\"$b\ninary\":{\"base64\":\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\"subType\":\"0\"}},\"keyId\":0}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":405,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":5}},\"ReplicationStateTr\nansition\":{\"acquireCount\":{\"w\":5}},\"Global\":{\"acquireCount\":{\"r\":5}},\"Database\":{\"acquireCount\":{\"r\":4}},\"Collection\":{\"acquireCount\":{\"r\":20014}},\"Mutex\":{\"acquireCount\":{\"r\":4}},\"oplog\":{\"acquire\nCount\":{\"r\":1}}},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":715}}\ndurationMillisnameOnly: trueshow dbsdb.stats()show tables", "text": "Hi.On my 4.4.2 deployment, I’ve created one database with 20000 collections. When I issue show dbs or db.adminCommand( { listDatabases: 1}) in mongo shell for several minutes, mongod will always log the slowQuery message:As the number of collections get bigger, the durationMillis gets larger.Another thing I’ve noticed that if nameOnly: true is given, it’ll be fast and no slow query.The following metrics are those that are noticeable. We can see the latency of commands goes up.\nimage2370×1640 325 KB\n", "username": "Lewis_Chan" }, { "code": "_id", "text": "Hi @Lewis_Chan,20k collections is a lot to handle for a single replica set (RS). Usually MongoDB doesn’t recommend more than 10k per RS. I’m not surprised to see the performance go down.https://www.mongodb.com/article/schema-design-anti-pattern-massive-number-collections/One collection corresponds to at least 2 files: 1 for the data, 1 for the _id index, and probably more index files if you are doing this correctly and probably more data files if you have a lot of data in this collection (as the files are split at some point).\nThe OS has to keep all these files open and 20k times 3/4/5 per collection is just too much.So the real question here is: why do you have so many collections? Can you find a strategy to reduce that number?", "username": "MaBeuLux88" }, { "code": "", "text": "Actually some customers/applications just use mongo like that. 10000 is really small number, comparing to mysql, although it has shared tablespace to alleviate overheads.image1296×189 17.8 KBI’m curious whether mongo team can guarantee all your customers don’t create collections as they like ? When customers do that no matter what kind of business logic is, is there always a way to reconstruct the original schema ? If there’s no way, maybe mongodb is not suitable for that application ?", "username": "Lewis_Chan" }, { "code": "", "text": "In the screenshot you shared, you could replace MySQL by MongoDB and it would still work. Both systems rely on the same OS constraints and obey the same rules.Sharding could be a solution here though if this large number of collections is the only way. 10k collections maximum per replica set is a generally accepted good practice, but you could totally use a 10 shards cluster and scale easily to 10*10k collections.If you are building a multi-tenant application, I would recommend checking out this presentation. 
Multi-tenant application are usually where this problem of large number of connections occurs.Learn the basics of Cloud Computing Architectures and how MongoDB Atlas and Realm helps you build your next cloud architecture.Also, remember that the document design is a crucial step to be successful with MongoDB.Data that is accessed together should be stored together", "username": "MaBeuLux88" }, { "code": "", "text": "Has this issue now been solved ?I mean it’s a little bit cheap to say anything > 10k collections is nok ?!If listDatabases is executed via MongoShell with nameOnly: true then it’s as expected fast - the question is what is happening beside listDatabases in the above commandandwhy can’t this be speed up ?", "username": "John_Moser1" }, { "code": "", "text": "I mean it’s a little bit cheap to say anything > 10k collections is nok ?!Although WiredTiger does not put a hard limit on the maximum number of collections, frequently other limits come into play, such as hardware limitations as @MaBeuLux88 alluded to. Sharding is one method to alleviate this concern, but certain schema design patterns can also limit this.why can’t this be speed up ?MongoDB is constantly improving, and the story of a large number of collections has gotten better in newer versions. The development team understands this limitation can be frustrating and there are use cases that necessitates a large number of collections. SERVER-21629 is an ongoing effort to make this better in the future.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi KevinThanks for the reply - if I run MongoDB Operator (non sharding ! ← because of listDatabases) just as a replicaset, I recently created 500k databases each with 5 collection → 5M files.It runs ok - but the problem is now, if you “shutdown” then there seems a problem with the MongoDB Operator/Replicaset - the 3rd instance shutdowns correctly, the 2nd and first seem to fail to shutdown properly which causes a “repair” then these instances are “powered on”.So in theory it is much much better - but the stuff “around” has problems.", "username": "John_Moser1" }, { "code": "", "text": "if you “shutdown” then there seems a problem with the MongoDB Operator/Replicaset - the 3rd instance shutdowns correctly, the 2nd and first seem to fail to shutdown properly which causes a “repair” then these instances are “powered on”.There were earlier issues with shutdown in the past, such as SERVER-44595 and SERVER-44094, but they were both fixed.If you’re seeing a shutdown-related issue in the latest supported versions (4.2.23, 4.4.17, 5.0.13, or 6.0.2), could you open a new thread and provide more details such as MongoDB version, relevant logs, and any errors you’re seeing? Any detail that can help reproduce it will be great.Best to be in a new thread though, since this thread is all about listDatabases Best regards\nKevin", "username": "kevinadi" } ]
listDatabases slow with nameOnly=false when many collections exist
2021-05-18T09:20:59.848Z
listDatabases slow with nameOnly=false when many collections exist
5,264
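For readers with the same symptom, the nameOnly form mentioned in the first post is the cheap variant because it skips the per-collection size accounting that makes the full command slow with tens of thousands of collections:

```javascript
// Fast: returns only database names, no size statistics.
db.adminCommand({ listDatabases: 1, nameOnly: true });

// Slow when collections number in the tens of thousands: computes sizeOnDisk,
// which has to visit every collection's metadata.
db.adminCommand({ listDatabases: 1 });
```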
null
[]
[ { "code": "class Product : EmbeddedRealmObject {\n var name: String = \"\"\n var category: String = \"\"\n var description: String? = \"\"\n var price: Float = 0F\n var imagine: String? = null\n}\n", "text": "This is my object. How I can insert an image to the database in “image” field.", "username": "Ciprian_Gabor" }, { "code": "", "text": "Don’t. Put the image in the file system and put its path into the db.\nMongoDB does not excel at storing BLOBs or CLOBs. It’s better just to point to them.", "username": "Jack_Woehr" }, { "code": "", "text": "But if you must store BLOBs in MongoDB, use GridFS", "username": "Jack_Woehr" }, { "code": "", "text": "How other users can see the image if its stored on my file system? How I could upload the image?", "username": "Ciprian_Gabor" }, { "code": "", "text": "I’m assuming this is a web application. When the user requests the image, load it via your web application. I do this in PHP + Apache all the time.", "username": "Jack_Woehr" }, { "code": "1. Take the image in the binary, upload it to S3\n2. Delete the image document from MongoDB \n3. Set the image_url field in the corresponding product document to the new S3 image\n", "text": "I agree with @Jack_Woehr here. Storing images in a database is not only an anti-pattern, but also it is not cost-effective. Generally images should be stored in cheaper cold-storage like S3.That being said, our customers generally go one of 2 routes:If the images are small and you can guarantee that, put them in your documents. Its not the most scalable approach, but in some cases it is fine and the simplest approach.Two-Phase Upload. Add a new object type called Images that is a separate non-embedded object. When you need to upload an image for a product, insert a new “image object” with the binary data along with a link or reference to the product it represents (the _id perhaps). Then, you can have a Database Trigger listen to the “images” collection for insert events and do the following:See this article for more details: Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps | MongoDB", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks for the answer. I am developing a mobile application. The images will just be about food. What is the best approach to store them so the users can see them?", "username": "Ciprian_Gabor" }, { "code": "data:image/png;base64, ", "text": "What is the best approach to store them so the users can see them?for pretty small size images, you maythis second method can fill up your database quickly if your app is not implemented correctly, or the number of files goes out of hand. you would not want a million files of size 100KB.", "username": "Yilmaz_Durmaz" }, { "code": "products_idimages_id", "text": "We had a talk and you selected to go with saving images into the database. here are a few things to note:this second option will make two trips to query the database, but will separate two an important logic problem. also add possibility to independently process images.you may also extend this and have 2 API endpoints, one to process product details and the other for images. this may complicate the app a bit, but will have more pros than cons I believe.One last thing about your mobile app. Mobile data is something precious. So if you can move your image reshape/encode to it, even if it might be slow when processing the uploads and they may not consciously notice, your users will appreciate it for the use of internet bandwidth.", "username": "Yilmaz_Durmaz" } ]
How to save an image in my object?
2022-11-09T22:27:01.407Z
How to save an image in my object?
3,471
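To make the two storage options discussed above concrete, here is a hedged Node.js sketch: a URL/path pointer (preferred) and a small base64 payload for tiny thumbnails. The image field name matches the Product object from the thread; the URL, file path and `db` handle are assumptions:

```javascript
const fs = require("fs");

// Assumes an already-connected `db` handle from the Node driver.
async function saveProduct(db) {
  // Option 1: keep the image in object storage or on disk and store only a pointer.
  await db.collection("products").insertOne({
    name: "Pizza Margherita",
    category: "food",
    price: 8.5,
    image: "https://cdn.example.com/images/pizza-margherita.jpg" // assumed URL
  });

  // Option 2 (tiny images only): embed a base64 data URI. This inflates documents
  // quickly, so keep it for thumbnails far below the 16 MB document limit.
  const b64 = fs.readFileSync("./pizza-thumb.png").toString("base64");
  await db.collection("products").updateOne(
    { name: "Pizza Margherita" },
    { $set: { image: "data:image/png;base64," + b64 } }
  );
}
```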
null
[ "sharding" ]
[ { "code": "", "text": "we have a 6 Shards Cluster with 3 routers and 1 config replica set, we started getting below error when connecting from Routers, couldnt find any specific issue related to the cluster, i can see cluster available up and running, and issue seems intermittentcom.mongodb.MongoNodeIsRecoveringException: Command failed with error 11600 (InterruptedAtShutdown): ‘interrupted at shutdown’ on server XXX:XXX. The full response is {​​​​​​​\"ok\": 0.0, “errmsg”: “interrupted at shutdown”, “code”: 11600, “codeName”: “InterruptedAtShutdown”, “operationTime”: {​​​​​​​\"$timestamp\": {​​​​​​​\"t\": 88608, “i”: 2}​​​​​​​}​​​​​​​, “$clusterTime”: {​​​​​​​\"clusterTime\": {​​​​​​​\"$timestamp\": {​​​​​​​\"t\": 88608, “i”: 2}​​​​​​​}​​​​​​​, “signature”: {​​​​​​​\"hash\": {​​​​​​​\"$binary\": {​​​​​​​\"base64\": “XXXXXXXXXXXX”, “subType”: “00”}​​​​​​​}​​​​​​​, “keyId”: XXXXXXXXXXXXXX}​​​​​​​}​​​​​​​}​​​​​​​ at com.mongodb.internal.connection.ProtocolHelper.createSpecialException(ProtocolHelper.java:242) ~[mongodb-driver-core-4.1.1.jar:na] at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:171) ~[mongodb-driver-core-4.1.1.jar:na] at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:359) ~[mongodb-driver-core-4.1.1.jar:na] at com.mongodb.internal.connection.InternalStreamConnection.receive(InternalStreamConnection.java:316) ~[mongodb-driver-core-4.1.1.jar:na] at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:215) ~[mongodb-driver-core-4.1.1.jar:na] at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:144) ~[mongodb-driver-core-4.1.1.jar:na] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]", "username": "Bhaskar_Avisha" }, { "code": "", "text": "Hi @Bhaskar_Avisha\nFirst of all, welcome to MongoDB Community Forum… Command failed with error 11600 (InterruptedAtShutdown):I believe error 11600 means that the client is trying to do an operation on a server that is shutting down.So, if you provide the driver with a connection URI that specifies a replica set, the driver should reconnect to the new primary as soon as it’s available. See Connection String URI Format, specifically the replica set option section.I hope it works for you.\nLet us know if it or still persists.In case of any doubts please feel free to reach out.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thanks @Kushagra_KesavI am getting this error , when application is connecting through mongos , this is not a direct connection to replica set.", "username": "Bhaskar_Avisha" }, { "code": "mongosmongo", "text": "Hi @Bhaskar_Avisha,As far as I can tell, the com.mongodb.MongoNodeIsRecoveringException is a possible error that happens when you’re connected to a replica set. From the linked page:An exception indicating that the server is a member of a replica set but is in recovery mode, and therefore refused to execute the operation. This can happen when a server is starting up and trying to join the replica set.However I don’t think there’s enough information here to determine what’s going on. 
Could you post more details:Best regards,\nKevin", "username": "kevinadi" }, { "code": " Jan 10 23:20:10 ip-***.eu-west-1.compute.internal [cluster-ClusterId{value='61dc9f7a153a5d03282359a1', description='null'}-demo-shard-00-01.l76nh.mongodb.net:27017] WARN c.g.r.c.config.MongoDbConfiguration: MongoDB serverHeartbeatFailed: ServerHeartbeatFailedEvent{connectionId=connectionId{localValue:12, serverValue:269}, elapsedTimeNanos=950823445, awaited=true, throwable=com.mongodb.MongoNodeIsRecoveringException: Command failed with error 11600 (InterruptedAtShutdown): 'interrupted at shutdown' on server ***.l76nh.mongodb.net:27017. The full response is {\"operationTime\": {\"$timestamp\": {\"t\": 1641853208, \"i\": 11}}, \"ok\": 0.0, \"errmsg\": \"interrupted at shutdown\", \"code\": 11600, \"codeName\": \"InterruptedAtShutdown\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1641853208, \"i\": 15}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"5cTlPKELM4icjdNscfA1g85B75s=\", \"subType\": \"00\"}}, \"keyId\": 7006031879656701953}}}} com.mongodb.event.ServerHeartbeatFailedEvent@583a7488\nJan 10 23:20:10 ip-***.eu-west-1.compute.internal org.springframework.data.mongodb.UncategorizedMongoDbException: Query failed with error code 11600 and error message 'interrupted at shutdown' on server demo-shard-00-01.l76nh.mongodb.net:27017; nested exception is com.mongodb.MongoQueryException: Query failed with error code 11600 and error message 'interrupted at shutdown' on server ***.l76nh.mongodb.net:27017\n \tat org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:140)\n \tat org.springframework.data.mongodb.core.ReactiveMongoTemplate.potentiallyConvertRuntimeException(ReactiveMongoTemplate.java:2814)\n \tat org.springframework.data.mongodb.core.ReactiveMongoTemplate.lambda$translateException$90(ReactiveMongoTemplate.java:2797)\n \tat reactor.core.publisher.Flux.lambda$onErrorMap$28(Flux.java:6910)\n \tat reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)\n", "text": "Hi @Bhaskar_Avisha, have you got your issue resolved?\nWe’re facing a similar problem, our client application keeps loosing the connection to the Atlas cluster and since we are using tailable cursors, our app has to restart upon each interruption which is highly annoying.\nThe client connects to the cluster via ‘mongodb+srv://’ URL", "username": "Pavel_Grigorenko" }, { "code": "Code: 11600;\nCodeName: InterruptedAtShutdown;\nCommand: { \"getMore\" : NumberLong(\"9001353061637322596\"), \"collection\" : \"some.collection\" };\nErrorMessage: interrupted at shutdown;\nResult: { \"ok\" : 0.0, \"errmsg\" : \"interrupted at shutdown\", \"code\" : 11600, \"codeName\" : \"InterruptedAtShutdown\" };\nConnectionId: { ServerId : { ClusterId : 1, EndPoint : \"Unspecified/some.router.name:27017\" }, LocalValue : 290 };\nErrorLabels: System.Collections.Generic.List`1[System.String]\n\nMongoDB.Driver.MongoNodeIsRecoveringException: Server returned node is recovering error (code = 11600, codeName = \"InterruptedAtShutdown\").\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ProcessResponse(ConnectionId connectionId, CommandMessage responseMessage)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandWireProtocol`1.Execute(IConnection connection, 
CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocol[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.Command[TResult](ICoreSession session, ReadPreference readPreference, DatabaseNamespace databaseNamespace, BsonDocument command, IEnumerable`1 commandPayloads, IElementNameValidator commandValidator, BsonDocument additionalOptions, Action`1 postWriteAction, CommandResponseHandling responseHandling, IBsonSerializer`1 resultSerializer, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.AsyncCursor`1.ExecuteGetMoreCommand(IChannelHandle channel, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.AsyncCursor`1.GetNextBatch(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.AsyncCursor`1.MoveNext(CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorExtensions.ToList[TDocument](IAsyncCursor`1 source, CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.ToList[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\nCode: 11600;\nCodeName: InterruptedAtShutdown;\nCommand: { \"find\" : \"some.collection\", \"filter\" : {somefilter} };\nErrorMessage: Encountered non-retryable error during query :: caused by :: interrupted at shutdown;\nResult: { \"ok\" : 0.0, \"errmsg\" : \"Encountered non-retryable error during query :: caused by :: interrupted at shutdown\", \"code\" : 11600, \"codeName\" : \"InterruptedAtShutdown\", \"operationTime\" : Timestamp(1642587846, 583), \"$clusterTime\" : { \"clusterTime\" : Timestamp(1642587850, 1385), \"signature\" : { \"some signature\" } } };\nConnectionId: { ServerId : { ClusterId : 2, EndPoint : \"Unspecified/some.router.name:27017\" }, LocalValue : 4228 };\nErrorLabels: System.Collections.Generic.List`1[System.String]\n\nMongoDB.Driver.MongoNodeIsRecoveringException: Server returned node is recovering error (code = 11600, codeName = \"InterruptedAtShutdown\").\n at MongoDB.Driver.Core.Operations.RetryableReadOperationExecutor.ExecuteAsync[TResult](IRetryableReadOperation`1 operation, RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ReadCommandOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindCommandOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\nsystemctl stop mongodsystemctl stop mongosmongodmongosReadPreference = Primary\t\t\t\tReadPreference = SecondaryPreferred4.0.122.11.4\t\t\tvar settings = new 
MongoClientSettings\n\t\t\t{\n\t\t\t\tServers = \"{router names}\",\n\t\t\t\tConnectionMode = ConnectionMode.Automatic,\n\t\t\t\tMaxConnectionIdleTime = TimeSpan.FromMinutes(10),\n\t\t\t\tMaxConnectionLifeTime = TimeSpan.FromMinutes(30),\n\t\t\t\tMaxConnectionPoolSize = 100,\n\t\t\t\tMinConnectionPoolSize = 1,\n\t\t\t\tReadPreference = ReadPreference.Primary, // and could be SecondaryPreferred\n\t\t\t\tSocketTimeout = TimeSpan.Zero,\n\t\t\t\tWaitQueueTimeout = TimeSpan.FromMinutes(2),\n\t\t\t\tWriteConcern = WriteConcern.W1,\n\t\t\t\tConnectTimeout = TimeSpan.FromSeconds(15),\n\t\t\t\tReadConcern = ReadConcern.Default,\n\t\t\t\tServerSelectionTimeout = TimeSpan.FromSeconds(15),\n\t\t\t};\n\nmongo", "text": "Hi, everyone!\nWe also encountered this problem while upgrading our MongoDB shard cluster from 4.0.12 to 4.2.17.\nDuring the upgrade we were observing this kind of error messages in our logs(I edited out the collection names, filters, signature, and router names.)The upgrade process was as follows:As we observed, the aforementioned errors were showing up right after stopping the process. And also we saw the errors while connecting with ReadPreference = Primary and \t\t\t\tReadPreference = SecondaryPreferred.To answer @kevinadi questions:What we want to know is how to upgrade a MongoDB shard cluster more flawlessly. Any advice is really appreciated.", "username": "Vladimir_Beliakov" }, { "code": "", "text": "We’re running into this same issue while doing a rolling upgrade from 4.0 → 4.2 .\nAll clients are running java.Any updates or solutions available?\nThanks,\nTom", "username": "Tom_Duerr" }, { "code": "InterruptedAtShutdownInterruptedAtShutdown", "text": "Hi @Tom_Duerr welcome to the community!Are you seeing the same message InterruptedAtShutdown during the upgrade process from the client side, and only during the upgrade and not during any other time?If yes, the message InterruptedAtShutdown just means that the driver/client is in the middle of an operation, and it’s being stopped by the server since the server is shutting down. Most newer drivers implements retryable writes and retryable reads to make this situation smoother, but the error can still happen when the operation in question are not retryable (see the linked page for more details about this).I don’t believe this is an issue per se since 1) the server is shutting down, and 2) the operation got killed because the server needs to shut down. However if you see this error when the server is not shutting down, then this may be unexpected and may need further investigation.Best regards\nKevin", "username": "kevinadi" } ]
com.mongodb.MongoNodeIsRecoveringException: Command failed with error 11600 (InterruptedAtShutdown)
2021-04-01T16:53:33.028Z
com.mongodb.MongoNodeIsRecoveringException: Command failed with error 11600 (InterruptedAtShutdown)
17,020
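A minimal Node.js sketch of the retryable reads/writes behaviour discussed in the thread above; the URI, database, and collection names are placeholders, and note that a getMore on an already-open (e.g. tailable) cursor is not covered by retryable reads, which is why those workloads can still see the error while a node shuts down.

```js
// Hedged sketch: connect with retryable reads/writes enabled so that a single
// failover during a rolling upgrade is retried instead of surfacing
// InterruptedAtShutdown to the application. URI and names are placeholders.
const { MongoClient } = require("mongodb");

const uri =
  "mongodb+srv://user:[email protected]/?retryWrites=true&retryReads=true";

const client = new MongoClient(uri, {
  retryWrites: true,               // on by default in recent drivers
  retryReads: true,                // retries a failed read command once
  serverSelectionTimeoutMS: 30000, // give the replica set time to elect a new primary
});

async function main() {
  await client.connect();
  // A retryable read: if the node shuts down mid-command, the driver retries
  // once against the newly selected server.
  const doc = await client.db("test").collection("events").findOne({});
  console.log(doc);
  await client.close();
}

main().catch(console.error);
```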
null
[ "indexes" ]
[ { "code": "{\n field_1: [\"A\", \"B\", \"C\"]\n}\n\nvs\n\n{\n A: true,\n B: true,\n C: true,\n}\n{\n field_1: [\"A\", \"B\"]\n}\n\nvs\n\n{\n A: true,\n B: true,\n}\n", "text": "1: Which method requires more storage space.\n2. Which method is better for query performance, assuming indexes.", "username": "Big_Cat_Public_Safety_Act" }, { "code": "mongosh > db.big_cat_1.find()\n{ _id: ObjectId(\"6372fea50b3dad06b2ca1c8e\"), flags: [] }\n{ _id: ObjectId(\"6372feaa0b3dad06b2ca1c8f\"), flags: [ 'A' ] }\n{ _id: ObjectId(\"6372feae0b3dad06b2ca1c90\"), flags: [ 'B' ] }\n{ _id: ObjectId(\"6372feb80b3dad06b2ca1c92\"), flags: [ 'C' ] }\n{ _id: ObjectId(\"6372fed30b3dad06b2ca1c93\"),\n flags: [ 'A', 'B' ] }\n{ _id: ObjectId(\"6372feda0b3dad06b2ca1c94\"),\n flags: [ 'A', 'C' ] }\n{ _id: ObjectId(\"6372fedf0b3dad06b2ca1c95\"),\n flags: [ 'B', 'C' ] }\n{ _id: ObjectId(\"6372feed0b3dad06b2ca1c96\"),\n flags: [ 'A', 'B', 'C' ] }\nmongosh > db.big_cat_2.find()\n{ _id: ObjectId(\"6372ff6d0b3dad06b2ca1c97\") }\n{ _id: ObjectId(\"6372ff730b3dad06b2ca1c98\"), A: true }\n{ _id: ObjectId(\"6372ff820b3dad06b2ca1c99\"), B: true }\n{ _id: ObjectId(\"6372ff850b3dad06b2ca1c9a\"), C: true }\n{ _id: ObjectId(\"6372ff8a0b3dad06b2ca1c9b\"), A: true, B: true }\n{ _id: ObjectId(\"6372ff8d0b3dad06b2ca1c9c\"), A: true, C: true }\n{ _id: ObjectId(\"6372ff910b3dad06b2ca1c9d\"), B: true, C: true }\n{ _id: ObjectId(\"6372ff980b3dad06b2ca1c9e\"),\n A: true,\n B: true,\n C: true }\n\n/* same as big_cat_2 but with better field names compared to A , B , C */\nmongosh > db.big_cat_3.find()\n{ _id: ObjectId(\"6373011f0b3dad06b2ca1c9f\") }\n{ _id: ObjectId(\"637301240b3dad06b2ca1ca0\"), flag_A: true }\n{ _id: ObjectId(\"637301280b3dad06b2ca1ca1\"), flag_B: true }\n{ _id: ObjectId(\"6373012b0b3dad06b2ca1ca2\"), flag_C: true }\n{ _id: ObjectId(\"637301330b3dad06b2ca1ca3\"),\n flag_A: true,\n flag_B: true }\n{ _id: ObjectId(\"6373013b0b3dad06b2ca1ca4\"),\n flag_A: true,\n flag_C: true }\n{ _id: ObjectId(\"637301400b3dad06b2ca1ca5\"),\n flag_B: true,\n flag_C: true }\n{ _id: ObjectId(\"637301480b3dad06b2ca1ca6\"),\n flag_A: true,\n flag_B: true,\n flag_C: true }\nmongosh > db.big_cat_1.stats().size\n380\nmongosh > db.big_cat_2.stats().size\n224\nmongosh > db.big_cat_3.stats().size\n284\n", "text": "Q1 - Which method requires more storage space.Easy to test, with 3 collections with one document for each possible cases.And now let see some stats:Even with good names the A:true version seems to be a winner compared to the array at this point.ButQ2 - Which method is better for query performance, assuming indexes.With the array version you need 1 and only 1 index, that is { “flags” : 1 }. With the Boolean fields, it depends of your use cases, but you need at least 3 indexes {A:1},{B:1},{C:1} for the most basic queries where you only query 1 of the flags. If you to find({A:true,C:true}) often then you might need more indexes. So the array version would be the winner.But other questions you must ask are:Q3 - Which model provides easier updates? How do you update when you want to turn a flag on? How do you update to turn a flag off?Q4 - Which model is easier to migrate? For example, how do you update the model when you need a new flag? How do you update the model when a flag is deprecated? What happen to the indexes?", "username": "steevej" }, { "code": "", "text": "Regarding the index on the array, even though on the surface it’s only one index, under the hood mongodb creates a multi key index on each of the element of the array? 
This would be similar to the multiple indexes required for the flags?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "mongosh > db.big_cat_2.stats().indexSizes\n{ _id_: 36864, A_1: 20480, B_1: 20480, C_1: 20480 }\nmongosh > db.big_cat_1.stats().indexSizes\n{ _id_: 36864, flags_1: 20480 }\n", "text": "Yes, it creates a multikey index. But I am not too sure if it would be similar to the multiple indexes; I hope it has some optimization. For the small set of documents it looks like there is some:\nAnd each index is an open file, so fewer resources are taken by the array.\nI would probably go with the array because it is a more flexible model and easier to migrate. (See Q4)", "username": "steevej" } ]
Storing an array vs binary key value pairs in MongoDB
2022-11-14T05:48:57.782Z
Storing an array vs binary key value pairs in MongoDB
1,927
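A small mongosh sketch of the update question (Q3) raised in the thread above, assuming the array model and a hypothetical "items" collection; $addToSet and $pull keep the flags array free of duplicates while a single multikey index serves the queries.

```js
// Hedged sketch, array model: turning flags on and off. "items" and someId
// are placeholders, not from the thread.
const someId = ObjectId("6372fea50b3dad06b2ca1c8e");

db.items.updateOne({ _id: someId }, { $addToSet: { flags: "B" } }); // flag on (no duplicates)
db.items.updateOne({ _id: someId }, { $pull: { flags: "B" } });     // flag off

// Queries served by one multikey index:
db.items.createIndex({ flags: 1 });
db.items.find({ flags: "A" });                  // documents with flag A
db.items.find({ flags: { $all: ["A", "C"] } }); // documents with both A and C
```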
null
[ "aggregation", "atlas-search" ]
[ { "code": "name[\n { name: \"Facebook\" },\n { name: \"The North Face\" },\n { name: \"Facebook Donations\" },\n { name: \"Prop Face\" },\n { name: \"Simplifying Payments with Facebook Pay\" }\n { name: \"Facebook Advertising\" },\n { name: \"facebook pay\" }\n]\n { name: \"Facebook\" },\n { name: \"facebook pay\" },\n { name: \"Facebook Donations\" },\n { name: \"Facebook Advertising\" },\n { name: \"Prop Face\" },\n { name: \"The North Face\" },\n { name: \"Simplifying Payments with Facebook Pay\" }\nStringAutocomplete{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n }\n }\n },\n \"storedSource\": true\n}\n{\n $search: {\n index: 'default',\n text: {\n path: 'name',\n query: 'face'\n }\n }\n}\n[\n { name: \"Prop Face\" },\n { name: \"The North Face\" }\n]\nautocomplete{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"autocomplete\"\n }\n ]\n }\n },\n \"storedSource\": true\n}\n{\n $search: {\n index: 'default',\n autocomplete: {\n path: 'name',\n query: 'face',\n }\n }\n}\n[\n { name: \"Facebook Donations\" },\n { name: \"Facebook Advertising\" },\n { name: \"facebook pay\" },\n { name: \"Simplifying Payments with Facebook Pay\" },\n { name: \"Facebook\" },\n { name: \"Prop Face\" },\n { name: \"The North Face\" },\n]\n", "text": "Hi!\nIm experiencing a bit of weird results for my simple search use case.\nWe want to search for vendors, each vendor document has name property.\nMost of vendors names are few words at most (i.e not full sentences).Having these documents:the expected results is something similar to this:As first try, iv’e tried setting a search index as String field, Autocomplete attempt will follow.\nThis is the index definition:this is the query:returnswhich completely doesn’t make sense, probably i’m doing something wrong?when moving to autocomplete index with this definition:having this query:i’m getting these results:Which much closer to the desired result but still “Facebook” comes after “Simplifying Payments with Facebook Pay” for some reason, i would expect it to be found first.any ideas or suggestions?", "username": "Benny_Kachanovsky1" }, { "code": "", "text": "any input someone? ", "username": "Benny_Kachanovsky1" }, { "code": "[\n { name: \"Facebook Donations\" },\n { name: \"Facebook Advertising\" },\n { name: \"facebook pay\" },\n { name: \"Simplifying Payments with Facebook Pay\" },\n { name: \"Facebook\" },\n { name: \"Prop Face\" },\n { name: \"The North Face\" },\n]\nautocompleteautocomplete\"Facebook\"\"face\"\"Prop Face\"\"The North Face\"\"Facebook\"\"face\"", "text": "Hi @Benny_Kachanovsky1,i’m getting these results:Which much closer to the desired result but still “Facebook” comes after “Simplifying Payments with Facebook Pay” for some reason, i would expect it to be found first.Every document returned by an Atlas Search query is assigned a score based on relevance, and the documents included in a result set are returned in order from highest score to lowest.Additionally, as per the autocomplete operator documentation:autocomplete offers less fidelity in score in exchange for faster query execution.I am a bit confused regarding the search term versus expected output. Could you clarify why documents containing the term \"Facebook\" should rank higher than exact matches for the query term \"face\" such as \"Prop Face\" or \"The North Face\"? 
I would think that since \"Facebook\" is only a partial match for the term \"face\", it should be ranked lower?Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "FacebookSimplifying Payments with Facebook PayFacebook", "text": "Hi, thanks for answering.\nAlthough Facebook and Simplifying Payments with Facebook Pay should have similar score because they have the same partial match, i would like that Facebook result will have higher scroe.\nIs there a way to achieve it?", "username": "Benny_Kachanovsky1" }, { "code": "Facebook\"Facebook\"\"face\"\"Facebook\"\"facebook\"compound\"Facebook\"{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"name\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\" : \"string\"\n },\n {\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\n$searchDB> db.names.aggregate(\n\n{\n $search: {\n index: 'default',\n compound: {\n must: [\n {\n autocomplete: {\n path: 'name',\n query: 'face'\n }\n }\n ],\n should: [\n {\n phrase: {\n path: 'name',\n query: 'facebook',\n score: {'boost': {'value': 10}} /// <--- Boosted score for the text query \"facebook\"\n }\n }\n ]\n }\n }\n})\n[\n { _id: ObjectId(\"636c29d90e5724148fb0ea04\"), name: 'Facebook' },\n {\n _id: ObjectId(\"636c29d90e5724148fb0ea00\"),\n name: 'Facebook Donations'\n },\n {\n _id: ObjectId(\"636c29d90e5724148fb0ea01\"),\n name: 'Facebook Advertising'\n },\n { _id: ObjectId(\"636c29d90e5724148fb0ea02\"), name: 'facebook pay' },\n {\n _id: ObjectId(\"636c29d90e5724148fb0ea03\"),\n name: 'Simplifying Payments with Facebook Pay'\n },\n { _id: ObjectId(\"636c29d90e5724148fb0ea05\"), name: 'Prop Face' },\n { _id: ObjectId(\"636c29d90e5724148fb0ea06\"), name: 'The North Face' }\n]\n$search", "text": "Hi @Benny_Kachanovsky1,i would like that Facebook result will have higher scroe.\nIs there a way to achieve it?Could you advise the use case details for documents containing \"Facebook\" to appear higher when querying for the term \"face\"? I.e., Why the partial match should rank higher than a full match. Is this, for example, due to it being a sponsored vendor and therefor should be scored higher for your use case? Again, this is a partial match rather than full match.In saying so, one possible way to bring these set of results and have the document containing only \"Facebook\" at the top would be to specify \"facebook\" as one part the query itself. You could use the compound operator and boost the score for \"Facebook\" depending on your use case.Example below uses the following index definition in my test environment:Example $search pipeline:Output:The above example index definition and $search pipeline was used against a test collection containing only the above 7 documents. If you believe this may help, please thoroughly test and adjust the examples accordingly to verify if it suits your use case and requirements.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi!\nThanks a lot!\nYou’re right, facebook is partial match and therefore should rank higher.\nBut, in the example above “Simplifying Payments with Facebook Pay” comes before “Facebook” but both should have the same match when searching for “face”.\nIs there a way to rank higher docs that starts with the given search term?", "username": "Benny_Kachanovsky1" }, { "code": "\"face\"\"face\"", "text": "Thanks for the additional details.I think you have a specific idea on how the rankings when someone search for the term \"face\". 
Unfortunately, the search algorithm is not customizable to provide the very specific ordering which you have mentioned. You can sort of do this by using score boosting as per my previous example, but at this point you are fighting against the algorithm and may find many corner cases that doesn’t match your idea of how the ranking should look like.If you need a very specific ranking given a specific term that goes against the search algorithm, I think an alternate way forward is to catch the term \"face\" before it goes into the search algorithm, and return a result set that’s ranked exactly as you wanted it. It will also be more maintainable, since you’re not depending on the search algorithm.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Autocomplete search results prefix match
2022-11-08T16:13:05.412Z
Autocomplete search results prefix match
2,919
null
[ "dot-net", "atlas-cluster" ]
[ { "code": "var settings1 = MongoClientSettings.FromConnectionString(\"mongodb+srv://arik:[email protected]/?w=majority\");\nvar settings2 = MongoClientSettings.FromConnectionString(\"mongodb+srv://arik:[email protected]/?w=majority\");\n\nConsole.WriteLine(settings1 == settings2);\nvar settings1 = MongoClientSettings.FromConnectionString(\"mongodb+srv://arik:[email protected]/?retryWrites=true\");\nvar settings2 = MongoClientSettings.FromConnectionString(\"mongodb+srv://arik:[email protected]/?retryWrites=true\");\n\nConsole.WriteLine(settings1 == settings2);\n", "text": "Result is False even though they are the same.After playing around with it I realized when removing w=majority the comparison works.\nFor exampleThe result is True.Why is this the behavior?Thanks.", "username": "Arik_Shapiro" }, { "code": "", "text": "After looking at the code of the Equals method I found the problem.\nYou guys forgot to implement the equality operator of the class WriteConcern.\nTherefore it’s comparing by the default operator (by reference).", "username": "Arik_Shapiro" } ]
Is the equality operator of MongoClientSettings bugged?
2022-11-15T21:14:06.334Z
Is the equality operator of MongoClientSettings bugged?
976
null
[ "aggregation" ]
[ { "code": "", "text": "When I search a certain query and have lots of results with the same search score, I’ve noticed that each time I run the query, the default sorting of results tend to change each time I run the query.My concern is that when I do pagination with $limit & $skip I may not get expected results because the sort order is changing for subsequent $skip queries.If I could $sort on the searchScore first, then _id second (or created date or whatever), I could solve that issue in a simple way.The examples I’ve seen require you to $project the $meta.searchScore before you can $sort on it.The problem with that is then I need to specify all the other fields I want to $project. I want to include all the other fields without specifying them because they could change and I don’t want to have to remember to update the search code whenever another field is added to the collection.Is there a way I can tell $project to include all fields, or, is there a way to $sort on searchScore without using the $project stage?", "username": "djedi" }, { "code": "\"$sort\": {\n \"score\": { \"$meta\": \"textScore\" },\n \"created\": -1\n}\n", "text": "I found the solution to my issue in the docs. ", "username": "djedi" }, { "code": "textScoresearchScore", "text": "Actually, this isn’t working as expected. There seems to be a bug. Why does it not order by searchScore? Is there a difference between textScore and searchScore?\nimage3660×840 204 KB\n", "username": "djedi" }, { "code": "textScoresearchScoresearchScoretextScoretextScore$search", "text": "Hi @djedi,Actually, this isn’t working as expected. There seems to be a bug. Why does it not order by searchScore? Is there a difference between textScore and searchScore ?Could you provide a few sample documents and the full pipeline used? I would like to perform some testing on my own environment to see if I’m able to replicate the same results.Additionally, could you confirm the MongoDB version in use that had produced the results you have shown?Update : I tried to change searchScore to textScore for my own test environment but did not receive any output for a projection of textScore in a $search pipeline.Thanks in advance,\nJason", "username": "Jason_Tran" }, { "code": "searchScore$sortscore$projectsearchScoretextScoresearchScore", "text": "Hi @djedi,Just providing a quick update for this post.Why does it not order by searchScore?Regarding searchScore, documents in the result set are returned in order from highest score to lowest for Atlas Search by default so you don’t need to include an addition descending $sort on the score. The screenshot is cut out but my guess is that the first stage shown in that screenshot is a $project for the searchScore but let me know if this is not the case. If so, it does appear that the results are appearing from highest to lowest score as expected.Is there a difference between textScore and searchScore?textScore is related to native text search where as searchScore is used in Atlas Search.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Search sorting by searchScore without $project?
2022-11-11T20:50:07.849Z
Search sorting by searchScore without $project?
2,294
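One approach that may avoid listing every field, as raised in the thread above, is to add the score with $addFields (which keeps all existing fields) rather than $project, then sort on it with a tie-breaker so paging stays stable. A hedged mongosh sketch, with collection, index, and field names assumed:

```js
// Hedged sketch: expose the Atlas Search score without dropping other fields,
// then add a secondary sort key so $skip/$limit paging stays deterministic
// when many documents share the same score. Names below are assumptions.
db.articles.aggregate([
  { $search: { index: "default", text: { query: "face", path: "name" } } },
  { $addFields: { score: { $meta: "searchScore" } } }, // keeps every other field
  { $sort: { score: -1, created: -1 } },               // tie-break on a stable field
  { $skip: 0 },
  { $limit: 10 },
]);
```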
null
[ "golang", "transactions" ]
[ { "code": "abc := []string{\"A\", \"B\", \"C\"}\n\nfilter = bson.M{\"age\": m.age, \"status\": m.Status, \"mode\": bson.M{\"$in\": abc}, \"code\": m.code}\noption = created_at -1\n", "text": "Im new in mongo db , So I try to query last transaction with that have some string that in array\nmy code is", "username": "management_zabtech" }, { "code": "", "text": "@management_zabtech can you post more of your code, what results or errors you’re encountering, and examples of the data you’re working with?", "username": "Matt_Dale" } ]
Sir How to use $in with Findone
2022-10-27T16:37:08.227Z
Sir How to use $in with Findone
1,411
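The question above was asked for the Go driver; as a rough equivalent, a hedged Node.js sketch of the same query shape — find the most recent document whose mode is one of several values — with the collection and variable names assumed:

```js
// Hedged sketch (Node.js): latest transaction whose "mode" is in a list.
// "transactions", age, status and code are placeholders mirroring the thread.
async function findLastTransaction(db, age, status, code) {
  const modes = ["A", "B", "C"];
  return db.collection("transactions").findOne(
    { age: age, status: status, code: code, mode: { $in: modes } },
    { sort: { created_at: -1 } } // newest first, so findOne returns the latest match
  );
}
```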
https://www.mongodb.com/…_2_1024x537.jpeg
[ "chicago-mug" ]
[ { "code": "", "text": "\nRoadshow1920×1007 78.9 KB\nREGISTER HERE : MongoDB x Google Cloud Roadshow ChicagoGoogle Cloud and MongoDB Atlas offer developers the smart infrastructure, sophisticated data intelligence and analytics, and developer-centric tools they need to power modern, data-driven cloud applications.Join the MongoDB and Google Cloud teams for invited-only in-person hands-on training in Chicago on November 15th, 2022! In this Workshop, we will teach you how to use MongoDB Atlas on Google Cloud to help you build better applications.You will have 1:1 support from our experts, secure some swag, and enjoy meeting and networking with your peers.Choosing MongoDB Atlas on Google Cloud not only ensures your databases run according to operational and security best practices, but it is also the best way to combine Google’s constant innovation with the best database for building modern applications in the cloud.We will be joined by our strategic partner Accenture, who will provide a brief overview of our better together value and solutions that will benefit your customers.Can’t wait to see you there!Event Type: In-Person\nLocation: Google Office | Chicago - 320 N Morgan St, Suite 600 Chicago, IL, 60607", "username": "bein" }, { "code": "", "text": "\nSome pictures from this amazing event at Chicago Google’s office!", "username": "bein" } ]
MongoDB Atlas & Google Cloud - North America Roadshow
2022-11-09T21:55:26.186Z
MongoDB Atlas &amp; Google Cloud - North America Roadshow
3,016
null
[ "spark-connector" ]
[ { "code": "", "text": "My app server creates queries and inserts data into mongo based on live user actions. This is important and should take precedence over reading from Mongo by Spark for data analysis, which runs concurrently. At present we get timeouts when trying do live action-based queries during Spark read tasks.How do I throttle down the load mongo-spark-connector puts on Mongo so that my live input can continue to be inserted while Spark is reading from Mongo?UPDATE: Maybe a clue to controlling load from Spark could be what the load is related to. Number of partitions, number of cores for the job, something in the Spark config or Job config?", "username": "Patrick_Ferrel" }, { "code": "", "text": "There are a few things to consider. First, make sure your spark job connections are specifying a read preference of secondary or secondaryPreferred. This will ensure the read burden off of the primary. If you are also writing and still have issues you may want to add additional \"mongoS\"s and use log files to further troubleshoot where the bottle neck is.", "username": "Robert_Walters" }, { "code": "", "text": "We understand how to scale Mongo but that is not our problem. The problem we face is that Spark can be HUGE in terms of the number of cores used. We should not have to scale Mongo to support this since we only use Spark for a few hours per week to generate ML models. The rest of the time mongo performs quite well for data input, which comes in continuously. What we need to do is scale the mongo-spark-connector so it doesn’t overload the mongo nodes we have already scaled to fit our live load (+ some margin). In short we need to scale the load put on mongo from Spark, not scale Mongo to handle Spark load, which (without some way to throttle mongo-spark-connector) is FAR in excess of what is normally needed by the system.For example when we write from Spark we can “repartition” the Spark Dataset and this indirectly throttles the connections made for the write.But for input there is no dataset to partition until the Spark read happens.Hope this helps explain the issue and many thanks for your attention. We love Mongo and hope that solving this will allow others to use it in a similar way.", "username": "Patrick_Ferrel" }, { "code": "", "text": "Throttling does not really make much sense in Spark since by slowing down mongo-spark-connector operations you are likely holding on to some precomputed results that take up memory and executors with their resources linger around longer potentially preventing other jobs from running on the cluster.You can decrease number of executor and/or cores and perhaps increasing number of partitions though Spark mongo driver will try to insert as quickly as possible. The best you can do is too make Spark less efficient but why would you want to do that at the expense of overall resource utilization!?", "username": "Andre_Piwoni" }, { "code": "", "text": "Making sense or not, throttling might be needed if (as is our case) your bottleneck restriction is resources allocated to the mongo db. We must write huge amounts of information (~GB) with limited RU/s allocated. Therefore, we would like to prevent it to write once the maximum limit has been achieved and to wait for the time period to end, writing another batch once a new time period begins", "username": "Bernardo_Cortez" } ]
How to throttle mongo-spark-connector
2020-06-17T02:13:16.672Z
How to throttle mongo-spark-connector
5,287
null
[ "storage" ]
[ { "code": "2022-10-31T16:34:23.151-0400 I NETWORK [conn703] received client metadata from 127.0.0.1:51710 conn703: { driver: { name: \"mongoc / mongocxx\", version: \"1.18.0-pre / 3.5.1-pre\" }, os: { type: \"Windows\", name: \"Windows\", version: \"6.2 (9200)\", architecture: \"x86_64\" }, platform: \"(null)cfg=0x0204170063 CC=MSVC 1926 CFLAGS=\"/DWIN32 /D_WINDOWS /W3\" LDFLAGS=\"/machine:x64\"\" }\n2022-10-31T16:34:24.417-0400 I COMMAND [conn701] command DB.EventData_Events command: find { find: \"EventData_Events\", filter: { time: { $gte: 16666436642813825 }, time: { $lte: 16672484642813825 }, $or: [ { type: \"BeginScriptExecution\", object: \"ExecutionEngine\" }, { type: \"EndScriptExecution\", object: \"ExecutionEngine\" }, { type: \"AbortScriptExecution\", object: \"ExecutionEngine\" } ] }, sort: { time: 1 }, $db: \"DB\", lsid: { id: UUID(\"bb478ccd-6474-44d6-b115-eb2ac331c0ee\") } } planSummary: IXSCAN { type: 1 }, IXSCAN { type: 1 }, IXSCAN { type: 1 } keysExamined:990 docsExamined:1980 hasSortStage:1 fromMultiPlanner:1 replanned:1 replanReason:\"cached plan was less efficient than expected: expected trial execution to take 107 works but it took at least 1070 works\" cursorExhausted:1 numYields:102 nreturned:77 reslen:12587 locks:{ Global: { acquireCount: { r: 103 } }, Database: { acquireCount: { r: 103 } }, Collection: { acquireCount: { r: 103 } } } storage:{ data: { bytesRead: 12674484, timeReadingMicros: 87289 } } protocol:op_msg 133ms\n2022-11-01T10:43:11.007-0400 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('624cc9606d3a000032006183'), prevEnd: { $lte: 16604460000000000 }, nextStart: { $gte: 16601364000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"a3fa82c1-13d3-4a7c-97eb-0469b550d569\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:22904105064 keysExamined:16454 docsExamined:807 hasSortStage:1 numYields:136 nreturned:101 reslen:413908 locks:{ Global: { acquireCount: { r: 137 } }, Database: { acquireCount: { r: 137 } }, Collection: { acquireCount: { r: 137 } } } storage:{ data: { bytesRead: 6276792, timeReadingMicros: 82753 } } protocol:op_msg 119ms\n2022-11-03T20:42:56.497-0400 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('624cc9606d3a0000320060e2'), prevEnd: { $lte: 16675225763723166 }, nextStart: { $gte: 16673364000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"909964a5-221a-4cd6-9aea-c6d1664ec1cf\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:24526763640 keysExamined:576 docsExamined:575 hasSortStage:1 numYields:10 nreturned:101 reslen:413908 locks:{ Global: { acquireCount: { r: 11 } }, Database: { acquireCount: { r: 11 } }, Collection: { acquireCount: { r: 11 } } } storage:{ data: { bytesRead: 11460886, timeReadingMicros: 104965 } } protocol:op_msg 122ms\n2022-11-04T10:33:49.900-0400 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('624cc9606d3a000032006183'), prevEnd: { $lte: 16669676080000000 }, nextStart: { $gte: 16666452000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"5d8ece5a-bb02-4e02-ad19-6b872ce20b28\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:22585501155 keysExamined:1906 docsExamined:446 hasSortStage:1 numYields:22 nreturned:101 
reslen:413892 locks:{ Global: { acquireCount: { r: 23 } }, Database: { acquireCount: { r: 23 } }, Collection: { acquireCount: { r: 23 } } } storage:{ data: { bytesRead: 10478844, timeReadingMicros: 112739 } } protocol:op_msg 123ms\n2022-11-04T10:35:17.264-0400 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('624cc9606d3a000032006183'), prevEnd: { $lte: 16663628080000000 }, nextStart: { $gte: 16660404000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"5d8ece5a-bb02-4e02-ad19-6b872ce20b28\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:24141883352 keysExamined:3107 docsExamined:479 hasSortStage:1 numYields:29 nreturned:101 reslen:413908 locks:{ Global: { acquireCount: { r: 30 } }, Database: { acquireCount: { r: 30 } }, Collection: { acquireCount: { r: 30 } } } storage:{ data: { bytesRead: 9330948, timeReadingMicros: 87027 } } protocol:op_msg 112ms\n2022-11-07T13:38:00.927-0500 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('624cc9606d3a0000320060e2'), prevEnd: { $lte: 16678462807352986 }, nextStart: { $gte: 16675344000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"7aea5768-6949-4684-af35-923824fce68e\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:23866781782 keysExamined:963 docsExamined:962 hasSortStage:1 numYields:15 nreturned:101 reslen:413908 locks:{ Global: { acquireCount: { r: 16 } }, Database: { acquireCount: { r: 16 } }, Collection: { acquireCount: { r: 16 } } } storage:{ data: { bytesRead: 22359105, timeReadingMicros: 168883 } } protocol:op_msg 191ms\n2022-11-07T13:38:14.395-0500 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('624cc9616d3a000032006d60'), prevEnd: { $lte: 16678462940239568 }, nextStart: { $gte: 16675344000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"7aea5768-6949-4684-af35-923824fce68e\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:24418897787 keysExamined:1928 docsExamined:1927 hasSortStage:1 numYields:32 nreturned:101 reslen:413908 locks:{ Global: { acquireCount: { r: 33 } }, Database: { acquireCount: { r: 33 } }, Collection: { acquireCount: { r: 33 } } } storage:{ data: { bytesRead: 44095443, timeReadingMicros: 318462 } } protocol:op_msg 370ms\n2022-11-07T13:38:14.678-0500 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('624cc9616d3a000032006d64'), prevEnd: { $lte: 16678462945151585 }, nextStart: { $gte: 16675344000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"7aea5768-6949-4684-af35-923824fce68e\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:24306548295 keysExamined:958 docsExamined:957 hasSortStage:1 numYields:15 nreturned:101 reslen:413908 locks:{ Global: { acquireCount: { r: 16 } }, Database: { acquireCount: { r: 16 } }, Collection: { acquireCount: { r: 16 } } } storage:{ data: { bytesRead: 20625114, timeReadingMicros: 148546 } } protocol:op_msg 162ms\n2022-11-07T13:42:22.042-0500 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('62700c91261600001f0a1924'), prevEnd: { $lte: 16678465418985200 }, nextStart: { $gte: 
16675344000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"7aea5768-6949-4684-af35-923824fce68e\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:24487356266 keysExamined:898 docsExamined:897 hasSortStage:1 numYields:13 nreturned:101 reslen:413908 locks:{ Global: { acquireCount: { r: 14 } }, Database: { acquireCount: { r: 14 } }, Collection: { acquireCount: { r: 14 } } } storage:{ data: { bytesRead: 18312795, timeReadingMicros: 125804 } } protocol:op_msg 142ms\n2022-11-07T13:43:33.358-0500 I COMMAND [conn699] command DB.ParameterData_ValueSegments command: find { find: \"ParameterData_ValueSegments\", filter: { series: ObjectId('624cc9606d3a000032006183'), prevEnd: { $lte: 16678466132266302 }, nextStart: { $gte: 16675344000000000 } }, sort: { prevEnd: 1 }, $db: \"DB\", lsid: { id: UUID(\"7aea5768-6949-4684-af35-923824fce68e\") } } planSummary: IXSCAN { series: 1, nextStart: 1, prevEnd: 1 } cursorid:24752876176 keysExamined:898 docsExamined:897 hasSortStage:1 numYields:11 nreturned:101 reslen:413908 locks:{ Global: { acquireCount: { r: 12 } }, Database: { acquireCount: { r: 12 } }, Collection: { acquireCount: { r: 12 } } } storage:{ data: { bytesRead: 19010953, timeReadingMicros: 121548 } } protocol:op_msg 130ms\n2022-11-07T16:17:34.544-0500 F CONTROL [thread704] *** unhandled exception 0xC0000006 at 0x00007FF7D360A400, terminating\n2022-11-07T16:17:34.544-0500 F CONTROL [thread704] *** stack trace for unhandled exception:\n2022-11-07T16:17:34.916-0500 I - [thread704] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\btree\\bt_discard.c(366) __free_skip_array+0x70\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\btree\\bt_discard.c(202) __free_page_modify+0x131\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\btree\\bt_discard.c(108) __wt_page_out+0x17c\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\btree\\bt_split.c(2078) __split_multi+0x28a\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\btree\\bt_split.c(2121) __wt_split_multi+0x18a\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\evict\\evict_page.c(383) __evict_page_dirty_update+0x1d5\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\evict\\evict_page.c(192) __wt_evict+0x26a\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\evict\\evict_lru.c(2207) __evict_page+0x2f7\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\evict\\evict_lru.c(1125) __evict_lru_pages+0x100\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\evict\\evict_lru.c(314) __wt_evict_thread_run+0x162\nmongod.exe ...\\src\\third_party\\wiredtiger\\src\\support\\thread_group.c(31) __thread_run+0x4d\nucrtbase.dll o_exp+0x5a\nKERNEL32.DLL BaseThreadInitThunk+0x14\n2022-11-07T16:17:34.917-0500 I CONTROL [thread704] failed to open minidump file C:\\Program Files\\MongoDB\\Server\\4.0\\bin\\mongod.2022-11-07T21-17-34.mdmp : Access is denied.\n", "text": "I have an application that uses mongoc/mongocxx for storing customer data. I recently had mongod crash. 
Here’s a relevant section from the mongod.log.", "username": "Jimmy_Harrington" }, { "code": "{ driver: { name: \"mongoc / mongocxx\", version: \"1.18.0-pre / 3.5.1-pre\" }\n", "text": "You seem to have a pretty old server that uses a “pre” release of the driver:is there any chance to upgrade the server, at least within its 4.xx family?", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi @Jimmy_Harrington welcome to the community!In addition to what @Yilmaz_Durmaz said, I would also add that MongoDB 4.0 series is out of support and will not receive any bugfixes anymore, so even if we get to the bottom of this, it’s either 1) not going to get fixed, and 2) likely to already fixed in newer versions if it’s crashing the server.Please upgrade to a supported version (4.2.23, 4.4.17, 5.0.13, or 6.0.2), and if this happens again, we would be very interested in replicating this issue Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for the replies, I’ll look into upgrading Mongo and I’ll make a new post if I get the same crash again.", "username": "Jimmy_Harrington" } ]
MongoDB 4.0.27 crash
2022-11-14T17:49:17.204Z
MongoDB 4.0.27 crash
1,713
null
[ "aggregation" ]
[ { "code": "{\n \"$group\": {\n \"_id\": \"$itemId\",\n \"products\": { $push: \"$ROOT\" }\n }\n}\n{\n \"$limit\": 10\n}\n", "text": "I have a data set of 500k docs in a collection. I am trying to group the data based on an itemId and display data in a paginated form.\nThis is the pipeline I am trying.The issue here is that the grouping stage will group all 500k docs before only returning 10 docs. Is there a way to limit the grouping to just 10 docs and returning the data.", "username": "Priyanshu_Agrawal" }, { "code": "name_of_collection = \"the_name_of_your_collection\" \npipeline = [\n { \"$limit\" : 10 } ,\n { \"$group\" : {\n \"_id\" : \"$itemId\" ,\n \"itemId\" : { \"$first\" : \"$itemId\" }\n } }\n { \"$lookup\" : {\n \"from\" : name_of_collection ,\n \"as\" : \"products\" ,\n \"localField\" : \"itemId\" ,\n \"foreignField\" : \"itemId\"\n } }\n]\ndb.getCollection( name_of_collection ).aggregate( pipeline )\n", "text": "That is a very interesting problem for which I do not have a very good solution. But it is a starting point.As stated, it is not a very good solution but only a starting point, because you may end up with 1 to 10 documents in the result set. You will get only 1 document if the first 10 documents have the same itemId.\nYou may use $sample, rather than $limit to increase the odds of getting 10 different idemId.\nThem, while I am ashamed of writing that, you may then call the aggregation multiple times until you get 10 different itemId.Since you $lookup with itemId, you definitively want an index.While I write I think. So here another idea that came to me that is probably much better that the one above, but since I spent time writing the above I want to keep it.Create a materialized view that keeps a list of unique itemId. Start the aggregation on the materialized view, do the limit and group like above. The materialized view will ensure you always get 10 documents.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Grouping efficiently with limit
2022-11-13T08:57:33.825Z
Grouping efficiently with limit
1,048
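A hedged sketch of the two-step idea discussed above: page over the distinct itemIds first, then fetch only those groups with a $lookup, so the heavy $push never touches groups outside the requested page. The collection name and page variables are assumptions, and an index on { itemId: 1 } is assumed to exist.

```js
// Hedged sketch: paginate groups without pushing all 500k documents.
const page = 0;       // requested page (assumption)
const pageSize = 10;  // groups per page (assumption)

db.products.aggregate([
  { $group: { _id: "$itemId" } },   // distinct itemIds only (no $push yet)
  { $sort: { _id: 1 } },            // deterministic order for paging
  { $skip: page * pageSize },
  { $limit: pageSize },
  { $lookup: {                      // pull documents for just these itemIds
      from: "products",
      localField: "_id",
      foreignField: "itemId",
      as: "products",
  } },
]);
```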
null
[ "dot-net" ]
[ { "code": "", "text": "Can I use Realm.io and data sync within a Maui Blazor Hybrid app?", "username": "Steve_Wasielewski" }, { "code": "", "text": "Hi Steve,\nunfortunately we don’t test on Blazor and Blazor Hybrid. So you can give it a try, and you can open a ticket on our github repo if things don’t work. We’ll take it from there and evaluate if we can start to support for the platform.Andrea", "username": "Andrea_Catalini" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can I use Realm.io and data sync within a Maui Blazor Hybrid App?
2022-11-15T02:47:02.965Z
Can I use Realm.io and data sync within a Maui Blazor Hybrid App?
1,338
null
[]
[ { "code": "", "text": "HiTrying to install the Mongodb 5.0 and 6.0 for the below Hardware version and Ubunto 20.04 getting the below error .● mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\nActive: failed (Result: core-dump) since Tue 2022-11-15 15:13:08 IST; 17s ago\nDocs: https://docs.mongodb.org/manual\nMain PID: 47687 (code=dumped, signal=ILL)Nov 15 15:13:08 mrpl-OptiPlex-745 systemd[1]: Started MongoDB Database Server.\nNov 15 15:13:08 mrpl-OptiPlex-745 systemd[1]: mongod.service: Main process exited, code=dumped, status=4/ILL\nNov 15 15:13:08 mrpl-OptiPlex-745 systemd[1]: mongod.service: Failed with result ‘core-dump’.\nmrpl@mrpl-OptiPlex-745:~$Hardware Version", "username": "Manjunath_Swamy" }, { "code": "", "text": "Hi @Manjunath_SwamyYour CPU does not meet the requirements to run 5.0+These versions require the AVX feature set Production Notes - Platform SupportBroadly your options are:See the below posts for previous discussion and advice.", "username": "chris" } ]
Not able to start the mongodb Services in 5.0 and 6.0
2022-11-15T10:04:03.227Z
Not able to start the mongodb Services in 5.0 and 6.0
3,101
https://www.mongodb.com/…e_2_1024x958.png
[]
[ { "code": "", "text": "HelloI am unable to finish the course unit “Getting Started with MongoDB Atlas”. As soon as i click on “Next” on the page\nMongoDB Course 11120×1048 146 KB\nI get a feedback page where I only can exit the course\nMongoDB Course 21122×1048 136 KB\nalthough I have only completed 25% of the course.\nMongoDB Course 31272×611 72.6 KB\n", "username": "Armin_Heinzer" }, { "code": "", "text": "I have just realised that the 25% does not refer to the whole course, but to the successfully completed assessments.I went through all the assessments and answered them correctly, but if you don’t explicitly click on “Check Answer”, they are not counted.", "username": "Armin_Heinzer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to finish Unit "Getting Started with MongoDB Atlas"
2022-11-15T11:47:31.957Z
Unable to finish Unit &ldquo;Getting Started with MongoDB Atlas&rdquo;
1,927
null
[ "containers" ]
[ { "code": "", "text": "There was a very small alpine image that many were using: GitHub - mvertes/docker-alpine-mongo: MongoDB Dockerfile based on light alpine container\nBut it is no longer maintained. This image is so much smaller than the only official MongoDB image (which is based on Ubuntu). Why do we all need to have a Ubuntu image if MongoDB runs fine on Alpine or even potentially “scratch” images? Am I really the only one running mongo who needs to optimize their resources?\nCheers!", "username": "q3DM17" }, { "code": "", "text": "Its github page reads \"Development has stopped since mongodb has been removed from alpinelinux packages due to SPL licenseBoth docker and alpinelinux packages were maintained by Marc. But changes in licencing affect the maintainability.I don’t know how it is now, but it is clear that someone should be willing to take the mantle of being a maintainer for community addition if packages compile successfully.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "SPL licenseI saw that but just don’t understand what that means - whose license is restricting whom?\nI would think MongoDB would want to maintain it considering alpine’s security and performance profile.", "username": "q3DM17" }, { "code": "", "text": "Honestly, I don’t have much of understanding licencing types, so can’t say how it goes in this situation.the last package for alpine is a release candidate for version 4.0. We now have version 6.x out there. it is a pretty darn long time.you may instead compile a customized version, but then at least will come bug fixes and you will need to recompile the server, and more to update docker, with each minor release.this needs patience and we will wait until someone has that patience for all others.", "username": "Yilmaz_Durmaz" } ]
Why is there no MongoDB Alpine Image?
2022-11-14T08:39:04.590Z
Why is there no MongoDB Alpine Image?
6,960
null
[ "compass" ]
[ { "code": "", "text": "Hey,We’re in the process of securing our environment and looking for a way to provide our developers with a secure way to establish a connection with a local client. We want to avoid adding a Network Access rule with whitelisting 0.0.0.0/0 and yet allow our devs to connect using AWS authentication.We already tried 2 methods:What could be the way to establish a connection using a proxy or something similar, to an Atlas Cluster?Thank you all in advance!", "username": "Gal_Amiram" }, { "code": "", "text": "did you try SSH tunneling on a spare machine for this purpose? Can you describe how you set it up for port forwarding? It is possible you may have missed a step.This thread may help for this case: linux - How to forward a port from one machine to another? - Unix & Linux Stack Exchange", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hey Yilmaz,\nThanks for replying!!I’m using MongoDB Compass and configured the SSH (in the Advanced Connection Options/SSH tab) so the host is a host which its IP is whitelisted in the MongoDB Atlas cluster. The SSH connection is established successfully.I’m not using an ordinary SSH tunnel, as described in the link provided.Best,", "username": "Gal_Amiram" }, { "code": "", "text": "configured the SSH (in the Advanced Connection Options/SSH tab)Umm, is it putty on windows? if so, the tunneling is only from developer machines to that host machine. you also need a second step to forward that host to your cluster, set on the host itself.dev pc port 27017 → tunnel → middle host port 27017 → forwarding → atlas cluster port 27017port numbers may change to your needs.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "No… it’s in Compass\n\nScreenshot 2022-11-14 at 22.43.591148×1288 99.5 KB\nIt should be the native way to connect through SSH.", "username": "Gal_Amiram" }, { "code": "", "text": "We are trying to establish a connection to a MongoDB Atlas Cluster:\nmongodb+srv://the-domain-of-the-cluster.mongodb.netThe cluster is acessible by a specific IP, of a machine where we install an openssh-server, let’s call it the bastion.We’d like to use the bastion as a proxy or something like that to be able to establish a connection, using a local client such as MongoDB Compass, to the Atlas Cluster, which only allow the bastion IP to connect.Hope that makes it clear now ", "username": "Gal_Amiram" }, { "code": "GatewayPortsiptables", "text": "First, thanks for this image, I did not know Compass could do that.Second, this is still the same as putty or other ssh commands. Compass uses your dev pc’s ssh port to tunnel to that host, but it connects to a data port on that machine that is supposed to run a mongodb server. Since this middle host does not run data server, hence comes the connection error.you need to “also” set that host to forward that port you try to connect from Compass to Atlas cluster’s port. this is the second leg of forwarding you are missing.I have done 2-machine setups previously but this one is about connecting “at least” 3 machines back to back. it is actually how hackers use to connect to a target through many leaps.GatewayPorts and iptables methods in the link I provided mentions about how to do this briefly. you may search these terms for more details on official pages.by the way, this method, as you see, requires a spare middle server. and you seem to have one. 
that is why I am trying to make it out with this method.I am just sorry I do not have the exact steps to do the setup.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This sound too complex to me. Atlas is using SRV record to list the cluster nodes. I was assuming the Compass SSH capability knows to configure the tunnel to forward traffic to the nodes.So the combination of providing an SRV endpoint (mongodb+srv://) with SSH server that is allowed to the cluster should makes it possible to connect to the cluster… Hope that this make sense.", "username": "Gal_Amiram" }, { "code": "", "text": "I don’t know about SRV internals but “tunneling” is always from point A to point B through SSH port (mostly port 22) with secure encrypted connection. “Forwarding”, on the other hand, takes your packets from a designated port and redirects it as is to another machine. Here we want to go to point C. if SRV needs a port range, then the middle host should provide that too.this also the case when you set your wifi router to your laptop for torrent programs or some games. have you ever done that to your router?anyways, you are getting the grasp of it. Compass has A to B. you also need B to C to complete it. Compass handles the range from A to B, you may need a range from B to C (I guess it is mandatory at least for replicaset connections)it might sound too complex as of now, but if you can keep reading and trying (trial&error) you will see it is just a bunch of commands to set some configuration. a few hours or a few days, it all depends on you. you just need to encourage yourself ", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I have found a script to set iptables on the middle host here: Add/Update iptable NAT port forward rule based on hostname instead of ip address (github.com)you will need direct addresses of cluster servers. Open your Atlas page, Deploymeny->Database->Connect. there select “your application” and choose a pretty old driver version that gives you a connection string with “mongodb://xxx-shard-00-00.yyy…”. use those addresses in the script and incremental ports on the host (17-18-19 etc) for each server.then use these ports within your connection string along with the ip of middle host.The downside of this method is that the IP addresses of servers may change in time and you need to run the script to fix it.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thanks for all the effort Yilmaz!We ended up using sshuttle to initiate a VPN like connection over SSH, similar to what you suggested.", "username": "Gal_Amiram" }, { "code": "pingping", "text": "that seems a nice one. saves from checking IP changes as long as it is maintained and bug free it is great.by the way, in case you revert to iptables method, I have found a possible ping bug in the script I linked: ping result may give a different output. my alpine setup gave “data bytes” instead of “bytes of data”. so head up.", "username": "Yilmaz_Durmaz" } ]
Connect to MongoDB Atlas Cluster without external IP
2022-11-14T17:55:05.085Z
Connect to MongoDB Atlas Cluster without external IP
4,807
null
[ "dot-net" ]
[ { "code": "Could not load file or assembly 'MongoDB.Driver, Version=2.18.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified\n", "text": "Writing a simple POC for a project I’m working on and it seems if you need to include the mongocsharpdriver. From what I read I thought that this was deprecated, but if I attempt to run my code without this dependency I get an error message:This goes away after I add the mongocsharpdriver from nuget. Am I doing something wrong? or is this somehow expected behaviour?", "username": "Jeff_Patton" }, { "code": "<ItemGroup>\n <PackageReference Include=\"MongoDB.Driver\" Version=\"2.18.0\" />\n</ItemGroup>\n", "text": "Hi,You should have another kind of problem because it works.You can try this project and see if it works.main/dotnetLearn mongodb quickly with examples. Contribute to iso8859/learn-mongodb-by-example development by creating an account on GitHub.", "username": "Remi_Thomas" }, { "code": "", "text": "Thanks, I’ll look at that one…I’m still getting that load file error after adding the csharpdriver, which is weird as it worked immediately after i added it, but now it’s not working…so i’m game to look at anything! Also, not sure how much this matters, i’m targetting .net 6 and a console app", "username": "Jeff_Patton" }, { "code": "", "text": "Reinstall .NET.NET is a developer platform with tools and libraries for building any type of app, including web, mobile, desktop, games, IoT, cloud, and microservices.", "username": "Remi_Thomas" }, { "code": "", "text": "That seems a little drastic…but I\"m not against it…I’ve just cloned the repo you suggested and the code appears to be working, I was able to connect to my instance. What I’m confused about is what the difference is between the code that I wrote and what you provided. Guess I know what my weekend is going to be like!Thanks!", "username": "Jeff_Patton" }, { "code": "using MongoDB.Driver;\n\nnamespace MongoDBDriver\n{\n public class MongoDB\n {\n public string? ConnectionString { get; set; }\n public MongoClient? Client { get; set; }\n public MongoDatabaseBase? DatabaseBase { get; set; }\n\n public MongoDB(string ConnectionString)\n {\n MongoClientSettings settings = MongoClientSettings.FromConnectionString(ConnectionString);\n Client = new MongoClient(settings);\n }\n }\n}\n", "text": "The re-install didn’t work, which I didn’t think it would since the cloned repo worked just fine. Not really sure what the big difference is, my code is super simple as I was just getting started, the only real difference is the repo is outputting an exe, and i’m building a dll.literally the extent of my code, was just trying to test a connection to the server.", "username": "Jeff_Patton" }, { "code": "", "text": "Remi,Looking for a bit of insight as I’m struggling with this, ultimately I’d like to write a PowerShell module to handle the mongo stuff for me, and for other projects I can simply create a class library project and things work out fine. But it appears that for MongoDb.Driver to work it needs to be a console app? 
I’ve copied the code above into a new ‘Console App’ project and it just works.Do you, or anyone, have insight into why this would be true?", "username": "Jeff_Patton" }, { "code": "mkdir test\ncd test\ndotnet new console\ndotnet add package MongoDB.Driver\ncode .using MongoDB.Bson;\nusing MongoDB.Driver;\n// See https://aka.ms/new-console-template for more information\nvar client = new MongoClient();\nConsole.WriteLine(client.GetDatabase(\"test\").GetCollection<BsonDocument>(\"test\").FindSync(_=>true).ToList().ToJson());\n", "text": "Not sure you can user PowerShell, it can use .NET lib but you lose the app context.If you need to execute script now you can have very simple .NET Console structure, works like Powershell.Open Program.cs and do your script here\nBetter use VSCode and type\ncode .This is not powershell but this is very simple.", "username": "Remi_Thomas" } ]
MongoDB.Driver 2.18 .net > 4.72
2022-11-11T13:54:35.539Z
MongoDB.Driver 2.18 .net &gt; 4.72
2,555
null
[]
[ { "code": "{ if, commandRequested, dateRequested (in String) }", "text": "hey all, I’m really trying very hard to understand how this works. But I’m hitting a wall every time.I have a collection of documents that has analytics records of, here’s an example of 1 record:\n{ if, commandRequested, dateRequested (in String) }I’m trying to get an average of Mon, Tue, Wed, Thu, Fri, Sat and Sun in a Chart.\nBut so far the chart types i’ve tried is giving me a Total count on those days.\nHow does this even work?", "username": "ThreeM" }, { "code": "", "text": "Hi @ThreeM -First of all, if you want to do any kind of date arithmetic or processing, you’ll need your dates to be proper Date types, not strings. You can use Charts to convert types, although it will always be slower to convert on the fly than it would if you stored them as the correct type in the first place.Once you have your datey dates, you can create the chart you want by:\nimage1159×830 74 KB\nHTH\nTom", "username": "tomhollander" }, { "code": "", "text": "Hello @tomhollander, I was happy for awhile when I tot it was that easy.\nBut I think there’s some difference in the data we have.\nMy data isn’t aggregated of runtimes per record. But instead, each record is a frequency (record) by itself.More like…\nWeek 1\nFunction A | Monday\nFunction B | Tuesday\nFunction A | Tuesday\nFunction A | Wednesday\nFunction B | Tuesday\nFunction A | TuesdayWeek 2\nFunction B | Monday\nFunction B | Tuesday\nFunction B | Tuesday\nFunction A | Wednesday\nFunction A | Tuesday\nFunction A | TuesdayNow, I want it to chart it like an average because I would like to know if my users interact with which function more often on Monday or Tuesday or whichever day it is.Anyway i could work something out?", "username": "ThreeM" }, { "code": "", "text": "Hmm, I’m not sure I understand. If you can show me an actual document it may help.Tom", "username": "tomhollander" }, { "code": "\"_id\":{\"$oid\":\"6372599db90576219d8edb10\"},\n\"dateRequested\":\"2022-10-24T10:07:35.025Z\",\n\"chatType\":\"private\"\n,\"requesterName\":\n\"John Doe\",\n\"command\":\"/raincheck\",\n\"language_code\":\"en\"\n}\n", "text": "Sorry for the late reply…Here’s a sample document, i have 6,000 of records.\nCommands differs based on what commands are being called by the user.But i just want to know how active are users using my service for each day of the week.\nI have a chart right now that just deals with an aggregated counts of all these function calls, but i wan to know the average amount so that i can decipher which day of the week is most active.", "username": "ThreeM" }, { "code": "dateRequestedmean ( _id )command", "text": "OK - I’m not sure why you can’t use my approach then? Put dateRequested (as a date) in your X axis and bin by day of the week. Put mean ( _id ) in the Y axis. Maybe put command in the Series channel if you want to break it down that way?", "username": "tomhollander" }, { "code": "", "text": "\n2022-11-15_17-46-311246×671 69.2 KB\n\nDoesn’t seems like an option for me to select? Is it supposed to be a new calculated field?", "username": "ThreeM" }, { "code": "", "text": "Sorryv my bad, you can’t calculate the average of _id because it’s a string not a number. What exactly do you want to calculate the average of? If you use count(_id) it will show the total number of events per day which may be useful for you?", "username": "tomhollander" } ]
Total Average Frequency for Day of the Week
2022-11-05T10:35:18.189Z
Total Average Frequency for Day of the Week
2,952
null
[ "queries", "node-js" ]
[ { "code": "const { AsyncLocalStorage } = require('async_hooks')\nconst asyncLocalStorage = new AsyncLocalStorage()\nasyncLocalStorage.run({ requestId: Math.random() }, async () => {\n const findStream = await mongoConnection.collection('test').find(filter).stream();\n findStream.on('end', () => {\n asyncLocalStorage.getStore(); // requestId is lost here\n });\n});\n", "text": "Hi, When using MongoDB node.js driver with AsyncLocalStorage with Readable stream, asyncLocalStorage context is lostminimal example of reproduction:Reproduced on:NodeJs: 16.15.1\nmongoDB node.Js version: 3.7.3", "username": "Saif_Abusaleh" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
AsyncLocalStorage context is lost when working with Readable stream
2022-11-14T14:09:22.483Z
AsyncLocalStorage context is lost when working with Readable stream
1,345
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "a { \n academic_year: { type:String} \n} \nb { \n b1: {type:Number, default:0 },\n b2: {type:Number, default:0 },\n b3: [{ b3_1: {type:Number, default:0 }, b3_2: {type:Number, default:0 }, b3_3: {type:Number, default:0 }\n }]\n b4: {type:mongoose.Schema.ObjectId, \n ref: 'a'} \n} \n\nLet's suppose we have below example\na { \n academic_year: \"2021-2022\"\n _id:234lkjlk2342432\n} \n\n\nb { \n b1:1,\n b2: 2,\n b3: [\n\t { b3_1: 5, b3_2: 4, b3_3: 4, },\n\t { b3_1: 1, b3_2: 4, b3_3: 2 }\n\t { b3_1: 5, b3_2: 1, b3_3: 2 }\n ]\n b4: \"234lkjlk2342432\" \n}\nc{ \n academic_year: \"2021-2022\",\n b1:1,\n b2: 2,\n b3: [\n\t { b3_1: 5, b3_2: 4, b3_3: 4,total:13 },\n\t { b3_1: 1, b3_2: 4, b3_3: 2 ,total:7},\n\t { b3_1: 5, b3_2: 1, b3_3: 2,total:8 }\n ],\n\t\t\t \n BigTotal:31,\n\t\n\t\n}\n", "text": "Hi, I am new to mongodb. Till now I have worked with SQL queries. I have the following 2 schemas, that should be joined and the following calculations to be done.the result to return would bebelow where bigtotal= b1+b2+ sum of total field", "username": "Genti_P" }, { "code": "[\n {\n '$lookup': {\n 'from': 'a', \n 'localField': '_id', \n 'foreignField': '_id', \n 'as': 'c'\n }\n }, {\n '$addFields': {\n 'academic_year': {\n '$arrayElemAt': [\n '$c', 0\n ]\n }\n }\n }, {\n '$addFields': {\n 'academic_year': '$academic_year.academic_year'\n }\n }, {\n '$unset': [\n 'c', 'b4'\n ]\n }, {\n '$addFields': {\n 'bigtotal': {\n '$map': {\n 'input': '$b3', \n 'as': 'obj', \n 'in': {\n 'total': {\n '$add': [\n '$$obj.b3_1', '$$obj.b3_2', '$$obj.b3_3'\n ]\n }\n }\n }\n }\n }\n }, {\n '$addFields': {\n 'b3': {\n '$zip': {\n 'inputs': [\n '$b3', '$bigtotal'\n ]\n }\n }\n }\n }, {\n '$addFields': {\n 'b3': {\n '$map': {\n 'input': '$b3', \n 'in': {\n '$mergeObjects': '$$this'\n }\n }\n }\n }\n }, {\n '$addFields': {\n 'bigtotal': {\n '$sum': [\n '$b1', '$b2', {\n '$sum': '$bigtotal.total'\n }\n ]\n }\n }\n }\n]\n[\n {\n _id: ObjectId(\"636cbcf1e1c238226bb5a421\"),\n b1: 1,\n b2: 2,\n b3: [\n { b3_1: 5, b3_2: 4, b3_3: 4, total: 13 },\n { b3_1: 1, b3_2: 4, b3_3: 2, total: 7 },\n { b3_1: 5, b3_2: 1, b3_3: 2, total: 8 }\n ],\n academic_year: '2011-2022',\n bigtotal: 31\n }\n]\nacademic_yearbb", "text": "Hi @Genti_P and welcome to the MongoDB community forum!!Based on the sample example and the expected response , the following aggregation query would be helpful.The output for the above aggregation would look like:However, please note the following recommendations:Also, note that, the above aggregation is dependent on the sample example provided and was not optimised for performance by any means, I would recommend you to throughly test before applying it to the application for entire documents in collection.Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Join and aggregate
2022-11-08T08:38:17.933Z
Join and aggregate
1,856
null
[ "time-series" ]
[ { "code": "", "text": "Hi,\nI have some questions about the timeseries that where introduced with MongoDb5.0. I only found documentation about the limitations but no suggestion for a solution/workaround.Is there any way to create uniqueness of _id fields inside the timeseries? On normal collection we could use a unique index I guess but thats not supported for timeseries. Current workaround is to make a find on _id and filter documents befor inserting. Is there any smarter / more efficent alternative for this?Is there any way to delete timeseries data based on ts or _id that is NOT the TTL of the timeseries?\nDelete many complains that it can only be used on “metadata” fields but in my understanding ts and _id should not be in the meta data because it makes the clustering of the underlaying data useless.Currently we have one collection per “data measuring device (1.500+)”, with the timeseries data from different sub devices inside each collection. Is the idea behind the timeseries having all locations in one timeseries collection and that what is currently the collection name inside the meta field? Because we store other data in that collection that should not be deleted based on time but using 1.500 extra collections to make use of the TTL sounds not efficent. So any idea how to delete only some data inside a timesries collection based on time or should we just use the normal collection/remodell our data?Hope someone has a better understanding of this and can give me some input.", "username": "Jona_Wossner" }, { "code": "$densify$fill", "text": "Hi @Jona_Wossner,It’s important to note that a time series collection is not exactly the same as a normal collection in MongoDB. MongoDB treats time series collections as writable non-materialized views backed by an internal collection. When you insert data, the internal collection automatically organizes time series data into an optimized storage format.In saying the above, I would recommend creating a post regarding the unique indexes on time series collections via the MongoDB feedback engine in which others can vote for and the MongoDB product team can monitor.If there is a hard requirement for uniqueness amongst the inserted data, then perhaps it may be worth investigating a normal collection (with the associated unique indexes created against it) benchmark versus that of a time series collection using your particular workload / data to determine if it is worth trading off the benefits of a time series collection.In saying the above, could you advise further use case details regarding the uniqueness requirement? For example, Is the particular analysis or result set from the time series collection supposed to be void of duplicates?The following pages may be good to go over as well:Regards,\nJason", "username": "Jason_Tran" } ]
Timeseries: unique _id and delete_many by ts
2022-11-09T16:32:17.088Z
Timeseries: unique _id and delete_many by ts
1,765
null
[]
[ { "code": "db.eventLog.createIndex({\"updatedDate\": 1}, {expireAfterSeconds:15552000<U+202C> } ) \ndb.eventLog.createIndex({\"updatedDate\":1}, {expireAfterSeconds: 15552000<U+202C>,background:true}) \ndb.eventLog.createIndex({ \"updatedDate\" : 1}, {background:true,\"name\" : \"updatedDate\",expireAfterSeconds: NumberLong(15552000<U+202C>)}) \ndb.eventLog.createIndex({ \"updatedDate\" : 1}, {background:true,\"name\" : \"updatedDate\",expireAfterSeconds: { $numberLong: \"15552000<U+202C>\" }}) \n {\n \"v\" : 2,\n \"key\" : {\n \"updatedDate\" : 1\n },\n \"name\" : \"updatedDate_1\",\n \"ns\" : \"bccd_asset_connector.eventLog\",\n \"expireAfterSeconds\" : { $numberLong: \"15552000<U+202C>\" },\n \"background\" : true\n }\ndb.eventLog.createIndex({\"updatedDate\": 1}, { \"expireAfterSeconds\": \"15552000<U+202C>\", background: true }) \n {\n \"v\" : 2,\n \"key\" : {\n \"updatedDate\" : 1\n },\n \"name\" : \"updatedDate_1\",\n \"ns\" : \"bccd_asset_connector.eventLog\",\n \"expireAfterSeconds\" : \"15552000<U+202C>\",\n \"background\" : true\n }\n", "text": "mongo version - 4.0When I create the TTL index throws out ( [js] SyntaxError: illegal character) error.below command fails to create the TTL.but when I change the expireAfterSeconds value from 15552000‬ to 3888000 create TTL success.or change to those command create TTL success, but show the index seems very strange.", "username": "harz_wang" }, { "code": "db.eventLog.createIndex({\"updatedDate\": 1}, {expireAfterSeconds:15552000<U+202C> } ) \nU+202CexpireAfterSecondsdb.eventLog.createIndex({\"updatedDate\": 1}, {expireAfterSeconds: 15552000} ) \n", "text": "Welcome to the MongoDB Community @harz_wang !I noticed your output includes an extra unprintable Unicode character (U+202C) after the expireAfterSeconds value, which seems likely to be the cause of the illegal character error you are encountering in the MongoDB shell.Try creating the index without any extra characters:Regards,\nStennie", "username": "Stennie_X" }, { "code": "expireAfterSeconds: { $numberLong: \"15552000\" }\nexpireAfterSeconds: \"15552000\"\n", "text": "Hello Stennie,\nThank you for your answer.I tried to re-enter the command manually and it worked. Your are right. Thanks.May I ask one more question?\nWhy can the expireAfterSeconds create success by those formats (JSON or String), means the expireAfterSeconds does not value type check, or higher version will have to check the value type?Thank you very much! Have a nice day. ", "username": "harz_wang" }, { "code": "expireAfterSeconds> db.eventLog.createIndex({\"updatedDate\": 1}, {expireAfterSeconds: -15552000} )\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"TTL index 'expireAfterSeconds' option cannot be less than 0. Index spec: { key: { updatedDate: 1.0 }, name: \\\"updatedDate_1\\\", expireAfterSeconds: -15552000.0 }\",\n\t\"code\" : 67,\n\t\"codeName\" : \"CannotCreateIndex\"\n}\n> db.eventLog.createIndex({\"updatedDate\": 1}, {expireAfterSeconds: \"15552000\"} )\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"TTL index 'expireAfterSeconds' option must be numeric, but received a type of 'string'. Index spec: { key: { updatedDate: 1.0 }, name: \\\"updatedDate_1\\\", expireAfterSeconds: \\\"15552000\\\" }\",\n\t\"code\" : 67,\n\t\"codeName\" : \"CannotCreateIndex\"\n}\n> db.eventLog.createIndex({\"updatedDate\": 1}, {expireAfterSeconds: { $numberLong: \"15552000\" } } )\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"TTL index 'expireAfterSeconds' option must be numeric, but received a type of 'object'. 
Index spec: { key: { updatedDate: 1.0 }, name: \\\"updatedDate_1\\\", expireAfterSeconds: { $numberLong: \\\"15552000\\\" } }\",\n\t\"code\" : 67,\n\t\"codeName\" : \"CannotCreateIndex\"\n}\n", "text": "Why can the expireAfterSeconds create success by those formats (JSON or String), means the expireAfterSeconds does not value type check, or higher version will have to check the value type?Hi @harz_wang,Modern versions of MongoDB server validate the type for expireAfterSeconds which should be a positive integer.For example, in 4.2.23:If you are using a version of MongoDB server older than 4.2 (which is currently the oldest release series that hasn’t reached End of Life yet), I recommend planning to upgrade. There are many bug fixes and improvements, including better validation and error messages.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie\nThank you & have a nice day. Best Regards\nHarz", "username": "harz_wang" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
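For anyone hitting the same thing, a small mongosh sketch of the clean re-creation: type the command by hand (so no invisible characters like U+202C come along from copy/paste), pass `expireAfterSeconds` as a plain positive integer, and verify what was actually stored.

```javascript
// Re-typed by hand so no invisible characters sneak in from copy/paste.
db.eventLog.createIndex(
  { updatedDate: 1 },
  { expireAfterSeconds: 15552000, background: true } // plain positive integer
);

// Verify how the index was stored:
db.eventLog.getIndexes();

// If an earlier attempt stored a string/object value, drop and recreate it:
// db.eventLog.dropIndex("updatedDate_1");
```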
Create TTL index fail . The expireAfterSeconds have limit value?
2022-11-11T07:21:50.315Z
Create TTL index fail . The expireAfterSeconds have limit value?
2,250
https://www.mongodb.com/…e_2_1024x512.png
[]
[ { "code": "", "text": "I’m going to set up the OS for MongoDB.\nHowever, what is the trouble of setting a value larger than the recommended value?\n(For example, a value greater than 3 times)Recommended Values → Ex My Valuesulimit_ nofile : 64,000/64,000 → 800,000/800,000ulimit_ nproc : 64,000/64,000 → 800,000/800,000/proc/sys/fs/file-max : 98,000 → 1,000,000/1,000,000/proc/sys/kernel/pid_max : 64,000 → 800,000/800,000/proc/sys/kernel/threads-max : 64,000 → 800,000/800,000/proc/sys/vm/max_map_count : 128,000 → 1,600,000/1,600,000", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi @Kim_Hakseonwhat is the trouble of setting a value larger than the recommended value?Well that usually depends on the hardware involved. If you don’t see any adverse effects, then the hardware can handle the elevated settings. The recommended settings are for a generic hardware, so your actual case might vary.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
About OS Settings
2022-11-14T09:03:24.410Z
About OS Settings
1,037
null
[ "transactions" ]
[ { "code": "operationType.......\n\"source\":{\"version\":\"1.9.2.Final\",\"connector\":\"mongodb\",\"name\":\"metaDB\",\"ts_ms\":1660575739000,\"snapshot\":\"false\",\"db\":\"metaDB\",\"sequence\":null,\"rs\":\"rs01\",\"collection\":\"conntest\",\"ord\":1,\"h\":null,\"tord\":null,\"stxnid\":null,\"lsid\":null,\"txnNumber\":null},\"op\":\"d\",\"ts_ms\":1660575739313,\"transaction\":null}}\n.......\n{\n\"name\":\"mongosrc02\",\n\"config\":{\n \"connector.class\":\"io.debezium.connector.mongodb.MongoDbConnector\",\n \"key.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"key.converter.schema.enable\" : \"true\",\n \"value.converter.schema.enable\" : \"true\",\n \"mongodb.hosts\":\"rs01/10.20.19.172:27017\",\n\t \"mongodb.name\":\"metadbtest\",\n \"mongodb.user\" : \"mongoadm\",\n \"mongodb.password\" : \"mongoadm\",\n \"collection.include.list\": \"metaDB.conntest\",\n \"tombstones.on.delete\" : \"false\" \n\t}\n}\n{\n \"name\": \"mongo-sk-002\",\n \"config\": {\n\"connector.class\":\"com.mongodb.kafka.connect.MongoSinkConnector\",\n\"connection.uri\":\"mongodb://10.20.19.172:27017\",\n\"database\":\"metaDB\",\n\"collection\":\"newconntesthdr\",\n\"topics\":\"metaDB.metaDB.conntest\",\n\"key.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n\"value.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"key.converter.schema.enable\" : \"true\",\n \"value.converter.schema.enable\" : \"true\",\n \"mongo.errors.tolerance\": \"all\",\n \"mongo.errors.log.enable\": \"true\",\n\"mongodb.user\": \"mongoadm\",\n\"mongodb.password\": \"mongoadm\",\n\"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler\"\n}\n}\n", "text": "I want to send data from one collection in mongodb to another collection using kafka.\nThe source connector is debezium, and the sink connector is mongodb sink connector.I used handlers for update and delete, but an error [ Error: operationType field is doc is missing.] occurs.When sending from the debezium source connector to kafka, the operationType field seems to be sent to ‘op’ field, and update and delete are sent to u and d respectively.Is there an option to make this information known to the mongodb sink connector?\nThe options I used for the connector are:[source][sink]", "username": "minjeong.bahk" }, { "code": "\"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler\"change.data.capture.handler=com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler", "text": "You need to use the Denezium CDC handler at the sink\ninstead of\n\"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler\"\nuse\nchange.data.capture.handler=com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler", "username": "Robert_Walters" }, { "code": "", "text": "Did it work out?\nim suffering from same problem even though\nchange.data.capture.handler=com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler\nis applied.", "username": "_DOHEE" } ]
Cdc using mongodb sink connector
2022-08-21T04:30:20.123Z
Cdc using mongodb sink connector
2,521
null
[ "replication" ]
[ { "code": "", "text": "Hello Greetings!!I have 3 node mongoDB replica running in production. but once in a week primary node down/failed. I went through /var/logs/mongodb.log file. but all the logs are informational logs not finding the reason for why mongodb service down/failed (systemctl status mongodb ). Please advise how can I troubleshoot the issue.Thanks", "username": "Sreedhar_Y" }, { "code": "", "text": "do you have a preferred primary set (priority) or do the primary changes and the failure occurs on all 3? if it is the second case you might be looking at the wrong server’s log files.you may check the logs of all 3 nodes, find out the last time the connection got lost and a new primary was assigned, and check that same timestamp proximity again for failures.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you for the information, Mine is setup to assign secondary as primary once primary is down. Subsequent question is, when the primary is down and secondary is primary. how to setup create mongodb connection string. Please advise…", "username": "Sreedhar_Y" }, { "code": "", "text": "I guess it is to use a replicaset connection string where you supply the address of all 3 servers.Other than official documentation, this might also help with many practical examples in it.\nHow do you connect to a replicaset from a MongoDB shell? - Stack Overflowyour app will try all 3 connections if any fails, then if it hits on a secondary, it will be redirected to the current primary (if a secondary preference is not made)", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you …appreciate for the quick help", "username": "Sreedhar_Y" } ]
Primary node going down/failed once in week
2022-11-14T17:36:31.518Z
Primary node going down/failed once in week
1,944
null
[]
[ { "code": "MongoDB.Driver.MongoConnectionException: An exception occurred while sending a message to the server. ---> System.IO.IOException: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host\n at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)\n at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)\n --- End of inner exception stack trace ---\n at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)\n at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\n at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\n at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.WriteBytes(Stream stream, IByteBuffer buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.SendBuffer(IByteBuffer buffer, CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.SendBuffer(IByteBuffer buffer, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.SendMessages(IEnumerable`1 messages, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquiredConnection.SendMessages(IEnumerable`1 messages, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.ConnectionExtensions.SendMessage(IConnection connection, RequestMessage message, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocol[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.Command[TResult](ICoreSession session, ReadPreference readPreference, DatabaseNamespace databaseNamespace, BsonDocument command, IEnumerable`1 commandPayloads, IElementNameValidator commandValidator, BsonDocument additionalOptions, Action`1 postWriteAction, CommandResponseHandling responseHandling, IBsonSerializer`1 resultSerializer, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteCommandOperationBase.ExecuteAttempt(RetryableWriteContext context, Int32 attempt, Nullable`1 transactionNumber, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.Execute[TResult](IRetryableWriteOperation`1 operation, RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at 
MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatches(RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.Execute(RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.Execute(IWriteBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteWriteOperation[TResult](IWriteBinding binding, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteWriteOperation[TResult](IClientSessionHandle session, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IClientSessionHandle session, IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.<>c__DisplayClass23_0.<BulkWrite>b__0(IClientSessionHandle session)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionBase`1.<>c__DisplayClass92_0.<UpdateMany>b__0(IEnumerable`1 requests, BulkWriteOptions bulkWriteOptions)\n at MongoDB.Driver.MongoCollectionBase`1.UpdateMany(FilterDefinition`1 filter, UpdateDefinition`1 update, UpdateOptions options, Func`3 bulkWrite)\n at MongoDB.Driver.MongoCollectionBase`1.UpdateMany(FilterDefinition`1 filter, UpdateDefinition`1 update, UpdateOptions options, CancellationToken cancellationToken)\n at Sensiple.Tryvium.Data.MongoDB.CommonConnector.Update(String schemaName, String dataFilter, String data, String logSource) \nvar _collection = Db.GetCollection<BsonDocument>(schemaName); \n\nBsonDocument bsonDocument = new BsonDocument(BsonSerializer.Deserialize<BsonDocument>(dataFilter)); \n\nvar updatedResult = _collection.UpdateMany(bsonDocument, BsonDocument.Parse(\"{$set: \" + BsonSerializer.Deserialize<BsonDocument>(data) + \"}\"));\n", "text": "I got below exception while try to update data in mongodb. Please help me to fix this issue.When I look in my logs I see lot of error messages just like the one below where the driver is getting a socket error when connecting to mongo. The site is still up and this error doesn’t happen on every request, nor does it happen on one operation that should take longer.The version I have used C# driver : “2.10.2” and Azure Cosmos version :3.6\".Code that causes issue,", "username": "Aravinth_Kulasekhara" }, { "code": "", "text": "Hi,The version I have used C# driver : “2.10.2” and Azure Cosmos version :3.6\".It appears that you are trying to connect to Azure Cosmos DB using the official MongoDB dotnet driver. Is this correct?If yes, then unfortunately this configuration is not supported since Cosmos DB is a Microsoft product that provides MongoDB compatibility, and thus it may time to time see some incompatibilities with official MongoDB drivers, which are designed for use with the official MongoDB server.Could you try the code on an official MongoDB server (e.g. a free MongoDB Atlas shared instance for testing purposes) and see if the error persists? 
If not, then you may want to contact Microsoft support regarding this issue.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "I’ve had the exact same issue (using exact same driver version 2.10.2), and it started exactly after I’ve upgraded from Cosmos 3.2 to 3.6 (also happens with 4.0) and the issue was solved by forcing TLS 1.2var settings = MongoClientSettings.FromConnectionString(connectionString);\nsettings.SslSettings = new SslSettings()\n{\nEnabledSslProtocols = System.Security.Authentication.SslProtocols.Tls12\n};\nreturn new MongoClient(settings);", "username": "Ricardo_Drizin" }, { "code": "", "text": "@Ricardo_Drizin, Hitting a similar issue, Azure CosmosDB 4.0 and MongoDB.Driver 2.18 and have set TLS1.2 as in your example. Your sample is also very similar to the suggested code sample in Azure. Just curios if you have any thoughts", "username": "Jeff_Patton" } ]
MongoDB.Driver.MongoConnectionException: An exception occurred while sending a message to the server
2020-06-21T09:12:27.675Z
MongoDB.Driver.MongoConnectionException: An exception occurred while sending a message to the server
14,303
null
[ "queries", "php" ]
[ { "code": "", "text": "I am unfamiliar with the syntax to pass a variable to a mongo cursor. Currently I have the ‘name’ hardcoded as ‘ABC’. But I want to be able to pass a variable instead of hardcoding ‘ABC’.$client = new MongoDB\\Client(“mongodb://localhost”);\n$dbs = $client ->database_name->table_name; // $client → DBNAME → COLLECTION_NAME\n$result = $dbs → find(array(‘name’ => ‘ABC’));I tried this, but it is not working.$name = ‘ABC’;\n$result = $dbs → find(array(‘name’ => $name));Can someone please provide me with the correct syntax? Thanks", "username": "Andrew_Manager" }, { "code": "", "text": "Hello @Andrew_Manager and welcome to the MongoDB Community Forum!If I understand correctly, the hard-coded version works, but the one with the variable does not. I would suggest initializing your array outside of the find() function and passing the array as a variable.\nThat will give you a chance to verify the array was initialized as you expected by either looking with a debugger or doing a var_dump ()Let us know!\nHubert", "username": "Hubert_Nguyen1" } ]
Php Mongo how to pass a variable to a cursor
2022-09-05T13:58:27.520Z
Php Mongo how to pass a variable to a cursor
2,140
null
[ "queries", "dot-net" ]
[ { "code": "", "text": "Hi ,I am not able to pass filters into Find method , Find method accepts only query…Please help me on this.results = db.collection.Find(Filter)", "username": "mubarak_shaik" }, { "code": "", "text": "Hello @mubarak_shaik, Welcome to the MongoDB community forum,What kind of filters are you talking about?Can you please provide more details, with the example?", "username": "turivishal" }, { "code": "", "text": "Hi @turivishal ,thanks for your reply.i am using something like belowvar builder = Builders.Filter;\nvar query = builder.Eq(“columname”, “test”);var results = _database.GetCollection(“Model”).Find(query).ToList();Find is accepting only , not builder…how to manage this.", "username": "mubarak_shaik" }, { "code": "", "text": "I am not sure what language is this, Can you please provide more details, on what language is? nodejs, c#, etc. so related person can help you.", "username": "turivishal" }, { "code": "", "text": "Hi ,it is c#.net …", "username": "mubarak_shaik" }, { "code": "Builders<BsonDocument>Filter", "text": "your example above seems ok but it is possible you forget to define the type of builder. most examples I see use Builders<BsonDocument>I recommend checking Quick Tour (mongodb.github.io) for C# driver. It starts using Filter about the half of the page after “Get a Single Document with a Filter” section.check the following for more builder examples (filter, projection, sort)\nhttps://mongodb.github.io/mongo-csharp-driver/2.18/reference/driver/definitions/", "username": "Yilmaz_Durmaz" }, { "code": "<Model><Model>", "text": "BsonDocumentHi ,I am using below code snippet.var builder = Builders<Model>.Filter;\nvar query = builder.Eq(“Columnname”, “test”);var res = _database.GetCollection<Model>(“CollectionName”).Find(query).ToList();", "username": "mubarak_shaik" }, { "code": "", "text": "i see there …Find will accept only ImongoQuery…\nimage1293×220 48 KB\n", "username": "mubarak_shaik" }, { "code": "", "text": "Hi ,please some one can help on this.", "username": "mubarak_shaik" }, { "code": "Builders", "text": "I am failing to understand the problem here. When you use Builders and make a filter, you just use it as your query.Can you please share what error is that you are getting?", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi ,this is the error , it is not accepting as query…\nimage1595×270 36.6 KB\n", "username": "mubarak_shaik" }, { "code": "using MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Driver;\n\nvar URI = $\"mongodb+srv://YOUR_CONNECTION_URI/?retryWrites=true&w=majority\";\n\n// Create a document using BSON\n{\n MongoClient dbClient = new MongoClient(URI);\n var database = dbClient.GetDatabase(\"testme\");\n var collection = database.GetCollection<BsonDocument>(\"class\");\n var document = new BsonDocument { { \"id\", 1 }, { \"class\", \"1A\" } };\n await collection.InsertOneAsync(document);\n}\n\n// Find & Read documents with Filter and Model class\n{\n MongoClient dbClient = new MongoClient(URI);\n var database = dbClient.GetDatabase(\"testme\");\n var collection = database.GetCollection<Model>(\"class\");\n\n var builder = Builders<Model>.Filter;\n var filter = builder.Eq(model => model.Class, \"1A\");\n var documents = collection.Find(filter);\n foreach (var doc in documents.ToList())\n {\n Console.WriteLine($\"id: {doc.SId} class: {doc.Class}\");\n }\n}\n\nclass Model\n{\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string? 
Id { get; set; }\n [BsonElement(\"id\")]\n public int SId { get; set; }\n [BsonElement(\"class\")]\n public string Class { get; set; } = null!;\n}\n", "text": "I have 2 suspicions about the problem: Your class definition has a problem, or somehow you are using a different namespace for one of your variables. Can you please check them first?I have run a simple test right now and it just works (uses new C# top-level coding style):", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I have found the code you referenced and it seems you are using a legacy driver: MongoDB.Driver.Legacy/MongoCollection.csthis changes things. can you please tell us which versions of C# and MongoDB driver you use?I may not be able to cop with the versions you use, so please accept my apology if I stay silent long. It is still plausible if you upgrade your version and the code should work out nicely.", "username": "Yilmaz_Durmaz" }, { "code": "BuildersQueryvar query = Query<Entity>.EQ(e => e.Id, id);\nvar entity = collection.FindOne(query);\n", "text": "Check this link if you have to use legacy driver : Getting Started - v1.11instead of Builders, filtering is done with Query class", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi @Yilmaz_Durmaz ,This is working fine , i was searching builder instead query. But query is working expected good.", "username": "mubarak_shaik" }, { "code": "", "text": "Hi @Yilmaz_Durmaz ,I am using below 2.18 drivers with latest dotnet framework 4.8 with c# 10.0 .\nimage1147×352 15.1 KB\n", "username": "mubarak_shaik" }, { "code": "MongoDB.Driver.LegacyMongoDB.Driverpublic virtual MongoCursor<TDefaultDocument> Find(IMongoQuery query)QueryBuildersQueryBuildersQuery", "text": "Don’t trust what you “see” as installed. It is about what you “use”.Check your code everywhere for a sign of MongoDB.Driver.Legacy first, and also check for MongoDB.Driver version lower than 2.xx (can be 1.11).I am insisting on these terms because of these reasons:this following code which your screen shot refers exists only in Legacy driver code:You are clearly state that Query works (which exists in Legacy code), Builders does not (which is in versions >=2.0).2.0 version upgrade notes tell that Query and others are replaced by BuildersIt is quite possible Visual Studio changed the used library to 1.xx version (or imported the wrong namespace) when you first tried to use Query (which is not part of 2.xx) as you possibly clicked auto-complete without knowing the result. This happens quite frequently to all of us and we mostly recognize the error only when things get pretty ugly and annoying.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi ,Thanks for your instruction.First of all i have not used so far 1.x version.Now i am using 2.18 version and can confirm further , but still need your help on any clue.\nimage526×717 16.4 KB\n", "username": "mubarak_shaik" }, { "code": "", "text": "I don’t have Visual Studio and having a hard time finding out how to replicate your issue with dotnet 6.0there are two packages in nuget repository; “mongocsharpdriver” for the legacy and “MongoDB.Driver” for the new. 
Both are seemingly at version 2.18, installing the same named dll files, thus making it hard to distinguish.If your project is not a big fat one, can you start a new one and copy source files (settings too if needed) Then add other libraries as usual but use “MongoDB.Driver” to install.Or first try removing all MongoDB packages from your project, and find and install “MongoDB.Driver” package, not “mongocsharpdriver” package.", "username": "Yilmaz_Durmaz" }, { "code": "// Legacy\n{\n MongoClient dbClient = new MongoClient(URI);\n\n var databaseL = dbClient.GetServer().GetDatabase(\"testme\");\n var query = Query<Model>.EQ(model => model.SId, 2);\n var collectionL = databaseL.GetCollection<Model>(\"grades\");\n var documentsL = collectionL.Find(query);\n Console.WriteLine(\"Legacy\");\n foreach(var doc in documentsL.ToList()){Console.WriteLine(doc.SId);};\n}\n// 2.xx\n{\n MongoClient dbClient = new MongoClient(URI);\n\n var database = dbClient.GetDatabase(\"testme\");\n var collection = database.GetCollection<Model>(\"grades\");\n var builder = Builders<Model>.Filter;\n var filter = builder.Eq(model => model.SId, 2);\n var documents = collection.Find(filter);\n Console.WriteLine(\"Current\");\n foreach(var doc in documents.ToList()){Console.WriteLine(doc.SId);};\n}\nvar database = dbClient.GetDatabase(\"testme\");\nvar databaseL = dbClient.GetServer().GetDatabase(\"testme\");\nGetServer()", "text": "Alright, I finally replicated your issue and got to the culprit code causing it. Below is the example code for both Legacy and 2.xx versions. You can check this and the other code I gave above for differences to modernize your code (or at least use it as a starter).The problem is caused by the way you access the database. This is the modern way:And the following is the legacy code to do the same:A very simple, easy to miss, extra code to remove: GetServer().Keep an eye out for similar code fragments in your journey ahead.PS: Working with legacy code is not easy. Yet it teaches new experiences. Took me a day but I consider it a well-spent time.", "username": "Yilmaz_Durmaz" } ]
db.collection<model>.Find(Filter)
2022-11-11T14:42:11.098Z
db.collection<model>.Find(Filter)
5,133
null
[ "node-js", "connecting", "atlas-cluster", "next-js" ]
[ { "code": "mongodb+srv://xxx.yyy.mongodb.net/?authSource=%24external&authMechanism=MONGODB-AWS\[email protected]: Cannot read properties of undefined (reading 'sso_session')\n at /apps/linksapp/node_modules/mongodb/lib/cmap/auth/mongodb_aws.js:192:22\n at processTicksAndRejections (node:internal/process/task_queues:96:5) {\n [Symbol(errorLabels)]: Set(2) { 'HandshakeError', 'ResetPool' }\n}\nMONGODB-X509", "text": "I’ve been working on a Next.js app that uses the latest MongoDB Node.js driver (4.11.0). It runs on AWS EC2 Amazon Linux 2.Connection string:This has worked without any issues for a while now. Today, I deployed a new version of my Next.js app, in which I also upgraded to [email protected]. No issues with the app overall localally (with local MongoDB server), but on AWS the app cannot connect to MongoDB Atlas anymore.Error:I didn’t change anything related to MongoDB. It suddenly just stopped working and from the error message I can’t work out, what’s wrong. I also didn’t change anything with the server config etc. The server was not restarted, just deployed a a new version of my app.I have now changed the mechanism to MONGODB-X509 and it works again. I will keep it that way, but I just wanted to share what happened in case it’s not just me.", "username": "Nick" }, { "code": "MongoAWSError: Cannot read properties of undefined/apps/linksapp/node_modules/mongodb/lib/cmap/auth/mongodb_aws.js:192:22", "text": "MongoAWSError: Cannot read properties of undefinedsome parts of the code returns a null/undefined value thus causing this error./apps/linksapp/node_modules/mongodb/lib/cmap/auth/mongodb_aws.js:192:22and it seems related to auth and explains why your change has worked.I am not pro for aws library so I can only guess the updated version has a breaking change. maybe you have missed an important upgrade note.anyways, if it works, keep it that way until you find out what the actual problem is ", "username": "Yilmaz_Durmaz" }, { "code": "npm install @aws-sdk/[email protected] { S3Client } = require(\"@aws-sdk/client-s3\");\nconst { ListBucketsCommand } = require(\"@aws-sdk/client-s3\");\n\nconst REGION = \"us-east-1\";\nconst s3Client = new S3Client({ region: REGION });\n\nlet run = async () => {\n try {\n const data = await s3Client.send(new ListBucketsCommand({}));\n console.log(\"Success\", data.Buckets);\n return data; // For unit tests.\n } catch (err) {\n console.log(\"Error\", err);\n }\n};\nrun().then(() => console.log(\"Done running\"));\n", "text": "Looks like an issue with AWS Javascript SDK 3.209.0:### Checkboxes for prior research\n\n- [X] I've gone through [Developer Guide](h…ttps://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide) and [API reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest)\n- [X] I've checked [AWS Forums](https://forums.aws.amazon.com) and [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js).\n- [X] I've searched for [previous similar issues](https://github.com/aws/aws-sdk-js-v3/issues) and didn't find any solution.\n\n### Describe the bug\n\nStart getting this error when deploying to AWS Elastic Beanstalk. I am trying to access Secret Manager.\n\n### SDK version number\n\n\"@aws-sdk/client-secrets-manager\": \"^3.209.0\"\n\n### Which JavaScript Runtime is this issue in?\n\nNode.js\n\n### Details of the browser/Node.js/ReactNative version\n\nv16.18.0\n\n### Reproduction Steps\n\n1. Deploy the AWS Secret Manager node.js sample code to Elastic Beanstalk.\n2. 
You will see the error when trying to get the secret.\n\n### Observed Behavior\n\n```\nNov 13 22:30:24 ip-10-0-2-9 web: TypeError: Cannot read properties of undefined (reading 'sso_session')\nNov 13 22:30:24 ip-10-0-2-9 web: at /var/app/current/node_modules/@aws-sdk/credential-provider-sso/dist-cjs/fromSSO.js:15:21\nNov 13 22:30:24 ip-10-0-2-9 web: at async coalesceProvider (/var/app/current/node_modules/@aws-sdk/property-provider/dist-cjs/memoize.js:14:24)\nNov 13 22:30:24 ip-10-0-2-9 web: at async SignatureV4.credentialProvider (/var/app/current/node_modules/@aws-sdk/property-provider/dist-cjs/memoize.js:33:24)\nNov 13 22:30:24 ip-10-0-2-9 web: at async SignatureV4.signRequest (/var/app/current/node_modules/@aws-sdk/signature-v4/dist-cjs/SignatureV4.js:86:29)\nNov 13 22:30:24 ip-10-0-2-9 web: at async /var/app/current/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:16:18\nNov 13 22:30:24 ip-10-0-2-9 web: at async StandardRetryStrategy.retry (/var/app/current/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)\nNov 13 22:30:24 ip-10-0-2-9 web: at async /var/app/current/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22\nNov 13 22:30:24 ip-10-0-2-9 web: at async getAWSSecrets (/var/app/current/packages/server/dist/infrastructure/settings/config.factory.js:104:21)\nNov 13 22:30:24 ip-10-0-2-9 web: at async InstanceWrapper.ConfigAWSFactory [as metatype] (/var/app/current/packages/server/dist/infrastructure/settings/config.factory.js:61:21)\nNov 13 22:30:24 ip-10-0-2-9 web: at async Injector.instantiateClass (/var/app/current/node_modules/@nestjs/core/injector/injector.js:344:37)\n```\n\n### Expected Behavior\n\nRetrieve the secret without errors.\n\n### Possible Solution\n\nProbably caused by this change https://github.com/aws/aws-sdk-js-v3/pull/4145/files#r1020812785 on https://github.com/aws/aws-sdk-js-v3/pull/4145\n\n@kuhe \n\n### Additional Information/Context\n\n_No response_I rolled back to 3.208.0 and it worked fine for me.You can see the same error with a simple script. First install the package:npm install @aws-sdk/[email protected] run this script:", "username": "Alan_Hamilton1" }, { "code": "", "text": "I sometimes miss the obvious reason: breaking BUG ", "username": "Yilmaz_Durmaz" }, { "code": "package.jsonmongodb@^4.11.0", "text": "It certainly makes sense, but I don’t even have any parts of the AWS SDK in my package.json. I assume, there may be an indirect dependency through mongodb@^4.11.0, but I can’t even see that. I have no idea what one is supposed to do here.", "username": "Nick" }, { "code": "overrides", "text": "If you are using a recent version of npm, you can add an overrides property that allows you to override the versions of nested dependencies.Specifics of npm's package.json handlingNeeds npm 8.3 or later to work.", "username": "Alan_Hamilton1" } ]
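For anyone who cannot move off the affected release immediately, a hedged sketch of the `overrides` approach mentioned above (npm 8.3+): pin the nested AWS SDK packages back to 3.208.0 in package.json. The exact package names to pin depend on your lockfile — the Node driver's MONGODB-AWS auth path pulls in `@aws-sdk/credential-providers`, and the SSO provider appears in the stack traces — so adjust the entries to whatever your tree actually contains.

```json
{
  "overrides": {
    "@aws-sdk/credential-providers": "3.208.0",
    "@aws-sdk/credential-provider-sso": "3.208.0"
  }
}
```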
Issue connecting to Atlas with Node.js driver using AWS EC2 authentication
2022-11-13T20:34:46.314Z
Issue connecting to Atlas with Node.js driver using AWS EC2 authentication
3,424
null
[]
[ { "code": "addProgressNotification", "text": "When opening a realm with FlexibleSyncConfiguration and setting an addProgressNotification token, the callback is only fired when 100% sync is complete. I’m not getting any updates during the download. This is problematic especially with very large datasets.Is this a bug, or is there an example how to properly get this working with Flexible Sync?", "username": "Tyler_Collins" }, { "code": "", "text": "Hi @Tyler_Collins,Progress notifications are not yet supported in flexible sync, but we are planning on adding support soon.", "username": "Kiro_Morkos" } ]
addProgressNotification only fires at 100% download
2022-11-14T03:23:03.858Z
addProgressNotification only fires at 100% download
1,179
https://www.mongodb.com/…8fcaf3cc8355.png
[ "thailand-mug" ]
[ { "code": "Senior Consulting Engineer, MongoDB", "text": "\nTHMUG960×540 121 KB\nThailand, MongoDB User Group is excited to launch and announce their first meetup in collaboration with The Knowledge Exchange: KX on Saturday, Nov 5. Floor 13The event will include two demo-based sessions that will provide you with a quick introduction to MongoDB Atlas and MongoDB Atlas Search. It will be followed up by a demo of getting your own MongoDB Atlas cluster on Amazon Web Services that is free forever.The last session would be a demo session, in which we will demonstrate how you can increase performance using an index and how to monitor it.We will also have fun Networking Time to meet some of the developers, customers, architects, and experts in the region. Not to forget there will also be, Trivia Swags , and Break. If you are a beginner or have some experience with MongoDB already, there is something for all of you!*Registration open at 13:00 PM Bangkok Time (GMT+7)Event Type: In-Person\n Location: The Knowledge Exchange: KX, Bangkok, Thailand.\n 110 1 Krung Thon Buri Road, Bang Lamphu Lang, Khlong San, Bangkok 10600, Thailand\nFloor: 13To RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Senior Consulting Engineer, MongoDBJoin the Thailand MUG to stay updated with upcoming meetups and discussions in Thailand.", "username": "Piti.Champeethong" }, { "code": "", "text": "Very nice, might join if i’m free\nCan i go to this event via BTS?", "username": "Kan_N_A" }, { "code": "", "text": "Nice to meet you. Yes, you can. The place closes to BTS วงเวียนใหญ่. https://maps.app.goo.gl/4rk6FXPiQcTrzX5G7 ", "username": "Piti.Champeethong" }, { "code": "", "text": "Hi everyone! Thank you to everyone for joining the first meetup in Thailand on 05 Nov for the first meetup, and I hope all of you have enjoyed yourself and learnt something new about MongoDB. For those who were unable to make it, we hope to see you next time. Here are some pictures from the first meetup.\nthmug071920×1440 305 KB\n\nthmug091330×1773 268 KB\n\nthmug101330×1773 243 KB\n\nthmug121330×1773 254 KB\n\nthmug051920×1281 179 KB\n", "username": "Piti.Champeethong" } ]
Thailand MUG: Inaugural In-Person Meetup!
2022-10-10T15:30:48.237Z
Thailand MUG: Inaugural In-Person Meetup!
5,009
null
[ "queries", "python" ]
[ { "code": "{ \"type\": \"command\", \"ns\": \"steel_command.inventory_props_cache\", **\"correlation_id\": \"some_id\",** \"command\": { \"getMore\": 7276652552318456000, \"collection\": \"inventory_props_cache\", \"lsid\": { \"id\": { \"$binary\": { \"base64\": \"G+3BvDN4SSif58z5vhG/sw==\", \"subType\": \"04\" } }", "text": "I’m looking for a way to make reviewing the Atlas Profiler more useful by adding a CorrelationId to my collection queries. I’d like to be able to do something like this (using python):db[“my_collection”].find({}, corr_id=“some_guid”)Then, when looking at the profiler I’d like to be able to see this correlationId in the log:{ \"type\": \"command\", \"ns\": \"steel_command.inventory_props_cache\", **\"correlation_id\": \"some_id\",** \"command\": { \"getMore\": 7276652552318456000, \"collection\": \"inventory_props_cache\", \"lsid\": { \"id\": { \"$binary\": { \"base64\": \"G+3BvDN4SSif58z5vhG/sw==\", \"subType\": \"04\" } }Is there something like this already built into Mongo to allow us to more easily correlate a mongo log to an application-level event?Thank you!\nMatt", "username": "Matt_Jones2" }, { "code": "", "text": "Hi @Matt_Jones2 and welcome in the MongoDB Community !To be honest this feature doesn’t ring a bell here. Maybe I missed it but it never reached my ears.You could submit this idea in the feedback engine if it’s not already suggested. https://feedback.mongodb.com/Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "$comment", "text": "Would the $comment field work?", "username": "Victor_Balakine" }, { "code": "", "text": "Thanks, Victor! That is exactly what I was needing.", "username": "Matt_Jones2" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Pass CorrelationId during mongo queries and commands
2022-05-27T16:09:02.672Z
Pass CorrelationId during mongo queries and commands
2,146
https://www.mongodb.com/…b2958286aa19.png
[ "c-driver" ]
[ { "code": "......\nadd_subdirectory(ThirdParties/poco)\nadd_subdirectory(ThirdParties/mongo-c-driver)\n......\ntarget_link_libraries(tests PUBLIC Poco::Foundation)\ntarget_link_libraries(tests PUBLIC mongo::mongoc_shared)\n......\nCMake Error at CMakeLists.txt:12 (target_link_libraries):\n Target \"tests\" links to:\n\n mongo::mongoc_shared\n\n but the target was not found. Possible reasons include:\n\n * There is a typo in the target name.\n * A find_package call is missing for an IMPORTED target.\n * An ALIAS target is missing.\n", "text": "Hello, I am still new to CMake and try to use the mongoc driver. I don’t want to use install feature, I just wish to have my dependencies as git submodules in my main git repository.\n\nThis is my folder hierarchy with two dependencies submodules, poco and mongoc.Then in my top CMake file I add this:For the last line which links with library mongo_shared, I just copied what I found in the hello_mongo example.\nFor some reason I’ve not grasped yet, when doing this, I get the following error:To fix this, I need to replace “mongo::mongoc_shared” with just “mongoc_shared”, thus removing the namespace mongo, when the same works for namespace Poco.What did I miss there?", "username": "Michael_El_Baki" }, { "code": "add_library(mongo::mongoc_shared ALIAS mongoc_shared)\n", "text": "I could fake it with an alias like this:so I guess it was the find_package which is ultimately responsible for creating the namespace mongo:: but I could not figure out how, comparing with what Poco does, given I do not use find_package for Poco linking.", "username": "Michael_El_Baki" } ]
Using CMake to develop with mongoc lib
2022-11-14T14:17:05.951Z
Using CMake to develop with mongoc lib
2,122
null
[ "aggregation", "queries" ]
[ { "code": "{\n name: \"foo\",\n businessDate: 20221113,\n version: 3\n ...\n}\nname:1, businessDate: -1, version -1 {\n $sort: {\n name: 1,\n businessDate: -1,\n version: -1\n }\n}\n{\n $group: {\n _id: {\n name: '$name',\n date: '$businessDate'\n },\n first: {\n $first: '$$ROOT'\n }\n }\n}, {\n $replaceRoot: {\n newRoot: '$first'\n }\n}\n{$match: { version: 0}}group$groupexplain", "text": "In our project records look something like this:My problem is I need to find the highest versioned record for each name and date. If I have an index on name:1, businessDate: -1, version -1, then I can take the first record for each index. Something like so:This works fine, and appears to be leveraging the index. However it goes a LOT slower than a simple query to find the oldest version: {$match: { version: 0}} - which ideally should be equally fast! Worse if the query is big enough I get out-of-memory errors with my group stage.According to the documentation $group can use an index in this situation to speed things up. However when I did an explain I couldn’t see any evidence for this.Is it possible to check this optimisation is taking effect?How can I get the query that returns the latest version to be just as fast as the query that returns the oldest version?", "username": "Matthew_Shaylor" }, { "code": "", "text": "Hi @Matthew_Shaylor , and welcome back to the community forums. I just tested your query against a very small number of documents and I see that the aggregation is using an index:\nimage1670×537 65.7 KB\nCan you state how many documents are in your collection? It appears that there are other fields that are not being shown. Are there a large number of these fields? Is the document size large? All of this can play into things being slow.Not knowing anything about your data it’s hard to build a representative test to say for sure how to speed things up, but the query should indeed be using an index.", "username": "Doug_Duncan" }, { "code": "GroupBy$count(){\n $match: {\n businessDate: 20221114,\n version: 1\n }\n},\n {\n $sort: {\n name: 1,\n businessDate: -1,\n version: -1\n }\n}\n{\n $group: {\n _id: {\n name: '$name',\n date: '$businessDate'\n },\n first: {\n $first: '$$ROOT'\n }\n }\n}, {\n $replaceRoot: {\n newRoot: '$first'\n }\n}\n$match$groupbyunique", "text": "@Doug_Duncan yeah I can see it uses an index. But it will use that index even if I pick a non-trivial GroupBy such as $count() - so that use of the index alone is not enough to show any optimisation has occured.There are other fields in the documents, average document size is 25kb. There are a few million documents in the collection - though for a given name and date usually there’s only a single document, sometimes up to 3 or 4 but rarely more than that.Here’s another way of looking at the issue: Let’s say I have the following pipline:If I query this way, it should be VERY fast because each groupby will contain a single document (because all index fields are either in the $match filter or the $groupby index, and the collection index is unique). However it seems MongoDb isn’t able to leverage this, so the query is slow. I think that even though the indexes guarantee there’s only a single document in the groupby, the pipeline has to wait until all documents have been processed before returning anything!", "username": "Matthew_Shaylor" } ]
Optimising $first aggregations (SQL having max() equivalent)
2022-11-14T11:59:54.852Z
Optimising $first aggregations (SQL having max() equivalent)
927
null
[ "containers" ]
[ { "code": "", "text": "Why does pulling the latest docker image for mongo yield version 4.4.18? That makes no sense considering the support documents say v6 is the “current stable release” - https://www.mongodb.com/docs/upcoming/release-notes/“docker pull mongo:latest” should download some version of mongo v6.Also, is Windows a requirement of mongo v6? I can’t find any images of it that aren’t based on windows 0_0", "username": "q3DM17" }, { "code": "latestmongomongo", "text": "Hi @q3DM17,The “Docker Official” images on Docker Hub are maintained by the Docker Community. I’m not sure why 4.4 would be tagged as latest, but they seem to have 5.0 and 6.0 available as well: GitHub - docker-library/mongo: Docker Official Image packaging for MongoDB.I would try:docker pull mongo:6.0You can raise issues/questions in their GitHub repo: Issues · docker-library/mongo · GitHub.Also, is Windows a requirement of mongo v6No. Linux, Mac, and Windows are all Supported Platforms per the MongoDB Production Notes.If you search by tag versions on Docker Hub, there appear to be several MongoDB 6.0 versions available for Linux: Docker Hub mongo 6.0. The Docker Hub mongo Readme has more information on supported architectures and image variants they produce.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Considering the \"current stable release\" is v6 and not v4, why does mongo:latest… point to v4.4? https://www.mongodb.com/docs/upcoming/release-notes/", "username": "q3DM17" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Docker pull mongo:latest gives v4.4?
2022-11-14T09:03:24.062Z
Docker pull mongo:latest gives v4.4?
4,025
null
[ "compass" ]
[ { "code": "", "text": "Hello.I want to import the data from Postman like collections that I made there. But I don’t want to use Compass for importing.What is the best way to import the Postman Data Collections and Environment from Postman to Mongodb Atlas, so that I donot néed to do that manually again?I was trying to do with the DATA API in Atlas but I think this is not the right way to that as it is most used for managing the Mongodb collections from Mongodb to Postman. But in my case I need the opposite way i.e. my built in colections from Postman to database (ATLAS).please correct me of I am wrong.Thanks:)", "username": "Anirudh_Suri" }, { "code": "mongoshinsertOneinsertManyitem\nconst collection = fs.readFileSync('/path/to/postman/collection.json', 'utf8');\nconst parsedCollection = JSON.parse(collection);\n\n//switch to `postman` database\nuse('postman');\n\n//insert all as one document\ndb.mydestinationCollection.insertOne(parsedCollection);\n\n//insert all the items in the collection as separate documents\ndb.mydestinationCollection.insertMany(parsedCollection.items);\n\n", "text": "Postman collections and environments are just JSON files, right? You could just load them up into mongosh, JSON parse them and insert all of it or the portion you want with an insertOne/insertMany. Do you want to insert one document for the entire Postman collection or do you only want to insert the item in the Postman collection into MongoDB?Either way, something like this should work", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Thanks a lot. I will try through this way ", "username": "Anirudh_Suri" } ]
Importing the data from Postman to MongoDB Atlas
2022-11-14T12:07:39.261Z
Importing the data from Postman to MongoDB Atlas
1,742
null
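A note on the Postman import thread above: Postman Collection Format v2.1 exports typically keep the requests in a top-level `item` array (not `items`), so a slight variant of the snippet in the reply may be needed. The sketch below is a minimal mongosh example under that assumption; the file path and the `postman` / `requests` names are placeholders.

```js
// Minimal mongosh sketch, assuming a Postman Collection Format v2.1 export.
// The file path and the 'postman' / 'requests' names are placeholders.
const fs = require('fs');

const raw = fs.readFileSync('/path/to/postman/collection.json', 'utf8');
const parsed = JSON.parse(raw);

const target = db.getSiblingDB('postman');

// v2.1 exports usually keep the requests in a top-level `item` array
target.requests.insertMany(parsed.item);

// or keep the whole export (info + item) as a single document
target.collections.insertOne(parsed);
```

Environment exports are a single JSON object (with a `values` array) and can be inserted the same way with insertOne.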
[ "node-js" ]
[ { "code": "2022-11-13T19:48:27.138026+00:00 app[web.1]: GET /db/levels/sp/all 200 30001.170 ms - 2\n2022-11-13T19:48:27.468379+00:00 heroku[router]: at=error code=H12 desc=\"Request timeout\" method=POST path=\"/db/levels/sp/favs/0\" host=gswitchcdb.herokuapp.com request_id=60a029ce-814a-496f-aba5-fbd4c355fed7 fwd=\"89.114.110.197\" dyno=web.1 connect=0ms service=30000ms status=503 bytes=0 protocol=https\n2022-11-13T19:48:27.470260+00:00 app[web.1]: POST /db/levels/sp/favs/0 - - ms - -\n2022-11-13T19:48:27.470478+00:00 app[web.1]: error: MongooseServerSelectionError: Server selection timed out after 30000 ms\n", "text": "After upgrading from the free tier to M10, my requests made from my website, to a Heroku node.js app (which I also upgraded), to the MongoDB, get no response (from Heroku to MongoDB Atlas). But it works fine if I do the requests locally. What could have gone wrong? It was working before the upgrades, I’ve checked the username, password, and I have 0.0.0.0/0 IP address in the network access. Here is the relevant log:Any help is much appreciated, thanks in advance.", "username": "Vasco_Freitas" }, { "code": "", "text": "I haven’t done that before but the upgrade might have changed DNS names for your cluster. I suggest checking the connection string first since you already have the allowance from “anywhere”.", "username": "Yilmaz_Durmaz" }, { "code": "&w=majority", "text": "It seems that the issue was indeed related to the connection string. For some reason, on the Heroku version I left out &w=majority at the end of the connection string, which seemed to work before but not now. After adding it, it works as expected. Thanks.", "username": "Vasco_Freitas" }, { "code": "", "text": "I think you should accept your response as the solution because it fits more like to be an answer. Mine was a suggestion, and although it is in the connection string, it is different than what was in my mind.Cheers ", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Request timeout after upgrading cluster to M10
2022-11-13T23:27:50.210Z
Request timeout after upgrading cluster to M10
1,168
null
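To make the fix in the thread above concrete, here is a minimal Node.js/Mongoose sketch; the credentials, host, and database name are placeholders, and the option list mirrors the SRV string Atlas generates. The comment only restates what the poster reported (adding &w=majority back resolved the timeouts), not a general claim about the option.

```js
// Minimal Node.js/Mongoose sketch; <user>, <password>, <cluster-host> and <dbname>
// are placeholders for the values from the Atlas connection dialog.
const mongoose = require('mongoose');

// The poster's Heroku config was missing the trailing &w=majority option;
// they report that adding it back fixed the server-selection timeouts after the M10 upgrade.
const uri =
  'mongodb+srv://<user>:<password>@<cluster-host>/<dbname>?retryWrites=true&w=majority';

mongoose
  .connect(uri, { serverSelectionTimeoutMS: 30000 })
  .then(() => console.log('connected'))
  .catch((err) => console.error(err)); // e.g. MongooseServerSelectionError
```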
[ "queries" ]
[ { "code": "Messages: { \"[ ]\" }messages.length == \"\"messages.length == nullmessages.length == \"[ ]\"", "text": "Hi there, I have been using Mongodb for a while now, and love it.Got something I noticed, and haven’t been able to fix on my end… My schema created an entry in a collection, this collection then has (a) messages: { msgcontent, _id, _from } nested array, while pulling the message for example [0] then I am left with Messages: { \"[ ]\" }… In my code I have an if(messages){…}else{ display “you have no messages”… with this pull I am left without empty inbox message, what am I to use when referring to this empty array? I have tried a few options, such as if messages.length == \"\" or if messages.length == null, or if messages.length == \"[ ]\" display the empty inbox message… But no success…Can someone point me in the right direction here? I am lost, and that empty inbox message is one of the cherries on top of the cake, won’t be 100% well without it…Heeeelp x)EDIT: this is NOT a check box, its a pair of brackets.", "username": "Zoo_Zaa" }, { "code": "Atlas rent-shard-0 [primary] test> x= []\n[]\nAtlas rent-shard-0 [primary] test> x.length\n0\nAtlas rent-shard-0 [primary] test> \nAtlas rent-shard-0 [primary] test> x = { _id : 1 }\n{ _id: 1 }\nAtlas rent-shard-0 [primary] test> x._id\n1\nAtlas rent-shard-0 [primary] test> x._Id\n\nAtlas rent-shard-0 [primary] test>\n", "text": "Assuming JS code.The attribute length of an array is a number. Comparing it to strings likemessages.length == “”andmessages.length == “”or with nullmessages.length == nullis definitively wrong. I am not too savvy in JS, but I suspect it has a length equals to the number 0. In mongosh, it does work this way:You might also have typo in your code. I one place you writeMessages:and at other places you usemessageslower case m versus upper case M.Field names are case sensitive:", "username": "steevej" }, { "code": "[ ]", "text": "I’ll test it out and I’ll let you know…btw, that leaves a dilema in my hand now, if [ ] equals to 0 then, the empty messages wont display, cause there is ‘one message’… that one message, is an empty array… If I leave the if with if (messages.length <1) that will hide one message…I could leave an undeletable message, but that doesn’t make sense according to my logic here… Gotta think of something… any ideas?", "username": "Zoo_Zaa" }, { "code": "", "text": "The logic of array is simple.An empty array named messages ( messages = [ ] ) should mean no messages, rather than an empty message.An empty message should be an empty object like { }. 
So if your array has 1 empty message, it should be equal to [ { } ].", "username": "steevej" }, { "code": "Messages: { [ ] }Messages: { [1]: { msgcontent } }", "text": "I like your logic, yet should be is something I am not sure if it really is… When created the schema the empty inbox message appears, then when added messages, that field becomes an array… Upon deleting the last message, the array is left with Messages: { [ ] } leaving thus a 0 as the empty inbox field, but a 0 being a number leaves the odd event of counting as a message therefore the empty inbox message does not appear… Is there a way to remove completely the array entry?Example: Messages: { [1]: { msgcontent } }Such as $pull and the array is then left with Messages: “” ?I imagine that means I must change the field type, simply doesn’t add up when deleting messages and coding/figuring out that the last deleted message also transforms the field into string…\nBut I will try now with the if (…) { … }else{ with this logic of yours.", "username": "Zoo_Zaa" }, { "code": "\t\t<% if (messages) { %>\n\t\t\t<% for (let nr = 1; nr <= messages.length; nr++) { %>\n\t\t\t\t<div class=\"cardmsgpage\">\n\t\t\t\t\t<div class=\"wraptkn\">Message <%= [nr] %> : <%= messages[nr-1].msgcontent %> </div>\n\t\t\t\t</div>\n\t\t\t<% } %>\n\t\t<% } else { %>\n\t\t\t<div class=\"displayempty\">You currently have no messages.</div>\n\t\t<% } %>\n", "text": "Heres the code:This else is only applied if there are no messages… working this out now with an if within if(messages)…", "username": "Zoo_Zaa" }, { "code": "if (messages)...<% if (messages == 0) {%>\n\t\t<div class=\"displayempty\">You currently have no messages.</div>\n<% } %>\n", "text": "I have successfully done it, I was using the if within the if (messages)...Separated it and used:Worked like a beauty!Thank you.", "username": "Zoo_Zaa" }, { "code": "\n//const messages is an array of unknown size, ranging from a) null, to b) no data to c) array with data. \n\n//check if messages array exists at all (is not null) and if yes, find its length\n if (messages && messages.length !== 0) {\n// messages found code goes here\n}\n\nelse {\n // NO messages found code goes here\n}\n\nmessages.length === 0 && %><div class=\"displayempty\">You currently have no messages</div><%\n", "text": "This is possible, especially if you are working in JSX:", "username": "RENOVATIO" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Nested array pulled, [ ] was left, what is this in the array?
2022-11-12T13:58:38.962Z
Nested array pulled, [ ] was left, what is this in the array?
1,887
null
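One detail the thread above leaves implicit: the accepted `messages == 0` check works because of JavaScript's loose-equality coercion, under which an empty array coerces to the number 0. A small plain-JavaScript sketch of the two checks discussed in the thread:

```js
// Plain JavaScript; shows why the loose comparison from the accepted fix works,
// alongside the explicit length check suggested later in the thread.
const empty = [];
const oneMessage = [{ msgcontent: 'hi' }];

console.log(empty == 0);      // true  -> [] coerces to '' and then to 0
console.log(oneMessage == 0); // false -> a non-empty array does not coerce to 0

// The explicit check avoids relying on coercion:
console.log(empty.length === 0);      // true  -> show "You currently have no messages."
console.log(oneMessage.length === 0); // false -> render the messages
```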
[ "aggregation" ]
[ { "code": "{\n \"main\": \"xxx\",\n \"phone_numbers\": [\n \"+1234567890\",\n \"+0987654321\"\n ],\n \"branch\": [{\n \"company\": {\n \"name\": \"c_name1\",\n \"head\": \"M.J.\",\n \"location\": {\n \"address\": \"address1\",\n \"phone\": \"phone1\"\n },\n \"url\": \"name1.com\"\n },\n \"title\": {\n \"name\": \"c_name1\",\n \"role\": \"marketing\",\n \"respons\": [\n \"marketing\",\n \"business administration\"\n ]\n },\n \"prev_location\": [\n \"loc_name1\",\n \"loc_name2\"\n ],\n \"summary\": \"summary1\"\n },\n {\n \"company\": {\n \"name\": \"c_name2\",\n \"head\": \"J.S.\",\n \"location\": {\n \"address\": \"address2\",\n \"phone\": \"phone1\"\n },\n \"url\": \"name2.com\"\n },\n \"title\": {\n \"name\": \"c_name2\",\n \"role\": \"HR\",\n \"respons\": [\n \"planning\",\n \"supervising the employment\"\n ]\n },\n \"location_names\": [\n \"loc_name3\",\n \"loc_name4\"\n ],\n \"summary\": \"summary2\"\n }\n ]\n}\n{\n\"main\": \"xxx\",\n\"phone_numbers\": \"+1234567890 , +0987654321\",\n\"branch0-company\": \"name: c_name1; head: M.J.; address: address1; phone: phone1; url: name1.com\",\n\"branch0-title\": \"name: c_name1; role: marketing; respons: marketing,business administration\".\n\"branch0-prev_location\": \"loc_name1; loc_name2\".\n\"branch0-summary\": \"summary1\"\n\"branch1-company\": \"name: c_name2; head: J.S.; address: address2; phone: phone2; url: name1.com\",\n\"branch1-title\": \"name: c_name1; role: marketing; respons: planning, supervising the employment \".\n\"branch1-prev_location\": \"loc_name3\", \"loc_name4\".\n\"branch1-summary\": \"summary2\"\n}\n", "text": "I’m trying to flatten and merge a set of mixed objects\\arrays but failed (\nSample:Desired output:The only thing I managed to do is flattening root array - Mongo playground\nAny help or advice will be appreciated, thanks a lot.", "username": "V_V1" }, { "code": "", "text": "Hi @V_V1 ,Why won’t you write a client side function that parse a returned document this way.Doing this logic inside a projection or aggregation will be over complex compare to the string parsing capabilities of the client side code Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "2Pavel\nIt’s a part of transformation chain (MongoDB JSON → CSV input to full text search platform) .\nThis Is the Way (c) ", "username": "V_V1" }, { "code": "", "text": "Is it a piece of code that does MongoDB Json → CSV?We have atlas search allowing you to do full text search without the need of transformations or flattening…", "username": "Pavel_Duchovny" }, { "code": "", "text": "2Pavel\nYes, sort of \nI know, but it is production platform, so there is no other way…", "username": "V_V1" } ]
Flatten and merge a set of mixed objects\arrays
2022-11-13T13:47:53.665Z
Flatten and merge a set of mixed objects\arrays
1,773
https://www.mongodb.com/…9_2_1024x808.png
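The client-side approach suggested in the thread above is not shown in code, so here is a minimal Node.js sketch of the kind of flattening function it describes. The field names come from the sample document and the separators mirror the desired output; the CSV serialisation step is left out and would need to be adapted to the target platform.

```js
// A minimal Node.js sketch of the client-side flattening suggested above.
// Field names follow the sample document; the CSV step is omitted.
function flattenDoc(doc) {
  const out = {
    main: doc.main,
    phone_numbers: (doc.phone_numbers || []).join(' , '),
  };

  (doc.branch || []).forEach((b, i) => {
    const c = b.company || {};
    const loc = c.location || {};
    out[`branch${i}-company`] =
      `name: ${c.name}; head: ${c.head}; address: ${loc.address}; phone: ${loc.phone}; url: ${c.url}`;

    const t = b.title || {};
    out[`branch${i}-title`] =
      `name: ${t.name}; role: ${t.role}; respons: ${(t.respons || []).join(',')}`;

    // the sample uses prev_location in one branch and location_names in the other
    out[`branch${i}-prev_location`] =
      (b.prev_location || b.location_names || []).join('; ');

    out[`branch${i}-summary`] = b.summary;
  });

  return out;
}

// usage: pass each document returned by find() through flattenDoc()
// before writing the corresponding CSV row.
```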
[ "queries", "node-js" ]
[ { "code": "", "text": "i am getting Error on mongo database writing operation. i log into my server side mongo db take time more then 1 minutes.i attached Error some time on server side.my code init to write into database when db response it was more then one minuteswhen success database write operation storage 978 bytes\nScreenshot from 2022-11-04 13-20-48-21045×825 86.8 KB\n", "username": "Sanjay_Makwana" }, { "code": "{ w: \"major$\", ... }\n{ w: \"majority\", ... }\n", "text": "Hi @Sanjay_Makwana, welcome to the community.\nSeems like you have mentioned an incorrect value for write concern.\nAs mentioned in the screenshot posted by you, the following configuration is invalid:Which should be replaced by the following:Learn more about writeConcern in our documentation.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "@SourabhBagrecha thanks lots. it was working.", "username": "Sanjay_Makwana" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I am using mongo db cloud on GCP basic plan database option take too long time
2022-11-10T11:04:09.996Z
I am using mongo db cloud on GCP basic plan database option take too long time
1,343
null
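For completeness on the write-concern thread above, a minimal Node.js driver sketch showing where the corrected value goes; the connection string, database, and collection names are placeholders.

```js
// Minimal Node.js driver sketch with the corrected write concern;
// the connection string and the 'mydb' / 'mycollection' names are placeholders.
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb+srv://<user>:<password>@<cluster-host>/', {
  writeConcern: { w: 'majority' }, // not "major$"
});

async function run() {
  try {
    await client.connect();
    const col = client.db('mydb').collection('mycollection');
    // the write concern can also be passed per operation
    await col.insertOne({ createdAt: new Date() }, { writeConcern: { w: 'majority' } });
  } finally {
    await client.close();
  }
}

run().catch(console.error);
```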
[ "database-tools", "backup", "storage" ]
[ { "code": "", "text": "I have a 63GB bson file which generated by mongodump from a collection in MongoDB ClusterA.\nAfter restored to a new MongoDB clusterB, I found that the larger the numInsertionWorkersPerCollection, the more serious the data expansion.\nHere is what I observed:“file bytes available for reuse” in collectionStats result is very low, about 1MB.\nSo I wrote a tool to analyze the padding size in WT data blocks, and found more paddings when storageSize is larger.So, what confused me is:My MongoDB version :4.4.13 (community)", "username": "FirstName_pengzhenyi" }, { "code": "", "text": "", "username": "FirstName_pengzhenyi" }, { "code": "db.collection.stats()", "text": "Hi @FirstName_pengzhenyi welcome to the community!Could you provide the output of db.collection.stats() for all those collections please?Also, I am assuming that you’re working with one collection and not multiple. Is this correct?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "I did the restore operation again, and get the result (MongoDB Version 4.4.13 community):", "username": "FirstName_pengzhenyi" }, { "code": "PRIMARY> db.coll1.stats()\n{\n \"ns\" : \"tenant_cxd.coll1\",\n \"size\" : 67615718696,\n \"count\" : 151810196,\n \"avgObjSize\" : 445,\n \"storageSize\" : 18287190016,\n \"freeStorageSize\" : 274432,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:tenant_cxd/collection-48-3303682114623090744\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 935676,\n \"blocks allocated\" : 
948769,\n \"blocks freed\" : 29736,\n \"checkpoint size\" : 18286899200,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 274432,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 18287190016,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 28601,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 4,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 760262,\n \"bytes dirty in the cache cumulative\" : 223673489,\n \"bytes read into cache\" : 237207,\n \"bytes written from cache\" : 70364761333,\n \"checkpoint blocked page eviction\" : 74,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 372,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 36170,\n \"eviction walk target pages histogram - 0-9\" : 8988,\n \"eviction walk target pages histogram - 10-31\" : 5489,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 7706,\n \"eviction walk target pages histogram - 64-128\" : 13987,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 3780,\n \"eviction walks gave up because they restarted their walk twice\" : 10727,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 4670,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 2387,\n \"eviction walks reached end of tree\" : 32947,\n \"eviction walks restarted\" : 0,\n \"eviction walks started from root of tree\" : 21564,\n \"eviction walks started from saved location in tree\" : 14606,\n \"hazard pointer blocked page eviction\" : 157,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history store table out-of-order updates that were fixed up by reinserting 
with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 19293,\n \"in-memory page splits\" : 9536,\n \"internal pages evicted\" : 9034,\n \"internal pages split during eviction\" : 91,\n \"leaf pages split during eviction\" : 9670,\n \"modified pages evicted\" : 18694,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 1,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 3,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 304527404,\n \"pages seen by eviction walk\" : 68164462,\n \"pages written from cache\" : 948319,\n \"pages written requiring in-memory restoration\" : 8714,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 797730\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 17,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 1930120,\n \"pages visited\" : 3762345\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 131072,\n \"compressed pages read\" : 3,\n \"compressed pages written\" : 923906,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 24413\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total 
number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 335268,\n \"close calls that result in cache\" : 335268,\n \"create calls\" : 11,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 1,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 151810196,\n \"insert key and value bytes\" : 68357901840,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 1,\n \"open cursor count\" : 0,\n \"operation restarted\" : 0,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 670536,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 0,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 696647856,\n \"approximate byte size of transaction IDs in pages written\" : 125936,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 1434275,\n \"internal page multi-block writes\" : 459,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 9878,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 55,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 19609,\n \"page reconciliation calls for eviction\" : 12146,\n \"pages deleted\" : 17,\n \"pages written including an aggregated newest start durable timestamp \" : 19048,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 1880,\n \"pages written including an aggregated oldest start timestamp \" : 18879,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 415598,\n \"pages written including at least one start timestamp\" : 415598,\n \"pages written including at least one start transaction ID\" : 125,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 43540491,\n \"records written including a start timestamp\" : 43540491,\n \"records written including a start transaction ID\" : 15742,\n \"records written including a stop durable timestamp\" : 0,\n \"records written including a 
stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates removed from history store\" : 0,\n \"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 1,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 3575508992,\n \"totalSize\" : 21862699008,\n \"indexSizes\" : {\n \"_id_\" : 3575508992\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667295531, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667295531, 1)\n}\n", "text": "numInsertionWorkersPerCollection=1:", "username": "FirstName_pengzhenyi" }, { "code": "PRIMARY> db.coll2.stats()\n{\n \"ns\" : \"tenant_cxd.coll2\",\n \"size\" : 67615718696,\n \"count\" : 151810196,\n \"avgObjSize\" : 445,\n \"storageSize\" : 19349106688,\n \"freeStorageSize\" : 995328,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:tenant_cxd/collection-50-3303682114623090744\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter 
pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 1229055,\n \"blocks allocated\" : 1238528,\n \"blocks freed\" : 16723,\n \"checkpoint size\" : 19348094976,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 995328,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 19349106688,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 28603,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 5,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 102663,\n \"bytes dirty in the cache cumulative\" : 114576824,\n \"bytes read into cache\" : 377949,\n \"bytes written from cache\" : 70886249949,\n \"checkpoint blocked page eviction\" : 67,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 673,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 40884,\n \"eviction walk target pages histogram - 0-9\" : 7072,\n \"eviction walk target pages histogram - 10-31\" : 6162,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 7802,\n \"eviction walk target pages histogram - 64-128\" : 19848,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 2720,\n \"eviction walks gave up because they restarted their walk twice\" : 3502,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 7720,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 3957,\n \"eviction walks reached end of tree\" : 21225,\n \"eviction walks 
restarted\" : 0,\n \"eviction walks started from root of tree\" : 17901,\n \"eviction walks started from saved location in tree\" : 22983,\n \"hazard pointer blocked page eviction\" : 127,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 19945,\n \"in-memory page splits\" : 9552,\n \"internal pages evicted\" : 12104,\n \"internal pages split during eviction\" : 122,\n \"leaf pages split during eviction\" : 9671,\n \"modified pages evicted\" : 21753,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 2,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 5,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 323230941,\n \"pages seen by eviction walk\" : 85414700,\n \"pages written from cache\" : 1238296,\n \"pages written requiring in-memory restoration\" : 9122,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 1140033\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 19,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 1115112,\n 
\"pages visited\" : 3005558\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 131072,\n \"compressed pages read\" : 5,\n \"compressed pages written\" : 1216404,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 21892\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 335267,\n \"close calls that result in cache\" : 335267,\n \"create calls\" : 9,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 1,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 151810196,\n \"insert key and value bytes\" : 68357901840,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 1,\n \"open cursor count\" : 0,\n \"operation restarted\" : 1445,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 670534,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 0,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 2055183120,\n \"approximate byte size of transaction IDs in pages written\" : 411056,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 1314174,\n \"internal page multi-block writes\" : 248,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 9756,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 50,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 22226,\n \"page reconciliation calls for eviction\" : 13129,\n \"pages deleted\" : 19,\n \"pages written including an aggregated newest start durable timestamp \" : 21284,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 929,\n \"pages written including an aggregated oldest start timestamp \" : 21269,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 1124935,\n \"pages written including at least one start timestamp\" : 1124935,\n \"pages written including at least one start transaction ID\" : 441,\n \"pages written including at least one stop durable 
timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 128448945,\n \"records written including a start timestamp\" : 128448945,\n \"records written including a start transaction ID\" : 51382,\n \"records written including a stop durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates removed from history store\" : 0,\n \"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 1,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 5042601984,\n \"totalSize\" : 24391708672,\n \"indexSizes\" : {\n \"_id_\" : 5042601984\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667295671, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667295671, 1)\n}\n", "text": "numInsertionWorkersPerCollection=2:", "username": "FirstName_pengzhenyi" }, { "code": "PRIMARY> db.coll4.stats()\n{\n \"ns\" : \"tenant_cxd.coll4\",\n \"size\" : 67615718696,\n \"count\" : 151810196,\n \"avgObjSize\" : 445,\n \"storageSize\" : 20934098944,\n \"freeStorageSize\" : 1286144,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : 
\"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:tenant_cxd/collection-52-3303682114623090744\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 1279637,\n \"blocks allocated\" : 1285906,\n \"blocks freed\" : 10007,\n \"checkpoint size\" : 20932796416,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 1286144,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 20934098944,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 28604,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 5,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 335628,\n \"bytes 
dirty in the cache cumulative\" : 77408265,\n \"bytes read into cache\" : 57729,\n \"bytes written from cache\" : 70951051955,\n \"checkpoint blocked page eviction\" : 32,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 1375,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 247881,\n \"eviction walk target pages histogram - 0-9\" : 215953,\n \"eviction walk target pages histogram - 10-31\" : 5715,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 8715,\n \"eviction walk target pages histogram - 64-128\" : 17498,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 2211,\n \"eviction walks gave up because they restarted their walk twice\" : 213402,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 4377,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 2454,\n \"eviction walks reached end of tree\" : 435714,\n \"eviction walks restarted\" : 0,\n \"eviction walks started from root of tree\" : 222452,\n \"eviction walks started from saved location in tree\" : 25429,\n \"hazard pointer blocked page eviction\" : 153,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 23749,\n \"in-memory page splits\" : 9503,\n \"internal pages evicted\" : 12319,\n \"internal pages split during eviction\" : 124,\n \"leaf pages split during eviction\" : 9752,\n \"modified pages evicted\" : 21932,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 2,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 2,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 328264091,\n \"pages seen by eviction walk\" : 53008306,\n \"pages written from cache\" : 1285772,\n \"pages 
written requiring in-memory restoration\" : 9303,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 1222297\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 32,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 891719,\n \"pages visited\" : 2253825\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 131072,\n \"compressed pages read\" : 1,\n \"compressed pages written\" : 1267545,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 18227\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 335433,\n \"close calls that result in cache\" : 335433,\n \"create calls\" : 10,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 1,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 151810196,\n \"insert key and value bytes\" : 68357901840,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 1,\n \"open cursor count\" : 0,\n \"operation restarted\" : 80535,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 670866,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 0,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n 
\"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 2369781376,\n \"approximate byte size of transaction IDs in pages written\" : 1269568,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 1290984,\n \"internal page multi-block writes\" : 271,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 9677,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 85,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 22233,\n \"page reconciliation calls for eviction\" : 14809,\n \"pages deleted\" : 32,\n \"pages written including an aggregated newest start durable timestamp \" : 17895,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 1071,\n \"pages written including an aggregated oldest start timestamp \" : 17894,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 1249229,\n \"pages written including at least one start timestamp\" : 1249229,\n \"pages written including at least one start transaction ID\" : 1338,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 148111336,\n \"records written including a start timestamp\" : 148111336,\n \"records written including a start transaction ID\" : 158696,\n \"records written including a stop durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates removed from history store\" : 0,\n \"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 1,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 5121662976,\n \"totalSize\" : 26055761920,\n \"indexSizes\" : {\n \"_id_\" : 5121662976\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667295751, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : 
NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667295751, 1)\n}\n", "text": "numInsertionWorkersPerCollection=4:", "username": "FirstName_pengzhenyi" }, { "code": "PRIMARY> db.coll8.stats()\n{\n \"ns\" : \"tenant_cxd.coll8\",\n \"size\" : 67615718696,\n \"count\" : 151810196,\n \"avgObjSize\" : 445,\n \"storageSize\" : 22373642240,\n \"freeStorageSize\" : 1757184,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:tenant_cxd/collection-54-3303682114623090744\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 1295782,\n \"blocks allocated\" : 1299306,\n \"blocks freed\" : 5886,\n \"checkpoint size\" : 22371868672,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 1757184,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 22373642240,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 28606,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key 
size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 5,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 100832,\n \"bytes dirty in the cache cumulative\" : 56391706,\n \"bytes read into cache\" : 54646,\n \"bytes written from cache\" : 70884387305,\n \"checkpoint blocked page eviction\" : 23,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 3338,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 37434,\n \"eviction walk target pages histogram - 0-9\" : 3801,\n \"eviction walk target pages histogram - 10-31\" : 6059,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 8260,\n \"eviction walk target pages histogram - 64-128\" : 19314,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 975,\n \"eviction walks gave up because they restarted their walk twice\" : 1127,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 4357,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 2162,\n \"eviction walks reached end of tree\" : 9610,\n \"eviction walks restarted\" : 0,\n \"eviction walks started from root of tree\" : 8632,\n \"eviction walks started from saved location in tree\" : 28802,\n \"hazard pointer blocked page eviction\" : 109,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 28489,\n \"in-memory page splits\" : 9481,\n \"internal pages evicted\" : 12420,\n \"internal pages split during eviction\" : 125,\n \"leaf pages split during eviction\" : 9787,\n 
\"modified pages evicted\" : 22024,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 2,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 1,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 328824010,\n \"pages seen by eviction walk\" : 50366780,\n \"pages written from cache\" : 1299224,\n \"pages written requiring in-memory restoration\" : 9397,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 1253254\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 38,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 549089,\n \"pages visited\" : 1456538\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 131072,\n \"compressed pages read\" : 1,\n \"compressed pages written\" : 1283105,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 16119\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 335197,\n \"close calls that result in cache\" : 335197,\n \"create calls\" : 14,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 1,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 151810196,\n \"insert key and value bytes\" : 68357901840,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 
0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 1,\n \"open cursor count\" : 0,\n \"operation restarted\" : 106424,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 670394,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 0,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 2429533408,\n \"approximate byte size of transaction IDs in pages written\" : 3562680,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 1289520,\n \"internal page multi-block writes\" : 270,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 9636,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 1,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 22227,\n \"page reconciliation calls for eviction\" : 15140,\n \"pages deleted\" : 38,\n \"pages written including an aggregated newest start durable timestamp \" : 15837,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 840,\n \"pages written including an aggregated oldest start timestamp \" : 15837,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 1281782,\n \"pages written including at least one start timestamp\" : 1281782,\n \"pages written including at least one start transaction ID\" : 3864,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 151845838,\n \"records written including a start timestamp\" : 151845838,\n \"records written including a start transaction ID\" : 445335,\n \"records written including a stop durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates 
removed from history store\" : 0,\n \"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 1,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 5234679808,\n \"totalSize\" : 27608322048,\n \"indexSizes\" : {\n \"_id_\" : 5234679808\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667295831, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667295831, 1)\n}\n", "text": "numInsertionWorkersPerCollection=8:", "username": "FirstName_pengzhenyi" }, { "code": "PRIMARY> db.coll16.stats()\n{\n \"ns\" : \"tenant_cxd.coll16\",\n \"size\" : 67615718696,\n \"count\" : 151810196,\n \"avgObjSize\" : 445,\n \"storageSize\" : 23564926976,\n \"freeStorageSize\" : 884736,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:tenant_cxd/collection-56-3303682114623090744\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 1295729,\n \"blocks allocated\" : 1298243,\n \"blocks freed\" : 3975,\n \"checkpoint size\" : 23564025856,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 884736,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 23564926976,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 28606,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages 
reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 5,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 1451405390,\n \"bytes dirty in the cache cumulative\" : 46278939,\n \"bytes read into cache\" : 0,\n \"bytes written from cache\" : 70823621636,\n \"checkpoint blocked page eviction\" : 97,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 6431,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 36367,\n \"eviction walk target pages histogram - 0-9\" : 2320,\n \"eviction walk target pages histogram - 10-31\" : 5331,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 8372,\n \"eviction walk target pages histogram - 64-128\" : 20344,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 473,\n \"eviction walks gave up because they restarted their walk twice\" : 91,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 3885,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 2012,\n \"eviction walks reached end of tree\" : 6412,\n \"eviction walks restarted\" : 0,\n \"eviction walks started from root of tree\" : 6466,\n \"eviction walks started from saved location in tree\" : 29901,\n \"hazard pointer blocked page eviction\" : 249,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to 
key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 34610,\n \"in-memory page splits\" : 9483,\n \"internal pages evicted\" : 12031,\n \"internal pages split during eviction\" : 123,\n \"leaf pages split during eviction\" : 9808,\n \"modified pages evicted\" : 21619,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 2,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 0,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 327443443,\n \"pages seen by eviction walk\" : 49576873,\n \"pages written from cache\" : 1298187,\n \"pages written requiring in-memory restoration\" : 9420,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 1233923\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 44,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 369613,\n \"pages visited\" : 1039622\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 131072,\n \"compressed pages read\" : 0,\n \"compressed pages written\" : 1282938,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 15249\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 335080,\n \"close calls that result in cache\" : 335084,\n \"create calls\" : 23,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that 
skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 1,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 151810196,\n \"insert key and value bytes\" : 68357901840,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 1,\n \"open cursor count\" : 0,\n \"operation restarted\" : 120127,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 670168,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 0,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 2429548400,\n \"approximate byte size of transaction IDs in pages written\" : 5579072,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 1288798,\n \"internal page multi-block writes\" : 282,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 9621,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 101,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 21968,\n \"page reconciliation calls for eviction\" : 15697,\n \"pages deleted\" : 44,\n \"pages written including an aggregated newest start durable timestamp \" : 14980,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 766,\n \"pages written including an aggregated oldest start timestamp \" : 14980,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 1282158,\n \"pages written including at least one start timestamp\" : 1282158,\n \"pages written including at least one start transaction ID\" : 6038,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 151846775,\n \"records written including a start timestamp\" : 151846775,\n \"records written including a start transaction ID\" : 697384,\n \"records written including a stop durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer 
records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates removed from history store\" : 0,\n \"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 1,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 5264338944,\n \"totalSize\" : 28829265920,\n \"indexSizes\" : {\n \"_id_\" : 5264338944\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667295871, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667295871, 1)\n}\n", "text": "numInsertionWorkersPerCollection=16:", "username": "FirstName_pengzhenyi" }, { "code": "mgeneratejs -n 10000000 '{name:\"$name\", address:\"$address\", text:\"$paragraph\"}' | mongoimport -d test -c test --drop\nmongodump -d test -c test\nmongorestore --nsInclude=test.test --nsFrom=test.test --nsTo=test.test_1 --numInsertionWorkersPerCollection=1 --drop\nmongorestore --nsInclude=test.test --nsFrom=test.test --nsTo=test.test_16 --numInsertionWorkersPerCollection=16 --drop\n ns: 'test.test_1',\n size: 5405415355,\n count: 10000000,\n avgObjSize: 540,\n storageSize: 5708476416,\n ns: 'test.test_16',\n size: 5405415355,\n count: 10000000,\n avgObjSize: 540,\n storageSize: 5687885824,\n", "text": "Hi @FirstName_pengzhenyiI did my own testing but end up with different numbers. Note that I only tested with numInsertionWorkersPerCollection of 1 and 16, since they look to be the extreme ends of your tests.I tried to create an average object size in the ballpark of yours, with 10 million documents totalling about 5 GB in size.Here’s the script I used for the test. I used mgeneratejs to generate random data for testing.Here’s my results:numInsertionWorkersPerCollection=1 total storage size ~5.32 GBnumInsertionWorkersPerCollection=16 total storage size ~5.3 GBI’m using MongoDB 4.4.13 and mongorestore version 100.5.4. The test deployment is a singe-node replica set.In my case, there doesn’t seem to be any size expansion. I’m not sure why your results are very different, but if you can maybe run the script I posted above and see if you’re seeing a different result? Otherwise please post the example documents & the reproduction script for your initial experiment. 
Also please post details about the MongoDB deployment you’re using.Best regards\nKevin", "username": "kevinadi" }, { "code": "zhangruian-rs_0:PRIMARY> db.user_1400005918.stats()\n{\n \"ns\" : \"pushdb.user_1400005918\",\n \"size\" : 2017049569,\n \"count\" : 92558532,\n \"avgObjSize\" : 21,\n \"storageSize\" : 1004810240,\n \"freeStorageSize\" : 1093632,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:pushdb/collection-0--4493473792861060111\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 42109,\n \"blocks allocated\" : 46786,\n \"blocks freed\" : 9754,\n \"checkpoint size\" : 1003700224,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 1093632,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 1004810240,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 2368,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum 
internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 4,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 27746,\n \"bytes dirty in the cache cumulative\" : 66852720,\n \"bytes read into cache\" : 9802642216,\n \"bytes written from cache\" : 2983019028,\n \"checkpoint blocked page eviction\" : 0,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 177,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 21077,\n \"eviction walk target pages histogram - 0-9\" : 10958,\n \"eviction walk target pages histogram - 10-31\" : 5597,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 2750,\n \"eviction walk target pages histogram - 64-128\" : 1772,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 3,\n \"eviction walks gave up because they restarted their walk twice\" : 7415,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 3467,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 567,\n \"eviction walks reached end of tree\" : 17747,\n \"eviction walks restarted\" : 0,\n \"eviction walks started from root of tree\" : 11452,\n \"eviction walks started from saved location in tree\" : 9625,\n \"hazard pointer blocked page eviction\" : 8,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 2144,\n \"in-memory page splits\" : 1054,\n \"internal pages evicted\" : 329,\n \"internal pages split during eviction\" : 3,\n \"leaf pages split during eviction\" : 1132,\n \"modified pages evicted\" : 1435,\n 
\"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 1,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 117303,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 159242466,\n \"pages seen by eviction walk\" : 6732638,\n \"pages written from cache\" : 46486,\n \"pages written requiring in-memory restoration\" : 702,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 139182\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 9,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 944508,\n \"pages visited\" : 2444803\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 131072,\n \"compressed pages read\" : 117272,\n \"compressed pages written\" : 41769,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 4717\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 185289,\n \"close calls that result in cache\" : 185289,\n \"create calls\" : 14,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 277675602,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 92558532,\n \"insert key and value bytes\" : 2462974393,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes 
modified\" : 0,\n \"next calls\" : 277675602,\n \"open cursor count\" : 0,\n \"operation restarted\" : 0,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 649903,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 279327,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 177205344,\n \"approximate byte size of transaction IDs in pages written\" : 535336,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 71589,\n \"internal page multi-block writes\" : 214,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 1268,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 1,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 1952,\n \"page reconciliation calls for eviction\" : 714,\n \"pages deleted\" : 9,\n \"pages written including an aggregated newest start durable timestamp \" : 3503,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 397,\n \"pages written including an aggregated oldest start timestamp \" : 3313,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 11404,\n \"pages written including at least one start timestamp\" : 11404,\n \"pages written including at least one start transaction ID\" : 47,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 11075334,\n \"records written including a start timestamp\" : 11075334,\n \"records written including a start transaction ID\" : 66917,\n \"records written including a stop durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates removed from history store\" : 0,\n 
\"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 15,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 11148996608,\n \"totalSize\" : 12153806848,\n \"indexSizes\" : {\n \"_id_\" : 1232883712,\n \"_id_1_tags.0_1_tags.1_1_tags.2_1_tags.3_1_tags.4_1_tags.5_1_tags.6_1_tags.7_1_tags.8_1_tags.9_1\" : 2455584768,\n \"tagsv2_1\" : 417800192,\n \"tags.4_1\" : 417800192,\n \"_id_1_tagsv2_1\" : 1334550528,\n \"tags.0_1\" : 417800192,\n \"tags.2_1\" : 417800192,\n \"tags.3_1\" : 417800192,\n \"_id_hashed\" : 1520025600,\n \"tags.1_1\" : 417800192,\n \"tags.7_1\" : 417800192,\n \"tags.8_1\" : 417800192,\n \"tags.9_1\" : 427950080,\n \"tags.5_1\" : 417800192,\n \"tags.6_1\" : 417800192\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667976189, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667976189, 1)\n}\n", "text": "I did another test and find some confused question. I have about 12GB bson file and use mongoimport tool to import to mongodb.Here are my two test case:", "username": "zhangruian1997" }, { "code": "zhangruian-rs_0:PRIMARY> db.test.stats()\n{\n \"ns\" : \"test.test\",\n \"size\" : 2017049561,\n \"count\" : 92558532,\n \"avgObjSize\" : 21,\n \"storageSize\" : 1526210560,\n \"freeStorageSize\" : 163840,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:test/collection-446--4493473792861060111\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" 
: {\n \"allocations requiring file extension\" : 65677,\n \"blocks allocated\" : 66929,\n \"blocks freed\" : 1959,\n \"checkpoint size\" : 1526030336,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 163840,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 1526210560,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 2375,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 4,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 48471,\n \"bytes dirty in the cache cumulative\" : 50890771,\n \"bytes read into cache\" : 5778117,\n \"bytes written from cache\" : 3780824788,\n \"checkpoint blocked page eviction\" : 0,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 1214,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 114482,\n \"eviction walk target pages histogram - 0-9\" : 35321,\n \"eviction walk target pages histogram - 10-31\" : 79152,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 9,\n \"eviction walk target pages histogram - 64-128\" : 0,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 16,\n \"eviction walks gave up because they restarted their walk twice\" : 7260,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 14095,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 1746,\n \"eviction walks reached end of tree\" : 31612,\n \"eviction walks restarted\" : 0,\n \"eviction walks started from root of tree\" : 23133,\n \"eviction walks started from saved location in tree\" : 91349,\n \"hazard pointer blocked page eviction\" : 585,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history 
store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 3854,\n \"in-memory page splits\" : 1021,\n \"internal pages evicted\" : 400,\n \"internal pages split during eviction\" : 4,\n \"leaf pages split during eviction\" : 1273,\n \"modified pages evicted\" : 1713,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 1,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 70,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 170620003,\n \"pages seen by eviction walk\" : 8363540,\n \"pages written from cache\" : 66845,\n \"pages written requiring in-memory restoration\" : 856,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 39654\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 30,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 183662,\n \"pages visited\" : 248730\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 114692,\n \"compressed pages read\" : 65,\n \"compressed pages written\" : 64476,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 2369\n },\n \"cursor\" : {\n \"Total number of 
entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 185136,\n \"close calls that result in cache\" : 185136,\n \"create calls\" : 23,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 44,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 92557683,\n \"insert key and value bytes\" : 2462956479,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 44,\n \"open cursor count\" : 0,\n \"operation restarted\" : 76325,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 370300,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 0,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 1442649472,\n \"approximate byte size of transaction IDs in pages written\" : 44721696,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 65682,\n \"internal page multi-block writes\" : 80,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 1272,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 5,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 1873,\n \"page reconciliation calls for eviction\" : 1569,\n \"pages deleted\" : 30,\n \"pages written including an aggregated newest start durable timestamp \" : 2263,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 815,\n \"pages written including an aggregated oldest start timestamp \" : 2254,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 63957,\n \"pages written including at least one start timestamp\" : 63957,\n \"pages written including at least one start transaction ID\" : 4308,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 90165592,\n \"records written including a start timestamp\" : 90165592,\n \"records written including a start transaction ID\" : 5590212,\n \"records written including a stop 
durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates removed from history store\" : 0,\n \"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 15,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 43485159424,\n \"totalSize\" : 45011369984,\n \"indexSizes\" : {\n \"_id_\" : 4596736000,\n \"_id_1_tags.0_1_tags.1_1_tags.2_1_tags.3_1_tags.4_1_tags.5_1_tags.6_1_tags.7_1_tags.8_1_tags.9_1\" : 6599782400,\n \"tagsv2_1\" : 1999462400,\n \"tags4_1\" : 2001522688,\n \"_id_1_tagsv2_1\" : 4955938816,\n \"tags0_1\" : 2003283968,\n \"tags2_1\" : 2006695936,\n \"tags3_1\" : 2000535552,\n \"tags1_1\" : 2010071040,\n \"tags7_1\" : 2012762112,\n \"tags8_1\" : 2015150080,\n \"tags9_1\" : 2011406336,\n \"tags5_1\" : 2011742208,\n \"tags6_1\" : 2005483520,\n \"_id_hashed\" : 5254586368\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667976629, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667976629, 1)\n}\n", "text": "", "username": "zhangruian1997" }, { "code": "mgeneratejs -n 50000000 '{name:\"$name\",\"age\": \"$age\", address:\"$address\"}'\nzhangruian-rs_0:PRIMARY> db.after.stats()\n{\n \"ns\" : \"test1.after\",\n \"size\" : 4225699095,\n \"count\" : 50000000,\n \"avgObjSize\" : 84,\n \"storageSize\" : 2333638656,\n \"freeStorageSize\" : 1114112,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : 
\"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:test1/collection-494--4493473792861060111\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 105849,\n \"blocks allocated\" : 106617,\n \"blocks freed\" : 1282,\n \"checkpoint size\" : 2332508160,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 1114112,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 2333638656,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 2381,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 4,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 518574416,\n \"bytes dirty 
in the cache cumulative\" : 8842199,\n \"bytes read into cache\" : 4957732027,\n \"bytes written from cache\" : 5000818153,\n \"checkpoint blocked page eviction\" : 10,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 755,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 5920,\n \"eviction walk target pages histogram - 0-9\" : 1041,\n \"eviction walk target pages histogram - 10-31\" : 1542,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 1507,\n \"eviction walk target pages histogram - 64-128\" : 1830,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 245,\n \"eviction walks gave up because they restarted their walk twice\" : 70,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 1341,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 430,\n \"eviction walks reached end of tree\" : 1653,\n \"eviction walks restarted\" : 0,\n \"eviction walks started from root of tree\" : 2088,\n \"eviction walks started from saved location in tree\" : 3832,\n \"hazard pointer blocked page eviction\" : 45,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 3256,\n \"in-memory page splits\" : 947,\n \"internal pages evicted\" : 1338,\n \"internal pages split during eviction\" : 10,\n \"leaf pages split during eviction\" : 970,\n \"modified pages evicted\" : 1847,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 1,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 93216,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 96003530,\n \"pages seen by eviction walk\" : 7364730,\n \"pages written from cache\" : 106583,\n \"pages written requiring in-memory 
restoration\" : 886,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 178780\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 8,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 248165,\n \"pages visited\" : 815109\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 75380,\n \"compressed pages read\" : 92743,\n \"compressed pages written\" : 104898,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 1685\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 100010,\n \"close calls that result in cache\" : 100010,\n \"create calls\" : 22,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 50000008,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 50000000,\n \"insert key and value bytes\" : 4458831259,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 50000008,\n \"open cursor count\" : 0,\n \"operation restarted\" : 35591,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 250021,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 50001,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" 
: {\n \"approximate byte size of timestamps in pages written\" : 464948608,\n \"approximate byte size of transaction IDs in pages written\" : 2685696,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 136439,\n \"internal page multi-block writes\" : 38,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 973,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 46,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 2026,\n \"page reconciliation calls for eviction\" : 1022,\n \"pages deleted\" : 8,\n \"pages written including an aggregated newest start durable timestamp \" : 1415,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 107,\n \"pages written including an aggregated oldest start timestamp \" : 1414,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 73792,\n \"pages written including at least one start timestamp\" : 73792,\n \"pages written including at least one start transaction ID\" : 642,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 29059288,\n \"records written including a start timestamp\" : 29059288,\n \"records written including a start transaction ID\" : 335712,\n \"records written including a stop durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates removed from history store\" : 0,\n \"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 2,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 1489510400,\n \"totalSize\" : 3823149056,\n \"indexSizes\" : {\n \"_id_\" : 1212551168,\n \"age_1\" : 276959232\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667976969, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : 
NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667976969, 1)\n}\n", "text": "I also use mgeneratejs to generate random data for testing,the script is:And here is my result, the “age” index bloat abourt two times than origin.1.Create index after the document insertion complete.", "username": "zhangruian1997" }, { "code": "zhangruian-rs_0:PRIMARY> db.before.stats()\n{\n \"ns\" : \"test1.before\",\n \"size\" : 4225699095,\n \"count\" : 50000000,\n \"avgObjSize\" : 84,\n \"storageSize\" : 2321723392,\n \"freeStorageSize\" : 225280,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:test1/collection-498--4493473792861060111\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 102046,\n \"blocks allocated\" : 102706,\n \"blocks freed\" : 1177,\n \"checkpoint size\" : 2321481728,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 225280,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 2321723392,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 2381,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"btree compact pages reviewed\" : 0,\n \"btree compact pages selected to be rewritten\" : 0,\n \"btree compact pages skipped\" : 0,\n \"btree skipped by compaction as process would not reduce size\" : 0,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n 
\"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 4,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 2603112693,\n \"bytes dirty in the cache cumulative\" : 8499780,\n \"bytes read into cache\" : 0,\n \"bytes written from cache\" : 5049290552,\n \"checkpoint blocked page eviction\" : 0,\n \"checkpoint of history store file blocked non-history store page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 894,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\" : 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\" : 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\" : 0,\n \"eviction walk passes of a file\" : 2332,\n \"eviction walk target pages histogram - 0-9\" : 224,\n \"eviction walk target pages histogram - 10-31\" : 543,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 827,\n \"eviction walk target pages histogram - 64-128\" : 738,\n \"eviction walk target pages reduced due to history store cache pressure\" : 0,\n \"eviction walks abandoned\" : 75,\n \"eviction walks gave up because they restarted their walk twice\" : 43,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 526,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 299,\n \"eviction walks reached end of tree\" : 779,\n \"eviction walks restarted\" : 0,\n \"eviction walks started from root of tree\" : 943,\n \"eviction walks started from saved location in tree\" : 1389,\n \"hazard pointer blocked page eviction\" : 8,\n \"history store table insert calls\" : 0,\n \"history store table insert calls that returned restart\" : 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\" : 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\" : 0,\n \"history store table reads\" : 0,\n \"history store table reads missed\" : 0,\n \"history store table reads requiring squashed modifies\" : 0,\n \"history store table truncation by rollback to stable to remove an unstable update\" : 0,\n \"history store table truncation by rollback to stable to remove an update\" : 0,\n \"history store table truncation to remove an update\" : 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\" : 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\" : 0,\n \"history store table writes requiring squashed modifies\" : 0,\n \"in-memory page passed criteria to be split\" : 4152,\n \"in-memory page splits\" : 947,\n 
\"internal pages evicted\" : 468,\n \"internal pages split during eviction\" : 9,\n \"leaf pages split during eviction\" : 959,\n \"modified pages evicted\" : 1425,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 1,\n \"page written requiring history store records\" : 0,\n \"pages read into cache\" : 0,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 95537019,\n \"pages seen by eviction walk\" : 4373058,\n \"pages written from cache\" : 102680,\n \"pages written requiring in-memory restoration\" : 901,\n \"the number of times full update inserted to history store\" : 0,\n \"the number of times reverse modify inserted to history store\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 51061\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 9,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 54220,\n \"pages visited\" : 400443\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 52424,\n \"compressed pages read\" : 0,\n \"compressed pages written\" : 101048,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 1632\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"Total number of times a search near has exited due to prefix config\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 100006,\n \"close calls that result in cache\" : 100008,\n \"create calls\" : 22,\n \"cursor next calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 4,\n \"cursor prev calls that skip due to a globally visible history store tombstone\" : 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 
50000000,\n \"insert key and value bytes\" : 4458831259,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 4,\n \"open cursor count\" : 0,\n \"operation restarted\" : 40894,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 200018,\n \"search calls\" : 0,\n \"search history store calls\" : 0,\n \"search near calls\" : 0,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 536914272,\n \"approximate byte size of transaction IDs in pages written\" : 2723424,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 124063,\n \"internal page multi-block writes\" : 30,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 974,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 1,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 1915,\n \"page reconciliation calls for eviction\" : 1029,\n \"pages deleted\" : 9,\n \"pages written including an aggregated newest start durable timestamp \" : 1485,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated newest transaction ID \" : 74,\n \"pages written including an aggregated oldest start timestamp \" : 1483,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 78351,\n \"pages written including at least one start timestamp\" : 78351,\n \"pages written including at least one start transaction ID\" : 663,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 33557142,\n \"records written including a start timestamp\" : 33557142,\n \"records written including a start transaction ID\" : 340428,\n \"records written including a stop durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"tiered operations dequeued and processed\" : 0,\n \"tiered operations scheduled\" : 0,\n \"tiered storage local retention time (secs)\" : 0,\n \"tiered storage object size\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"rollback to stable history store records with stop timestamps older than newer records\" : 0,\n \"rollback to stable inconsistent checkpoint\" : 0,\n \"rollback to stable keys removed\" : 0,\n \"rollback to stable keys restored\" : 0,\n \"rollback to stable restored tombstones from history store\" : 0,\n \"rollback to stable restored updates from history store\" : 0,\n \"rollback to stable skipping delete rle\" : 0,\n \"rollback to stable skipping stable 
rle\" : 0,\n \"rollback to stable sweeping history store keys\" : 0,\n \"rollback to stable updates removed from history store\" : 0,\n \"transaction checkpoints due to obsolete pages\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 2,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 1751912448,\n \"totalSize\" : 4073635840,\n \"indexSizes\" : {\n \"_id_\" : 1148047360,\n \"age_1\" : 603865088\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667976989, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1667976989, 1)\n}\n", "text": "APPEND INFOMATION FOR zhangruian1997:\n2.Create index before the document insertion complete.So, what confused me is why the index(age_1) bloat can be so serious?My MongoDB Version is 4.4.13 (community)", "username": "FirstName_pengzhenyi" }, { "code": "", "text": "Hi @zhangruian1997This is an expected outcome at this point. When you pre-create an index, the index is timestamped and the entries in it are written with timestamp information. This is done to support snapshot history retention. When you create an index later, WiredTiger does bulk insert into the index - which is not timestamped, and more efficiently packed. Note that this situation may improve in later versions, but at this moment this is how it works Best regards\nKevin", "username": "kevinadi" } ]
WT file expansion when mongorestore in parallel
2022-10-20T11:54:53.919Z
WT file expansion when mongorestore in parallel
4,069
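The explanation in the thread above — a pre-created index stores timestamped entries, while an index built after the load is bulk-inserted and packed more tightly — can be checked on a much smaller scale. Below is a minimal mongosh sketch; the collection names before/after follow the thread, but the document shape and the 50,000-document count are scaled-down assumptions rather than the 50 million documents and mgeneratejs pipeline used there.

    // Minimal mongosh sketch (assumption: a throwaway database, small doc count).
    const docs = [];
    for (let i = 0; i < 50000; i++) {
      docs.push({ name: "user" + i, age: i % 90, address: "street " + i });
    }

    // Case 1: index created before the load - entries carry timestamp information.
    db.before.createIndex({ age: 1 });
    db.before.insertMany(docs);

    // Case 2: index created after the load - WiredTiger bulk-builds it, more tightly packed.
    db.after.insertMany(docs);
    db.after.createIndex({ age: 1 });

    // Compare on-disk index sizes, as done with db.before.stats()/db.after.stats() above.
    printjson(db.before.stats().indexSizes);
    printjson(db.after.stats().indexSizes);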
null
[]
[ { "code": "", "text": "Hello! I’m looking for background on the requirement that AWS PrivateLink must be active in all regions into which you deploy a multi-region cluster.We have a multi-region cluster that will be accessed both from our data center and AWS but at this point we will only be accessing it from one AWS region. In order to set up privatelink I needed to create dummy VPC’s in the other regions, so I’m questioning the reasoning behind that.Since connections are one way (from AWS to mongo atlas) and we have literally no code in those VPC’s, I’m assuming those extra vpc’s in our non-preferred regions are not going to see any traffic even in the case of a cluster failover. Is that correct?", "username": "dnise01" }, { "code": "", "text": "Hi @dnise01 - Welcome to the community I would recommend you please contact the Atlas support team via the in-app chat for this particular question to get confirmation as I believe this partly involves AWS’s internals as well.Regards,\nJason", "username": "Jason_Tran" } ]
AWS PrivateLink required in all regions of multi-region cluster
2022-11-10T21:37:11.683Z
AWS PrivateLink required in all regions of multi-region cluster
937
null
[]
[ { "code": "{\n \"Data_1\":{\n \"Thu Oct 21 2021 11:12:00 GMT+0000 (Coordinated Universal Time)\":\"188245\",\n \"Sun Oct 24 2021 20:14:00 GMT+0000 (Coordinated Universal Time)\":\"193033\"\n ....\n },\n \"Data_2\":{\n \"Thu Oct 22 2021 17:12:00 GMT+0000 (Coordinated Universal Time)\":\"abc123\",\n \"Sun Oct 23 2021 21:19:00 GMT+0000 (Coordinated Universal Time)\":\"123abc\"\n ....\n },\n}\n", "text": "I have a collection of documents which have multiple objects within it, and these objects have dates as keys. Like so:These objects can contain hundreds of key/value pairs and I only want to keep data from within the last 365 days.Currently when I update the document (add new dates and remove old ones) I overwrite the entire object. However as these objects can be reasonably large I believe this is bloating my oplog.Is there a better way to remove key/value pairs that are older than 365 days and add new dates at the same time without having to overwrite the entire object?", "username": "Callum_Boyd" }, { "code": "$set$unset.{\n \"$set\": {\n \"Data_3.Thu Oct 21 2021 11:12:00 GMT+0000 (Coordinated Universal Time)\": \"188245\",\n \"Data_3.Thu Oct 21 2021 11:12:00 GMT+0000 (Coordinated Universal Time)\": \"123abc\"\n }\n}\n{\n \"$unset\": {\n \"Data_3.Thu Oct 21 2021 11:12:00 GMT+0000 (Coordinated Universal Time)\": \"\",\n \"Data_3.Thu Oct 21 2021 11:12:00 GMT+0000 (Coordinated Universal Time)\": \"\"\n }\n}\n", "text": "Hello @Callum_Boyd, Welcome to the MongoDB community forum,I am not sure what query you are using for add dates and remove dates, I guess you are passing the whole object of dates in $set or $unset operators,\nYou have to use . dot notation along with the object name and key name, like this,\nFor add dates:For remove dates:", "username": "turivishal" }, { "code": "", "text": "You should not use data values (your dates) as field names. See the attribute pattern for a much better alternative.You should not store dates as string, it takes more space, it is slower to compare and it takes more bandwidth to transfer.", "username": "steevej" }, { "code": "", "text": "I use the attribute pattern in other collections but had not considered using it here so thank you!I was also not aware of the downside of storing dates as strings so I appreciate the insight", "username": "Callum_Boyd" }, { "code": "", "text": "Thank you @turivishal, this would certainly make a lot more sense than the way I am currently doing it!", "username": "Callum_Boyd" }, { "code": "{\n \"category\":\"Data_X\",\n \"date\":new Date(\"2016-05-18T16:00:00Z\"),\n \"value\":\"123abc\"\n}\n", "text": "you may also store each data like this:and this will enable you to write “Time Series Collection” where you can set the automatic deletion of documents after a certain amount of time. please check the following:", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to add/remove from Object without overwriting everything
2022-11-11T09:09:25.423Z
How to add/remove from Object without overwriting everything
2,931
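To make the dot-notation answer in the thread above concrete, here is a minimal Node.js driver sketch. The database and collection names ("metrics"/"readings") and the date of the newly added key are assumptions; the removed key matches the sample data in the thread.

    // Sketch only: add one dated entry and drop an expired one inside "Data_1"
    // in a single update, so the rest of the object is never rewritten.
    const { MongoClient } = require("mongodb");

    async function rotateEntry(uri, docId) {
      const client = new MongoClient(uri);
      try {
        const coll = client.db("metrics").collection("readings"); // assumed names
        await coll.updateOne(
          { _id: docId },
          {
            $set: { "Data_1.Thu Nov 10 2022 09:00:00 GMT+0000 (Coordinated Universal Time)": "201573" },
            $unset: { "Data_1.Thu Oct 21 2021 11:12:00 GMT+0000 (Coordinated Universal Time)": "" },
          }
        );
      } finally {
        await client.close();
      }
    }

If the data is later remodelled along the attribute-pattern suggestion in the same thread (one document per reading with a real Date field), the 365-day cleanup could instead be delegated to a TTL index rather than manual $unset calls.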
null
[ "kubernetes-operator" ]
[ { "code": " lifecycle:\n preStop:\n exec:\n command:\n - /bin/bash\n - -c\n - <gracefully shutdown mongodb>\n\n", "text": "Hi thereIt looks to me that mongod’s in the statefulset pods are not properly shut down.I can see during the startup this error “Detected unclean shutdown”. Since there are a lot of collections/files to be scanned the startup takes like > 1h.I was wondering if the shutdown can be performed gracefully like (resp. why is this not built in ?)Regards\nJohn", "username": "John_Moser1" }, { "code": "mongo --eval \"db.getSiblingDB('admin').shutdownServer()\"\n", "text": "how are you shutting down the servers?shutting down mongod instances is done with a database command run on admin database with admin rights.check this link: shutdown — MongoDB Manualyou can also issue a database command from the terminal with mongo shell. here is an example:", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I am talking of kubernetes/operator where you don’t have explicit control of mongod.A pod with a container running mongod can be shutdown any time (for ex. to be scheduled to another node). Not sure if you are aware that the operator manages at least 3 pods in a replicaset.=> so your proposal does not make sense at all.the shutdown needs to be included in the pod’s lifecycle (then as you suggest - a graceful shutdown)", "username": "John_Moser1" }, { "code": "preStopterminationGracePeriodSeconds:60", "text": "I am talking of kubernetes/ operator where you don’t have explicit control of mongod.It is not possible to run a server without some control of your own (or at least of some admin user). each server is an image after all on which you can either give customized parameters or have a new customized image if it misses something in it, for example, mongo shell.And the command to use in your preStop hook can be the one I wrote above.your problem can also be for the default grace period being 30 seconds. if servers take longer, then increase this with terminationGracePeriodSeconds:60 for example. in fact, why don’t you first give this a shot before diving into possibly more complicated waters of troubleshooting?PS: shutdowns of k8s machines are not instant. they are waited up to 30 sec before stopped completely", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Again … I am not asking for possibilities. I assume this is a well known issue and the solution should be well known.PS: we are running an app with 1000 cores on GKE … I think, I should know the basics.", "username": "John_Moser1" }, { "code": "terminationGracePeriodSeconds:60", "text": "If it works for 10k but not for 100k, I would tend to assume that the termination logic is implemented correctly.Note that 100k collections is at least 200k number of files. And 1 more files per index per collection. It is quite possible that the problem is related to something taking too much time to do. The following makes a lot of senseyour problem can also be for the default grace period being 30 secondsso is the proposed idea:increase this with terminationGracePeriodSeconds:60From k8s’ documentation:Once the grace period has expired, the KILL signal is sent to any remaining processesReceiving the KILL signal with generateunclean shutdownand the KILL signal is sent to process that are still running after terminationGracePeriodSeconds, like it might be the case when trying to flush and close 200k files.", "username": "steevej" } ]
Operator: Detected unclean shutdown - mongod seems never to shutdown gracefully
2022-11-12T17:36:00.994Z
Operator: Detected unclean shutdown - mongod seems never to shutdown gracefully
3,953
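The preStop hook shown in the thread above leaves "<gracefully shutdown mongodb>" as a placeholder. A sketch of what mongosh could run there is below; it assumes the hook has local admin access, and the 60-second values are arbitrary and should be tuned together with terminationGracePeriodSeconds on the pod.

    // Hypothetical mongosh script for the preStop hook: step down if this node
    // is primary, then request a clean shutdown so the next start does not
    // report "Detected unclean shutdown".
    const admin = db.getSiblingDB("admin");
    try {
      if (db.hello().isWritablePrimary) {
        rs.stepDown(60); // seconds this member avoids re-election
      }
    } catch (e) {
      // stepDown drops connections and throws on standalones; safe to ignore here
    }
    admin.shutdownServer({ timeoutSecs: 60 });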
null
[ "dot-net", "atlas-device-sync" ]
[ { "code": "", "text": "Hi,I have many users logged in my app and realm sync, and switch between user when needed.\nSync occurs only in active user or all logged users are synced dynamically ?thanks", "username": "Sergio_Carbonete" }, { "code": "", "text": "Sync will happen only if you have a Realm instance open, regardless of who the active user is. E.g. if you open 2 realm instances with 2 different users, both will sync.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Many users logged simultaneously in device
2022-11-13T14:54:37.803Z
Many users logged simultaneously in device
1,566
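The answer in the thread above (sync follows open Realm instances, not the "active" user) concerns the .NET SDK; for consistency with the other examples in this document, here is a rough JavaScript-SDK equivalent. The schema and partition values are made up for illustration.

    // Sketch: two synced realms opened with two different logged-in users.
    // Both keep syncing for as long as they stay open, regardless of which
    // user the app currently treats as "active".
    const Realm = require("realm");

    const TaskSchema = {
      name: "Task",
      primaryKey: "_id",
      properties: { _id: "objectId", _partition: "string", text: "string" },
    };

    async function openBoth(userA, userB) {
      const realmA = await Realm.open({
        schema: [TaskSchema],
        sync: { user: userA, partitionValue: "teamA" },
      });
      const realmB = await Realm.open({
        schema: [TaskSchema],
        sync: { user: userB, partitionValue: "teamB" },
      });
      return { realmA, realmB };
    }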
null
[]
[ { "code": "", "text": "Trying out the free tier to get the hang of this…In App services, we’re seeing about 1000 requests get added per day. Can’t figure out how that’s possible. We’re not using it anywhere near to that much.We’ve looked in the App’s logs and the logs don’t show anywhere near this number of requests.How can we begin to understand why we’re seeing this many requests added each day?Thanks", "username": "Christopher_Barber" }, { "code": "", "text": "go to the network section and add restrictions to allowed IPs, preferably only your app’s host IP and developer machine’s. This will prevent access from anywhere else and point out if someone has their hands on your credentials.If the high usage still continues after that, then check your app for scheduled data operations, possibly leftover timers. too many auto-refreshes can cause repeated access to the database.The remaining 2 possibilities in my mind are:", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Many thanks for your post.Is there anywhere I can go to see accurate logs? The app services logs don’t show all of these requests.", "username": "Christopher_Barber" }, { "code": "", "text": "View and Download MongoDB Logs — MongoDB Atlasbut not all cluster types have log support", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Appreciate the help – but can MongoDB / Atlas please respond on this – The docs say “App Services counts the number of requests that an application receives and handles…Requests include function calls, trigger executions, and sync updates, but exclude user authentication and blocked or invalid requests.”We’re seeing 1000 a day and have no triggers or anything like that.I’ve just downloaded the logs from App Services and it’s nowhere near that amount.This is blocking us from moving forward, hoping someone from Atlas can please respond?", "username": "Christopher_Barber" }, { "code": "", "text": "Sure, no problem. Good luck waiting for an answer from an official about the free tier on the community forum.", "username": "Yilmaz_Durmaz" } ]
1000 or so "Requests" per day on free tier in app services?
2022-11-12T16:30:20.535Z
1000 or so &ldquo;Requests&rdquo; per day on free tier in app services?
1,416
null
[]
[ { "code": "", "text": "Can anyone out there spare some time to look at my db. Im struggling to get it together and im not sure if what ive already done is right.\nthank you in advance", "username": "Jean_Smith" }, { "code": "", "text": "Hi @Jean_Smith ,Sure thing!What should we look at? Can you share the use case, the relationships and what are the magnitude of them (if its one to many or many to many then how much are on each side).Additionally, please let us know what are your struggles or in-queries towards MongoDB Data Modeling…Ty\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi, im new to mongodb. Do you have anydesk so i can show you?", "username": "Jean_Smith" }, { "code": "", "text": "Hi @Jean_Smith ,We usually do not operate this way in forumsYou can write your thoughts and questions in a post and we can have a discussion.Otherwise consider MongoDB support or our free university courses which are excellent for new comers:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Database issues with schema and relations
2022-11-13T06:36:53.179Z
Database issues with schema and relations
957
null
[ "transactions", "react-native" ]
[ { "code": "export class Photo extends Realm.Object {\n _id;\n user;\n result;\n picture;\n pictureUrl;\n annotation;\n userInputs;\n createdAt;\n projectId;\n\n static schema = {\n name: \"Photo\",\n properties: {\n _id: \"objectId\",\n user: \"string\",\n result: \"Result?\",\n picture: \"data?\",\n pictureUrl: \"string?\",\n annotation: \"string?\",\n userInputs: \"{}?\",\n createdAt: \"date\",\n projectId: \"objectId\",\n },\n primaryKey: \"_id\",\n };\n}\n\nexport const ResultSchema = {\n name: \"Result\",\n embedded: true,\n properties: {\n traits: \"Trait{}?\",\n errors: \"{}?\",\n score: \"Score?\",\n },\n};\n\nexport const TraitSchema = {\n name: \"Trait\",\n embedded: true,\n properties: {\n value: \"float?\",\n unit: \"string\",\n },\n};\n\nexport const ScoreSchema = {\n name: \"Score\",\n embedded: true,\n properties: {\n score: \"int\",\n total: \"int\",\n minusPoints: { type: \"MinusPoint[]\", default: [] },\n },\n};\n\nexport const MinusPointSchema = {\n name: \"MinusPoint\",\n embedded: true,\n properties: {\n value: \"int\",\n reason: \"string\",\n },\n};\n// Write transaction omitted\n// Read a local image and convert it to base64\nconst picture = await readFile(path, \"base64\");\n// Convert the base64 image to Buffer\nconst binaryBuffer = Buffer.from(picture, \"base64\");\n\nconst newPhoto = realm.create(\"Photo\", {\n _id: new Realm.BSON.ObjectId(),\n user: user.profile.email,\n userInputs: section.userInputs,\n createdAt: new Date(),\n projectId: new Realm.BSON.ObjectId(projectId),\n annotation: \"someString\",\n picture: binaryBuffer,\n})\n*data*", "text": "Hi!\nI’m using the WildAid O-FISH post to create a similar project using react-native SDK with flexible sync.I’m trying to create a Photo object but I get a “no internal field” error.Here’s my Photo modelAnd here’s how I’m creating a new photoI feel like the problem might come from the picture property. I read in the doc that *data* type maps to ArrayBuffer which is what a Buffer is. Maybe it’s another field causing the problem but I really don’t know which one.Thanks in advance!", "username": "Renaud_Aubert" }, { "code": "userInputs: \"{}?\",", "text": "userInputs: \"{}?\",Is that intentional?", "username": "Jay" }, { "code": "userInputs", "text": "Yes, userInputs is an optional dictionary of mixed values.\nIs this syntax forbidden?", "username": "Renaud_Aubert" }, { "code": "", "text": "No - it’s fine. Just wanted to ensure that was actually what you meant to do. To narrow the issue, I would comment out some of the properties and add them back in until the crash occurs.On another note, Realm is generally not well suited for storing blob type data like images as they can often go beyond what can be stored in a single property. 16Mb is the limit.While I know it’s done in the WildAid O-FISH app, they have limits in place to ensure that doesn’t happen. There are a number of other options for storing images, and you can keep the url of that within Realm.", "username": "Jay" }, { "code": "picture: \"data\"projectId: \"ObjectId\"", "text": "Thank your for your advice!As the error isn’t very specific I’ve tried commenting out a few properties especially picture and date being the two with the most “unsual” type.\nFor instance, is a Buffer a correct value for picture: \"data\" in my schema?One thing I didn’t mention is that when I try to create a photo I get an error but a document is still created and replicated to the server. 
Maybe the error comes from the server?\nI have a trigger running when a new Photo document is added (to remove picture and upload it to S3)Here’s what the created photo looks like\n\nCapture d’écran 2022-11-09 1010291152×197 15.2 KB\nI don’t understand why the projectId isn’t set. I didn’t create a relationship I just typed projectId: \"ObjectId\" and I’m giving an ObjectId instance to the propertyPS: I’m aware of Mongo 16MB limit, I’ve implemented the same trigger as the WildAid O-FISH app to remove the photo", "username": "Renaud_Aubert" }, { "code": "section.userInputsconst section = realm.objectForPrimaryKey(\"Section\", new Realm.BSON.ObjectId(sectionId));userInputs: section.userInputsuserInputs: section.userInputs.toJSON()", "text": "The problem came from section.userInputs, I forgot to include its definition in my first post\nconst section = realm.objectForPrimaryKey(\"Section\", new Realm.BSON.ObjectId(sectionId));.I replaced userInputs: section.userInputs with userInputs: section.userInputs.toJSON()", "username": "Renaud_Aubert" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
React-Native sdk "no internal field" when creating a new object
2022-11-07T10:33:09.591Z
React-Native sdk &ldquo;no internal field&rdquo; when creating a new object
2,537
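The fix described at the end of the thread above — copying the managed dictionary out as a plain object before reusing it — might look like the self-contained helper below. It assumes the same Photo and Section schemas as the thread; the function name and parameters are otherwise illustrative.

    // Sketch of the working version: section.userInputs is a live Realm
    // dictionary, so it is copied with toJSON() before being handed to
    // realm.create inside a write transaction.
    const Realm = require("realm");

    function addPhoto(realm, user, sectionId, projectId) {
      const section = realm.objectForPrimaryKey(
        "Section",
        new Realm.BSON.ObjectId(sectionId)
      );
      let photo;
      realm.write(() => {
        photo = realm.create("Photo", {
          _id: new Realm.BSON.ObjectId(),
          user: user.profile.email,
          userInputs: section.userInputs.toJSON(), // plain copy, not the managed object
          createdAt: new Date(),
          projectId: new Realm.BSON.ObjectId(projectId),
        });
      });
      return photo;
    }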
null
[]
[ { "code": "", "text": "I am looking to shutdown and bring back up my kubernetes cluster to save money. How do I properly shutdown MongoDB in a kubernetes environment to support this?", "username": "Kevin_Carr" }, { "code": "", "text": "I would suggest a normal shutdown of the host / k8s should work in most circumstances. Most sequences would involve send a kill SIGTERM to running containers, initiating a shutdown of mongod.With journalling any write should be persisted before being ack’d to the client.I frequently tear down and start replicasets in docker without even considering this. They always start back up flawlessly.", "username": "chris" }, { "code": "", "text": "Could you elaborate what happens under the hood when k8s issues SIGTERM?When you call shutdown on a primary, doesn’t it start primary reelection process? If so, it is a vicious circle, because potential candidates (secondaries) are also being shut down after k8s SIGTERM.", "username": "Yury_Hrytsuk" }, { "code": "", "text": "Do you know it or do you just suggest it - because I can say from experience, that even with Mongodb operator, there is always this error “Detected unclean shutdown” .So eithera) we must delete the lock file\nb) or there must be a proper shutdown procedure avail (which still does not exist)", "username": "John_Moser1" }, { "code": "db.shutdownServer()", "text": "I don’t know (for now) how you would issue a command to remove the container itself in this case, but to clean shutdown mongod instances, you need to connect to the admin database on each instance and run db.shutdownServer().There should be some scripting around for this purpose. have you tried to search for such?PS: check my answer in your other post about how to one-line with mongo shell.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Continue hereHi there It looks to me that mongod’s in the statefulset pods are not properly shut down. I can see during the startup this error “Detected unclean shutdown”. Since there are a lot of collections/files to be scanned the startup takes like > 1h. ...", "username": "John_Moser1" } ]
Graceful Shutdown of MongoDB Replicaset in Kubernetes
2021-03-22T15:41:50.961Z
Graceful Shutdown of MongoDB Replicaset in Kubernetes
5,376
null
[ "queries", "node-js", "replication", "compass" ]
[ { "code": "find{ date: \"2022/11/11\" }\n{ _id: \"someObjectIdGoesHere\" }\ndatefind", "text": "Atlas cluster deets:\nVERSION 5.0.13\nREGION GCP / Iowa (us-central1)\nCLUSTER TIER M0 Sandbox (General)\nTYPE Replica Set - 3 nodesConnecting from the bay area.I have a collection with ~180 documents, each relatively large sized between ~500-1500KB. Using both the NodeJS driver and MongoDB Compass, attempting to execute a find with a limit of 1 using a simple filter of any kind takes roughly 10-15 seconds to get a result. Sample filters I’ve tried:(there is an index on the date field, but given the poor performance when querying for its objectid, i imagine the index is not relevant)However, attempting to find with a limt of 1 using no filter results in a result almost instantly, as expected.Adding a simple, small document, then querying for that document directly also completes quickly.Since the collection only has less than 200 documents, and the filters are checking top-level fields, I would expect querying with a filter to be similarly performant.Curious if there’s a recommended way to query for larger documents in a performant way, since that’s presumably the issue here?", "username": "Timmy_Chen" }, { "code": "", "text": "if you are talking about the time to “return results”, I guess that should be something expected for the transfer of MB sized data package over the internet. Can you open a network traffic listener and check the speed when you start a query? do an internet speed test before testing your queryThis might even be related to your ISP or region ISP. Can you also try importing the same (or sample) data to some free tier test clusters on same/different regions.Also, do you have time to test your query on different times of the day? I had a month where my ISP was limiting my speed in 2-3 hours every night because they had work going on. You might have hit such an interval during your queries.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I should also mention that this happens both on my local machine (when connecting to Atlas) as well as on my live app hosted on a paid Heroku dyno. My home internet speed is ~800mb up/down so I don’t think it’s an internet speed issue.I have tested it during different times throughout the day, by trying to use my live app - and it has been pretty consistent starting about two weeks ago. I imagine it has something to do with the size of the documents, which is why I’m curious if there’s some special way I should be querying… though at the scale I’m working at, I would hope that nothing special is needed", "username": "Timmy_Chen" }, { "code": "", "text": "This might be the logical reason:\nAtlas M0 (Free Cluster), M2, and M5 Limitations — MongoDB Atlas“10GB out per seven day” then speed throttled for M0 free clusters. You can test this possibility on a new cluster importing same data.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Ahhhh… yup, that was it - didn’t know there were bandwidth limits. Thanks for looking out!", "username": "Timmy_Chen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Slow .findOne() with any filter with large (but few) documents
2022-11-12T14:49:00.541Z
Slow .findOne() with any filter with large (but few) documents
2,070
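The slow responses in the thread above turned out to be the M0 seven-day data-transfer throttle, but with documents of ~500-1500 KB it also helps to project away the heavy fields so a find only ships what the caller needs. A minimal Node.js driver sketch follows; the database, collection, and field names here are assumptions, not taken from the thread.

    // Sketch: fetch one document by date but exclude a large payload field,
    // so the result stays small even when the stored document is ~1 MB.
    const { MongoClient } = require("mongodb");

    async function findHeaderForDate(uri, date) {
      const client = new MongoClient(uri);
      try {
        const coll = client.db("app").collection("reports"); // assumed names
        return await coll.findOne(
          { date },                           // e.g. "2022/11/11", as in the thread
          { projection: { bigPayload: 0 } }   // hypothetical heavy field excluded
        );
      } finally {
        await client.close();
      }
    }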
null
[]
[ { "code": "", "text": "I’m in the process of moving my app from CloudKit to Realm (+Realm Sync with Atlas). I completed most of the steps, but now I’m working on the initial sync of data.I want to display a nice progress bar on the initial sync using the addProgressNotification on the AsyncOpenTask. The “problem” is the data is syncing too fast and I’m not receiving any progress notification. In realty this is excellent news as I only have around 300 public documents for each user to sync and Realm Sync is doing this instantly, but I still want to understand how to intercept the download and show a progress once de collection will evolve.Steps:app.login(…)Async open:\nconfig = realmUser.configuration(partitionValue: “…”)\nlet task = Realm.asyncOpen(configuration: config…)Monitor tasktask.addProgressNotification(queue: .main) { [weak self] progress in\nif progress.isTransferComplete {\nself?.progressAmount = 0\nself?.logger.debug(“Transfer finished”)\n} else {\n…\n}The debugger is never entering the “else” statement. Once it receives a progress it will have the isTransferComplete state already set to true.Notes:\nprogress.transferredBytes = 10419\nprogress.transferrableBytes = 10419\nprogress.fractionTransferred = 1.0Are there some thresholds under which the transfer is set directly to done?", "username": "Andrei_Matei" }, { "code": "", "text": "I’ve noticed transferredBytes and transferrableBytes are always equal even on large datasets. Each update I receive is always 100%, just growing over the last update.", "username": "Tyler_Collins" } ]
Monitoring AsyncOpenTask on initial launch not returning progress
2021-10-18T11:02:59.108Z
Monitoring AsyncOpenTask on initial launch not returning progress
1,987
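Editor's note: for comparison, here is roughly how the same initial-download progress hook looks in the JS/TS SDK. This is a sketch against the 10.x-style API (exact signatures vary by SDK version); the app ID, partition value, and schema are placeholders. As the thread notes, with only a few hundred documents the callback may fire just once, already at 100%.

```typescript
import Realm from "realm";

const app = new Realm.App({ id: "<your-app-id>" }); // placeholder app ID

async function openWithProgress(): Promise<Realm> {
  const user = await app.logIn(Realm.Credentials.anonymous());

  const config: Realm.Configuration = {
    schema: [/* your object schemas */],
    sync: { user, partitionValue: "<partition>" },
  };

  // Realm.open() reports download progress while the initial sync runs.
  const realm = await Realm.open(config).progress((transferred, transferable) => {
    // On tiny partitions this may already report transferred === transferable.
    const fraction = transferable > 0 ? transferred / transferable : 1;
    console.log(`Initial sync: ${(fraction * 100).toFixed(0)}%`);
  });

  return realm;
}
```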
null
[]
[ { "code": "", "text": "Hi, im trying to create a db but im not sure how to add a data field that could be populated later. Ie. With creating a database to hold student information. I need the teacher to be able to manually add in a grade later in the GUI. So in the students collection i thought it may be correct to embed a document called grade and add the ability to add a grade at a later date. Can anyone help me with a solution.", "username": "Jean_Smith" }, { "code": "_id_id", "text": "data in MongoDB is flexible, meaning you may even start with an empty object and fill in details later. The thing here is to write your app so as to deal later with this missing information.what you need to know in this case is that every document you insert will get an _id field automatically (unless you create it yourself). you need to keep track of a document with this id field and then update it again with this id.Or you may take your time, give some thought to how your data should look, define all fields in a schema, and write your app to conform to the schema. Arrays are not needed to be filled at the beginning which suits your situation to keep grades. or make each grade a separate field with null values, or assign zeros.Yet, even with a well-thought schema, you need to keep track of _id field of a document you want to update. so you also need to get used to working with the id field.", "username": "Yilmaz_Durmaz" } ]
Adding in grades for courses
2022-11-12T12:52:18.701Z
Adding in grades for courses
1,726
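Editor's note: a small TypeScript sketch of the flow described above: insert the student with no grades yet, keep the generated _id, and update the same document later when the teacher enters a grade. Database, collection, and field names are placeholders.

```typescript
import { MongoClient } from "mongodb";

// Placeholder connection string and names.
const client = new MongoClient("mongodb+srv://<user>:<pass>@cluster0.example.mongodb.net");

async function run() {
  await client.connect();
  const students = client.db("school").collection("students");

  // Insert the student without grades; MongoDB adds _id automatically.
  const { insertedId } = await students.insertOne({
    name: "Jean",
    course: "Biology",
    grades: [],
  });

  // Later, the GUI updates the same document by its _id.
  await students.updateOne(
    { _id: insertedId },
    { $push: { grades: { assignment: "Midterm", score: 87 } } }
  );

  await client.close();
}

run();
```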
https://www.mongodb.com/…b12e40d824bd.png
[ "queries", "swift" ]
[ { "code": " let realm = try! Realm()\n let specificPerson = realm.object(ofType: ChatMessage.self, \nforPrimaryKey: ObjectId(\"6367f704ebd3cbf5c024c338\"))\n let chatM = realm.objects(ChatMessage.self)\n print(chatM)\nResults<ChatMessage> <0x7fe4d163efa0> (\n\n)\nRealm.Configuation.defaultConfiguration", "text": "I am following along the tutorial for the Chat message and have some data. I am not sure how to query specific messages:returns Nil, i am not able to get the specific object. Even when i doI get an empty object :\nimage814×739 46 KB\nI figure it has to do with Realm.Configuation.defaultConfiguration ? but i am not sure what the fileURL would be?\nOpen a Default Realm or Realm at a File URL", "username": "newbieCoderIam" }, { "code": "let specificPerson = realm.object(ofType: ChatMessage.self, forPrimaryKey:Realm.Configuation.defaultConfiguration", "text": "I believe this is a cross post to you StackOverflow QuestionWelcome to the forums - in general it’s a good idea to keep questions in one spot so we can focus our attention on it.The question(s) are still pretty vague - there’s not enough code to understand the issue and the data shown in the question doesn’t match up to the query you’re attempting. Additionally, you’ve got some stuff in the question that I don’t think applies to the question.The code! Questions need to include a minimum amount of code to understand the issueWe need to know if you’re using a local realm or a synced realm. It appears to be synced. If so, we would need to understand if you’re connecting successfully to Realm as shown in the Getting Started guide Configure and Open a Synched RealmThen, the naming conventions are a little confusing:let specificPerson = realm.object(ofType: ChatMessage.self, forPrimaryKey:If you’re querying for a ChatMessage, it will not return a specificPerson but a ChatMessage. Also, that’s not a query. That Finds a specific object by it’s primary keyAlso, that code will not return anything as there is no matching entry - at least by what’s shown in the screenshotI get an empty object :You’re on to something there - but since Realm is a local first database, that tells me that the data on the server doesn’t exist locally. Perhaps you’re attempting to read local data but it’s actually synced? Again, not enough code to understand the use case.I figure it has to do with Realm.Configuation.defaultConfiguration ? 
but i am not sure what the fileURL would be?That’s a problem because the link you included indicates you’re trying to use a local only realm, not a synced one.So - I would suggest editing and clarifying both questions with enough accurate data and code so we can attempt to help.", "username": "Jay" }, { "code": "import RealmSwift\n\nclass ChatMessage: Object {\n @Persisted(primaryKey: true) var _id:ObjectId\n @Persisted var room: String\n @Persisted var author: String\n @Persisted var text: String\n @Persisted var timeStamp = Date()\n \n convenience init(room: String, author: String, text: String){\n self.init()\n self.room = room\n self.author = author\n self.text = text\n }\n}\nimport RealmSwift\n\nstruct RoomsView: View {\n let userName: String\n \n let rooms = [\"Java\", \"Kotlin\", \"Swift\", \"JavaScript\", \"Naruto\"]\n \n var body: some View {\n List {\n if let realmUser = realmApp.currentUser {\n ForEach(rooms, id: \\.self){ room in\n NavigationLink(destination: ChatView(userName: userName, room: room)\n .environment(\\.realmConfiguration, realmUser.configuration(partitionValue: room))){\n Text(room)\n }\n }\n }\n\n }\n .navigationBarTitle(\"Language\", displayMode: .inline)\n }\n}\n\n\nChatView:\n```import SwiftUI\nimport RealmSwift\n\nstruct ChatView: View {\n @ObservedResults(Chatty.self, sortDescriptor: SortDescriptor(keyPath: \"timeStamp\", ascending: true)) var messages\n\n let userName: String\n let room: String\n\n @State private var messageText = \"\"\n var body: some View {\n VStack {\n ForEach(messages) { message in\n HStack{\n if message.author == userName {Spacer()}\n Text(message.text)\n if message.author != userName {Spacer()}\n\n }\n .padding(4)\n }\n Spacer()\n HStack {\n TextField(\"New message\", text: $messageText)\n .padding(9)\n .background(.yellow)\n .clipShape(Capsule())\n Button(action: fetchData) { Image(systemName: \"paperplane.fill\") }\n .disabled(messageText == \"\")\n\n \n }\n }\n .padding()\n .navigationBarTitle(\"\\(room)\", displayMode: .inline)\n }\n \n func fetchData() {\n let realm = try! Realm()\n \n let specificPerson = realm.object(ofType: ChatMessage.self, forPrimaryKey: ObjectId(\"6369ee9db15ac444f96eb5d6\"))\n\n let test = realm.objects(ChatMessage.self)\n\n print(specificPerson)\n print(test)\n\n }\n \n private func addMessage(){\n let message = Chatty(room: room, author: userName, text: messageText)\n $messages.append(message)\n messageText = \"\"\n }\n}\nConsole output:\n\n```nil\nResults<ChatMessage> <0x7fbc58547900> (\n\n)\n", "text": "Thanks for the response Jay!\nCode:Model:RoomView:Hope this helps clarify my question. I mainly copied the code from mongodb-developer/LiveTutorialChat", "username": "newbieCoderIam" }, { "code": "fetchDatalet realm = try! Realm()\nlet realm = try! 
Realm()func getRealm() async throws -> Realm {\n let user = try await getUser()\n let partitionValue = \"some partition value\"\n var configuration = user.configuration(partitionValue: partitionValue)\n let realm = try await Realm(configuration: configuration)\n return realm\n}\n\nlet realm = try await getRealm()\n\n// Do something with the realm\n\n", "text": "That’s a bit better but most of that code is unrelated to reading data from Realm so it not necessary.The issue you’re having is that there is data on the server, but no data stored locally - there’s nothing in your app or code that attempts to work with a Synced Realm - you’re working local only.For example - the fetchData function connects to a local realm hereSays read data from the local realmI suggest going back to the Getting Started Guides; go through working with a local Realm first. Get your app to where you can read/write and display locally stored data, then once you have a good understanding of that process, then move on to working with a synced Realm as connecting to a synced Realm is entirely different.For example - as your code shows, that is how you work with a local realmlet realm = try! Realm()but then how you work with a synced realm is much more verbose as the user needs to be authenrticated first and then create the connection like thisSo, I suggest learning how to read and write data from a Local Realm and then move onto a Synced Realm", "username": "Jay" }, { "code": "", "text": "Thanks for the advice Jay. I thought Realm worked by automatically storing data in local and then syncing to Atlas online? This means i just directly added data online before i even stored locally? I followed MongoDb youtube video tutorial of creating first app, i guess this tutorial directly put data online instead of local", "username": "newbieCoderIam" }, { "code": "let realm = try! Realm()", "text": "I thought Realm worked by automatically storing data in local and then syncing to Atlas online?That is 100% correct! Realm is a local first database and then synchs in the background. As mentioned previously, while your data may be synced - stored locally and online, the code you’re using is trying to access a completely different, NON Synced Realm aka local-onlyFor clarity, (conceptually) working with a Synced Realm stores the data in one place on your drive, but if you chose to work with a local-only Realm, data is stored in a different place. They are two different Realms. In fact, the actual Realm files are different for a local-only Realm vs a Synced Realm.This codelet realm = try! Realm()is accessing a local-only Realm, which in your case, contains no data. If you want to access a Synched Realm (which is stored in a different place on your drive) you need to follow the code in the guide for working with Synched Realms (links are in my above post).The file management part is handled by the SDK in both cases so you don’t need to worry about defining file paths etc. But the key is a local-only Realm is completely different than a Synced Realm", "username": "Jay" } ]
Opening a Realm connection properly and connecting to Atlas
2022-11-10T20:46:08.890Z
Opening a Realm connection properly and connecting to Atlas
2,082
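Editor's note: the same local-versus-synced distinction in JS/TS terms, as a sketch (partition-based sync; the app ID, partition value, and schema are placeholders). A configuration without a `sync` block opens a separate, local-only realm file, which is why it appears empty even though the data exists in Atlas.

```typescript
import Realm from "realm";

const app = new Realm.App({ id: "<your-app-id>" }); // placeholder app ID

async function openSyncedRealm(): Promise<Realm> {
  // 1. Authenticate first; a synced realm always needs a logged-in user.
  const user = await app.logIn(Realm.Credentials.anonymous());

  // 2. Open with a sync configuration. Omitting `sync` here would open a
  //    different, local-only realm file containing no synced data.
  const realm = await Realm.open({
    schema: [/* ChatMessage schema, etc. */],
    sync: { user, partitionValue: "<room>" },
  });

  // 3. Reads now see the downloaded objects.
  const messages = realm.objects("ChatMessage");
  console.log(`Synced messages: ${messages.length}`);
  return realm;
}
```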
null
[ "app-services-data-access" ]
[ { "code": " \"apply_when\": {\n \"user_id\": \"%%user.id\",\n \"%%user.data.email\": {\n \"%exists\": true\n }\n },\n", "text": "Im trying to have a role that applies when user has a email address.\nBut im getting\nerror executing match expression: invalid document: must either be a boolean expression or a literal documentwith the following rule:It seems to be what is suggested here:\nBut it does not work for me.", "username": "hans" }, { "code": " {\n \"roles\": [\n {\n \"name\": \"owner\",\n \"apply_when\": {\n \"user_id\": \"%%user.id\",\n \"%%user.data.email\": {\n \"%exists\": true\n }\n },\n \"insert\": true,\n \"delete\": true,\n \"search\": true,\n \"read\": true,\n \"write\": true,\n \"fields\": {},\n \"additional_fields\": {}\n }\n ],\n \"filters\": [],\n \"schema\": {}\n}\n", "text": "Here is the full rules:", "username": "hans" }, { "code": "%%user{\n \"owner\": \"%%user.id\",\n \"%%request.remoteIPAddress\": {\n \"$in\": \"%%values.allowedClientIPAddresses\"\n }\n}\n{\n \"organizationId\": \"%%user.custom_data.organizationId\",\n \"%%user.custom_data.roles\": \"user-admin\"\n}\n%%root.[field]{\n \"%%root.organizationId\": \"%%user.custom_data.organizationId\",\n \"%%user.custom_data.roles\": \"user-admin\"\n}\n", "text": "Just for reference: I found that you get this error when mixing document fields and %%user expansions in the same expression. It’s somewhat weird because the docs use a similar example:But anyway. I found that this Apply When was not working:and when I changed the field reference to %%root.[field], it did work:", "username": "Rijk_van_Wel" } ]
Role: apply_when: email exists (cant figure it out)
2021-09-05T16:03:17.587Z
Role: apply_when: email exists (cant figure it out)
4,155
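Editor's note: to make the fix above easier to copy, here is the working expression restated as a plain object (shown as a TypeScript literal; the role wrapper fields are in the original post). The second object applies the same %%root pattern to the original user_id check; that part is an untested adaptation, not something confirmed in the thread.

```typescript
// Document fields must be referenced via %%root when the same expression
// also uses a %%user expansion as a key.
const applyWhenFromThread = {
  "%%root.organizationId": "%%user.custom_data.organizationId",
  "%%user.custom_data.roles": "user-admin",
};

// Untested adaptation of that pattern to the original question:
const applyWhenForEmailCheck = {
  "%%root.user_id": "%%user.id",
  "%%user.data.email": { "%exists": true },
};
```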
null
[]
[ { "code": "", "text": "Hi there,We are using Atlas basic text search. We are trying to compare items from listA to listB and sometimes we get matches that do make sense by text search standards, but it really isn’t the same item in real world and shouldn’t match.For instance:LIST A: SAGE\nLIST B: Sage Palm, Sausage, garlic.So the result is that sage matches Palm Sage and Sausage even though they are not the same item.I was thinking of using synonyms and building an array of all possible permutations of each item and then comparing the whole phrase against it. For instance:GARLIC : [GARLIC POWDER, GARLIC, GARLIC SALT]\nSAGE PALM: [SAGE PALM]\nSAUSAGE: [SAUSAGE, SAUSAGE LINKS, CHICKEN SAUSAGE, …]Appreciate any feedback", "username": "Misagh_Jebeli" }, { "code": "", "text": "Hey there! Can you share the query and index definition you used for this?Are you looking for exact matches? You may find some inspiration from this blog.", "username": "Elle_Shwer" }, { "code": "", "text": "Hi @Elle_Shwer =,The search is tricky. I hope these examples make sense:\nScreenshot_20220708-081134_ChiveLab (3)1080×2169 102 KB\n", "username": "Misagh_Jebeli" }, { "code": "const poisonAggregrate = (poison) => [\n {\n $search: {\n compound: {\n should: [\n {\n text: {\n query: poison,\n path: \"name\",\n score: {\n boost: {\n value: 5,\n },\n },\n },\n },\n ],\n },\n },\n },\n {\n $limit: 1,\n },\n ];\n", "text": "Here is the aggreagate we are using:\nScreenshot 2022-07-11 004239 (1)950×282 10.1 KB\nThank you", "username": "Misagh_Jebeli" }, { "code": "", "text": "That’s very interesting, I suspect synonyms would be really helpful here, as you suggested. Explicit mappings specifically.Will think about this more though and am certainly curious if anyone else in the forum has ideas.", "username": "Elle_Shwer" }, { "code": "", "text": "Hey team,The synonym array worked. We ended up building an array for each poison item and stored all the possible combinations of the ingredient in it. Common Ingredients such as salt, garlic, and onion had 4000 to 14000 elements in their array.Thank you Mongodb team and community for brainstorming ", "username": "Misagh_Jebeli" } ]
How to solve this NLP search scenario
2022-09-09T13:17:14.675Z
How to solve this NLP search scenario
2,336
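Editor's note: besides the per-document synonym arrays the poster settled on, a lighter-weight option is to let Atlas Search do the work with a phrase clause plus a synonym mapping defined on the index. The sketch below is TypeScript and assumes an index named "default" with a synonym mapping named "ingredientSynonyms"; both names are placeholders, not from the thread.

```typescript
// Builds the aggregation pipeline for one ingredient lookup.
const poisonAggregate = (poison: string) => [
  {
    $search: {
      index: "default",
      compound: {
        should: [
          // Whole-phrase match ("sage" will not match "sage palm" as strongly
          // as an exact phrase hit on a document named "sage").
          { phrase: { query: poison, path: "name", score: { boost: { value: 10 } } } },
          // Broader match that also expands configured synonyms
          // (e.g. "garlic powder" -> "garlic").
          { text: { query: poison, path: "name", synonyms: "ingredientSynonyms" } },
        ],
        minimumShouldMatch: 1,
      },
    },
  },
  { $limit: 1 },
];
```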
null
[ "queries" ]
[ { "code": "", "text": "I am currently a bootcamp student and have created 2 collections with 10 documents in 1 and 3 documents in another but db.collections.find() only works sometimes it doesn’t always work more often than not it doesn’t and I’m using Mac anyone else having this kind of trouble??", "username": "Joe_McNeil" }, { "code": "", "text": "Can you post your code, including the code used to create your collections and provision them with documents? Your question amounts to the classic “it doesn’t work” and is unanswerable as posed.", "username": "Jack_Woehr" }, { "code": "", "text": "i can post a github link otherwise i can only post on picture due to me being new to the community.", "username": "Joe_McNeil" }, { "code": "", "text": "\nScreenshot 2022-11-05 at 1.53.21 PM1920×1200 78.1 KB\n\nhere is the terminal in which i tried to do the db.collection.find()", "username": "Joe_McNeil" }, { "code": "", "text": "it seems to be mostly an issue with the terminal side of it. i created a database in the terminal and created 2 collection called movies and users. the users collection should have 4 documents and the movies should have 10 sometimes i can get it to show but most times it does not.", "username": "Joe_McNeil" }, { "code": "mongosh", "text": "Well, the first thing is you’re already connected to the localhost after issuing the mongosh command. I see you trying to connect after you’re already connected.", "username": "Jack_Woehr" }, { "code": "", "text": "Db.collection.find() is not correct\nYou have to replace collection with your collection name\nCheck your collections under mflix db\nShow collectionsAs Jack_Woehr mentioned you are trying to start another mongod while already connected to a mongod", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Yeah I have I’m just saying this as an example", "username": "Joe_McNeil" }, { "code": "", "text": "So before this instance I had tried it prior as well without running the instance", "username": "Joe_McNeil" }, { "code": "", "text": "I’ve lost track of this discussion \n@Joe_McNeil , what are you trying to do?", "username": "Jack_Woehr" }, { "code": "", "text": "I am using mongodb from my MacBook. I created a database named myFlixDB with 2 collections. Movies and users. For no apparent reason using the db.movies.find() or db.users.find() now shows that they databases are empty. I had 10 documents in the movies collection and 4-5 documents in the users collection. I had to submit them via a screenshot showing the data 2 weeks ago so I know they have data in them but now they show empty. I’m not worried about any of my JavaScript or node.js that’s not the issue I am having. Given that I cannot show any of the data I cannot export it and then cannot import it to mongo atlas.", "username": "Joe_McNeil" }, { "code": "", "text": "Just diagnosing this by telepathy, my guess is you added the collections to the wrong db and are looking in the wrong place.", "username": "Jack_Woehr" }, { "code": "db.<collection>.find()collectionsshow dbsuse <db><db>dbshow collectionsdb.movies.find()moviesdb.users.find()users", "text": "Welcome to the MongoDB Community @Joe_McNeil!As mentioned in earlier comments, you need to specify collection names when using db.<collection>.find(). 
Your example query will only work if you have a collection called collections.In the MongoDB shell:show dbs will show all databasesuse <db> will select a database (<db> is a placeholder for a database name)db will confirm the current database nameshow collections will show you all collections in the currently selected databasedb.movies.find() will find documents in the movies collection in the current databasedb.users.find() will find documents in the users collection in the current databaseI suggest giving MongoDB Compass a try as an administrative GUI. Compass has handy features for interacting with data in a MongoDB deployment and also embeds the MongoDB shell so you still have the option of using a command line interface.Regards,\nStennie", "username": "Stennie_X" }, { "code": "db.collection.find()", "text": "The “collection” in db.collection.find() should be a specific name of collection", "username": "linupy_chiang" } ]
Db.collections.find
2022-11-05T16:55:07.660Z
Db.collections.find
3,193
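Editor's note: the shell commands listed above have direct driver equivalents, which can help confirm where the data actually lives when the shell output is confusing. A TypeScript sketch with placeholder connection details:

```typescript
import { MongoClient } from "mongodb";

// Placeholder URI; works the same against localhost or Atlas.
const client = new MongoClient("mongodb://127.0.0.1:27017");

async function inspect() {
  await client.connect();

  // Equivalent of `show dbs`: confirms which database actually holds your data.
  const { databases } = await client.db().admin().listDatabases();
  console.log(databases.map((d) => d.name));

  // Equivalent of `use myFlixDB`, `show collections`, and `db.movies.find()`.
  const db = client.db("myFlixDB");
  console.log(await db.listCollections().toArray());
  console.log(await db.collection("movies").find().toArray());

  await client.close();
}

inspect();
```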
https://www.mongodb.com/…b217c364f981.png
[ "dot-net" ]
[ { "code": " public class Product : RealmObject\n {\n IList<Price> _priceList;\n \n public Product()\n {\n _id = ObjectId.GenerateNewId();\n _priceList = new List<Price>();\n }\n [BsonId]\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId _id { get; set; }\n [MapTo(\"_partition\")]\n [BsonElement(\"_partition\")]\n public string _partition { get; set; } = \"6310fa126afd4bc77f5517e4\";\n\n [MapTo(\"ref_id\")]\n [BsonElement(\"ref_id\")]\n public int RefId { get; set; }\n\n [MapTo(\"descript\")]\n [BsonElement(\"descript\")]\n public string Descript { get; set; }\n\n /* [MapTo(\"prod_type\")]\n [BsonElement(\"prod_type\")]\n public ProductType ProdType { get; set; }\n*/\n [MapTo(\"price\")]\n [BsonElement(\"price\")]\n public double Price { get; set; }\n\n [MapTo(\"price_list\")]\n [BsonElement(\"price_list\")]\n public Price[]? PriceList\n {\n get => _priceList.ToArray();\n set\n {\n _priceList.Clear();\n foreach (var price in value)\n {\n _priceList.Add(price);\n }\n }\n }\n\n [MapTo(\"active\")]\n [BsonElement(\"active\")]\n public bool Active { get; set; }\n\n }\n public class ProductType : EmbeddedObject\n {\n [MapTo(\"prod_type_id\")]\n [BsonElement(\"prod_type_id\")]\n public int ProdTypeId { get; set; }\n\n [MapTo(\"type_descript\")]\n [BsonElement(\"type_descript\")]\n public string? TypeDescript { get; set; }\n\n }\n public class Price : EmbeddedObject\n {\n [MapTo(\"primary\")]\n [BsonElement(\"primary\")]\n public bool Primary { get; set; }\n\n [MapTo(\"charge\")]\n [BsonElement(\"charge\")]\n public double? Charge { get; set; }\n }\ntry\n {\n var config = new AppConfiguration(\"nrt-product-xkyox\");\n \n\n _RealmAppProduct = App.Create(config);\n Console.WriteLine(\"Test Instance generated\");\n\n var user = await _RealmAppProduct.LogInAsync(Credentials.Anonymous());\n\n if (!user.State.Equals(UserState.LoggedIn))\n Console.WriteLine($\"Uer = null\");\n else\n Console.WriteLine(\"Authenticated\");\n\n Console.WriteLine($\"Instance of PRODUCT REALM Generated\");\n\n var _partition = \"6310fa126afd4bc77f5517e4\";\n\n var partition_config = new PartitionSyncConfiguration(_partition, user);\n Console.WriteLine($\"Test user Generated\");\n\n _Realm_Test = await Realm.GetInstanceAsync(partition_config);\n Console.WriteLine($\"Waiting for download\");\n\n await _Realm_Test.SyncSession.WaitForDownloadAsync();\n Console.WriteLine($\"Test Downloaded\");\n\n\n var test = _Realm_Test.All<Product>();\n\n foreach (var s in test)\n {\n Console.WriteLine(s.Descript);\n \n }\n\n Console.ReadLine();\n // Read Sales data\n }\n catch (Exception ex) { Console.WriteLine($\"InitializeRealm Exception\\n{ex.Message}\\n\\n{ex.InnerException}\"); }\n", "text": "It would seem there are extreme nuanses in this product. Realm in particular.\nIts been suggested that I’m doing something unusual, but I don’t know if that’s the case?I will tell you the struggle is real though.What I feel should be simple. Isn’t and it’s frustrating what principle I’m missing. I’m sure I am.All I want to do is have a Collection (Sales) with a couple nested object’s, even in a list. I gave up on Sales as that’s a more involved collection. How about Products? Only two Embeded as such. This is a test harness to just read the realm.I use these same model’s to load the data (pulling from an existing DB) in another console app.\nAfter I generated the exact same models on a client side console app to test. 
Which is basically this to list out the products.Clearly I’m missing somthing because this just dosn’t work and I’m not clear enough to understand what’s happening? Note I commented out the ProdType property to try and simplify. That gives a worse error that makes no sense to me.Without it I keep being told it’s made my partition key optional? Why would that be optional?Realms.Exceptions.RealmException: The following changes cannot be made in additive-only schema mode:If I’m trying to think logically that should be “required”? Either way. Won’t work…despite it downloading the data in the local Realm DB. It’s just inacessible I guess?Even when you look at the Realm Studio, it’s decided to convert my obect into something else.That was before I commented the prod_type property out.I don’t even know what to ask yet? I was ambitious and had all my collections (Sales, Employees, Product) all set out and figured “how hard could it be”? Match up some Schema’s…hack a little and should be workable even if it takes a bit to figure it out…right? Not any further than when I started at this point and just don’t know if this thing has any stability at all. Maybe all you can do is some basic primatives and that’s it? That would be pointless.When I had my original 3 collections set with one AppService for each - that was the plan, that was a cluster.I thought Maybe it was because I wanted my console app (expecting some threading issues even though I loaded Nito) to use the driver and Realm in the same app? That was the “design”.Start with batching the data into Atlas\nSync the local Realms (simple partitions to start)\nSimple CRUD on the realms.\nAll that in a Service. (console to start dev)Except you can’t do that. Easily. The errors are abundant and, at this point at lease, make little sense.Can you use the same model to load Atlas and in the Realm CRUD? I realize It inherits from the RealmObject, but what’s happening in the background? Certainly loads Atlas fine, but on the other side it just fails and complains and then reports things in other object models etc…so, I thought I’d separate the load from the realm CRUD operations as above. All that’s supposed to do is read the realm.So, it seems Realm is doing some reflection and failing maybe because I’m not sure what it’s doing.The expectation would have been.; monitor a data source and do operations on the REALM’s.Not sure where/why this seems insurmountable?", "username": "Colin_Poon_Tip" }, { "code": "string[Required]ProductType.TypeDescriptstring?", "text": "Okay, I think I understand what’s going on - unfortunately, the Realm .NET SDK doesn’t yet process models that are annotated for nullability correctly - that’s why it treats string as a nullable type. You need to add the [Required] attribute to all string properties that you want to treat as non-nullable. We realize that getting your local models and cloud models to sync is a little tricky, which is why there’s a tool you can use to export them - in your cloud app, you can navigate to “Realm SDKs” under the “Build” section and then choose the second tab which says “Realm Object Models” - you’ll have the option to export your json schema in a language of your choosing. Unfortunately, it will not add nullability annotations for nullable strings, so you’ll need to do that manually, but it will add all the [Required] attributes.My hunch that you’re using nullability annotated files is due to ProductType.TypeDescript being declared as string?. If that’s not correct, then it may be a different issue. 
Note that after changing your C# models, you may need to delete your local Realm file to avoid schema validation errors.", "username": "nirinchev" }, { "code": "public class ProductType : EmbeddedObject\n {\n [MapTo(\"prod_type_id\")]\n public int ProdTypeId { get; set; }\n\n [MapTo(\"type_descript\")]\n \n public string? TypeDescript { get; set; }\n }\npublic ProductType prod_type { get; set; }\n\n", "text": "Much appreiated @nirinchev !!\nThat gives me hope that maybe it’s still possible. I’ve scrapped it and am starting fresh-ish. One mode/field at a time. I’ve managed to sync Products with only the primatives and that got synced. Now I’m going to attempt to add a single nested object. Which I’ll start with prod_type. Which is a Embedded ProductType model.Is this not a workable property?If that’s workable GREAT!!Next will be an array of objects. If I get there?I know that it can’t be annotated as [Required] due to it being an object?I persiste Thanks!!\nCPT", "username": "Colin_Poon_Tip" }, { "code": "public class Product : RealmObject\n{\n // ...\n public ProductType? prod_type { get; set; }\n}\n\npublic class ProductType : EmbeddedObject\n{\n // ...\n}\npublic class ProductType : EmbeddedObject\n{\n public ProductType self_link { get; set; }\n}\nProductType", "text": "You can definitely use embedded objects as properties, but make sure you’re not forming a loop. E.g. this is totally fine:But the following isn’t:On the nullability note, make sure to annotate your ProductType property as nullable as object references can never be required with Realm. Realm will ignore the nullability annotation, but it will be there for the rest of your code to take advantage of.", "username": "nirinchev" }, { "code": "public class Product : RealmObject\n {\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId _id { get; set; } = ObjectId.GenerateNewId();\n [Required]\n public string _mypartition { get; set; } = \"6310fa126afd4bc77f5517e4\";\n [Required]\n public string descript { get; set; }\n [Required]\n public int? ref_id { get; set; }\n public bool active { get; set; }\n \n public ProductType? prod_type { get; set; }\n\n }\npublic class Product : RealmObject\n {\n [PrimaryKey]\n public ObjectId _id { get; set; }\n [Required]\n public string _mypartition { get; set; }\n [Required]\n public string descript { get; set; }\n\n [Required]\n public int? ref_id { get; set; }\n public bool active { get; set; }\n \n public ProductType? 
prod_type { get; set; }\n\n }\n\n\"title\": \"Product\",\n \"required\": [\n \"_id\",\n \"active\",\n \"descript\",\n \"_mypartition\",\n \"ref_id\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_mypartition\": {\n \"bsonType\": \"string\"\n },\n \"active\": {\n \"bsonType\": \"bool\"\n },\n \"descript\": {\n \"bsonType\": \"string\"\n },\n \"prod_type\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"ProdTypeId\": {\n \"bsonType\": \"int\"\n },\n \"TypeDescript\": {\n \"bsonType\": \"string\"\n }\n }\n },\n \"ref_id\": {\n \"bsonType\": \"int\"\n }\n }\n", "text": "Thanks for the response.\nI did end up doing that, but never got it to work.Any idea what this error represents?Realms.Exceptions.RealmException: The following changes cannot be made in additive-only schema mode:So, I’ve separated the loading of Atlas and the Realm in separate test harnesses.The loading model which does a batch to AtlasSimilar the Realm test modelSchema worked out to be.I’m not exactly sure what “additive mode” is or what road that leads to.It does populate and I can see it in the Studio. Excepct of course the same strange occurance happens to that property?Feels close CPT", "username": "Colin_Poon_Tip" }, { "code": "prod_type\"\"", "text": "Hey sorry for the delay - as far as I can tell, the issue is that your json schema doesn’t define a title for the prod_type object. I guess this causes the server to assume this object is of type \"\" (i.e. empty string).", "username": "nirinchev" }, { "code": "", "text": "Hello @nirinchev !!\nI’ve been hacking a little and thanks to some of your insight (.NET Model Exports), maybe I’m onto a path.What I noticed is that the Models were old models? Some of the attributes were named differently (caps etc) after some changes I’d been making. So, that was alarming.I decided to copy the export and overwrite my current models to match. And what do you know…that error went away. AND I could print out the properties. HOWEVER, my Real Studio still showed the object as such. So, that still wasn’t accessible.Nevertheless, I’m tearing it down again with that new tidbit.I’m going to load the data and build my nested Models as they come out and see what happens. Maybe it’s just a matter of doing a new Realm App? Idk…i’ve been trying to do each from scratch every time, but I can’t reconcile how those old properties hung around.Right now I’m loading fresh data and will build a new Realm App service around it and see if what I’m thinking works or not Thanks for responding…it’s a tremendous help as it strings me along thinking this is “possible”. If I get to some “understanding” about objects and array’s of objects then I’d be able to make some progress.Very nuanced product I’d say. But when it printed data a sh*t my pants!! So, onward!!Cheers.\nCPT", "username": "Colin_Poon_Tip" }, { "code": "public class Product : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId _id { get; set; } = ObjectId.GenerateNewId();\n [Required]\n public string _mypartition { get; set; } = \"6310fa126afd4bc77f5517e4\";\n [Required]\n public string descript { get; set; }\n [Required]\n public int? ref_id { get; set; }\n public bool active { get; set; }\n \n public ProductType? prod_type { get; set; }\n\n }\n\n\npublic class ProductType : EmbeddedObject\n {\n public int? 
ProdTypeId { get; set; }\n\n public string TypeDescript { get; set; }\n }\npublic class Product : RealmObject\n{\n [MapTo(\"_id\")]\n [PrimaryKey]\n public ObjectId Id { get; set; }\n [MapTo(\"_mypartition\")]\n public string Mypartition { get; set; }\n [MapTo(\"active\")]\n public bool Active { get; set; }\n [MapTo(\"descript\")]\n [Required]\n public string Descript { get; set; }\n [MapTo(\"prod_type\")]\n public Product_prod_type ProdType { get; set; }\n [MapTo(\"ref_id\")]\n public int RefId { get; set; }\n}\n\npublic class Product_prod_type : EmbeddedObject\n{\n public int? ProdTypeId { get; set; }\n public string TypeDescript { get; set; }\n}\n\nvar test = _Realm_Test.All<Product>();\n\n foreach (var s in test)\n {\n Console.WriteLine($\"{s.RefId}\\t{s.Descript}\\t{s.Active}\\t{s.Mypartition}\\t{(s.ProdType.TypeDescript)}\");\n \n }\nWaiting for download\nTest Downloaded\n1 Test Product x False 6310fa126afd4bc77f5517e4 Ordering Product\n2001 Test Prod 22 True 6310fa126afd4bc77f5517e4 Ordering Product\n2002 Default Message True 6310fa126afd4bc77f5517e4 Manual Keyboard\n2003 Default Seat True 6310fa126afd4bc77f5517e4 Seating Position\n2004 /as Appetizer True 6310fa126afd4bc77f5517e4 Option (ie Hold)\n2005 /as Main True 6310fa126afd4bc77f5517e4 Option (ie Hold)\n2006 Hold and Fire True 6310fa126afd4bc77f5517e4 Delay Print Command\n2007 Clear Table True 6310fa126afd4bc77f5517e4 Bussing Command\n2009 Arizona Iced Tea True 6310fa126afd4bc77f5517e4 Ordering Product\n2010 Cup of Joe True 6310fa126afd4bc77f5517e4 Ordering Product\n", "text": "hmph. I got it!! As I said I am loading things into Atlas with myModel:I guess this is where Idk what’s happening, but I deduced via the .NET SDK Model export that Mongo DB doesn’t name it the same. so, you have to follow what Mondo is doing. I think?The export of the Product and ProductType Models as Mongo see’s it is as follows. So, they re-named the embedded Model?Once I used THEIR naming convention and generated a class Model of the same name…bingo.The results… Which is correct.I absolutely don’t know what assumptions I’m making with this hack I/we got to, but it’s curious if this is the expectation of how to do what I’m doing?Ugh…\nAnywho, it’s forward movement and now I gotta see how an array looks in Mongo’s eyes.Cheers,\nCPT", "username": "Colin_Poon_Tip" }, { "code": "var test = _Realm_Test.All<Product>().FirstOrDefault(t => t.RefId == 2001);\n\n _Realm_Test.Write(() =>\n {\n test.PriceList[1].Primary= false;\n test.PriceList[1].Charge = 15.5;\n }\n );\n", "text": "@nirinchev , I think I have some success!!\nI haven’t merged my realm app back to the originally intended console (to be a service) but I’ve loaded the Product data (11000 docs) with array’s of objects and can manipulate the array of objects directly even with now setter defined as the complier will not allow when inheriting from RealmObject.I can’t imagine this is the final driver version, which is a concern as I suspect I’d have to rewrite all my models to the intended nameing/models.It would seem “logical” to use the same model for both Atlas and Realm work, but I’ll see how that goes.Not that intuitive, I’d think, but it’s good to get some results finally. My expectation is that I’ll have to include redundant Object Models naming the objects as Mongo does.At least there’s a “process”.I was about to embark on using constructors to set the Object Array’s, but turns out that functions as expected.Can’t tell you how much I appreciate your assistance!! 
It got me moving forward when I didn’t think I could!! I’m sure I’ll be back to the forum sooner than later. The “support” dosn’t seem to be as quick…when they respond (average 2 days per message) I’ve already moved past the issue thanks to you!!Best regards,\nCPT", "username": "Colin_Poon_Tip" }, { "code": "Product_product_typetitle \"title\": \"Product\",\n \"properties\": {\n // ...\n \"prod_type\": {\n \"bsonType\": \"object\",\n \"title\": \"ProductType\", // <-- Set this to something custom\n \"properties\": {\n // ...\n }\n },\n }\nMapTo[MapTo(\"Product_prod_type\")]\npublic class ProductType : EmbeddedObject\n{\n public int? ProdTypeId { get; set; }\n public string TypeDescript { get; set; }\n}\n", "text": "Regarding the naming, it looks like cloud generated the product type class name based on the parent collection title and the property name - i.e. (Product + _ + product_type). If you want to change that, you can always add a title to your json schema:Or you can add MapTo to your C# model to get a nicer name:Happy to hear things are finally moving forward and hope the worst issues are behind us now. If you do encounter more problems, we’re here to help.", "username": "nirinchev" }, { "code": "", "text": "Good day @nirinchev . Is it safe to say this Realm stuff is a hot mess beyond anything usable?\nI’m quite aware that I could be missing something fundamental, but I’m kind of a “brute force” guy and I’ve hacked, re-worked, took different directions, re-built, did it again, took another angle…etc…etc. Separated driver application from Realm app, and it’s just doesn’t work. Is it fundamentally ONLY for tiny mobile data? Like one form or something?I always get one Realm working, and as soon as I try to access another (say Employee after Products is working) just to observe the itterated data…it’s over…It starts complaining about nullable here there and not even for it’s own object model. At this point I go from one working Realm to nothing as the required fields are now wrong???Can things in different model NOT have the same property names or something? Why is it even asking about another model? I’m using the Employee model… wth does it have to do with the Product model. Once that (attempting to use two Realms in the same program) starts to unravel, everthing is about done and needs to be rebuilt pretty much because there’s nothing that makes sense anymore and who knows what’s happening?Trying to just run the Employee Realm/Object model, which I’ve done 100 times suddenly it’ thinks the property Product.active_price is nullable on one side but not the other? Is it now?This product (Realm) doesn’t work does it?I think I started this thread trying to use the driver to load data and Realm to manipulate after the load.\nI think you were asking “why would I want to do that”? And although you said “techinically” it’s workable you didn’t understand why I would do that. Which I think I tried to explain?I don’t think it works either way. I think, at this point, you can “maybe” work with one Realm and that’s about it.\nFrankly, I haven’t even got to that point because BEFORE that happens I needed to establish working with multiple realms in one program. However, I’m going in circles and getting to the same point every time.I always get that done and don’t know why I can write to Lists that only have getters? That might be a sign of something bad? 
I’m not sure why array of objects seem acceible in the Driver and or Realm as I think I’ve seen doc talking about the opposite.As soon as I go down the path of the second model, doesn’t matter which all hell breaks loose and it seems like there’s no recovery.I just spent 2 days separating the driver loads from the REALM program because I was thinking maybe the Realm and drivers can’t be in the same application space. So, I’ll load things and just run a separate dedicated program for the Realm componont. That in and of itself is a hot mes and it turns out it’s no better anyway.I’m distraught by how much wasted time just to get something small to work is required here.\nThe docs says some things but in practice it does something else, but what does that mean? Such as, I can’t use setters on lists, but I can?Anywho, that’s a lot and considering you’re the only one who was listening and an excellent resource by the way, I just had to quit after the last failure. It’s endless.So, the only real question are.Anyway, I always appreciate it and I suspect I’ll throw out a support ticket to see what they say.Mongo wins, I’m throwing in the towel for now.uncle \nCPT", "username": "Colin_Poon_Tip" }, { "code": "", "text": "Everyone decides for themselves whether a product is usable or not and I won’t presume to make that decision for you and your company. Generally speaking, we do have a number of customers using Realm and it suits their needs.We’re aware that the schema validations are somewhat annoying, especially early on in development and we do have some projects that would improve that, but for now it is a requirement that your client models match exactly the JSON Schema defined on the server.One way to do that as I mentioned is to use the generated models from the cloud UI - if you’re using the exported models, you should not be running into these schema mismatch errors. If you do run into them, we would need to see your C# model and your Json schema to try and understand what the cause for the discrepancy is.Regarding your questions:Again, I understand and sympathize with your frustrations. It appears like you’re trying to get a fairly complex schema syncing by manually specifying both the client and the server schema - this is always going to be difficult and the general recommendation we have is to choose only one side and go from there. If you want your C# models to be the source of truth, then you can enable dev mode on the server and have the schema get generated from the client. Alternatively, you can specify your json schema on the server and use the model generation tool to generate the c# classes.", "username": "nirinchev" }, { "code": "", "text": "Godd day @nirinchev!!\nWell, I kept plugging away and DID get a little progress. I broke it all up, simplified (no lists) and got both collections sync’d as expected.I feel like a couple things happen and confusion ensuse due to my lack of REALM processing understanding. I still don’t know what it’s doing. However, I hacked a few things to get something working. Still holding before I move forward.For example, I started defining the .realm db file instead of letting it default. Which generated two local files/directories for each Realm (Employee, Products).What I still don’t understand is how “partitioning” is processing?\nI happen to have designated a “_partition” field in my collections and in this case they represent a location objectId. So, truthfully it’s the same in Employee and Products for my testing. 
What I’m confused about is that if all I load initially is Products, I noticed the system loads both collections. Even though I didn’t even ask for Employee’s? So, that’s confusing and leads me to believe there’s something global about these partition keys? Which is also weird as I beleive I can interact with both Realms? Even though I didn’t sign into it (Employee). I haven’t confirmed all those details on if it sync’s etc or what? But for me not very intuitive. My initial thought was one App(Realm) was it’s own app, but the cross over isn’t clear?I created a Server APP ID for each Realm(App service) but again, it doesn’t really need it. Once I login I with one, I think I can do stuff since it’d loaded everything locally? Right now, I’m trying to understand the “correct” method of:When I’m editing the Product App I find it weird that I can see the Employee object Models in the Realm SDK along with the embedded Price object model. However, editing the Employee Object Models it shows Product, Employee but not the ebmeded Price model? Not sure if that’s by design or being worked on. Idk why the cross over but I still haven’t got the basic Realm operations.Well, that’s already too much…again I’m down but still not out. I’ll keep hacking along in hopes of unwinding how this is functioning. I certainly don’t seem to find a whole lot in terms of doc relating to multiple reaml access and the “standard” practice. Do I log out, login each access? I heard I have to nill it, but how expensive is signing in every time? Maybe it’s not because off-line is local anyway.If I get there, I’m golden. I have ONE List of Obects working in my Products (a price list). I going for a location list in the Employee Object model today. Inch by inch until it explodes Best regards,\nCPT", "username": "Colin_Poon_Tip" }, { "code": "_partition\"abc\"_partitionProductEmployeerealm.Subscriptions.Update(() =>\n{\n var californiaEmployees = realm.All<Employee>().Where(e => e.State == \"CA\");\n realm.Subscriptions.Add(californiaEmployees);\n \n var allProducts = realm.All<Product>();\n realm.Subscriptions.Add(allProducts);\n});\n", "text": "Hey, so I’m a little confused and I think it might be a good idea to take a few steps back. Let’s start with the idea behind Realm and Device Sync. I imagine you have a pretty large MongoDB database with multiple products and employees. If you were to sync that to every single device, it’d be a ton of data that most users likely don’t need. So the idea of partitioning your data is to define subsets of that dataset to store locally on the end-user device and synchronize with the server. Different users have different partitioning schemes, but the rule of thumb is: all your documents with a particular value for the _partition field will be grouped together in one partition. So for example, if you have \"abc\" for _partition, all Product and Employee documents that have that value will get synchronized to the same local Realm file.While this works pretty well, it’s not particularly flexible - e.g. it doesn’t allow you to get all products and only a subset of the employees. That’s why we have a different synchronization mode, called “Flexible Sync”. With flexible sync, you define the queries that you want to subscribe to and the server will make sure to send you the documents that match. For example, you can do something like:That way, you can explicitly request the data you need from the server. 
It’s up to you to decide which sync mode you prefer to use, but one thing to keep in mind is that with partition sync, every document may exist in exactly one partition. On the other hand, with flexible sync, you can have the same document be sent to multiple clients depending on their subscriptions.Regarding your questions:", "username": "nirinchev" }, { "code": "var config1 = new PartitionSyncConfiguration(_locationId, user, \"C:\\\\Users\\\\...\\\\Documents\\\\iMonkey\\\\iMonkey.realm\");\n\n\n var _Realm = Realm.GetInstance(config1);\n", "text": "Thanks for responding!! I always apreciate a knowlegable resource. Well, ironically I think I’m way further along that you might think. So, I actually understand the idea of Partition sync and knew I’d get to Felxible after I proved out Partition sync. Which I’m wanting understand clearly why Partion sync is even a problem. The _partition fiedl I use in both the Product and Employee collection, in THIS case should be fine. Considering I am using one restaurant the Employee collection can be 1000 restaurants but partitioned by “location”. I happen to have ONE location configured and it’s ObjectId is 6310fa126afd4bc77f5517e4 which I use in the _partition field for both Products and Employee.So, at least I understand that. I’d like to know about your response on point 3 when you said “… 1. With partition sync, you can only have a single global partition key. You can’t define partition keys per collection…”.What’s the net affect of that? So, I can have several collections (Employee, Sales, Product, Shedules) wihch I beleive would be a separate App service per collection. That’s how I understand it. The “basic” understanding would be that the “location id” could be used as a partition So, one location get’s only it’s product, employee’s and sales etc.Shouldn’t that work?The odd thing, to me, was about how that partition was defining what was being downloaded into the local Realm DB. Meaning I just hatd to sign into anything, and it would load ALL collections with that partion. So, what’s the point of security if I can do that? I figured I’d need a App Key for each realm/collection and sign into each. Which would give me granular control if I wanted it. But it seems I just need any login and a partition and I can see verything in Realm Studio.Yes, flexible sync is where it’s at. I’m just trying to get over this basic hurdle before I get more complex. Which is the root of my stress.I thought I was simple.that’s it.So, what I feel I’m missing at this point. Is the best way to interact with multiple realm’s (product, sales, employee, schedules etc) that are AT LEAST partitioned by location ID.Like I said I did server API key’s for each Realm (Product, Employee).\nSo, let’s say now I’m in a loop or using timers of some sort. (basically monitoring a local SQL db for changes in said Sales, Emploee’s, Product, Schedules etc…When changes occur they update the specific Realm.Do I need separate local DB’s or can a single file db work fine? Or does the partion limit that?looking forward to the next step!!\nCPT", "username": "Colin_Poon_Tip" }, { "code": "", "text": "One thing I’m not sure I’m getting is why do you need multiple app services per collection (assuming by App Service, you mean server-side App). 
Is there a reason why you can’t use a single app that takes care of all collections?", "username": "nirinchev" }, { "code": "", "text": "@Colin_Poon_Tip I didn’t read through this entire thread but I’m a Xamarin dev who’s had some success with MongoDB + Realm Sync so here is my advice.First of all just to make sure we are on the same page with a few things: a Realm can contain multiple collections and you sync them down with a partition key. You don’t need timers to check if things change - the whole point of Realm Sync is that the local db and backend are kept in sync for you. Just use one App Service in Atlas to hold your collections.I would create a new app, turn DEV mode on, create a simple RealmObject, start your app, and let it automagically generate the schema for you on the backend. Don’t do stuff like string? … you don’t need nullable strings. If the string is optional just have it be string… if it’s required put [Required] over it. Keep your models basic until you know what you’re doing.Here are some models I’m using in a prototype that you can follow if you need:[.NET MongoDB + Realm Sync models demonstrating relationships and embedded objects · GitHub](https://.NET Realm Models). You can see that the main model is an Issue with multiple embedded objects and a to-one relationship to a Contractor that the issue is assigned to. I also have some [BsonIgnore] attributes because you can’t put enums in a RealmObject… so we get/set to an int through an enum property. I also have Folder and Document models. A folder can contain many folders and documents and has one parent folder. Hopefully you can use these as a reference for mapping some relationships in your app.Using the models from this gist, my prototype syncs all the collections for a single project by passing a partition key of “projectId=89f97030-a9e4-11ec-b909-0242ac120002”. This way I get all the Folders, Documents, Issues, Contractors, and I have relationships setup so I can navigate the object graph through Realm.Hope this helps", "username": "Derek_Winnicki" }, { "code": "exports = async function(partitionValue) {\n try {\n const callingUser = context.user;\n\n // The user custom data contains a canReadPartitions array that is managed\n // by a system function.\n const {canReadPartitions} = callingUser.custom_data;\n \n // If the user's canReadPartitions array contains the partition, they may read the partition\n return canReadPartitions && canReadPartitions.includes(partitionValue);\n\n } catch (error) {\n console.error(error);\n return false;\n }\n};\n\nexports = async function createNewUserDocument({user}) {\n \n const cluster = context.services.get(\"mongodb-atlas\");\n const customUserData = cluster.db(\"myApp\").collection(\"CustomUserData\");\n \n return customUserData.insertOne({\n _id: user.id,\n _partition: `user=${user.id}`,\n canReadPartitions: [`user=${user.id}`],\n canWritePartitions: [],\n });\n \n};\n\n", "text": "To answer the rest of your question: you can control on the backend who has permission to sync what data. So even though anybody could attempt to sync a location… if they aren’t an employee at that location for example, they won’t have permission to sync down. 
Or maybe you give them permission to read certain data from that location but not write it.Check the docs, but this is what I have setup in a different app of mine in Apps → Build → Functions:\n\nScreenshot 2022-11-11 at 9.35.36 AM2526×606 45.6 KB\nAnd over in my Triggers:\n\nScreenshot 2022-11-11 at 9.39.58 AM3034×514 52.8 KB\nSo when a new user is created I give them access to partitions with their user ID, but in your example you could give employees access to location partitions based off some other logic. You could manage it with other functions etc if say a Manager is giving Employees access to Locations through a web portal or your app.Let me know if this helps!", "username": "Derek_Winnicki" }, { "code": "var config = new AppConfiguration(\"productrt-pnyci\");\n var apiKey = \"S4JZU00zKPbLr69upuTYFXh6ZhKooh56h9owBrPOi3G0Kn0b3zxqW2hVzYbNcpdw\";\n _RealmApp = App.Create(config);\nvar config = new AppConfiguration(\"employeert-ueqyw\");\n var apiKey = \"PulSpxewf6XjpeUKFCLAVfBu3SipfVRDufWW2gKnsnF9pkNUB7xTGA2jO1tsp3lT\";\n var _RealmApp = App.Create(config);\n", "text": "I believe you. It’s one of those fundamental Q&A I’ve been trying to answer. I think it’s about the language.\nWhen I think of a “realm”. I’m thinking each app service (realm) I generate around a collection in MongoDB.I saw them as separate App services as they each have an appId. Meaning, I assume I have to log into each one as such in my client side application (console app I’m testing with):for my product AppService(realm)Is it not the case if I want to connect to my other AppService(realm) Employees I’d mirror the above with the Employee AppId to sign in?I generated apiKey’s for each Appservice as I expected that to be a method of controling access later.Maybe I’m mixing the tearm “realm” up or maybe I’m answering my question?Are you saying in the MongoDB AppServices configuration I can create a SINGLE AppService and service multiple collection in it? So, if I generated an new AppService in MongoDB called Restaurant I could load all the collections I need? In otherwords, the schema would hold the collections Sales, Employees, Product, Schedules etc? Holy cow…that would change everything in my thinking!! Which would make sense why I couldn’t understand why my partition key spanned both when I didn’t ask for one realm(collection)?hmm…o-boy.", "username": "Colin_Poon_Tip" }, { "code": "", "text": "@Derek_Winnicki Helps TREMENDOUSLY!! \nI’m gonna rebuild with what I feel I had wrong the whole time. Which, i believe, was what an AppService(Realm) was. I started my tutorials thinking One App service per collection and I think that put me in a world of hurt. Soo…now I have to put that lesson to test. OR, I’ll fail miserably with another wrong assumption Thanks for responding, it’s super helpful!!\nCPT", "username": "Colin_Poon_Tip" } ]
C# REALM sync anomalies abound
2022-10-24T20:10:49.025Z
C# REALM sync anomalies abound
4,377
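Editor's note: since the thread ends on the partition-sync versus flexible-sync question, here is a compact sketch of the flexible-sync shape described above, written against the JS/TS SDK rather than C#. The app ID, schema, and field names are placeholders; one App Services app serves all collections, and each subscription defines the slice of a collection the device receives.

```typescript
import Realm from "realm";

const app = new Realm.App({ id: "<your-app-id>" }); // placeholder app ID

async function openFlexible(user: Realm.User): Promise<Realm> {
  const realm = await Realm.open({
    schema: [/* Product, Employee, ... */],
    sync: { user, flexible: true },
  });

  // One realm, one backend app, several collections: the same document can be
  // sent to different clients depending on their subscriptions.
  await realm.subscriptions.update((mutableSubs) => {
    mutableSubs.add(realm.objects("Employee").filtered('state == "CA"'), {
      name: "caEmployees",
    });
    mutableSubs.add(realm.objects("Product"), { name: "allProducts" });
  });

  return realm;
}
```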
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 6.0.3-rc2 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.2. The next stable release 6.0.3 will be a recommended upgrade for all 6.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.3-rc2 is released
2022-11-12T01:44:41.430Z
MongoDB 6.0.3-rc2 is released
1,856
https://www.mongodb.com/…d_2_1024x248.png
[ "aggregation", "atlas-device-sync" ]
[ { "code": "encountered non-recoverable resume token error. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning: (ChangeStreamHistoryLost) PlanExecutor error during aggregation :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.", "text": "This morning I noticed an app we used (Task-Tracker) is not syncing. I logged into console and navigated to the App Services section->Logs and and am seeing a messageSynchronization between Atlas and Device Sync has been stopped, due to error:encountered non-recoverable resume token error. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning: (ChangeStreamHistoryLost) PlanExecutor error during aggregation :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.With a button to Restart Atlas Sync. Pressing it comes back to the same vague error. I am working with Chat Support now and will circle back with any additional information.Clear server generated errors and suggestions for corrective action would be really helpful as I have no idea what to do what that.Suggestions?Jay\nWut?2246×544 76 KB\n", "username": "Jay" }, { "code": "", "text": "The resolution, which I don’t quite understand was the following:To follow up on this, the App services team informed that your cluster have fallen off the oplog, so you will have to terminate/reenable the Sync.Note that as mentioned before, this errors is common on free tiers since are not suggested for using the Sync feature.I assumed the button thats says Restart Atlas Sync (shown in the above post) indicated that it was already terminated. Apparently not, it has to be manually terminated and restarted. Either the button is broken or it doesn’t do what the title says it does.The really interesting part is the response that this is a “common error” on a free tier and is “not recommended for syncing.” That doesn’t make a whole lot of sense to me if you just trying out MongoDB it would crash like that - but ok. Hmmm. My .02Jay", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Synchronization between Atlas and Device Sync has been stopped, due to error
2022-11-11T14:38:00.529Z
Synchronization between Atlas and Device Sync has been stopped, due to error
2,115
null
[]
[ { "code": "", "text": "Hi. I want to implement syncing feature that synchronizes data between Realm in local device and remote firestore. While when having records added to realm, I can also push it to firestore, but I dont now how to do so in opposite direction. I want to update local realm when new records added to firestore. How can I implement this. Thanks you all in advance.", "username": "Tu_Nguy_n_Xuan" }, { "code": "func observeUserCollectionChanges() {\n let usersCollectionRef = self.db.collection(\"users\") //self.db points to my Firestore\n usersCollectionRef.addSnapshotListener { querySnapshot, error in\n guard let snapshot = querySnapshot else {\n print(\"Error fetching snapshots: \\(error!)\")\n return\n }\n\n //process fine-grained changes\n snapshot.documentChanges.forEach { diff in\n let userName = diff.document.get(\"name\") as? String ?? \"No Name\"\n if (diff.type == .added) {\n print(\"Added user: \\(userName)\")\n //add the user to Realm\n }\n if (diff.type == .modified) {\n print(\"Modified user: \\(userName)\")\n //modify and existing user\n }\n if (diff.type == .removed) {\n print(\"Removed user: \\(userName)\")\n //delete the user. So there.\n }\n }\n }\n}\n", "text": "That should be pretty straight forward:In Firestore, you would add an observer to a collection (for example) and when something changes in that collection your app is notified within the observer closure.You can process the changes in a broad way - like an entire document or you can be more granular with added, change and removed events.Once you have the data that was change within that closure, update your local realm object. Here’s a Firestore Swift function that will receive changes to a Users collection, so you can then update your local Realm objects. This code prints the users name and I commented where the realm code would go.My question is - and I have been using Firebase since the beginning - why not just use Realm Sync’ing?You’ll have a lot of the same capabilities; Firestore is an Online First Model wheras Realm is Offline First, if you want to sync the data and store it locally anyway, you may be able to just use Realm and simplify the codebase and app.", "username": "Jay" }, { "code": "", "text": "Thank you, @Jay . Very straight and on point reply. About the reason I use both Realm and Firestore: My system has apps and devices that connect to each others and firestore is used as central database. I also have to use Realm as my app intended as offline first.", "username": "Tu_Nguy_n_Xuan" }, { "code": "", "text": "I would encourage checking out Realm Sync capabilities as they may provide similar services to Firebase Firestore (for this use case); multiple users apps and devices with a centralized database across apps.", "username": "Jay" }, { "code": "", "text": "In my project, we do not use Realm Sync because it would require an additional BAA for sensitive information.", "username": "Conner_Mccraw" }, { "code": "", "text": "Hi @Conner_Mccraw welcome to the forums.The question was about how to synchronize Realm to Firestore - which I answered but also suggested Realm Sync - which, by the way has come a long way since this post in Sept 21.How does your comment apply? Do you have a followup question or need some coding help? If so, can you clarify?Let see if you’re stuck on something - perhaps a separate post would be in order if so. Let us know!", "username": "Jay" } ]
Syncing between Firestore and Realm
2021-09-11T12:41:48.031Z
Syncing between Firestore and Realm
5,060
null
[]
[ { "code": "_id: {\n $binary: {\n base64: base64var,\n subType: \"04\"\n }\n }\n", "text": "I need to create a log entry in another collection on the negative outcome of a trigger function. The new document needs to have a root _id field as a UUID not ObjectId, I’ve tried using the global BSON module, but it does not seem to have the UUID method.Also have tried setting it manually with the base64 data, this works for other fields in the document, but not the _id field. for eg:Any advice on the above? Thanks", "username": "Neil_Riedel" }, { "code": "", "text": "Im running into the same issue. I cannot create UUID fields in Functions", "username": "Tyler_Collins" } ]
How to create root level _id as UUID inside a trigger function
2022-11-09T06:38:53.262Z
How to create root level _id as UUID inside a trigger function
1,244
null
[ "compass", "database-tools" ]
[ { "code": "", "text": "Hi Guys, has anyone had the same scenario that I am having?\nI have a database with two collections and when I try to export it I receive a message with exported 0 records.\nI am rarely new- any suggestions will be much appreciated-\nI have tried as examples: mongoexport --collection=events --db=reporting --out=events.json or mongoexport -d name_of_database -c name_of_collection -o name_of_file.json.", "username": "Hermann_Rasch" }, { "code": "", "text": "It should work\nWhat is your mongoexport and mongodb version?\nShow us db with collections and full mongoexport log", "username": "Ramachandra_Tummala" }, { "code": "", "text": "\n2022-11-10 (1)1920×1080 176 KB\n\nHope that suffices.", "username": "Hermann_Rasch" }, { "code": "", "text": "Regarding logs would you mind being a bit specific? Hehe I believe I understand partially", "username": "Hermann_Rasch" }, { "code": "", "text": "Where is the reporting db?\nI was asking for mongoexport log from your terminal where it showed 0 rows exported", "username": "Ramachandra_Tummala" }, { "code": "", "text": "\n2022-11-071920×1080 132 KB\n", "username": "Hermann_Rasch" }, { "code": "", "text": "So under movies db do you have movies collection?\nShow output of below commands from db\nuse movies\nshow collections", "username": "Ramachandra_Tummala" }, { "code": "", "text": "No, I believe that is the issue.\nI created a new one it worked.\nI mean a database without a collection is not really functional?\nThanks anyhow for helping out", "username": "Hermann_Rasch" } ]
I am not able to export my database (I do not have Compass installed due to exercise purposes)
2022-11-09T20:06:07.868Z
I am not able to export my database (I do not have Compass installed due to exercise purposes)
1,882
null
[ "aggregation", "performance" ]
[ { "code": "", "text": "We have a database with many thousands of documents and we are using an aggregation pipeline with a facet stage.\nThe facet stage throws an error: “document constructed by $facet is 61104 bytes, which exceeds the limit of 57600 bytes”We know we can set ‘allowDiskUse: true’, which we’ve already done.\nWe know we can set the ‘internalQueryFacetMaxOutputDocSizeBytes’ higher to accommodate this, which we’ve already done a couple of times.What we would like to know:Is there is a way to modify a facet stage or stream the data in such a way that as our data footprint grows, it wont continually keep consuming larger amounts of resources?", "username": "Dan_Alverth" }, { "code": "$facetCOLLSCAN$facet$facet$facet$facet", "text": "Hi @Dan_Alverth - Welcome to the community Is there is a way to modify a facet stage or stream the data in such a way that as our data footprint grows, it wont continually keep consuming larger amounts of resources?Currently, as you may be aware of, the $facet stage does not use any indexes and will perform a COLLSCAN. To perhaps provide any suggestions, would you be able to provide the following details:Please also note the following from SERVER-40317:Note that the $facet’s output document is allowed to be up to 100MB large if it is an intermediate result. However, all documents produced by an aggregation pipeline’s result set must be 16MB or less due to the BSON size limit. So the 100MB limit applies to the output document produced by $facet, but the pipeline will still fail unless the size of this document is subsequently reduced to 16MB or less.Interestingly the error message you have provided does contain values which appear relatively small (57600 bytes for example). In terms of the error message, in the past, $facet may consume an unlimited amount of memory. 
This was fixed in SERVER-40317, so it is possible that your pipeline is consuming an excess amount of resources to execute.Regards,\nJason", "username": "Jason_Tran" }, { "code": "/**\n * outputFieldN: The first output field.\n * stageN: The first aggregation stage.\n */\n{\n \"measures\": [\n {\n \"$unwind\": {\n \"path\": \"$timeSeriesCustomFieldValues.27405\",\n \"preserveNullAndEmptyArrays\": true\n }\n },\n \n {\n \"$unwind\": {\n \"path\": \"$timeSeriesCustomFieldValues.27407\",\n \"preserveNullAndEmptyArrays\": true\n }\n },\n \n {\n \"$group\": {\n \"_id\": {\n \"display-by_id\": {\n \"$filter\": {\n \"input\": \"$timeSeriesCustomFieldValues\",\n \"cond\": { \"$eq\": [ \"$$this.definitionId\", \"27405\" ] }\n }\n },\n \"display-by_display\": {\n \"$filter\": {\n \"input\": \"$timeSeriesCustomFieldValues\",\n \"cond\": { \"$eq\": [ \"$$this.definitionId\", \"27405\" ] }\n }\n },\n \"display-by_sort\": {\n \"$filter\": {\n \"input\": \"$timeSeriesCustomFieldValues\",\n \"cond\": { \"$eq\": [ \"$$this.definitionId\", \"27405\" ] }\n }\n },\n \"group-by_id\": {\n \"$filter\": {\n \"input\": \"$timeSeriesCustomFieldValues\",\n \"cond\": { \"$eq\": [ \"$$this.definitionId\", \"27407\" ] }\n }\n },\n \"group-by_display\": {\n \"$filter\": {\n \"input\": \"$timeSeriesCustomFieldValues\",\n \"cond\": { \"$eq\": [ \"$$this.definitionId\", \"27407\" ] }\n }\n },\n \"group-by_sort\": {\n \"$filter\": {\n \"input\": \"$timeSeriesCustomFieldValues\",\n \"cond\": { \"$eq\": [ \"$$this.definitionId\", \"27407\" ] }\n }\n }\n },\n \"result\": {\n \"$sum\": 1\n }\n }\n },\n \n {\n \"$sort\": {\n \"_id.display-by_sort.value.value\": 1\n }\n },\n \n {\n \"$sort\": {\n \"_id.group-by_sort.value.value\": 1\n }\n },\n \n {\n \"$group\": {\n \"_id\": null,\n \"display-by_labels\": {\n \"$push\": {\n \"$ifNull\": [\n \"$_id.display-by_display.value\", null\n ]\n }\n },\n \"display-by_ids\": {\n \"$push\": {\n \"$ifNull\": [\n \"$_id.display-by_id.value\", null\n ]\n }\n },\n \"group-by_labels\": {\n \"$push\": {\n \"$ifNull\": [\n \"$_id.group-by_display.value\", null\n ]\n }\n },\n \"group-by_ids\": {\n \"$push\": {\n \"$ifNull\": [\n \"$_id.group-by_id.value\", null\n ]\n }\n },\n \"result_push\": {\n \"$push\": \"$result\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"x-group\": \"$display-by_labels\",\n \"x-group-ids\": \"$display-by_ids\",\n \"x-name\": \"$group-by_labels\",\n \"x-name-ids\": \"$group-by_ids\",\n \"data\": [\n {\n \"result\": \"$result_push\",\n \"option\": \"count\"\n }\n ]\n }\n }\n ]\n}\n{\n \"stage\": \"COLLSCAN\",\n \"nReturned\": 236,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 238,\n \"advanced\": 236,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 236\n}\n{ \"_id\": \"3892\", \"type\": \"opportunities\", \"weightedAllocatedValue\": 0, \"probability\": 0.01, \"createdTimestamp\": { \"$date\": { \"$numberLong\": \"1278086242000\" } }, \"classificationType\": \"New Investment\", \"weightedValue\": 15000, \"isErisa\": false, \"expectedInvestmentDate\": { \"$date\": { \"$numberLong\": \"1291183200000\" } }, \"allocatedAmount\": 0, \"name\": \"Allen Investments Opp\", \"requestedAmount\": 1500000, \"currencyCode\": \"USD\", \"effectiveDate\": { \"$date\": { \"$numberLong\": \"1278086242000\" } }, \"timeSeriesCustomFieldValues\": [ { \"definitionId\": \"27405\", \"name\": \"Element\", \"fieldType\": \"select\", \"values\": [ { \"id\": \"92171\", \"effectiveDate\": { 
\"$date\": { \"$numberLong\": \"1664600400000\" } }, \"value\": [ { \"lovSet\": 13124, \"code\": \"1\", \"value\": \"Earth\" } ] }, { \"id\": \"92173\", \"effectiveDate\": { \"$date\": { \"$numberLong\": \"1667278800000\" } }, \"value\": [ { \"lovSet\": 13124, \"code\": \"2\", \"value\": \"Wind\" } ] }, { \"id\": \"92175\", \"effectiveDate\": { \"$date\": { \"$numberLong\": \"1669874400000\" } }, \"value\": [ { \"lovSet\": 13124, \"code\": \"3\", \"value\": \"Water\" } ] } ] } ], \"investor\": { \"_id\": \"788709\", \"type\": \"contacts\", \"birthday\": { \"$date\": { \"$numberLong\": \"-491857200000\" } }, \"lastName\": \"Allen\", \"isEmployee\": false, \"website\": \"http://www.allenfunds.com\", \"gender\": \"UNSPECIFIED\", \"otherId\": \"1010\", \"jobTitle\": \"CIO\", \"createdTimestamp\": { \"$date\": { \"$numberLong\": \"1272386954000\" } }, \"firstName\": \"George\", \"name\": \"Allen, George\", \"contactSource\": { \"_id\": \"3668\", \"type\": \"contact-sources\", \"name\": \"3P-MKT 3\", \"description\": \"3P-MKT 3\" }, \"contactLocations\": [ { \"_id\": \"25203\", \"type\": \"contact-locations\", \"country\": \"United States\", \"city\": \"New York\", \"postalCode\": \"10011\", \"locationTitle\": \"Business\", \"isPrimaryLocation\": true, \"state\": \"NY\" } ], \"clientDefinedEntityType\": { \"_id\": \"52\", \"type\": \"entity-types\", \"pluralName\": \"People\", \"name\": \"Person\", \"resourceType\": \"people\" }, \"permissionBucket\": { \"_id\": \"2451\", \"type\": \"permission-buckets\", \"name\": \"Public\" } }, \"stage\": { \"_id\": \"974\", \"type\": \"opportunity-stages\", \"sortOrder\": { \"$numberLong\": \"3\" }, \"name\": \"Committed/Processing\", \"closed\": false }, \"createdBy\": { \"_id\": \"51443\", \"type\": \"system-users\", \"lastName\": \"SuperAdmin\", \"firstName\": \"Training\", \"fullName\": \"Training SuperAdmin\", \"disabled\": false, \"userName\": \"SuperAdmin\" }, \"primaryContact\": { \"_id\": \"788709\", \"type\": \"contacts\", \"birthday\": { \"$date\": { \"$numberLong\": \"-491857200000\" } }, \"lastName\": \"Allen\", \"isEmployee\": false, \"website\": \"http://www.allenfunds.com\", \"gender\": \"UNSPECIFIED\", \"otherId\": \"1010\", \"jobTitle\": \"CIO\", \"createdTimestamp\": { \"$date\": { \"$numberLong\": \"1272386954000\" } }, \"firstName\": \"George\", \"name\": \"Allen, George\", \"contactSource\": { \"_id\": \"3668\", \"type\": \"contact-sources\", \"name\": \"3P-MKT 3\", \"description\": \"3P-MKT 3\" }, \"contactLocations\": [ { \"_id\": \"25203\", \"type\": \"contact-locations\", \"country\": \"United States\", \"city\": \"New York\", \"postalCode\": \"10011\", \"locationTitle\": \"Business\", \"isPrimaryLocation\": true, \"state\": \"NY\" } ], \"clientDefinedEntityType\": { \"_id\": \"52\", \"type\": \"entity-types\", \"pluralName\": \"People\", \"name\": \"Person\", \"resourceType\": \"people\" }, \"permissionBucket\": { \"_id\": \"2451\", \"type\": \"permission-buckets\", \"name\": \"Public\" } }, \"clientDefinedEntityType\": { \"_id\": \"56\", \"type\": \"entity-types\", \"pluralName\": \"Opportunities\", \"name\": \"Opportunity\", \"resourceType\": \"opportunities\" }, \"permissionBucket\": { \"_id\": \"2451\", \"type\": \"permission-buckets\", \"name\": \"Public\" }, \"investorType\": { \"_id\": \"12106\", \"type\": \"investor-types\", \"classificationType\": \"Endowment / Foundation\", \"investorType\": \"endowment\" }, \"_index\": [ { \"k\": \"_id\", \"v\": \"3892\" }, { \"k\": \"type\", \"v\": \"opportunities\" }, { \"k\": 
\"investor._id\", \"v\": \"788709\" }, { \"k\": \"investor.type\", \"v\": \"contacts\" }, { \"k\": \"stage._id\", \"v\": \"974\" }, { \"k\": \"stage.type\", \"v\": \"opportunity-stages\" }, { \"k\": \"createdBy._id\", \"v\": \"51443\" }, { \"k\": \"createdBy.type\", \"v\": \"system-users\" }, { \"k\": \"primaryContact._id\", \"v\": \"788709\" }, { \"k\": \"primaryContact.type\", \"v\": \"contacts\" }, { \"k\": \"clientDefinedEntityType._id\", \"v\": \"56\" }, { \"k\": \"clientDefinedEntityType.type\", \"v\": \"entity-types\" }, { \"k\": \"permissionBucket._id\", \"v\": \"2451\" }, { \"k\": \"permissionBucket.type\", \"v\": \"permission-buckets\" }, { \"k\": \"investorType._id\", \"v\": \"12106\" }, { \"k\": \"investorType.type\", \"v\": \"investor-types\" }, { \"k\": \"effectiveDate\", \"v\": { \"$date\": { \"$numberLong\": \"1278086242000\" } } }, { \"k\": \"probability\", \"v\": 0.01 }, { \"k\": \"currencyCode\", \"v\": \"USD\" }, { \"k\": \"createdTimestamp\", \"v\": { \"$date\": { \"$numberLong\": \"1278086242000\" } } }, { \"k\": \"expectedInvestmentDate\", \"v\": { \"$date\": { \"$numberLong\": \"1291183200000\" } } }, { \"k\": \"requestedAmount\", \"v\": 1500000 }, { \"k\": \"allocatedAmount\", \"v\": 0 }, { \"k\": \"name\", \"v\": \"Allen Investments Opp\" }, { \"k\": \"isErisa\", \"v\": false } ]}\n", "text": "db.collection.explain(“executionStats”)Hello Jason,Here is the facet stage we are trying to process:As for the explain plan, it says:Here is an example document:The Mongo version is 4.4.4 Community Edition.As for the low document size, we actually set the facet document size low so we could test locally. The original problem was happening in production, on a much larger dataset, several thousand documents.", "username": "Dan_Alverth" }, { "code": "$facet", "text": "Thanks for providing those details Dan.The pipeline is relatively complex (multiple unwinds, groups & sorts) and in addition to the fact that $facet cannot make use of indexes, the intermediate values will need to be stored in memory which could be the main issue. As you have mentioned earlier, the data footprint will grow and with that, the pipeline’s memory needs would as well. If this is a frequent operation, perhaps you could consider using materialized views.The typical solution to a query’s performance & memory issue is to utilize proper indexing. Perhaps the page Create Indexes to Support Your Queries would be useful in this regard?The Mongo version is 4.4.4 Community Edition.As noted in the Release Notes for MongoDB Version 4.4, MongoDB version 4.4.4 is not recommended for production use due to critical issue WT-7995, fixed in later versions. Use the latest available patch release version.Regards,\nJasone", "username": "Jason_Tran" }, { "code": "", "text": "Thank you very much Jason!\nI will look into these suggestions!", "username": "Dan_Alverth" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Aggregation Pipeline Facet Memory Issue
2022-11-02T14:44:49.048Z
Aggregation Pipeline Facet Memory Issue
3,256
null
[]
[ { "code": "", "text": " all!Some of our MongoDB Community Team will be at AWS re:Invent later this month. We’d love to connect with the MongoDB Community there.Please post here or DM me if you’ll be attending!", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "This topic was automatically closed after 60 days. New replies are no longer allowed.", "username": "system" } ]
Community @ AWS re:Invent
2022-11-11T15:07:36.325Z
Community @ AWS re:Invent
1,684
null
[ "data-modeling", "sharding" ]
[ { "code": "", "text": "Is there any model architecture for the MongoDB Shard cluster that needs to process millions of documents per hour? Also, How can I define the number of Config Servers, Mongos (Routers), and Shards (Primary and Replicas) for my MongoDB Shard cluster?", "username": "Luis_Alexandre_Rodrigues" }, { "code": "", "text": "ow can I define the number of Config Servers, Mongos (Routers), and Shards (Primary and Replicas) for my MongoDB Shard cluster?The minimum is", "username": "steevej" } ]
What's the suggested architecture to process millions of documents per hour?
2022-11-11T14:43:48.592Z
What’s the suggested architecture to process millions of documents per hour?
1,332
null
[ "java" ]
[ { "code": "E11000 duplicate key error collection: . . . . . . . . dup key: { id: null } null", "text": "I’m trying to insert multiple data using a for-loop (there will be individual processing after each insertion). The _id for each should be auto-generated so I’m providing null in that field.I’m getting a E11000 duplicate key error collection: . . . . . . . . dup key: { id: null } Nothing is inserted.\nI’m assuming that the id will be generated before the insertion.\nWhy is null being flagged as duplicate?Edit: The same thing happens with insertMany()", "username": "dmdum" }, { "code": "", "text": "If the_id for each should be auto-generatedyou should not supply any values. Null is a value so you should notproviding null in that fieldThe error message does not match part of you post. You writeThe _id for eachwith an leading underscore but the error complains aboutdup key: { id: null }without the leading underscore. Have you redacted the error message? Is it _id or id? The only auto-generated field in native mongo is the top level _id. If you have an error with id rather than _id then you must have a unique index on that field. If you do, then you cannot have 2 documents with the same value and null is a value.To conclude", "username": "steevej" }, { "code": "", "text": "Yeah, I realized that after looking at the indexes. There was an unwanted index probably created from the earlier stages of the code. I should’ve created a reply instead of an edit. Let me mark your reply as the solution though. Thanks for the detailed explanation!", "username": "dmdum" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
To be generated id is marked as duplicate
2022-11-11T11:17:17.921Z
To be generated id is marked as duplicate
1,668
null
[ "data-modeling", "serverless", "android", "flexible-sync" ]
[ { "code": "", "text": "Good morning,\nwe’re developing a new application and we are figuring out what would be a good architecture to choice.\nIn the past we used realm as embedded db for some mobile applications.\nthe idea of realm sync and the concept of a mobile serverless application is very fascinating.\nWe have some\nquestions in order to understand if flexible sync technology can do for us.\nThe context is:\nA new mobile application (android and ios) that shows to users a set of plant cards that can be viewed in read-only mode and used in offline mode too (if no connections are available)\nEvery card has one main section with general informations and differents linked subsections.\nEvery user can search a plant with a dedicated functionality based on n-criteria. The starting set is around a thousand of cards, but the aim is to reach the order of one million (Would be possible?). Each card would be implemented by a main realm object that has embedded objects inside it that represent the various sections. Could Flexible Sync be the right solution? is there a way to limit the size of the database locally? Is it possible to search for a card starting from an attribute of a subsection?\nThe initial subscription has to contemplate the entire set of cards?\nThanks in advance.\nAlessandro.", "username": "Alessandro_Penso" }, { "code": "", "text": "Hi, Im glad you are contemplating using flexible sync. My initial reaction is “yes” it can definitely help you. You can use permissions to enforce it is read-only, and the system is capable of sending millions of objects (obviously performance will be impacted by a combination of (a) the size of the data set (b) the size of the result set for a particular client and (c) the average size of the objects in the result set, but I would recommend getting started and it should be pretty easy to test. I would just caution you that if you plan on doing any load/performance testing you should use an M10 or above since M0’s and M5’s do rate limiting and the performance will likely bottleneck there.As for your second question, it is hard to answer without too many specifics but if you have a sample data model / schema and some notion of how you want to query them I would be happy to look at it.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "ok, thank you, tomorrow i’ll send you a sample model so you can better understand what i’m talking about.", "username": "Alessandro_Penso" }, { "code": "", "text": "Hi again, i uploaded some models (Kotlin language) that describe the plant card (PlantCardRealm), an example of related section (SectionPlantingConditionsRealm) and, at last, a sort of bean class(SectionPreviewDescriptionItemRealm) that allow us to put inside any field a initial description and a more discorsive detail. The number of related sections is 10, with about 5-10 fields per section. have the cardinality of the initial description some impact on the space of the embedded realm in app? Is it possible to define a subscription with a query on a field that belongs to embedded object? I’m thinking about a million of cards and the impact on local db. 
thanks.PlantCardRealm.txt (1.0 KB)\nSectionPlantingConditionsRealm.txt (959 Bytes)\nSectionPreviewDescriptionItemRealm.txt (1.4 KB)", "username": "Alessandro_Penso" }, { "code": "\"lightRequirements.previewInfo ==\"more information\"", "text": "Hi, Ill try to answer some of the questions in order:Have the cardinality of the initial description some impact on the space of the embedded realm in app?I am not entirely sure what you mean by the above. I do think in general that space is pretty well optimized and this is probably something that you shouldn’t worry about too much right now unless you are very space constrained on your device which is normally not the case.Is it possible to define a subscription with a query on a field that belongs to embedded object?Unfortunately it is not. See here: https://www.mongodb.com/docs/atlas/app-services/sync/data-access-patterns/flexible-sync/#eligible-field-typesNote that once the data is on your device you can use the entire realm query language to have your app’s logic utilize the data (and here you can query on embedded objects), but you cannot only sync down a subscription of data that is of the form \"lightRequirements.previewInfo ==\"more information\". What are you trying to subscribe to ideally?", "username": "Tyler_Kaye" }, { "code": "\"lightRequirements.previewInfo ==\"more information\"", "text": "Note that once the data is on your device you can use the entire realm query language to have your app’s logic utilize the data (and here you can query on embedded objects), but you cannot only sync down a subscription of data that is of the form \"lightRequirements.previewInfo ==\"more information\" . What are you trying to subscribe to ideally?I try to explain my worries:\nIn a couple of years we would like to enrich our database with many plant’s card, the goal is to add more than 1 milion of cards.\nWe did an initial test in which we duplicate 1,5 milion of times an example card. The local Realm db exceeded 1 giga of bytes, that for some users could be,\npotentially, annoying.\nI was thinking that using subscriptions would allow us to maintain a lighter version of the local database, because every subscription brings to local\na subset of the original database. Isn’t correct?\nIn this way we could define a set of subscriptions, and after choose the right one for the current purpose.\nThis was the initial idea, i don’t know if it’s correct or if there’s another way to reach the goal.\nThanks.\nAlessandro.", "username": "Alessandro_Penso" }, { "code": "", "text": "Hi, the size of the database is a function of (a) the number of objects (b) the average size of those objects and (c) the rate of changes to those objects since history is stored and compacted. So I am not sure how much you expect that data to be on your local DB but that seems resonable for a million objects that are each 1KB in sizeYes, that is how subscriptions should work. You define the subset of the data that you would like to sync down:", "username": "Tyler_Kaye" }, { "code": "", "text": "Ok, the subscription idea is very “powerfull” but it’s a big limitation that you cannot create a subscription using embedded fields inside the query definition of it\nIn our case the final size of database is too big to replicate and store in a smarthphone. Without the possibility of creating dynamic partition of it (using subscription) we have to change solution, using standard api rest to retrieve only the subset useful in a determined context. 
I’ll check if flexible sync will be updated with this implementation in the future. Thanks so much.Alessandro.", "username": "Alessandro_Penso" }, { "code": "", "text": "Hi. I am still a little confused what query you are trying to make? We will certainly continue to make the product better and including embedded fields as “queryable” is on the horizon, but in the meantime you can change your data model ever so slightly in order to avoid this limitation.I see you posted your schemas above, but I am not quite sure what the query you are trying to make is?", "username": "Tyler_Kaye" }, { "code": "", "text": "hi,Tyler, what do you mean with “change your data model ever so slightly”? The problem is that i can’t do a subscription based on embedded fields logic, and so … how can i bring into the local realm only a subset of the entire initial Mongodb’s set? I try to explain with an example: suppose having 1 milion of plants cards into Remote Backend. The client (android) needs to retrieve only some cards based on different search criterias, for example the “difficultyLevel” inside embedded object “SectionPlantingConditionsRealm”. How would you handle a situation like that? (And many others dealing with different search criterias in differents embedded objects?) How can i do this without having an exact replica of the Mongo db in the local Realm db? Am I missing some concept?", "username": "Alessandro_Penso" } ]
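For readers hitting the same wall: the workaround Tyler hints at ("change your data model ever so slightly") is usually to copy the field you want to filter on up to the top level of the object so it can be made a queryable field. A rough sketch in JavaScript (the thread uses Kotlin; the Kotlin SDK has equivalent subscription APIs), with illustrative app ID, schema and field names:

```javascript
const Realm = require("realm");

// The embedded sections stay as they are; the search criterion is duplicated
// onto the top-level object so the subscription query can reference it.
const PlantCardSchema = {
  name: "PlantCard",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    name: "string",
    difficultyLevel: "int",
  },
};

async function openEasyPlants() {
  const app = new Realm.App({ id: "<your-app-id>" });          // placeholder
  const user = await app.logIn(Realm.Credentials.anonymous());

  const realm = await Realm.open({
    schema: [PlantCardSchema],
    sync: { user, flexible: true },
  });

  // Only the cards matching the subscription are kept in the local file,
  // which keeps the on-device database far smaller than the full catalogue.
  await realm.subscriptions.update((mutableSubs) => {
    mutableSubs.add(
      realm.objects("PlantCard").filtered("difficultyLevel == $0", 1),
      { name: "easyPlants" }
    );
  });

  return realm;
}
```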
Flexible sync, right choice?
2022-10-26T10:17:16.905Z
Flexible sync, right choice?
2,863
null
[ "crud" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"635f63b41c66064760addc11\"\n },\n \"isMigrated\": true,\n \"isDeleted\": false,\n \"createdBy\": {\n \"userId\": 11088,\n \"userName\": \"ANIL64\"\n },\n \"modifiedBy\": {\n \"userId\": 11088,\n \"userName\": \"ANIL64\"\n },\n \"cugMasterAccountId\": 15,\n \"customerId\": 10025094,\n \"type\": \" \",\n \"cugMasterAccountDetails\": [\n {\n \"tagAccountNumber\": 20056411,\n \"vehicleId\": 56346,\n \"startDate\": \"2016-10-21 00:00:00\",\n \"minBalance\": 2000,\n \"maxBalance\": 700,\n \"percentage\": 0\n },\n {\n \"tagAccountNumber\": 20056411,\n \"vehicleId\": 56346,\n \"startDate\": \"2016-10-21 00:00:00\",\n \"minBalance\": 1000,\n \"maxBalance\": 700,\n \"percentage\": 0\n }\n ]\n}\n", "text": "Hello Everyone,For the below mentioned document,i want to unset the percentage field in cugMasterAccountDetails Array when the percentage is equal to Zero.", "username": "Amrutha_Sai_Kala_Challapalli" }, { "code": "$unsetupdateOneupdateManydb.collection.updateMany(\n { \"cugMasterAccountDetails.percentage\": 0 },\n { $unset: { \"cugMasterAccountDetails.$[c].percentage\": \"\" } },\n { arrayFilters: [{ \"c.percentage\": 0 }] }\n)\n", "text": "Hello @Amrutha_Sai_Kala_Challapalli, Welcome to the MongoDB community forum,You can use the $unset operator to delete fields,And use filtered positional operator and arrayFilters to match the nested elements by providing condition,You can try something like this,", "username": "turivishal" }, { "code": "", "text": "Thanks for your quick response.I will try it.", "username": "Amrutha_Sai_Kala_Challapalli" } ]
Unset field in array
2022-11-11T09:19:32.786Z
Unset field in array
2,507
https://www.mongodb.com/…1_2_1024x101.png
[ "replication", "atlas-search" ]
[ { "code": "", "text": "Not able to create search index. Also we tried to delete existing search field which was not active, it is stuck and showing delete in progress. We are on M60, replica set configuration. Please help here. Attached screenshot\n\natlasSearch1632×161 17 KB\n", "username": "Mukesh_Kumar_Gupta" }, { "code": "", "text": "Can you share the index you were trying to create?", "username": "Elle_Shwer" } ]
Not able to create search index
2022-11-11T06:25:31.422Z
Not able to create search index
1,150
https://www.mongodb.com/…1b586a9507e2.png
[ "node-js" ]
[ { "code": "", "text": "I got this error while seed the data from nodejs to mongo DB\n\nScreenshot 2022-11-11 165739795×261 10.6 KB\n", "username": "Vimal_Kumar_G" }, { "code": "", "text": "Hello @Vimal_Kumar_G, Welcome to the MongoDB community forum,The error says, provided url argument has a non-string value so make sure by printing/consol the url property has the correct value or not.More debugging details would be great if still not resolved the problem.", "username": "turivishal" }, { "code": "", "text": "Hi @turivishal thank you for your response, I’m beginner for node js and mongodb can u please explain me how to do that…", "username": "Vimal_Kumar_G" }, { "code": "const { faker } = require('@faker-js/faker');\nconst { MongoClient } = require('mongodb')\n\nconst _ = require(\"lodash\");\n\nconst { connection } = require('mongoose');\n\nasync function main() {\n\n const uri = 'mongodb://localhost:27017';\n\n const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n\n try {\n\n await client.connect();\n const productsCollection = client.db(\"e-canteen\").collection(\"products\");\n const categoriesCollection = client.db(\"e-canteen\").collection(\"categories\");\n let categories = ['breakfast', 'lunch', 'dinner', 'drinks'].map((category) => { return { name: category } });\n await categoriesCollection.insertMany(categories);\n\n let imageUrls = [\n 'https://asset.cloudinary.com/dbqvp70fu/9bb1282387d2e870bd764a7cbd36908f',\n 'https://asset.cloudinary.com/dbqvp70fu/38d8ef5bc366f5a0906f982329bd6b18',\n 'https://asset.cloudinary.com/dbqvp70fu/c5156be531cbbd2e1c1adf4258ab77d0'\n ]\n\n let products = [];\n\n for (let i = 0; i < 10; i+=1) {\n let newProduct = {\n name: faker.commerce.productName(),\n desciption: faker.commerce.productDescription(),\n price: faker.commerce.price(),\n category: _.sample(categories),\n imageUrl: _.sample(imageUrls)\n };\n products.push(newProduct);\n }\n await productsCollection.insertMany(products);\n\n } catch (e) {\n\n console.error(e);\n\n } finally {\n\n await connection.close();\n\n }\n}\n\nmain();\n", "text": "This is my seed.js code", "username": "Vimal_Kumar_G" }, { "code": "", "text": "Hello @Vimal_Kumar_G,I tested your provided code, and it is working for me and inserts the data in the database, Just make sure you are executing the same send.js file, or check the package.json file all npms are updated to the latest version.If you are new to node js and MongoDB then you have to check the free online courses at MongoDB university,Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "turivishal" }, { "code": "", "text": "Thanks @turivishal can you please send me the other js files used by you to connect with mongo db and config", "username": "Vimal_Kumar_G" }, { "code": "", "text": "Hello @Vimal_Kumar_G,There is no need for other js files, I just used your js file only.", "username": "turivishal" }, { "code": "", "text": "Thank you Mr. @turivishal", "username": "Vimal_Kumar_G" } ]
TypeError [ERR_INVALID_ARG_TYPE]: The "url" argument must be of type string. Received undefined
2022-11-11T11:28:56.067Z
TypeError [ERR_INVALID_ARG_TYPE]: The "url" argument must be of type string. Received undefined
5,765
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello everyone! I am creating a food delivery app. What is the best way to manage my Order? I want to have 3 types of orders: live, bought, completed. Should I create 3 different Collections or only one collection where I have a field name “type”? Also, order collection has the shop and user id, what is the best way to search in the collection for both id’s?", "username": "Ciprian_Gabor" }, { "code": "", "text": "Hello @Ciprian_Gabor,Your food delivery app appears to have a similarity with the Kotlin tutorial that can help.For search queries, you can check the Realm Query Language section in the documentation.I hope the provided information is helpful. Please don’t hesitate to ask if you need further assistance.Cheers, \nHenna", "username": "henna.s" }, { "code": "", "text": "Thanks for you answer. Why I should use an Enum class and not a simple string field?", "username": "Ciprian_Gabor" }, { "code": "String", "text": "@Ciprian_Gabor: You can use. It’s just that Enums is considered a little better approach than String.", "username": "Mohit_Sharma" } ]
Create new Collection or add a field
2022-11-04T19:39:57.821Z
Create new Collection or add a field
1,498
null
[ "text-search" ]
[ { "code": "", "text": "I have text Index on Name field of a collection. When I search for “clic”, it returns documents having names as “Clic®” or “Clic-” but not the one with “Clic™”. Is there any reason why Text Search would work for ® but not for ™ ? Please help me resolve this issue.Thanks in advance!", "username": "Prajakta_Sawant1" }, { "code": "", "text": "Hi @Prajakta_Sawant1 ,For me the example worked as expected:\n\nScreenshot 2022-11-03 at 11.28.441920×1062 81.2 KB\nI used the standard lucene dynamic mapping index.Perhaps you used a diffrerent analyser or indexing attributes?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you replying @Pavel_DuchovnyIt probably works fine in Atlas. But I am trying to do this via C# code where I am facing this issue. Any particular reason why it would not work for C# application?Thank you!", "username": "Prajakta_Sawant1" }, { "code": "", "text": "@Prajakta_Sawant1 ,Interesting can you share the code and its outcome here?Ty", "username": "Pavel_Duchovny" } ]
MongoDB text search works for registered trademark(®) but not for unregistered trademark(™)
2022-11-03T08:52:15.980Z
MongoDB text search works for registered trademark(®) but not for unregistered trademark(™)
2,213
null
[]
[ { "code": "error executing match expression: invalid document: must either be a boolean expression or a literal document\n\"apply_when\": {\n \"company\": \"%%user.custom_data.company\",\n \"%%user.custom_data.type\": \"supplier\"\n }\n", "text": "Hello,We are currently working on a project with multiple user types and want to apply rules to our collections to ensure correct access.\nWhile writing the rules we encountered an issue. Rules that contain more than one statement in the apply_when are just not working and just throwing exceptions in backend:The rule is:(With “company” being a field in the document and “supplier” beeing a possible value of the type in the custom data)Both statements of the apply_when are working solo in other places. But when together - they result into the exception.\nAre we missing something? Because in the documentation this is presented as the way to go for multiple statements.Best Regards,\nDaniel", "username": "Daniel_Bebber1" }, { "code": "\"delete\":{\n \"%%prevRoot.company\": \"%%user.custom_data.company\",\n \"%%user.custom_data.type\": \"supplier\"\n }\n", "text": "hi\nI had a problem like this , i just found the solution\nI think the problem is with “company” field.try this:\n1-convert rules to advanced rule\n2-leave apply_when field empty\n3-for read ,write, search,insert field copy and paste your original condition\n4-for delete field use this line of code:", "username": "Parsa_Foroozmand" }, { "code": "\"write\":{\n\"%or\": [\n { \"%%prevRoot.company\": \"%%user.custom_data.company\"},\n { \"%%root.company\": \"%%user.custom_data.company\"}\n]\n,\n \"%%user.custom_data.type\": \"supplier\"\n }\n", "text": "if that didn’t work,do all previous things and for write field use this code:", "username": "Parsa_Foroozmand" }, { "code": "", "text": "This might be workarounds, but I don’t get why the way how it is descripted in the official documentation is not working at all…", "username": "Daniel_Bebber1" } ]
Apply_when with more than one statement not working
2022-09-01T06:14:00.861Z
Apply_when with more than one statement not working
2,017
null
[ "replication" ]
[ { "code": "{\n \"operationTime\" : Timestamp(1667896491, 1),\n \"ok\" : 0,\n \"errmsg\" : \"Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: FIRSTNODE:27017; the following nodes did not respond affirmatively: SECONDNODE:27017 failed with Error connecting to SECONDNODE:27017 :: caused by :: Could not find address for SECONDNODE:27017: SocketException: Host not found (authoritative)\",\n \"code\" : 74,\n \"codeName\" : \"NodeNotFound\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1667896491, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\n", "text": "hi,\nI am setting up mongodb replica set cluster with 3 nodes. I have mongo v 4.4.17 installed on all three nodes. The cluster is initialized with no error but while adding secondary node using rs.add(“SECONDNODE:27017”) it shows following error:I can telnet and ping both servers form each server. On adding second node with ip address, the second node doesn’t acknowlede replication.", "username": "Ravindra_Pandey" }, { "code": "", "text": "Share your /etc/hosts and rs conf() output\nSometimes quotes around host:port also cause issues\nUse straight double quotes", "username": "Ramachandra_Tummala" }, { "code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0# Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n#security:\n\n#operationProfiling:\n\nreplication:\n replSetName: \"TESTREPLICATION\"\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n\n", "text": "the config file is", "username": "Ravindra_Pandey" }, { "code": "", "text": "I wanted rs.conf() output from mongo primary\nFrom the node where you ran rs.initiate() would have become primary and where you are trying to add other nodes\nDid you try to add ,3rd node?\nWhat about quotes issue?Is that ruled out\nAnd also /etc/hosts", "username": "Ramachandra_Tummala" }, { "code": "TESTREPLICATION:PRIMARY> rs.conf()\n{\n \"_id\" : \"TESTREPLICATION\",\n \"version\" : 1,\n \"term\" : 1,\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"FIRSTNODE:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"636bb9d25ee322803e1ddfff\")\n }\n}\n\nTESTCLUSTER:PRIMARY> rs.add(\"SECONDNODE:27017\")\n{\n \"operationTime\" : Timestamp(1668004686, 
1),\n \"ok\" : 0,\n \"errmsg\" : \"Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: FIRSTNODE:27017; the following nodes did not respond affirmatively: SECONDNODE:27017 failed with Error connecting to SECONDNODE:27017 :: caused by :: Could not find address for SECONDNODE:27017: SocketException: Host not found (authoritative)\",\n \"code\" : 74,\n \"codeName\" : \"NodeNotFound\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1668004686, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\n\n", "text": "Following is the result of rs.conf(). The first node is primary but when I want to add second node it pops the error. Currently I am trying to only 2 nodes but if the second node succeds then I will add another standalone third node.I am able to telnet and ping secondnode", "username": "Ravindra_Pandey" }, { "code": "", "text": "Is mongod up & running on node2 & 3?\nDoes mongod.log show more details\nCould be DNS firewall issues\nCan you connect to each of your nodes &\nPing each other?\nDid you try to add 3rd node?", "username": "Ramachandra_Tummala" }, { "code": "{\"t\":{\"$date\":\"2022-11-09T23:35:50.237+08:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplNetwork\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"SECONDNODE:27017\"}}\n", "text": "Yes, Mongod is running on all nodes and can ping and connect to them all.\nI have not initialized third node because it is currently using by developers for testing so as soon as the replication succeds i will add the third node to the cluster. I have stopped firewall in all nodes so i think it is not firewall issue.\nthe log file shows:", "username": "Ravindra_Pandey" }, { "code": "\"_id\" : 1,\n \"name\" : \"SEONDNODE:27017\",\n \"health\" : 1,\n \"state\" : 0,\n \"stateStr\" : \"STARTUP\",\n \"uptime\" : 1887,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastAppliedWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastDurableWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2022-11-10T05:55:51.305Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -2,\n \"configTerm\" : -1\n\n\"t\":{\"$date\":\"2022-11-10T13:52:19.031+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.168.56.102:59644\",\"connectionId\":3349,\"connectionCount\":4}}\n\n", "text": "I started the mongo with the configuration i.et mongod -f /(path…to…conf) and initialize the replication on primary server. 
The replication is initialized, but when I add the second node the screen freezes, the log file shows the same message in a loop, and when I stop the service it shows the status below. The message from the log file is:", "username": "Ravindra_Pandey" }, { "code": "", "text": "Startup is not the correct status. It should change to secondary.\nStartup status means it is not part of any replica set.\nDoes your config file's replSetName param match the primary's?\nCheck mongod.log of the secondary.\nDid you run any other command on the secondary?\nWhat exactly do you mean by not having initialised the 3rd node?\nYou should run rs.initiate() only once, on the primary, not on all nodes", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Yes, the config file param matches the primary and I ran the rs.initiate() command on the primary only. The secondary does not acknowledge the replication, but I can see the connection is established by the primary to the second node. I haven't run any other command on the secondary. I just run the mongod instance with the configuration using mongod -f /path_to_conf. When I repeat the same process in my local environment there are no errors and the replication works just fine. But when I try the same steps in the preprod environment I am stuck.", "username": "Ravindra_Pandey" }, { "code": "", "text": "Appears to be a hostname to IP resolution issue.\nCompare your working environment to preprod.\nIs your /etc/hosts set up properly?\nCan you connect from one node to the other using --host?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you!\nMy issue is now resolved. It was a simple spelling mistake in the hostname.", "username": "Ravindra_Pandey" } ]
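Since the root cause turned out to be a hostname spelling mistake, here is a short sketch of the checks that catch it early. Host names and addresses are examples only; the names used in rs.add() must resolve the same way on every member.

```sh
# /etc/hosts on every member (example entries):
# 192.168.56.101  FIRSTNODE
# 192.168.56.102  SECONDNODE
# 192.168.56.103  THIRDNODE

# Verify resolution and reachability from the primary before rs.add():
getent hosts SECONDNODE
mongosh --host SECONDNODE --port 27017 --eval 'db.runCommand({ ping: 1 })'
```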
Unable to Add Node to Replica Set Cluster
2022-11-08T08:38:13.366Z
Unable to Add Node to Replica Set Cluster
3,429
null
[]
[ { "code": "", "text": "compre um curso de mongodb no qual o professor fala sobre “criar um serviço independente” mas não consegui intender muito bem pois ele usa Linux e eu windows", "username": "Diego_Silva" }, { "code": "", "text": "É mais fácil programar serviços web no Linux do que no Windows.", "username": "Jack_Woehr" } ]
Create a standalone service in the CRUD
2022-11-10T21:54:04.260Z
Create a standalone service in the CRUD
704
null
[ "node-js", "mongodb-shell", "database-tools", "backup" ]
[ { "code": "", "text": "Hi Folks,\nI was trying to backup my database, having around 50 collections and total records will be somewhere around 80 million. While dumping the collections, I got an error statingFailed: error writing data for collection `{collection_name} to disk: error reading collection: connection pool for 127.0.0.1:27017 was cleared because another operation failed with: (InterruptedAtShutdown) interrupted at shutdownI was going through the mongo logs and only thing which I found was,Interrupted operation as its client disconnectedmongodump verison => 100.6.1\nmongo version => 6.0.2I am unable to find the root cause of the error. Any help will be much appreciated. Thanks!", "username": "Syed_Ahsan_Hasan_Khan" }, { "code": "", "text": "Failed: error writing data for collection `{collection_name} to disk:Sounds like there is not enough disk space to hold the dump.", "username": "steevej" }, { "code": "", "text": "I don’t think so,\n\nimage914×454 98.8 KB\n", "username": "Syed_Ahsan_Hasan_Khan" }, { "code": "", "text": "Without the size statistics of your 50 collections it is hard to say if 213G is enough.", "username": "steevej" }, { "code": "mongodumpmongodumpmongodmongodumpmongodmongodumpmongodmongodumpmongod", "text": "Hi @Syed_Ahsan_Hasan_Khan,In addition to what @steevej has mentioned, the Backup with mongodump documentation advises:When connected to a MongoDB instance, mongodump can adversely affect mongod performance. If your data is larger than system memory, the queries will push the working set out of memory, causing page faults.Regarding the above statement, (depending on the operating system and/or environment) it is possible that whilst mongodump was running, the available memory in the system reached a point where mongod was terminated by the OOM killer. Are you aware if there was any memory exhaustion during the mongodump procedure or if OOM killer had ended the mongod process where the mongodump was running against?In addition to the above, can you also provide the relevant part of the mongod logs when this error happened?Regards,\nJason", "username": "Jason_Tran" } ]
MongoDB shutdown during mongodump
2022-11-10T22:32:33.035Z
MongoDB shutdown during mongodump
2,961
null
[ "aggregation", "golang", "views" ]
[ { "code": "collection := client.Database(\"poc\").Collection(\"po_new_audit\")\n\n\tmatchStg := bson.D{{\"$match\", bson.D{{\"synced\", false}}}}\n\taddFieldStg1 := bson.D{{\"$addFields\", bson.D{{\"audit_type\", \"purchase_order\"}}}}\n\tlookupStg := bson.D{{\"$lookup\", bson.D{{\"from\", \"user\"}, {\"localField\", \"who.id\"}, {\"foreignField\", \"_id\"}, {\"as\", \"user_details\"}}}}\n\tunwindStg1 := bson.D{{\"$unwind\", \"$user_details\"}}\n\taddFieldStg2 := bson.D{{\"$addFields\", bson.D{{\"breadcrumbs\", bson.D{{\"po_id\", \"$entity_id\"}}}}}}\n\tprojectStg := bson.D{{\"$project\", bson.D{{\"_id\", \"1\"}, {\"what\", \"1\"}, {\"when\", \"1\"}, {\"user_details\", \"1\"}, {\"breadcrumbs\", \"1\"}, {\"audit_type\", \"1\"}}}}\n\tmergeStg := bson.D{{\"$merge\", bson.D{{\"into\", \"audit\"}}}}\n\tc, err := collection.Aggregate(context.TODO(), mongo.Pipeline{matchStg, addFieldStg1, lookupStg, unwindStg1, addFieldStg2, projectStg, mergeStg})\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tvar loaded []bson.M\n\tif err = c.All(context.TODO(), &loaded); err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(loaded)\n", "text": "hi,I am new to mongo and was looking for ways to trigger the on-demand-materialised view in Go driver.I tried running the aggregation pipeline in Go with $merge. But seems like it’s not performing anything.The aggregation works till the projectStg and cursor gives the return values. But adding $merge doesn’t give an error but returns empty cursor. Also, the merge document doesn’t get updated.Any pointers will really help.Regards,\nSayan", "username": "Sayan_Mitra" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Go driver, run aggregation pipeline with $merge
2022-11-09T17:50:35.330Z
Go driver, run aggregation pipeline with $merge
2,057
null
[ "data-modeling" ]
[ { "code": "[\n {\n _id:ObjectId('313231'),\n type: 1,\n settings: {\n companyId: 9239232,\n uniqueId: \"D939M\"\n }\n },\n {\n _id:ObjectId('34335'),\n type: 3,\n settings: {\n trackingNumber: 9239232,\n business name: \"Moon Company\"\n accountId: \"k4nj4n331\"\n }\n }\n]\n", "text": "Hi guys! I have a question about the correct way to design a schema for the following scenario.I have a collection in which I will be storing documents with the same properties, except for one which is “settings”, in that scenario each document might vary, so we might have something like this:As you can see we have different settings and as such we can have up to 5 different fields in each settings, is it a bad practice if I store this information as such?", "username": "Carlos_Bravo" }, { "code": "", "text": "I think that your settings field is a good candidate to apply the attribute pattern.", "username": "steevej" } ]
Collection Schema question
2022-11-10T19:58:48.386Z
Collection Schema question
1,042
null
[ "aggregation", "node-js", "crud", "mongoose-odm" ]
[ { "code": "db.findOneAndUpdate$condCast to string failed for value \"{ '$cond': [ { '$…s', '$status' ] }\" (type Object) at path \"status\"\nawait db.findOneAndUpdate(\n { projectId: new ObjectId(projectId) },\n {\n $set: {\n status: {\n $cond: [{ $eq: 'pending' }, 'in_progress', '$status'],\n },\n },\n },\n { new: true, upsert: true }\n );\nawait db.findOneAndUpdate(\n { projectId: new ObjectId(projectId) },\n [{\n $set: {\n status: {\n $cond: [{ $eq: 'pending' }, 'in_progress', '$status'],\n },\n },\n }],\n { new: true, upsert: true }\n );\nCannot mix array and object updates\n", "text": "Hi there! I am trying to update a a field base on its actual condition which seems to be pretty simple using db.findOneAndUpdate and the $cond method. But I am getting this error no matter what I try:My query is the following:I believe that error comes from not using the aggregation pipeline. So I tried to use it like this adding [ … ]:But then I get this error:Could someone tell me what am I doing wrong? Thank You!I am using Mongoose: ^6.2.3", "username": "hot_hot2eat" }, { "code": "{ $eq: 'pending' }{ \"$eq\" : [ \"$status\" , \"pending\" ] ] \n", "text": "I do not think the syntax{ $eq: 'pending' }is valid inside a expression like it is needed by $cond. More likely something likeis more appropriate.I do not know a lot about findOneAndUpdate() but I think it is mongoose specific. May be it (mongoose) is doing some schema validation and prevents you from updating a string field with an JSON object representing an expression.With the native updateOne() the aggregation version with [ { “$set” : … } ] works perfectly when the appropriate version, mentioned above, of $eq is used.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error: Cannot mix array and object updates
2022-11-10T14:11:38.323Z
Error: Cannot mix array and object updates
1,448
null
[ "queries" ]
[ { "code": "", "text": "How data structure looks like :\n{\n“created_at” : ISODate ( “2022-02-12T13:09:01.860+0000” ),\n“size_chart” : [\n{\n“Inventory” : NumberInt ( -1),\n“Name” : “L” ,\n“Waist” : NumberInt ( 32 ),\n“Hips”: NumberInt (42 )\n},\n{\n“Inventory” : NumberInt ( 1),\n“Name” : “XL” ,\n“Waist” : NumberInt ( 32 ),\n“Hips”: NumberInt (42 )\n},\n{\n“Inventory” : NumberInt ( 3),\n“Name” : “M” ,\n“Waist” : NumberInt ( 32),\n“Hips”: NumberInt (42 )\n},\n],\n“status” : “active” ,\n“original_price”: 300.0 ,\n}So in this case i want to get only those distinct size names which have inventory greater than 0.\n**There are so many documents same as this.", "username": "Zoro-OP" }, { "code": "", "text": "Hi there @Zoro-OP,Please, surround the code between backticks like so:```\n{\n  key1: “hello”,\n  key2:“world”\n}\n```Ideally you’d also paste the code in the browser terminal and prettify it And please include the expected output, also formatted.", "username": "santimir" }, { "code": "{\n\"XL\"\n\"M\"\n}\n{\n “created_at” : \"Date\",\n “size_chart” : [\n {\n “Inventory” : NumberInt ( -1),\n “Name” : “L” ,\n “Waist” : NumberInt ( 32 ),\n “Hips”: NumberInt (42 )\n },\n {\n “Inventory” : NumberInt ( 1),\n “Name” : “XL” ,\n “Waist” : NumberInt ( 32 ),\n “Hips”: NumberInt (42 )\n },\n {\n “Inventory” : NumberInt ( 3),\n “Name” : “M” ,\n “Waist” : NumberInt ( 32),\n “Hips”: NumberInt (42 )\n },\n ],\n “status” : “active” ,\n “original_price”: 300.0\n}\n", "text": "Expected Output:Prettified Data -", "username": "Zoro-OP" }, { "code": "db.collection.aggregate({\n \"$match\": {\n \"size_chart.Inventory\": {\n \"$gt\": 0\n }\n }\n})\n{$project:{size_chart.Name:1, _id:1}}$addToSetdb.collection.aggregate([\n {\n \"$match\": {\n \"size_chart.Inventory\": {\n \"$gt\": 0\n }\n }\n },\n {\n \"$project\": {\n \"Names\": \"$size_chart.Name\"\n }\n },\n {\n $unwind: \"$Names\"\n },\n {\n \"$group\": {\n \"_id\": \"$_id\",\n \"NamesSet\": {\n \"$addToSet\": \"$Names\"\n }\n }\n }\n])\n", "text": "@Zoro-OP hiThanks a lot for that formatting and description. If I undestand correctly X,XL, etc will be repeated (not in the code sample that you include). I will assume that.To get the documents in a collection where an array, including array of objects, hold a key equal to a value, you can use:And the array query automatically traverses the array, and looks inside each document, and applies an expression to a field. That’s MongoDB magic…{$project:{size_chart.Name:1, _id:1}}But we need a condition in name, such that there are no repeated values.We can use $addToSet. $addToSet returns an array of all unique values that results from applying an expression to each document in a group.The problem is this operator only work in a $group or related stages, so we will group instead of $project. And afaik $addToSet can’t look inside a document, so we need to unwind the array. This is what I tried, then:Play with it here, to find any bug and probably will need performance improvements. If the array key that we $match is indexed (i.e size_chart.Inventory), this will be quite fast.I first wrote this reply, but noticed it was wrong, you may want to read it anyways.\nIf I understand correctly, we need to first filter documents on a condition, this could be just a $match.When an operation needs a comparison between documents you normally need some bold approach.I am not an expert, but I would would $match, $group, $unwind.", "username": "santimir" }, { "code": "", "text": "Excelente aporte. 
Very good information.", "username": "Graziany_Gomez" } ]
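A minimal, unwind-free sketch of the same query, assuming the size_chart field layout from the sample document above and a collection literally named "collection" as in the accepted answer: $filter keeps only the in-stock sizes, $map extracts their names, and $setUnion against an empty array removes duplicates.

// Unwind-free variant of the thread's pipeline (sketch, not a verified drop-in):
// $filter keeps array elements with Inventory > 0, $map pulls out Name,
// and $setUnion with [] deduplicates the resulting array.
db.collection.aggregate([
  { "$match": { "size_chart.Inventory": { "$gt": 0 } } },
  {
    "$project": {
      "NamesSet": {
        "$setUnion": [
          {
            "$map": {
              "input": {
                "$filter": {
                  "input": "$size_chart",
                  "as": "s",
                  "cond": { "$gt": ["$$s.Inventory", 0] }
                }
              },
              "as": "s",
              "in": "$$s.Name"
            }
          },
          []
        ]
      }
    }
  }
])

Against the sample document this should yield NamesSet: ["XL", "M"] (set order is not guaranteed), and, as in the original answer, an index on size_chart.Inventory keeps the $match stage fast.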
Is there a way to query on array of objects such that after the query it will hold only those objects that are fulfilled by the query and also to have distinct on that field
2022-02-24T18:11:29.637Z
Is there a way to query on array of objects such that after the query it will hold only those objects that are fulfilled by the query and also to have distinct on that field
3,062
https://www.mongodb.com/…6_2_1024x576.png
[ "queries", "next-js", "react-js", "api" ]
[ { "code": "import React from \"react\";\nimport Head from \"next/head\";\nimport clientPromise from \"../../lib/mongodb\";\nimport { ObjectId } from \"mongodb\";\n\nexport default function CompanyShow(companyData) {\n return (\n <div>\n {\" \"}\n <h1>{companyData.name}</h1>...\n </div>\n );\n}\n\nexport async function getStaticPaths() {\n // Return a list of possible value for id\n\n const client = await clientPromise;\n const db = client.db(\"main\");\n\n const companies = db.collection(\"companies\");\n\n // here you can filter for certain data entries in your db\n // in this case we only want the ids\n const data = await companies.find({}, { _id: 1 }).toArray();\n\n client.close();\n\n return {\n // no 404 page will be rendered in case an id is entered which doesn't exist\n fallback: \"blocking\",\n\n // dynamcly calculating the routes\n paths: data.map((datapoint) => ({\n params: {\n companyId: datapoint._id.toString(),\n },\n })),\n };\n}\n\nexport async function getStaticProps({ context }) {\n const client = await clientPromise;\n const db = client.db(\"main\");\n\n const companies = db.collection(\"companies\");\n\n // Fetch necessary data for the company using params.id\n const companyId = context.params.companyId;\n\n // filter the companies for the companyId you get from context.params.companyId;\n const selectedCompanyEntry = await companies.findOne({\n _id: ObjectId(companyId),\n });\n\n client.close();\n\n return {\n props: {\n companyData: {\n id: selectedCompanyEntry._id.toString(),\n name: selectedCompanyEntry.name,\n },\n },\n revalidate: 1,\n };\n}\n", "text": "Hello,I have followed the well explained tutorial on how to integrate MongoDB in Next.JSLearn how to easily integrate MongoDB into your Next.js application with the official MongoDB package.I have been able to query the DB to get the list of one collection, but I am now struggling to get one document of a collection based on its ID.Right now on that iteration I am getting an error of undefined params.Would you be as kind as helping me fix that issue please?Or orienting me towards a new page documentation of MongoDB explaining step by step how to do it?Thank you!", "username": "Marving_Moreton" }, { "code": "const selectedCompanyEntry = await companies.findOne({\n _id: ObjectId(companyId),\n });\n", "text": "Hi @Marving_Moreton ,The query looks correct to me:I suspect the issue is with context.params which might be passed wrongly…Can you share a full error you get and the line that message it?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "import React from \"react\";\nimport Head from \"next/head\";\nimport clientPromise from \"../../lib/mongodb\";\nimport { ObjectId } from \"mongodb\";\n\nexport default function CompanyShow(props) {\n return (\n <div>\n <h1>{props.companyData.name}</h1>...\n </div>\n );\n}\n\nexport async function getStaticPaths() {\n // Return a list of possible value for id\n\n const client = await clientPromise;\n const db = client.db(\"main\");\n\n const companies = db.collection(\"companies\");\n\n // here you can filter for certain data entries in your db\n // in this case we only want the ids\n const data = await companies.find({}, { _id: 1 }).toArray();\n\n client.close();\n\n return {\n // no 404 page will be rendered in case an id is entered which doesn't exist\n fallback: \"blocking\",\n\n // dynamcly calculating the routes\n paths: data.map((datapoint) => ({\n params: {\n companyId: datapoint._id.toString(),\n },\n })),\n };\n}\n\nexport async function 
getStaticProps(context) {\n const client = await clientPromise;\n const db = client.db(\"main\");\n\n const companies = db.collection(\"companies\");\n\n // Fetch necessary data for the company using params.id\n // const companyId = context.params.companyId;\n console.log(context.params.companyId);\n\n // console.log(ObjectId(params.companyId));\n const companyId = context.params.companyId;\n\n // filter the companies for the companyId you get from context.params.companyId;\n const selectedCompanyEntry = await companies.findOne({\n _id: ObjectId(companyId),\n });\n\n client.close();\n\n return {\n props: {\n companyData: {\n id: selectedCompanyEntry._id.toString(),\n name: selectedCompanyEntry.name,\n },\n },\n revalidate: 1000,\n };\n}\n\n", "text": "Hello @Pavel_Duchovny, thank you for your reply!Well… I had some evolution to it!I had a destructuring issue with passing ( {context}) instead of (contex†).However, now I am getting a new error: BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integerIt seems that const companyId does not get the _id but the whole object as a whole:", "username": "Marving_Moreton" }, { "code": "companyId", "text": "Hi @Marving_Moreton ,So it looks like the value of companyId is. not a valid string containing an actual ObjectId format.Please debug your code and verify that the string you provide is constructed to be converted to an objectId…Thank\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yeah I have figured my error.I were looking to have the [slug] as URL and not the Id itself", "username": "Marving_Moreton" } ]
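Since the BSONTypeError in the thread above came from passing a value that was not a valid ObjectId string, here is a hedged sketch of a guard that avoids the crash entirely: ObjectId.isValid rejects anything that cannot be converted, and the notFound flag in getStaticProps turns that case into a 404 page. The database, collection, and clientPromise helper names are taken from the thread; the rest is illustrative.

import { ObjectId } from "mongodb";
import clientPromise from "../../lib/mongodb"; // helper module from the thread

// Sketch only: validate the route param before constructing an ObjectId so a
// slug or malformed id renders a 404 instead of throwing a BSONTypeError.
export async function getStaticProps(context) {
  const { companyId } = context.params;

  if (!ObjectId.isValid(companyId)) {
    return { notFound: true };
  }

  const client = await clientPromise;
  const selectedCompanyEntry = await client
    .db("main")
    .collection("companies")
    .findOne({ _id: new ObjectId(companyId) });

  // The document may have been removed since the paths were generated.
  if (!selectedCompanyEntry) {
    return { notFound: true };
  }

  return {
    props: {
      companyData: {
        id: selectedCompanyEntry._id.toString(),
        name: selectedCompanyEntry.name,
      },
    },
    revalidate: 1000,
  };
}

If the URL segment is meant to be a human-readable slug rather than the _id, as the original poster ended up doing, the ObjectId guard is unnecessary and the filter would instead become something like { slug: companyId }, where slug is a hypothetical field name.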
Dynamic route query DB - return a single element based on ID
2022-11-08T16:50:31.874Z
Dynamic route query DB - return a single element based on ID
3,684
null
[]
[ { "code": "", "text": "Hi,\nI restored the mongodb backup taken from the OPS Manager which is stored in the blockstore but once restored, it shows the database size as zero and all the collections are empty even though size in the disk drive is almost same as in the production.We are using 3.4 version of OPS manger to manage replica set deployed with mongodb version 3.0.15.Please help …Thank you.", "username": "Madhab_Paudel" }, { "code": "", "text": "Sorry to revive this old thread, but I’m having the exact same issue. Were you able to solve it?Trying to load my downloaded snapshot via mongod --dbpath and it just shows an empty dataset with the default databases and collections.Thanks!", "username": "Matt_Oskamp" } ]
MongoDB Restore from Block Store Snapshot showing no data
2020-06-30T20:20:40.996Z
MongoDB Restore from Block Store Snapshot showing no data
1,425
null
[ "python" ]
[ { "code": "", "text": "Hi,\nI’m looking for an option to extract data from Elasticsearch and store into the mongodb using some python script. Please help me with that.", "username": "chinmoy_padhi" }, { "code": "from elasticsearch import Elasticsearch\nfrom elasticsearch.helpers import scan\nimport pandas as pd\nimport pymongo\nfrom pymongo import MongoClient\n\n\nesClient = Elasticsearch('http://localhost:9200/')\n\ndef get_data_from_elastic():\n query = {\n \"query\": {\n \"match\": {\n \"state\": \"failed\"\n }\n }\n }\n res=scan(client = esClient,\n query = query,\n scroll = '1m',\n index = 'test-2022-11-09',\n raise_on_error=True,\n preserve_order=False,\n clear_scroll=True)\n \n result = list(res)\n temp = []\n \n for hit in result:\n temp.append(hit['_source'])\n \n df = pd.DataFrame(temp)\n return df\ndf = get_data_from_elastic()\nprint(df.head())\n\nclient = MongoClient('mongodb://admin:[email protected]:27001/admin')\ndb = client['elasticDB']\nelastic_data = db['elastic_data']\n \ndf.reset_index(inplace=True)\ndata_dict = df.to_dict(\"records\")\nelastic_data.insert_many(data_dict)\n", "text": "Hi ,\nHere is my solution, may be any one can comment, if anyway I can improve this code or how I can query the data from an index pattern", "username": "chinmoy_padhi" } ]
Extract data from elasticsearch and store into mongodb
2022-11-07T14:20:39.692Z
Extract data from elasticsearch and store into mongodb
1,519
null
[]
[ { "code": "", "text": "Hey guys, is this a thing?\nI can browse https://cloud.mongodb.com only with VPN enabled. It would be okay, but… My local mongodb shell is working properly only without VPN. And often I need both connections", "username": "Dmitry_Mikhalkov" }, { "code": "", "text": "anyway, can someone see this comment?", "username": "Dmitry_Mikhalkov" }, { "code": "", "text": "Hey @Dmitry_Mikhalkov welcome to the community!If you can open cloud.mongodb.com using VPN, then perhaps there’s some restrictions in the non-VPN network you’re connected to. You might want to double check with your network admin regarding this.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "I found it.We will not sell our cloud services to customers in Russia and Belarus", "username": "Dmitry_Mikhalkov" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ERR_CONNECTION_CLOSED cloud.mongodb.com
2022-11-09T00:47:35.440Z
ERR_CONNECTION_CLOSED cloud.mongodb.com
928
null
[ "aggregation", "queries", "compass", "mongodb-shell" ]
[ { "code": "\"leadership\": [\n {\n \"text\": \"Build trust and inspire my leader to deliver great\\nresults and have fun too. \",\n \"value\": \"Build trust and inspire my leader to deliver great\\nresults and have fun too. \",\nleadership: [\n {\n text: 'Build trust and inspire my leader to set clear goals to deliver great\\n' +\n 'results and have fun too. ',\n value: 'Build trust and inspire my leader to set clear goals to deliver great\\n' +\n 'results and have fun too. ',\n", "text": "I’ve tried searching around for this but am not sure how to word it, exactly. For some reason, when retrieving a document using the mongosh interface, any strings containing “\\n” are split into “string before \\n” + “string after \\n”. Example below:Using legacy mongod connection (or viewing in Compass):Using mongosh:I really like the improvements done on mongosh, but it’s so frustrating having to redo my whole workstream because aggregation and query results aren’t parsed JSON anymore and there are all these weird quirks to it. I’ve seen some stuff online but are there no official ways to use legacy formatting or functionality?Appreciate any help on this.", "username": "Vincent_Simone1" }, { "code": "ObjectId(\"636c039c124f7c960b99b610\")EJSON.stringify(data)", "text": "Please note that aside from that problem, the output is not true JSON anyways, e.g. ObjectIds show up as e.g. ObjectId(\"636c039c124f7c960b99b610\").The best way to get true JSON is to use EJSON.stringify(data). Would that work for you?", "username": "Massimiliano_Marcon" }, { "code": "Uncaught:\nBSONTypeError: Converting circular structure to EJSON:\n (root) -> _mongo -> __serviceProvider -> mongoClient -> s -> sessionPool -> client\n \\-------------------------------/\nUncaught:\nTypeError: Converting circular structure to JSON\n --> starting at object with constructor 'MongoClient'\n | property 's' -> object with constructor 'Object'\n | property 'sessionPool' -> object with constructor 'ServerSessionPool'\n --- property 'client' closes the circle\n", "text": "Yes, I normally exclude the _id for this reason. I did try using EJSON.stringify(data) but I always get this error:Or for JSON.stringify(data)", "username": "Vincent_Simone1" }, { "code": "", "text": "Can you share your entire script? I suspect you might be trying to stringify the cursor return by a find or an aggregate instead of the array of results.", "username": "Massimiliano_Marcon" }, { "code": "var data = db.coll.aggregate([pipeline])\nEJSON.stringify(data)\n\nvar alt = db.coll.find({ })\nEJSON.stringify(alt)\nmongosh \"URI\" --quiet < aggregation_pipeline.mongodb | mongoimport [external server details]EJSON.stringify(db.coll.aggregate([pipeline]))", "text": "Ah, okay that explains it. So the scripts I was trying were:I’ve saved that in a file and am referencing it in a .bat file like so:\nmongosh \"URI\" --quiet < aggregation_pipeline.mongodb | mongoimport [external server details]After you said this I tried adding .toArray() at the end of the pipeline and that seemed to work. So the contents of aggregation_pipeline.mongodb are now:\nEJSON.stringify(db.coll.aggregate([pipeline]))Thanks, @Massimiliano_Marcon !", "username": "Vincent_Simone1" }, { "code": "\"disableGreetingMessage\":true,\"displayBatchSize\":X,\"inspectDepth\":Yprompt = ''mongoexport --fields=\"myFields\" | mongoimport", "text": "P.S. 
For anyone else trying this very silly way of migrating data from one MongoDB server to another within Windows, here are the other things I had to change to make this work:You could also try using mongoexport --fields=\"myFields\" | mongoimport if you don’t need to alter the dataset beyond pulling specific fields", "username": "Vincent_Simone1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
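For reference, a minimal sketch of what the working aggregation_pipeline.mongodb file can look like after the .toArray() fix above; the collection name coll and the _id exclusion come from the thread, while the rest of the pipeline is a placeholder. When the output is a single JSON array, the receiving mongoimport generally needs the --jsonArray flag.

// aggregation_pipeline.mongodb (sketch): .toArray() drains the cursor so that
// EJSON.stringify serializes plain documents rather than the cursor object,
// and print() writes the JSON array to stdout for mongoimport to consume.
const docs = db.coll.aggregate([
  { $project: { _id: 0 } }  // placeholder stage; _id excluded as in the thread
]).toArray();

print(EJSON.stringify(docs));

Invoked as in the thread, with the assumed extra flag on the import side:

mongosh "URI" --quiet < aggregation_pipeline.mongodb | mongoimport --jsonArray [external server details]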
Mongosh splits strings with "\n" into invalid value
2022-11-09T18:52:21.009Z
Mongosh splits strings with &ldquo;\n&rdquo; into invalid value
1,967