Columns: image_url (string, 113-131 chars), tags (list), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k)
https://www.mongodb.com/…9_2_1024x670.png
[]
[ { "code": "", "text": "\nScreenshot 2022-12-31 at 11.35.36 PM1100×720 155 KB\n", "username": "Justin_Jenkins" }, { "code": "db.2022db[\"2022\"]db.abcdb[\"abc\"]\"-\"db.my-first-propertydb[\"my-first-property\"]db[\"my\"] - first - property", "text": "It’s the Javascript parser thingy.The db.2022 has a special meaning in Javascript, hence it is not interpreted into db[\"2022\"] meanwhile db.abc parsed as db[\"abc\"]. It will be interpreted as an attribute only when the name starts with alphabet.Similar thing happened for example with field names that has hyphen (\"-\") in it.\ndb.my-first-property doesn’t work. db[\"my-first-property\"] works.\nThis is because the parser will treat it as db[\"my\"] - first - property (arithmetic formula, value a minus value b minus value c).If you have control over the naming, it’s better to give a database names and collection names contain only alphanumeric and always start with an alphabet (which is quite common across languages that you should not use identifier names starting with numerical digits), use underscore instead of hyphens for those names.", "username": "Daniel_Baktiar1" }, { "code": "", "text": "@Daniel_Baktiar1 so, this was a bit of joke. Haha!But, I really appreciate your response since someone might find it helpful (had they happened to have this issue) one day!", "username": "Justin_Jenkins" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SyntaxError: Happy New Year
2023-01-01T06:47:26.191Z
SyntaxError: Happy New Year
785
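A minimal mongosh sketch of the parsing point Daniel_Baktiar1 makes in this thread; the collection names are only illustrations:

```javascript
// Dotted access only works for identifier-like names in the mongosh JS parser.
// Names starting with a digit or containing hyphens need bracket notation:
db["2022"].findOne()                 // works; db.2022 is a SyntaxError
db["my-first-property"].findOne()    // works; db.my-first-property parses as db["my"] - first - property
db.getCollection("2022").findOne()   // equivalent accessor, also works in the legacy shell
```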
null
[ "conference" ]
[ { "code": "", "text": "CodeMash is a four day developer event happening in Sandusky, Ohio.Full schedule and event details can be found on the event websiteEvent Type: In-Person\nLocation: 7000 Kalahari Dr, Sandusky, OH 44870, United States", "username": "Joel_Lord" }, { "code": "", "text": "Oh cool!\nI’m attending (and doing a pre-compiler) with FastAPI and mongo backend.", "username": "Nuri_Halperin" }, { "code": "", "text": "My workshop at CodeMash covers python API, pydantic, and MongoDB as a database to back the API.", "username": "Nuri_Halperin" } ]
CodeMash
2022-11-04T21:32:58.159Z
CodeMash
2,911
null
[]
[ { "code": "", "text": "Hello,\nIs there a limit to the number of fields that can be added to Atlas Search Indexes?Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "", "text": "Hi Prasad,For MTM clusters (M0 - M5), each index has a limit of 300 fields that can be indexed. However, for dedicated clusters (M10 & above), there is no limit. Hope this helps!Thanks,\nAkaash", "username": "Akaash_Kambath" }, { "code": "", "text": "Thanks Akash. Is there any limit to the number of fields in Stored Source?", "username": "Prasad_Kini" } ]
Atlas Index Max Number Of Fields
2023-01-06T16:57:18.672Z
Atlas Index Max Number Of Fields
718
null
[ "app-services-user-auth", "react-native" ]
[ { "code": "(app.currentUser.logOut()app.emailPasswordAuth.registerUser()Realm.open(config).then.catchRealm.openuseEffect(() => {\n const OpenRealmBehaviorConfiguration = {\n type: 'openImmediately',\n };\n const config = {\n schema: [EntrySchema, SubjectSchema, LearnerSchema],\n sync: {\n user,\n partitionValue: user.id,\n error: (_session, error) =>\n console.log('sync error', error.name, error.message),\n newRealmFileBehavior: OpenRealmBehaviorConfiguration,\n existingRealmFileBehavior: OpenRealmBehaviorConfiguration,\n initialSubscriptions: {\n update: (subs, _realm) => {\n subs.add(_realm.objects('Entry'));\n subs.add(_realm.objects('Subject'));\n subs.add(_realm.objects('Learner'));\n },\n },\n rerunOnOpen: true,\n },\n };\n Realm.open(config)\n .then(_realm => {\n setRealm(_realm);\n })\n .catch(e => console.warn('Realm open error:', e));\n\n return () => {\n if (realm) {\n realm.close();\n setRealm(null);\n }\n };\n }, [user]);\n \"dependencies\": {\n \"@react-native-community/datetimepicker\": \"^6.3.2\",\n \"@react-native-picker/picker\": \"^2.4.4\",\n \"@react-navigation/native\": \"^6.0.11\",\n \"@react-navigation/native-stack\": \"^6.7.0\",\n \"@realm/react\": \"^0.3.2\",\n \"react\": \"18.0.0\",\n \"react-native\": \"0.69.3\",\n \"react-native-background-timer\": \"^2.4.1\",\n \"react-native-get-random-values\": \"^1.8.0\",\n \"react-native-safe-area-context\": \"^4.3.1\",\n \"react-native-screens\": \"^3.15.0\",\n \"react-native-toast-message\": \"^2.1.5\",\n \"react-native-vector-icons\": \"^9.2.0\",\n \"react-native-wheel-pick\": \"^1.2.0\",\n \"realm\": \"^10.19.5\"\n },\n", "text": "I’ve been developing a React Native app using Realm for data syncing for a while now. My data seems to be syncing as expected but recently, I tried modifying the authorization portion of the app and I’m running to issues where the app is crashing. I’ve narrowed down where the crash is occurring but I haven’t been able to find any helpful logs, either from the Realm dashboard or putting console logs in my code. My goal is to try to figure out what’s breaking.Repo steps:Result: app crashes when the code reaches Realm.open(config), never getting into the .then block or the .catch blockThe Realm dashboard logs indicate a successful sync and no other error\nReopening the app after the crash, the user is logged in and the app behaves as expected.Here is the useEffect that calls Realm.openHere are my package deps:", "username": "Nick_Martin" }, { "code": "realm", "text": "updating the realm package to 10.24.0 seems to have resolved the issue", "username": "Nick_Martin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
React Native Realm app crashing when registering a user and signing in immediately after
2023-01-06T16:55:52.565Z
React Native Realm app crashing when registering a user and signing in immediately after
1,615
null
[ "atlas-search" ]
[ { "code": "", "text": "I would like to be able to make a search query using the compound pipeline. An example of a document in the collection I am querying is below:\n{\ntitle: “Nike Flyknit Pegasus 32”\ngender: “Women”\ncategory: “Shoes”\ndescription: “Newest Nike running shoe with Zoom foam in the forefoot”\nbrand: “Nike”\nprice: 120\n}I have a search index on “title”, and want to be able to perform a search where I filter based on “gender”, “category”, “brand”, and < price, but it seems I cannot stack filters in a must case?I want the query to filter by the fields (similar to a match) so if I make a search for “Adidas Ultraboost” with filters “Women” “Shoes” and < $ 150, I want to search the “title” index with “Adidas Ultraboost”, but also only show results that match “Women” “Shoes” and are less than 150 for the price attribute.Any advice on how to make the compound pipeline work would be appreciated.", "username": "Kunal_Rai" }, { "code": " $search: {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"path\": \"title\",\n \"query\": \"Nike Flyknit Pegasus 32\",\n }\n },\n {\n \"text\": {\n \"path\": \"gender\",\n \"query\": \"Women\",\n }\n },\n ]\n }\n },\n },\n", "text": "You should be able to something like this:", "username": "Prasad_Kini" }, { "code": "", "text": "Thanks for the example query. I was running the compound stage like that however it seems like “must” isn’t enforced strictly? For something like gender or category I am looking for a strict match. I only want to search for results that absolutely have the attribute Women or category Shoes.", "username": "Kunal_Rai" }, { "code": "", "text": "Try using the phrase operator instead of text.I am new to Mongo as well ", "username": "Prasad_Kini" }, { "code": "{\n\t\"$search\": {\n \"compound\": {\n \n \"must\": [{\n \"phrase\": {\n \"query\": \"Shoes\",\n \"path\": \"category\"\n },\n \"phrase\": {\n \"query\": \"Nike\",\n \"path\": \"brand\"\n }\n }],\n \n },\n \n\t},\n },\n { \"$project\" : { \"_id\" : 0 } },\n { \"$limit\" : 24 }\n])\n", "text": "Even with a pipeline like this, the search results show documents that do not match 1 of the 2 must cases. Do I need to combine the all phrase statements in the must clause into a singular one?", "username": "Kunal_Rai" }, { "code": "", "text": "Which analyzer is the index using? I believe you should be using the Keyword Analyzer.Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!", "username": "Prasad_Kini" }, { "code": "", "text": "Currently I only have a single index (dynamic over index fields) that is using lucene.standard for the index analyzer and lucene.standard for the search analyzer. Do I need to make a separate index for each field I want to query with a keyword analyzer? And should the lucene option be consistent across index analyzer and search analyzer?", "username": "Kunal_Rai" }, { "code": "", "text": "I am pretty sure using different analyzers for indexing and searching would lead to different behaviors. It would be mostly trial and error at this point. I will be doing these experiments over the next few weeks.At this point, it is one blind guiding the other ", "username": "Prasad_Kini" }, { "code": "", "text": "Makes sense, I will experiment with some different analyzers and see if I can get this query working. Thanks!", "username": "Kunal_Rai" } ]
Making a compound $search query with multiple attribute matching
2023-01-06T06:30:01.212Z
Making a compound $search query with multiple attribute matching
1,208
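A hedged sketch of the compound query shape usually suggested for this case: strict criteria go in a filter clause as separate operator objects (not duplicate keys inside one object), and the price bound uses range. The collection name is an assumption, and phrase only gives near-exact matching with the standard analyzer; true exact matching would need the keyword analyzer on those fields.

```javascript
db.products.aggregate([
  {
    $search: {
      compound: {
        must: [
          { text: { query: "Adidas Ultraboost", path: "title" } }
        ],
        filter: [
          { phrase: { query: "Women", path: "gender" } },
          { phrase: { query: "Shoes", path: "category" } },
          { range: { path: "price", lt: 150 } }
        ]
      }
    }
  },
  { $project: { _id: 0 } },
  { $limit: 24 }
])
```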
null
[ "node-js", "replication", "compass" ]
[ { "code": "mongodb://username:[email protected]:27017/db?authSource=admin\nIt looks like you are trying to access MongoDB over HTTP on the native driver port.\nmongodb://username:[email protected]/db\n", "text": "Hello everyone.I build a self hosted instance of mongodb in AWS EC2 with username and password configs.\nIt work correclty with connection in compass or in my app using the uri like this:When I fetch 11.22.33.44:27017, I get this message:I want to use an uri using a subdomain like the uri provided by mongo atlas.\nsomething like thisthen deny all access to port via IP address 11.22.33.44:27017.I want only acces to mongo instance via one uri with domain name.Thanks", "username": "ayoub_sadiki" }, { "code": "mongodb+srv://", "text": "When I fetch 11.22.33.44:27017, I get this message:This is expected. MongoDB will expect TLS handshake or mongodb wire protocol.I want to use an uri using a subdomain like the uri provided by mongo atlas.\nsomething like thisDoing it like Atlas is more complex as it uses the DNS Seed List connection URI mongodb+srv:// which would require the configuration of DNS SRV and TXT records. See connection string.then deny all access to port via IP address 11.22.33.44:27017.That is never going to work. DNS always resolves to an ip address. It is better to allow access only from the IPs you permit to connect.Be sure to enable TLS as well. Connecting without TLS leaves the traffic vulnerable to capture and eavesdropping.", "username": "chris" } ]
Build mongo uri using self hosted database
2023-01-03T15:33:31.943Z
Build mongo uri using self hosted database
1,204
null
[ "queries", "performance" ]
[ { "code": "", "text": "Where can I find description of all input_stages", "username": "Prof_Monika_Shah" }, { "code": "", "text": "Hi @Prof_Monika_Shah,You can find some documentation in here: Explain Results. But they are not all fully documented as they are closely related to the entire query engine which can differ a bit from one version to another and frankly this can be more confusing than helping.What is the question exactly? Maybe I can try to help.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88 ,\nThank you for reply.\nI know this url for basic information about stages. But, it does not show all stages or input stages. I have some queries .\nfor example for db.emp.explain(‘executionStats’).find({‘salary’:{’$gt’:197812}}),\nit shows\nStage:Shard_merge Time: 848 [shard_rs1 Time:576,shard_rs2 Time:703,shard_rs3 Time:847]\nStage wise time at every shard\nshard_rs1 Time : 576(shard_filter Time: 174 (Fetch :113 (IXSCAN:43))) ]\nshard_rs2 Time : 703(shard_filter Time: 309 (Fetch :213 (IXSCAN:73))) ]\nshard_rs3 Time : 847(shard_filter Time: 331 (Fetch :254 (IXSCAN:75))) ]\nMy queries are\ni) Is Shard_Merge done at every shard ? (Because, It shows array of executionStats for every shard)\nii) What is difference between executionTimeMillis and executionTimeMillisEstatimte?\n(Because, it shows executionTimeMillis for every shard in Shard_Merge array. On other side, for every input stages, it shows executionTimeMillisEstimate)\niii) Is executionTimeMillisEstimate of every stage represent executionTimeMillisEstimate + its own processing time?\niv) Why executionTime of every shard is much more than other stages?\nWhat is need of Sharding_Filter after IXSCAN and FETCH stages for each shard?", "username": "Prof_Monika_Shah" }, { "code": "", "text": "Can you give a good example to understand use of SHARD_FILTER stage?", "username": "Prof_Monika_Shah" }, { "code": "", "text": "Why Shard_Merge is own at all shards in explain plan?", "username": "Prof_Monika_Shah" }, { "code": "explain(true)", "text": "I honestly don’t have an answer to all these questions. It’s very specific and low level.Can you please share the entire explain plan with explain(true) so I can have a better idea of what is happening?Some stages only deal with orphan docs so they are relatively fast. Everything “estimate” is probably related to the query planner and the cached plans.Maxime.", "username": "MaBeuLux88" }, { "code": "explain", "text": "Hi @Prof_Monika_Shah,As my colleague noted, having the full (verbose) explain output could provide us with a better understanding of the situation and potentially allow for a more useful reply. The other thing that would be particularly useful to know would be about the overall problem that you are facing and the goals/outcome that you are attempting to achieve. Without that we are only going to be able to provide snippets of information which may or may not be helpful in solving your underlying issue(s).Broadly speaking, there’s nothing in the brief information that you’ve provided that looks particularly out of the ordinary or problematic. Shard filtering and merging are standard internal parts of executing queries that target more than one shard. The filtering stage is responsible for removing documents that have logically moved to another shard but haven’t been physically removed yet (referred to as orphans) and the merge stage is responsible for compiling the responses from individual shards into the single response that the client is expecting. 
Duration estimates are less accurate due to the manner in which they are measured which can result in rounding errors obscuring where some of the work is taking place. Stage times are cumulative, so any time spent by child stages are included in the duration of the parent stage. There is some parallelism that naturally happens with sharding, which means that the cumulative time there looks a little different. Relatedly it’s important to keep in mind that network latency associated with internal communication to process the operation will also be captured in various places for the reported execution times.We cannot say anything meaningful about the overall duration of the operation (seemingly about 0.85 seconds) without more information. A key factor there would be the number of documents in the result set.Hopefully this is helpful!Best,\nChris", "username": "Christopher_Harris" } ]
Stages in explainPlan
2022-12-07T18:51:54.983Z
Stages in explainPlan
2,392
null
[ "aggregation", "queries", "data-modeling", "indexes" ]
[ { "code": "", "text": "In a dating app like Tinder, users slide left and right to indicate who the have rejected and who they have liked.The problem is, most users are going to easily have 1000s of rejects and 1000s of likes.How would you store these many to many relationships? For most users, they will not fit in one document, so the outlier pattern doesn’t apply here.", "username": "Big_Cat_Public_Safety_Act" }, { "code": "{\n matcher : \"id1\",\n matched : [ \"id2\", \"id5\" ... Up to 100 or last 10s],\n matchSeq : 1,\nwriteDate : \"2023-01-10\"\n ..\n}\n{\n matchId : \"aaa\"\n match : [ {\"matcher\": \"id1\" ,\n \"matched\" : \"id5\" },\n {\"matcher\": \"id5\" ,\n \"matched\" : \"id1\" }]\n matchedDate : ...,\n IsMatched : true\n\n}\n\n", "text": "Hi @Big_Cat_Public_Safety_Act ,The outlier pattern may apply on a matched collection where you will keep an id of the matcher user and its matched array in the past x seconds. Since apps like tinder can swipe users pretty fast we won’t want to do an insert or update call on each swipe but either accumulate them in an array or write them as separate match documents into a matched collection.Or hold small matched document:Now in the small matched document I will also write it periodically with a bulk upsert write.You can find a match using an upsert on if matched is my user and if not setOnInsert.Obviously index the predicts and consider ttl index for old matches which expired.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "So then we have to put all of the matched data in a collection separate from the user collection, correct?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "I would vote for that yes.But this is according to the design I have in my mind for CRUD and queries of a system like that.You might find something else useful there is no one model fits all…", "username": "Pavel_Duchovny" }, { "code": "db.col.aggregate({\n {\n $match: {\n userID: { $nin: [ /* already rejected or already matched useres*/ ] }\n }\n }\n])\n[ /* already rejected or already matched useres*/ ]", "text": "So when the user logs in and the app must query for a list of suggestions, it must determine the lists of rejected and match users, so that the query won’t show them again. It must do something likeSince the array, [ /* already rejected or already matched useres*/ ], is expected to be in the 1000s and can easily exceed the 16MB limit, what is the solution to this problem that this query must handle?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "var matchedProfiles = db.matches.find({matcher : myId})\n\nprofiles = db.profiles.aggregat({ { $near : { gelocation : ... } }, gender : ... }, \n// Add a sortby an index compund to the geo one to add some randomisation and better score\n{$sort : {lastModified : -1, score : 1}},\n{\n $limit : 1000\n}\n]\n)\n\n// Optimization turn the matchedProfile to be dict by a matchedId \n\n\nprofiles.filter( !matchedProfiles[profileId]))\n\nWhile (profiles.length < 200) \n{\n// randomize sort \nprofiles = profiles + search again\n}\n\nRun async process to get the next batch ready for next access. 
\n\nsend 200 results back\n\n\n", "text": "Hi @Big_Cat_Public_Safety_Act ,So there are several ways to achieve that, what i prefer is that the endpoint in the API that delivers the batch of profiles to render in the application is doing a 2 phase query :So in this process we first fetch the list of users to avoid , then we get the profiles based on our general location and some index based sorting to have some fundamental smart randomisation .Only then we limit to leta say a 1000 results while the app needs only 200 for first init batch .We filter out matched or rejected users and pass 200 back. In case we missed the 200 mark we add another randomisation search. This can be done in server memory or some cache service. Its not a complex task for modern languagesOnce we decide to return we already async prepare the next batch for when the user scroll through the 200 users (probably not in the near few seconds hopefullyTy\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "To be honest I believe that the real big systems have some suphisticated caching mechanisms or even stores.So they might do this filtering there or even pre prepair candidates for you once you log out for fast login next time. And if you changed location drastically they will buy time by prompting confirm your location while preparing a new batch of users for you.Thanks", "username": "Pavel_Duchovny" } ]
How would one design a database to model a dating App like Tinder?
2023-01-06T05:16:20.743Z
How would one design a database to model a dating App like Tinder?
2,738
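A rough sketch of the two-phase fetch Pavel outlines, as it might look with the Node.js driver. Collection names, field names, the 2dsphere index, and the lng/lat/myId variables are all assumptions, not a definitive implementation:

```javascript
// Phase 1: ids this user has already swiped on (separate "matches" collection).
const seen = await db.collection("matches")
  .find({ matcher: myId }, { projection: { matched: 1, _id: 0 } })
  .toArray();
const seenIds = new Set(seen.flatMap(doc => doc.matched ?? []));

// Phase 2: over-fetch nearby candidates, then filter already-seen ones in memory.
// Requires a 2dsphere index on profiles.location; $geoNear must be the first stage.
const candidates = await db.collection("profiles").aggregate([
  { $geoNear: { near: { type: "Point", coordinates: [lng, lat] },
                distanceField: "dist", maxDistance: 50_000, query: { gender: "Women" } } },
  { $sort: { lastModified: -1 } },
  { $limit: 1000 }
]).toArray();

const batch = candidates.filter(p => !seenIds.has(String(p._id))).slice(0, 200);
```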
https://www.mongodb.com/…6_2_1024x589.png
[ "mongodb-shell" ]
[ { "code": "show databasesuseshow collectionsinsert", "text": "My mongoshell is starting correctly and show databases command use commands are working perfectly but when i use show collections commands or insert document commands the terminal showing mongoserverselectionerror\n\nimage1097×631 32.1 KB\n", "username": "Kamesh_A" }, { "code": "", "text": "Id try removing the timeout setting from the connection string query options. And see what happens.", "username": "santimir" }, { "code": "timeout setting", "text": "if i set timeout setting =0 it’s taking almost an hour to show the collections. why is it taking this much time to respond?", "username": "Kamesh_A" } ]
Mongoserverselectionerror in mongo shell
2023-01-06T11:02:21.421Z
Mongoserverselectionerror in mongo shell
860
null
[]
[ { "code": "", "text": "Hi, I added 0.0.0.0/0 as my IP address during creation of my new cluster. Now we are considering restricting the access, but if I add my public IP which probably a dynamic one, what will happen if my public IP gets changed. And there are multiple people accessing the cluster using Studio 3T, will it affect them. Can anyone suggest any solution. I’m an absolute newbie.TLDR: I’m looking for a solution to restrict access to my cluster from public network but there are multiple people accessing the cluster using apps like Studio 3T", "username": "sbn390" }, { "code": "", "text": "If this is a free Atlas account, you’ll need to change the IPs manually whenever they change.\nIf this is a paid Atlas account, you need to contact Atlas support for instructions on managing your IP access.", "username": "Jack_Woehr" } ]
Adding my IP to Access List
2023-01-06T05:18:13.611Z
Adding my IP to Access List
598
null
[ "sharding" ]
[ { "code": "sudo apt-get update\necho \"deb http://security.ubuntu.com/ubuntu impish-security main\" | sudo tee /etc/apt/sources.list.d/impish-security.list\nsudo apt update\nsudo apt install libssl1.1 --> libssl1.1 is not installed!\n\nwget -qO - https://www.mongodb.org/static/pgp/server-6.0.asc | sudo apt-key add -\necho \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list\nsudo apt update\nsudo apt install -y mongodb-org\nmongodb-org-mongos : Depends: libssl1.1 (>= 1.1.1) but it is not installable\n mongodb-org-server : Depends: libssl1.1 (>= 1.1.1) but it is not installable\n mongodb-org-shell : Depends: libssl1.1 (>= 1.1.1) but it is not installable\n", "text": "Hello! I try to install mongoDB community server 6.0 on Ubuntu 22.04. Belows are my installation process.But libssl1.1 is not installed and brings the errors when executing the installation.Belows are the error messgaes.So I downloaded the installation deb file from the site and execute it. But the same errors are thrown. Any reply will be thanksful. Best regards", "username": "Joseph_Hwang" }, { "code": "", "text": "Ubuntu 20.04 is the latest supported Ubuntu.Keep an eye on https://jira.mongodb.org/browse/SERVER-62300Use the supported platform or you can run it in Docker.\nimage826×209 25.6 KB\n", "username": "chris" }, { "code": "", "text": "Hi Guys, any update on what 22.04 will be supported?", "username": "Ken_Town" }, { "code": "wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1-1ubuntu2.1~18.04.20_amd64.deb\nsudo dpkg -i libssl1.1_1.1.1-1ubuntu2.1~18.04.20_amd64.deb\n", "text": "hey guys,the following allowed me to run version 6 on ubuntu-22.10-in-a-docker-image. so in theory it should work on a real installation too.in addition to the steps you followed, do this first:As long as they do not break installed things, “archived” packages should always install and run without a problem. that is what happened in these lines.Yet, keep in mind that, if you break things, the responsibility is yours because MongoDB Community 6 is not yet supported on Ubuntu 22 (is the enterprise version supported? I did not try).PS: I found the package here: Index of /ubuntu/pool/main/o/openssl . there are a few others so check they are applicable instead of the one I use.", "username": "Yilmaz_Durmaz" }, { "code": "echo \"deb http://security.ubuntu.com/ubuntu focal-security main\" | sudo tee /etc/apt/sources.list.d/focal-security.list\n", "text": "this also works:With this one, it will not even require you to know what to install. it will fetch “1.1.1f-1ubuntu2.16” for amd64. (my other answer used ubuntu2.1~18).", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Please check this video…I installed Mongodb on Ubuntu 22.04 by following the step by step instructions from this video: install mongodb in ubuntu 22.04 - YouTube", "username": "Silpa_Baburajan" }, { "code": "focal-securitysources.list", "text": "Hi @Silpa_Baburajan , that video installs version 5, so I suggest you check your installation.the OPs first post above has the commands for version 6. 
It seems adding focal-security to sources.list allows the installation smoothly.", "username": "Yilmaz_Durmaz" }, { "code": "jammyfocalecho \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list\n", "text": "As of 6.0.3 mongodb server is built for Ubuntu 22.04.The manual is yet to be updated but very similar to the 20.04 instructions but using jammy in the repo string instead of focal.", "username": "chris" }, { "code": "md64,arm64 ] httpcurl -sS https://pgp.mongodb.com/server-6.0.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/mongodb-6.0.gpg", "text": "md64,arm64 ] httpto avoid the apt-key message:\ncurl -sS https://pgp.mongodb.com/server-6.0.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/mongodb-6.0.gpg", "username": "Ken_Town" }, { "code": "", "text": "That is part of the regular install steps in the manual for installing version 6 on ubuntu. No point in recreating the manual in the forum.", "username": "chris" }, { "code": "", "text": "this man page: https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/#platform-support uses apt-key which gave me a warning that apt-key is depreciated. I found this thread because all the other docs/attempts to install mongo on 22.04 failed. Is there a more current install guide to help the next guy?", "username": "Ken_Town" }, { "code": "apt-key/etc/apt/trusted.gpg.ascgpg --armor.gpg/etc/apt/trusted.gpg.d/.ascgpg --dearmor.gpg.gpgcurl -sS https://pgp.mongodb.com/server-6.0.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/mongodb-6.0.gpg\nor\nwget -qO - https://pgp.mongodb.com/server-6.0.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/mongodb-6.0.gpg\napt-key list/etc/apt/trusted.gpgapt-key remove", "text": "I agree with @Ken_Town ,Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8))apt-key stores all keys in a single /etc/apt/trusted.gpg which seems to have its own issues (thus deprecated). It embeds to this file the .asc key files created with gpg --armorthe suggested way to store key files is to store .gpg files under /etc/apt/trusted.gpg.d/. if we have .asc file then gpg --dearmor takes its content and gives back .gpg content.unless we are supplied directly with .gpg file by MongoDB (so be careful in the future), it is better to usePS: use apt-key list and see any key under /etc/apt/trusted.gpg. use apt-key remove to remove those if you want, and add again with the above method.", "username": "Yilmaz_Durmaz" }, { "code": "focaljammy", "text": "That is a good point, apt-key is deprecated.However it is only a warning. i.e. Don’t continue to use this its going away, there is a newer way of doing it now. I’m sure it will be addressed when the docs are updated.The point I am making is that until the documentation/manual is updated the only change required to install on Ubuntu 22.04 is changing focal to jammy. There is no additional steps, a very small deviation from what is already documented.If you trace back the release notes and related jiras there is outstanding items relating to updating the documentation.", "username": "chris" }, { "code": "", "text": "I followed the instructions on the video to install MongoDB 6 on Ubuntu 22.04 (replaced focal with jammy and 5.0 and 6.0) but once I started, enabled and checked the status of mongod it gave me \"failed (Result: core-dump). I’m running Ubuntu 22.04 in VirtualBox. 
I removed the installation thinking I may have done something wrong but got the same result. I don’t know what else to do.", "username": "Nelson_Muller" }, { "code": "sudo apt dist-upgrade\nsudo apt-get install gnupg\nwget -qO - https://pgp.mongodb.com/server-6.0.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/mongodb-6.0.gpg\necho \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list\nsudo apt-get update\nsudo apt-get install -y mongodb-org\nsudo systemctl enable mongod\nsudo systemctl start mongod\n\n$ mongosh\nCurrent Mongosh Log ID: 63b77379982aa0dc5c083475\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.1\nUsing MongoDB: 6.0.3\nUsing Mongosh: 1.6.1\n", "text": "step by step for Ubuntu 22.04. full docs here", "username": "Ken_Town" }, { "code": "avxgrep flags -m1 /proc/cpuinfo | grep avx", "text": "If you have an old cpu you will need to install version 4. Version 5 and 6 require avx\ngrep flags -m1 /proc/cpuinfo | grep avxI was just testing our MongoDB v17.0 release and Mongo wouldn't start. In fact i… can't even read it's help or check it's version...:\n\n```\nroot@mongodb ~# mongod --help\nIllegal instruction\nroot@mongodb ~# mongod --version\nIllegal instruction\n```\n\nAfter a bit of searching, I discovered a [bug that sums up the situation](https://jira.mongodb.org/browse/SERVER-59482). A comment from one of the devs notes that v5.0+ has [specific CPU requirements](https://www.mongodb.com/docs/manual/administration/production-notes/#x86_64).\n\nApparently, it could be recompiled to allow v5.0 to run on older x86_64 CPUs, although I'm not keen on doing that.\n\nAnother option is to stick with v4.4 for now?\n\nI guess another option would be to provide 2 appliances, but 2 things; firstly I don't ahve a compatible CPU and I don't think that mongodb is popular enough for that. My inclination is to stick with v4 for now. We could perhaps have a script to upgrade?\n\nFWIW on Linux to check if your CPU can run v5:\n```\ngrep flags -m1 /proc/cpuinfo | grep avx\n```\n(FWIW, that will only check the first CPU core, but AFAIK, that's enough).", "username": "Ken_Town" }, { "code": "", "text": "This is a different problem.You should search the forum first and then open a new topic if you don’t find an answer.Both the core-dump/Illegal Instruction and ECONNREFUSED have had much coverage in the forums.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to install mongodb 6.0 on Ubuntu 22.04
2022-07-26T10:06:14.000Z
How to install mongodb 6.0 on Ubuntu 22.04
28,587
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hi can we store mongo db queries in mongo db document and later evaluate them with parameters? Currently it doesn’t work. Please see the details in this link.Basically a document will have a query and I want to find the result of that query by giving parameters at run time in aggregation.", "username": "VIPAN_KUMAR" }, { "code": "stored_pipeline = [\n { \"$match\" : { \n \"creation_date\" : \"@creation_date\" ,\n \"user_id\" : \"@user_id\" ,\n \"status\" : \"vacation\"\n } } \nquery_map = {\n \"@creation_date\" : new Date( \"2022-12-25\" ) ,\n \"@user_id\" : \"steevej\"\n}\nfunction map_dynamic_query( stored_query , query_map )\n{\n for stored_value in stored_query\n {\n if stored_value starts with @\n {\n dynamic_value = query_map.find( stored_value )\n replace stored_value with dynamic_value\n }\n }\n}\nstored_pipeline = [\n { \"$match\" : { \n \"creation_date\" : 2022-12-25T00:00:00.0Z ,\n \"user_id\" : \"steevej\" ,\n \"status\" : \"vacation\"\n } } \n", "text": "It should not be too hard to implement something along the lines:Query stored somewhere with placeholders. The placeholders are string that starts with @.The you setup a map for your place holders using the current values you wish.Then you write a function that takes the stored query and the query map. This function goes over the stored query and replace each instance of a place holder with the current value from the query map.Running the above function with the above stored_pipeline and query_map would produce an executable query such as:", "username": "steevej" }, { "code": "", "text": "@VIPAN_KUMAR, please share the final approach you took so that others find the forum useful.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Storing queries in document and evaluate using parameters
2022-12-28T17:37:45.931Z
Storing queries in document and evaluate using parameters
1,170
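A small, self-contained JavaScript take on the placeholder-mapping function sketched in this thread; the @-prefix convention and field names follow steevej's pseudocode:

```javascript
// Recursively replace "@placeholder" strings in a stored pipeline with live values.
function mapDynamicQuery(node, queryMap) {
  if (typeof node === "string" && node.startsWith("@")) {
    return Object.prototype.hasOwnProperty.call(queryMap, node) ? queryMap[node] : node;
  }
  if (Array.isArray(node)) return node.map(item => mapDynamicQuery(item, queryMap));
  if (node !== null && typeof node === "object" && !(node instanceof Date)) {
    return Object.fromEntries(
      Object.entries(node).map(([key, value]) => [key, mapDynamicQuery(value, queryMap)])
    );
  }
  return node;
}

const storedPipeline = [
  { $match: { creation_date: "@creation_date", user_id: "@user_id", status: "vacation" } }
];
const queryMap = { "@creation_date": new Date("2022-12-25"), "@user_id": "steevej" };
const pipeline = mapDynamicQuery(storedPipeline, queryMap);
// db.collection("...").aggregate(pipeline)
```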
null
[ "react-native", "react-js" ]
[ { "code": "import React from 'react';\nimport { render, act } from '@testing-library/react-native';\nimport ListOfPlayers from './ListOfPlayers';\nimport createMockRealm from '../../utils/Realm/MockRealm';\nimport { mockTeam, RealmProvider, schema, mockData, Realm } from '../../../__mocks__/realm/realm';\n\ndescribe('ListOfPlayers', () => {\n\tlet realm;\n\tlet onChange;\n\n\tbeforeEach(async () => {\n\t\trealm = await createMockRealm(schema, [mockTeam]);\n\t\tonChange = jest.fn();\n\t});\n\n\tafterEach(() => {\n\t\tif (!realm.isClosed) {\n\t\t\trealm.close();\n\t\t}\n\t});\n\n\tit('renders a list of players', async () => {\n\t\ttry {\n\t\t\tconst { getByText } = render(\n\t\t\t\t<RealmProvider realm={realm}>\n\t\t\t\t\t<ListOfPlayers player={{ name: 'player1' }} onChange={() => {}} />\n\t\t\t\t</RealmProvider>\n\t\t\t);\n\t\t\tfor (const player of mockData) {\n\t\t\t\tawait act(async () => {\n\t\t\t\t\tawait new Promise(resolve => setTimeout(resolve, 0));\n\t\t\t\t\texpect(getByText(player.name)).toBeTruthy();\n\t\t\t\t});\n\t\t\t}\n\t\t} catch (error) {\n\t\t\tconsole.log(error);\n\t\t}\n\t});\n\n});\n\nimport { Realm } from '@realm/react';\n\nasync function createMockRealm(schema, data) {\n\tconsole.log('Creating mock Realm with schema:', schema);\n\tconsole.log('Populating mock Realm with data:', data);\n\tconst config = {\n\t\tschema,\n\t\tdeleteRealmIfMigrationNeeded: true,\n\t\tinMemory: true,\n\t\tpath: 'mock.realm',\n\t};\n\ttry {\n\t\tconst realm = await Realm.open(config);\n\t\tconsole.log('Successfully created mock Realm:', realm);\n\t\tdata.forEach(item => {\n\t\t\trealm.write(() => {\n\t\t\t\trealm.create(item.name, item, true);\n\t\t\t});\n\t\t});\n\t\tconsole.log(realm.objects('Team'), 'objects');\n\t\tconsole.log('Finished populating mock Realm with data');\n\t\treturn realm;\n\t} catch (error) {\n\t\tconsole.log(error);\n\t}\n}\n\nexport default createMockRealm;\nimport React, { useContext, useState } from 'react';\nimport { ScrollView } from 'react-native';\nimport { RadioButton } from 'react-native-paper';\nimport { ThemeContext, UserContext } from '../../context/context';\nimport { useQuery } from '../../context/realmContext';\n\nconst ListOfPlayers = ({ player, onChange }) => {\n\tconst { colors } = useContext(ThemeContext);\n\tconst { userId } = useContext(UserContext);\n\tconst [selectedPlayer, setSelectedPlayer] = useState(player.name);\n const team = useQuery('Team').filtered(`user_id == '${userId}'`)[0];\n const players = team?.players;\n\n\treturn (\n\t\t<ScrollView style={{ width: '100%', marginBottom: 20 }}>\n\t\t\t<RadioButton.Group\n\t\t\t\tvalue={selectedPlayer}\n\t\t\t\tonValueChange={name => {\n\t\t\t\t\tsetSelectedPlayer(name),\n\t\t\t\t\t\tonChange({\n\t\t\t\t\t\t\t...player,\n\t\t\t\t\t\t\tname: name,\n\t\t\t\t\t\t});\n\t\t\t\t}}>\n\t\t\t\t{players.map(player => (\n\t\t\t\t\t<RadioButton.Item\n\t\t\t\t\t\ttestID={'player-item' + player.name}\n\t\t\t\t\t\tkey={player._id}\n\t\t\t\t\t\tlabelStyle={{ color: colors?.primary }}\n\t\t\t\t\t\tlabel={player.name}\n\t\t\t\t\t\tvalue={player.name}\n\t\t\t\t\t/>\n\t\t\t\t))}\n\t\t\t</RadioButton.Group>\n\t\t</ScrollView>\n\t);\n};\n\nexport default ListOfPlayers;\n\nimport { createRealmContext } from '@realm/react';\nimport { Realm } from '@realm/react';\n\nconst schema = [\n\t{\n\t\tname: 'Player',\n\t\tproperties: {\n\t\t\tname: 'string',\n\t\t\tage: 'int',\n\t\t\t_id: 'string',\n\t\t},\n\t},\n\t{\n\t\tname: 'Team',\n\t\tproperties: {\n\t\t\tname: 'string',\n\t\t\tuser_id: 'string',\n\t\t\tplayers: { type: 
'list', objectType: 'Player' },\n\t\t},\n\t},\n];\n\nconst mockData = [\n\t{ name: 'player1', age: 21, _id: '1' },\n\t{ name: 'player2', age: 22, _id: '2' },\n\t{ name: 'player3', age: 23, _id: '3' },\n];\n\nconst mockTeam = {\n\tname: 'Team',\n\tuser_id: '123',\n\tplayers: mockData,\n};\n\nconst { useRealm, useQuery, useObject, RealmProvider } = createRealmContext({\n\tschema,\n\tdeleteRealmIfMigrationNeeded: true,\n\tinMemory: true,\n});\n\nexport { useRealm, useQuery, useObject, RealmProvider, mockTeam, schema, mockData, Realm };\n\n", "text": "React Native with @realm/react - testing with Jest and @testing-library/react-nativeNeed help with setting up mock for the Realm.In this example I’m trying to test my component ListOfPlayers to see if it renders a list, to do so I need to Mock the RealmProvider. But the problem is the Mocked Realm is always returning an Empty Object\ninstead of the data I’m writing to it.If you have any insights on how to go about testing or I’m way of base, please feel free to share it thanks.\nHi @Kyle_Rollins is it possible you could help me out again ?Below is the Test for the component ListOfPlayers.Here is the utility mock functionHere is the componentSchema and realmProvider", "username": "Mads_Haerup" }, { "code": "src/__tests__", "text": "Hi @Mads_Haerup! I’ll try to find some time to look at this more closely and be more helpful.In the meantime, have you looked at the tests in the React Native example app? You can find individual component test in src/__tests__.", "username": "Kyle_Rollins" } ]
Testing - Realm React Native SDK
2023-01-06T11:42:04.565Z
Testing - Realm React Native SDK
1,653
null
[ "aggregation", "queries", "data-modeling" ]
[ { "code": "{\n {\n userName: \"cary\"\n time: 50\n userId: 1\n }\n {\n userName: \"john\"\n time: 40\n userId: 1\n }\n {\n userName: \"bliss\"\n time: 50\n userId: 1\n }\n {\n userName: \"ross\"\n time: 40\n userId: 1\n }\n}\n", "text": "Hi,\nsuppose if I am having lakhs of doc in a collection like this.collection example:I have to query in this collection such as I will get the count of users whose time is less than 40,50,60result example:{\ntime: UsersCountWithPrticularTime,\n‘40’: 150,\n‘50’: 50,\n‘60’: 75,\n‘40’: 90,}Any idea of how can I acheive this", "username": "Mohammad_Farhan" }, { "code": "", "text": "I think that what you want can be done using $bucket.", "username": "steevej" }, { "code": "", "text": "@Mohammad_Farhan, please share your final approach so that others find the forum useful.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best aggregate approach regarding a query on lakhs of doc in a collection
2022-12-29T08:49:13.861Z
Best aggregate approach regarding a query on lakhs of doc in a collection
1,084
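For reference, a minimal $bucket shape for the counts asked about above; the collection name and boundaries are assumptions, and this gives per-range counts (cumulative "less than" totals could be derived from them or built with $facet):

```javascript
db.records.aggregate([
  {
    $bucket: {
      groupBy: "$time",
      boundaries: [0, 40, 50, 60],       // buckets [0,40), [40,50), [50,60)
      default: "60+",                     // everything at or above 60
      output: { usersCount: { $sum: 1 } }
    }
  }
])
```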
null
[ "aggregation", "queries", "mongoose-odm" ]
[ { "code": "find()aggregation pipelinefind()find()find()aggregation pipeline in $matchfind()aggregation pipeline $match", "text": "I am working with Data.\nI have applied the find() query and aggregation pipeline for the same data.Now in find() query, it is loosely checking the data while in the aggregation pipeline, in $match it is checking strictly. Here I am giving the example.For example in MongoDB data is stored like status: ‘2’. That means the value of status is in a String(To be a precise numeric string). Now when I apply find() query on status like find(status: 2), it will return all the data with status 2. Please note that in find() query I am passing the Number, not the string, and still it is working fine.Now the same thing I have applied in the aggregation pipeline in $match. Example. $match : {status : 2}. then it is not returning anything !!!This is such weird behavior.\nYou can try these two queries in any kind of numeric string data. Stored the value in a Numeric string. In the query parameter pass the numeric value in both queries, find() and aggregation pipeline $match.Ping me if you need further explanation.\nAlso, correct me if I am wrong.", "username": "Dhrumit_Patel" }, { "code": "", "text": "While your title reads Strange behaviour of MongoDB, I would be surprised if that would be case. As far as I know, MongoDB does not do automatic type conversion.What I suspect is that you are using an abstraction layer, something like mongoose with a schema that defines status as being of type string and when you use the something like mongoose to query it silently does type conversion for you. And when you do not use the something like mongoose there is not such conversion, so you do not $match anything.If my hypothesis of something like mongoose is wrong please share sample documents and your code.", "username": "steevej" }, { "code": "", "text": "Your suspect is right. I am using a mongoose.But still, In mongoose also there should not be automatic type conversion. It will create chaos.\nWhile applying the query in the mongo shell, we know that there will NOT be automatic type conversion. So we will expect that it will be the same for all types of abstraction layers.Personally, I experienced lots of trouble due to this behavior.Now in mongoose, I don’t know what is behind the scene, while I apply the mongoose query.\nSo according to you, is it Bug or is it expected?? Can you please say your views with justification?", "username": "Dhrumit_Patel" }, { "code": "", "text": "So we will expect that it will be the same for all types of abstraction layers.I beg to differ. I do not know much about mongoose but I know it uses schema. In your schema you define the type of a field. When you define a type you then have 2 choices:\n1 - you generate an error when you use the wrong type\n2 - you convert the value to the correct type if such a conversion is possibleIn your first post you do not mentioned that you get an error. You mention about the string “2” to be interchangeable in one case and not in the other. To me it really looks like there is type conversion happening in one case and not the other.If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duckSo according to you, is it Bug or is it expected??Expected, but I do not much about mongoose.Can you please say your views with justification?Share your schema, share sample data and share your code. Others, more savvy with mongoose will be able to jump. 
My views are clear: if you pass a number and you get a string then there is type conversion and it is not the mongodb native driver that does it.", "username": "steevej" }, { "code": "", "text": "It looks like I was right about type conversion in mongoose.", "username": "steevej" }, { "code": "", "text": "See I am talking about retrieving data. Not about inserting data.The schema is perfect. The data storing process is perfect.The problem is when I apply the find() query and $match, then it is showing this behavior.In data storing, it is not doing any type of conversion. When I apply a parameter in the query then while searching in the database, it is doing type conversion in the find() query only. In $match is not doing type conversion while searching.", "username": "Dhrumit_Patel" }, { "code": "const testSchema = new mongoose.Schema(\n{\n field : { type: String }\n}\n\ntest = { field : 5 } /* 5 the number, not the string */\n\nTest.insertOne( test )\n\n/* Document is stored as { \"field\" : \"5\" } 5 the string because of type\n conversion */\n\n/* In the following findOne you really want result to hold the Document\n inserted. Do you? I think you do and it would not make sense if you\n did not. */\n\nresult = Test.findOne( test ) \n\n/* If result is a document with field:\"5\", the string,\n then type conversion is done. */\n", "text": "We are making progress.See I am talking about retrieving data. Not about inserting data.I understand but if one part of mongoose (data insertion using the schema) does type conversion it would make sense that the other part (data retrieval using the schema) does the same data conversion.The schema is perfect. The data storing process is perfect.I did not imply that either was wrong when I asked you to share your schema, documents and code. I think it could help comprehension because we could use your real schema and document rather than the one I made up above with approximate syntax.In data storing, it is not doing any type of conversion.The SO post I shared seems to indicate the contrary.We are making progress because we went fromwe know that there will NOT be automatic type conversion. So we will expect that it will be the same for all types of abstraction layers.toWhen I apply a parameter in the query then while searching in the database, it is doing type conversion in the find() queryIn $match is not doing type conversion while searching.It makes sense that mongoose would not do type conversion in $match. In a lot of $match, there is no schema related to the $match-ed documents. In a pipeline, documents are altered or even generated (by $group). It would take a lot of complicated code to determine which schema to use, if any, to do type conversion. And it will be very confusing. For 1 $match you have conversion because we have a schema and for another one there is none because it is generated document.My conclusion is1 - type conversion occurs (may be not in your case) when documents are inserted because of the schemaPersonally, a warning or error would be better.2 - type conversion occurs in simple queries where it is clear which schema is usedIt makes sense because conversion occurs at insertion.3 - type conversion does not occurs in pipeline for $match because it would be hard to do and it will be very confusion because it cannot be done on some of the $match4 - storing numbers as string is a bad ideaA string with a couple of digits will take more space than the number and it is slower to compare. 
Numbers as strings are not in the natural sort order “2” > “111” but 2 < 111.5 - I am sure about the following.Personally, I experienced lots of trouble due to this behavior.Hopefully, mongoose has a way for you to communicate and ease your frustration.", "username": "steevej" } ]
Strange Behavior of MongoDB queries in numeric string data. find() and aggregation pipeline $match
2022-12-28T05:28:54.234Z
Strange Behavior of MongoDB queries in numeric string data. find() and aggregation pipeline $match
1,684
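A small illustration of the behaviour described in this thread, assuming a Mongoose model (here called Item) whose schema declares status as a String:

```javascript
const status = 2;                          // numeric value coming from the request
await Item.find({ status });               // Mongoose casts 2 -> "2" via the schema, so this matches
await Item.aggregate([                     // aggregation pipelines are NOT cast by Mongoose,
  { $match: { status: String(status) } }   // so convert the value yourself before $match
]);
```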
null
[ "queries" ]
[ { "code": "drop()test> db.createCollection(\"cappyMcCapCap\", { \"capped\": true, \"size\": 100000, \"max\": 100 } )\n{ ok: 1 }\n\ntest> db.cappyMcCapCap.isCapped()\ntrue\ntest> db.cappyMcCapCap.insertOne({ \"hat\": \"bowler\" });\n{\n acknowledged: true,\n insertedId: ObjectId(\"63a4c2dffe0ea68c13a8c265\")\n}\n\ntest> db.cappyMcCapCap.insertOne({ \"hat\": \"newsboy\" });\n{\n acknowledged: true,\n insertedId: ObjectId(\"63a4c2e9fe0ea68c13a8c266\")\n}\ntest> db.cappyMcCapCap.deleteOne({ \"hat\": \"newsboy\" });\n{ acknowledged: true, deletedCount: 1 }\ntest> db.cappyMcCapCap.find();\n[ { _id: ObjectId(\"63a4c2dffe0ea68c13a8c265\"), hat: 'bowler' } ]\n", "text": "I’m hoping for sort out why I’m not seeing what I’d expect in a capped collection. According to the docs:You cannot delete documents from a capped collection. To remove all documents from a collection, use the drop() method to drop the collection and recreate the capped collection.However, I am able to delete documents from my Capped Collection, from what it seems?So far so good, now I’ll insert two documents:And then delete the newer of the two documents:Which is successful?I seem to remember maybe getting an error in the past, but I have not tried manually deleting documents from a Capped Collection in a while.So is that documentation correct, or am I misunderstanding something?Thanks!", "username": "Justin_Jenkins" }, { "code": "db.createCollection(\"cappyMcCapCap2\", { capped : true, size : 5242880, max : 5000 } )\n", "text": "Hello @Justin_JenkinsCould you please let me know which version of MongoDB are you using? In the meantime, could you try this syntax to create a capped collection and try to remove an inserted document?Looking forward to hearing from youPlease let me know if you have any additional questions or concerns regarding the details above.Kind Regards,\nJosman", "username": "Josman_Perez_Exposit" }, { "code": "6.0.15.0.146.0.14.2.234.2.23> db.cappyMcCapCap2.deleteOne({ \"hat\": \"newsboy\" })\n\n2022-12-23T20:20:39.563+0000 E QUERY [js] WriteError({\n\"index\" : 0,\n\"code\" : 20,\n\"errmsg\" : \"cannot remove from a capped collection: test.cappyMcCapCap2\",\n\"op\" : {\n\"q\" : {\n\"hat\" : \"newsboy\"\n},\n\"limit\" : 1\n}\n}) :\nWriteError({\n\"index\" : 0,\n\"code\" : 20,\n\"errmsg\" : \"cannot remove from a capped collection: test.cappyMcCapCap2\",\n\"op\" : {\n\"q\" : {\n\"hat\" : \"newsboy\"\n},\n\"limit\" : 1\n}\n})\nWriteError@src/mongo/shell/bulk_api.js:458:48\nmergeBatchResults@src/mongo/shell/bulk_api.js:855:49\nexecuteBatch@src/mongo/shell/bulk_api.js:919:13\nBulk/this.execute@src/mongo/shell/bulk_api.js:1163:21\nDBCollection.prototype.deleteOne@src/mongo/shell/crud_api.js:375:17\n@(shell):1:1\n", "text": "Thanks @Josman_Perez_Exposit I tired using that syntax, with the same results.I’m using 6.0.1 (via Docker).Out of curiosity, I also spun up a 5.0.14 container to test, and it worked the same.I could delete documents in capped collections.The process in 6.0.1:\nScreenshot 2022-12-23 at 1.11.13 PM1660×1030 96.5 KB\nHowever!Then, I spun up a container with 4.2.23 and I did get the kind of error I recall getting in the past.Is this just not a the same anymore in new versions but it isn’t documented? 
Or what exactly is going on?The error, in 4.2.23:", "username": "Justin_Jenkins" }, { "code": "", "text": "Any thoughts @Josman_Perez_Exposit … is this ability to delete something newish with capped collections or ???", "username": "Justin_Jenkins" }, { "code": "", "text": "Hi @Justin_Jenkins, sorry for the delay. I have indeed reproduced and scaled it internally. I want to inform you that MongoDB > 5.0 removed the restriction on delete operations in the apply operations command.However, we don’t seem to have added this to the documentation yet. I’ve filed an internal ticket to update the documentation to reflect this to avoid confusion.Please let me know if you have any additional questions or concerns regarding the above details.Best regards,\nJosman", "username": "Josman_Perez_Exposit" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Confusion About Deleting Documents in Capped Collections
2022-12-22T20:56:28.296Z
Confusion About Deleting Documents in Capped Collections
2,376
null
[ "aggregation", "atlas-triggers" ]
[ { "code": "exports = function () {\n\n const datalake = context.services.get(\"FederatedDatabaseInstance-analytics\");\n const db = datalake.db(\"analytics\");\n const coll = db.collection(\"assessments\");\n\n const pipeline = [\n /*{\n $match: {\n \"time\": {\n $gt: new Date(Date.now() - 60 * 60 * 1000),\n $lt: new Date(Date.now())\n }\n }\n },\n {\n \"$out\": {\n \"s3\": {\n \"bucket\": \"322104163088-mongodb-data-ingestion\",\n \"region\": \"eu-west-2\",\n \"filename\": { \"$concat\": [\n \"analytics\",\n \n \"$_id\"\n ]\n },\n \"format\": {\n \"name\": \"JSON\",\n \"maxFileSize\": \"40GB\"\n //\"maxRowGroupSize\": \"30GB\" //only applies to parquet\n }\n }\n }\n }*/\n {\n \"$out\": {\n \"s3\": {\n \"bucket\": \"322104163088-mongodb-data-ingestion\",\n \"region\": \"eu-west-2\",\n \"filename\": \"analytics/\",\n \"format\": {\n \"name\": \"json\",\n \"maxFileSize\": \"100GB\"\n }\n }\n }\n }\n ];\n\n return coll.aggregate(pipeline);\n};\n", "text": "Recently I have using this tutorial ([How to Automate Continuous Data Copying from MongoDB to S3 | MongoDB](https://How to Automate Continuous Data Copying from MongoDB to S3)) to try to copy data from assessments(collection name)>analytics(db name)>dev(cluster name) to a s3 bucket. I first created a federated database called FederatedDatabaseInstance-analytics, I then created a a trigger with this function:The thing is, I get no errors runnings it but nothing appears on my bucket", "username": "Marina_Stolet" }, { "code": "db.VirtualCollectionFoo.aggerage([])", "text": "Hey Marina,Can you confirm that you are using the right name spaces in your Federated Database? You should be using the ones that you’ve set in the visual editor.So if you have VirtualCollectionFoo (and inside you have AtlasCollectionBar1, AtlasCollectionBar2), then your aggregation should look likedb.VirtualCollectionFoo.aggerage([])Then when the aggregation pipeline is reading from VirtualCollectionFoo it will in turn be pulling data from the underlying Atlas Collections.Let me know if that works.Best,\nBen", "username": "Benjamin_Flast" }, { "code": "return db.coll.aggregate(pipeline);Cannot access member 'aggregate' of undefined\n\tat exports (function.js:51:11(47))\n\tat function_wrapper.js:4:27(18)\n\tat <eval>:8:8(2)\n\tat <eval>:2:15(6)\n", "text": "Well, “FederatedDatabaseInstance-analytics” is what I set in the visual editor for the data federation configuration, which is what I am using (see picture 1), is that correct? Should I put the name of something else?Also, do you mean using return db.coll.aggregate(pipeline);? when I do that this\nimage1797×628 54.8 KB\n", "username": "Marina_Stolet" }, { "code": "", "text": "Every time I run the trigger function the number of queries in this instance raises, so it must mean I am “connecting” to it", "username": "Marina_Stolet" }, { "code": "const db = datalake.db(\"analytics\");\n const coll = db.collection(\"assessments\");\n", "text": "Sorry for the delay, this seems to have gotten lost in my inbox!Yes, that is the right thing to have there, but i’m referring to the database name and collection name. Are they the names of the DB and Collection in your Federated Instance? Or are they the names from your cluster itself? They should be the names that you manually edit inside the Federated Database Instance", "username": "Benjamin_Flast" }, { "code": "", "text": "I’m having the exact same problem. 
I’m using the database and collection name in the federated database.", "username": "TeenSmart_International" }, { "code": "", "text": "When you created the federated database, did you add as a source both the cluster you want to ingest and the landing bucket on aws? It needs to be in the same Federated database.", "username": "Marina_Stolet" } ]
Using Data Federation and triggers to copy data to a s3 bucket
2022-10-03T14:41:15.890Z
Using Data Federation and triggers to copy data to a s3 bucket
3,389
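Picking up Ben's point about namespaces, a hedged sketch of how the trigger function's first lines would reference the virtual names defined in the Federated Database Instance rather than the cluster's own names (the virtual names below are placeholders):

```javascript
const federated = context.services.get("FederatedDatabaseInstance-analytics");
// Use the database/collection names configured in the Federated Database's
// visual editor, which map onto both the Atlas cluster and the S3 sources.
const coll = federated.db("VirtualDatabase").collection("VirtualCollection");
return coll.aggregate(pipeline);
```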
null
[]
[ { "code": "{ version: \"1.002\", subVersion: \"3\" }{ version: { version1: 1, version2: 2, version3: 3 } }", "text": "Original data:\n{ version: \"1.002\", subVersion: \"3\" }How do I change it into an Object:\n{ version: { version1: 1, version2: 2, version3: 3 } }In the new object, the fields are integer and they come from the string fields of version and subVersion from original data.Thanks!", "username": "Cecilia_Liu1" }, { "code": " {\n version: \"1.002\",\n subVersion: \"3\"\n }\n\"$set\"db.collection.aggregate([\n {\n \"$set\": {\n \"version\": {\n \"$let\": {\n \"vars\": {\n \"splitVer\": {\n \"$map\": {\n \"input\": {\"$split\": [\"$version\", \".\"]},\n \"as\": \"strNum\",\n \"in\": {\"$toInt\": \"$$strNum\" }\n }\n }\n },\n \"in\": {\n \"version1\": {\"$first\": \"$$splitVer\"},\n \"version2\": {\"$last\": \"$$splitVer\"},\n \"version3\": {\"$toInt\": \"$subVersion\"}\n }\n }\n },\n \"subVersion\": \"$$REMOVE\"\n }\n }\n])\n {\n \"_id\": ObjectId(\"5a934e000102030405000000\"),\n \"version\": {\n \"version1\": 1,\n \"version2\": 2,\n \"version3\": 3\n }\n }\n", "text": "The are lots of ways this could be done. One way, given:Example document:Example aggregation pipeline using \"$set\":Example output:Try it on mongoplayground.net.", "username": "Cast_Away" }, { "code": "", "text": "This is great! Thanks so much!!", "username": "Cecilia_Liu1" } ]
How do I change 2 string fields into an object?
2023-01-05T21:21:55.545Z
How do I change 2 string fields into an object?
537
null
[]
[ { "code": "", "text": "Hi Team,While I’m exploring options to use mongodb sharding, and we have using it in our test environments BUT what I’m unable to understand is, Config, router and one of the shards CPU spikes for every 4 hours from 1 to ~30% and I failed to understand the cause of these SPIKE, I turned of balancer and seen but no help, can someone help me to understand it properly regarding the spikes that occurs for every 4 hours,\nAlso it would be great if someone can explain how does balancer looks for number of chunks on a shard, I mean How it is scheduled to check? and can we alter this?Thanks in Advance,\nVaseem", "username": "Vaseem_Akram_mohammad" }, { "code": "", "text": "Hi @Vaseem_Akram_mohammad,When this is happening, can you run mongostat and mongotop to see if you can identify the root cause ? Maybe a batch operation ?\nAre these nodes collocated on the same physical machines by any chance and this could be completely independent from MongoDB ? Something in the CRON tab?I wouldn’t mess with the balancer. Especially if you already identified that this isn’t the root cause of the problem. It’s already heavily optimized.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hey @MaBeuLux88 ,Thanks for the response, But I don’t have any JOB’s running in the background, All I can say is we have not changed any default settings of the mongo sharded cluster, We tried by enabling debug level also BUT no use,\n\nScreenshot 2022-12-09 at 2.32.12 PM1568×332 7.47 KB\n it is spiking for every 4 hours from 0 to 30 or <2 to ~30%.when I ran mongotop command, I got this info during spike, spike is about 2 minutes of duration.\nScreenshot 2022-12-09 at 2.35.28 PM866×986 81.1 KB\nThanks,\nVaseem", "username": "Vaseem_Akram_mohammad" }, { "code": "", "text": "Hey @MaBeuLux88 ,Can you please help me in understanding the above one. Also, I’ve another Question, How do we choose the number of shards needed for prod environment, is it like based on expected data growth or on based on available data oR anything which I’m missing?Your response is greatly Appreciated!Thanks In Advance!\nVaseem", "username": "Vaseem_Akram_mohammad" }, { "code": "", "text": "Hi @Vaseem_Akram_mohammad,Apologies for the delay!Some context about these collections:1 => Should be relatively small as you shouldn’t have a huge number of collections. How many docs do you have in this collection? Spending that much time in this collection don’t make much sense to me.\n2 => Should contain a lot more. This makes a bit more sense.\n3 => Tags should also be a very small collection. Few docs I guess? Can you confirm?But given the fact that a lot of read operations seem to happen in these internal collection, my best bet is that you have a backup plan running every 4 hours?Do you also have anything else increasing at the same time? Like number of connections? 
What does mongostats looks like?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "increasingThank you so much for the response,So far I remember, We have not altered any of the default configurations and no backup is also scheduled.\nAnd Yes, I’ve see the growth in connections during the spike but the growth is <5 it seems.\nScreenshot 2022-12-15 at 12.42.50 AM1590×1136 54.6 KB\nThanks in Advance!\nVaseem", "username": "Vaseem_Akram_mohammad" }, { "code": "mongos", "text": "I really don’t see what could generate this.\nLast idea maybe: how many mongos do you have?Actually can you please describe the entire config (number of nodes, etc), the amount of data you have (like 8TB compressed across 8 shards?).Also which version of MDB are you running?Maxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88 ,The spike is about only 2 minutes! but for every 4 hours it is happening.~300GB data is compressed across 2 sharded environment.This is the infra!\nScreenshot 2022-12-15 at 3.28.42 PM1126×1260 124 KB\nMongoDB shell version v5.0.11Thanks In Advance,\nVaseem", "username": "Vaseem_Akram_mohammad" }, { "code": "", "text": "I’m spotting several issues here.\nimage870×1170 118 KB\nIt would be nice to upgrade to MDB 6.0.X but 5.0 isn’t that old so this shouldn’t be a big issue here.So if the CPU spikes aren’t causing big problems, I’d rather focus on the other issues I pointed out that are more pressing in my opinion. Maybe this will resolve the CPU spike issue as well as this could be related to the lack of RAM.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hey @MaBeuLux88 ,Thank you for more insights,This is just test environment, in production we have more than 5times(2TB), just trying to understand the concerns for around these periodic CPU spikes to explain ourself!.Lets me see it by increasing the RAM whether these spikes comedown or not.Thanks in Advance,\nVaseem", "username": "Vaseem_Akram_mohammad" }, { "code": "", "text": "Are the same CPU spikes happening in prod?", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88 ,We have not moved to production Yet!Concern is without any load the CPU is having spike.\nScreenshot 2022-12-15 at 10.47.51 PM1598×1368 126 KB\n\n\nScreenshot 2022-12-15 at 10.52.40 PM1602×740 34.7 KB\n3.3 the CPU spike is observed even there’s no activity in the cluster and it’s happening every 4hours constantly. we want to nail down what is causing it at thread level.Also Can you please help us in profiling the mem usage and cpu usage?\nYour Help Is Much Appreciated Thank You In Advance!\nVaseem", "username": "Vaseem_Akram_mohammad" }, { "code": "top", "text": "Hi @Vaseem_Akram_mohammadJust an idea, could you check out the output of e.g. top or similar that can show what process is creating these spikes?Also I don’t think you mentioned how MongoDB was installed and on what OS. Are you using the instructions in Install MongoDB Community Edition on Linux and are all settings conforming the recommendations in the production notes?If all else failed, have you tried rebuilding everything from scratch, and observe the same spikes in the new builds? 
This way perhaps we can rule out any hardware anomalies from the AWS end.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hey @kevinadi @MaBeuLux88,This has been identified: our infra team created the cluster using Ansible.\nRoot cause: there is a scheduled process that runs every 4 hours to check the configuration against the Ansible script. When we turned off this scheduled process there was no spike, and when we manually triggered the .sh file we saw the spike, so this is not a MongoDB problem.Thank you guys for your help.Thanks & Regards,\nVaseem | Principal Engineer, QA", "username": "Vaseem_Akram_mohammad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cpu spike for every 4 hours
2022-12-01T03:41:02.427Z
Cpu spike for every 4 hours
3,488
null
[]
[ { "code": "", "text": "I was wondering if there is a recommended approach to create something like an admin board using Realm. So my first thought was, that it would be good, to use the existing application and add an admin area there. But as I am really concerned about security issues there, I am a bit sceptical.How would I protect my Atlas Functions to be used by accounts that should not? Is there something like a “power user”, I could create, that can just read/write anywhere?Or would it be better to create a seperate web application, but then I have to redesign most screens from scratch. Example functionality I would need:I wonder if anyone has experience and suggestions on how to deal with that.", "username": "Thomas_Anderl" }, { "code": "context.user.idcontext.runningAsSystem()context.user.id", "text": "You can make a function and set its Authentication (in the function’s Settings) to System, then in the function the context.user.id will show you which user has called the function (anyone can call it but the context.user will be securely set by Atlas). You can use context.user.custom_data.rank or .permissions or something like that to check which permissions this user has and based on that do any operation or return values from the function.Use context.runningAsSystem() and context.user.id (which is null when running as system) to check wether the function was called by you from backend, or from any user client.", "username": "SirSwagon_N_A" }, { "code": "", "text": "Thank you for your insight. Would you recommend adding this functionality to the mobile app itself and put it in a secured area, or create a web app whose sole purpose is this admin area? It comes with it pros and cons. And I would like to get others opinions on it.When I put it into the app, I can use screens that I am used to know but make things complicated within a production app.", "username": "Thomas_Anderl" }, { "code": "", "text": "If you want to put an admin area in the user’s app depends entirely on who has to use it.\nIf only you and specific people can use it, you shouldn’t put it in the same app but make a new special app just for the admin area or, like you said, a custo Web interface for that.\n2 reasons for that:Also it’s super easy to make this admin area in a special app, because you don’t even have to upload it to Google Play. Just install it directly to the admin’s devices. Or just make it a web interface.Otherwise, if it’s an area some users you don’t know personally will probably get access to, it might be better to implement it in the app. It would be securer but maybe harder to use for those users if you just give them access to the web interface.", "username": "SirSwagon_N_A" }, { "code": "", "text": "Thank you again for your insight. Yes it is indeed only for a specific group of users, so I will go with a web app.", "username": "Thomas_Anderl" } ]
Recommended approach for admin area
2023-01-04T08:31:25.132Z
Recommended approach for admin area
880
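A minimal sketch of the pattern described in the admin-area thread above: a System-authenticated Atlas Function that checks the caller's custom user data before doing privileged work. The `rank` field, database, and collection names are assumptions, not part of the original discussion.

```javascript
exports = async function (targetUserId) {
  // Per the discussion above, context.user identifies the calling account even
  // when the function's Authentication setting is System.
  const caller = context.user;
  const isAdmin = caller && caller.custom_data && caller.custom_data.rank === "admin";

  // Allow backend/system invocations; otherwise require an admin caller.
  if (!context.runningAsSystem() && !isAdmin) {
    return { error: "not authorized" };
  }

  // Example privileged operation: read another user's profile (hypothetical namespace).
  const users = context.services.get("mongodb-atlas").db("appdb").collection("profiles");
  return users.findOne({ _id: targetUserId });
};
```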
null
[ "database-tools", "backup", "upgrading" ]
[ { "code": "", "text": "Happy new year!I can’t find any information about what will happen to the snapshots that I have on my current production cluster when I upgrade the cluster to a new major release version.I am going from 4.2 to 4.4 and then from 4.4 to 5.I have read here that I cannot restore a backup on a different major version, but does that mean that all my backups will become unavailable once I upgrade the cluster?", "username": "Edouard_Finet" }, { "code": "mongodumpmongorestoremongodump", "text": "Hi @Edouard_FinetI have read here that I cannot restore a backup on a different major versionAlthough it’s not officially supported, most times, features are added to newer major versions instead of removed. This means that you can mongodump from an older version, but doing mongorestore might not be as straightforward, but still within the realm of possibility.Having said that, if I’m in your shoes, I would do a complete mongodump on each major version step, so that your dumps are always up to date, and ready to be restored just in case the latest upgrade step doesn’t work out.Hope this helps!Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "thank you! I came to the same conclusion but it’s always nice to have it confirmed by someone independently ", "username": "Edouard_Finet" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What happens to my snapshots / backups after a major version upgrade of my clusters?
2023-01-03T09:24:29.920Z
What happens to my snapshots / backups after a major version upgrade of my clusters?
1,463
null
[]
[ { "code": "", "text": "Is it possible to get “Time Zone” of the system on which the mongo db is installed? I understand Mongo DB stores dates in UTC. I wanted to know in which time zone is the server installed or system time zone and if that can be retrieved in Mongo DB QueriesThanks", "username": "Vikram_Bade" }, { "code": "var now = new Date();\nnow.getTimezoneOffset()\n", "text": "You can get an offset but that will work in time zone without day light saving. But, if you are in day light saving country, the offset cannot be used for a past date. So, I am still interested to know “which time zone” the mongo DB installation is on the system. Ideally if it can tell us Olsen Time Zone such as \" America/New_York\", it will be really great.", "username": "Vikram_Bade" }, { "code": "", "text": "I take back this comment. getTimezoneOffset gives me the client offset and not server offset.What I really need is server offset or server time zone.", "username": "Vikram_Bade" }, { "code": "ssh user@hostname \"cat /etc/timezone\"", "text": "Hi @Vikram_Bade,As far as I’m aware there’s no straightforward way to find the server timezone via a MongoDB query, but you could instead connect to the host environment (for example, using something like ssh user@hostname \"cat /etc/timezone\" on Linux).However, since BSON dates are stored in UTC irrespective of the server timezone I’m curious if you can share more information on your intended use case.In my own experience many admins use UTC for production servers to normalise times for cron jobs, log comparison, and diagnostics without having to deal with time offsets or daylight savings changes.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "HI @Stennie_X,Thanks for your response. In our team, we were discussing the need of storing “off set” of audit date columns.For all functional columns, we are storing the offset as a field in Mongo DB Document.We were thinking if we need to store the offset of audit columns (such as created date, updated date) as these dates are nothing but the time zone of the system (where mongo DB is installed). If I am able to get the time zone of the system, I do not need to worry about storing the offset and I can use the time zone to get my “local system time” (non UTC).This is especially useful in multi time zone system which also has to deal with Day light savings. For example, if we need to write a query “Get me all documents which were updated on a day between this time and this time”, there the time will be given to me as “local time”, unless I know the offset or time zone, I cannot get data by just using UTC.Hope this clarifies the use case.", "username": "Vikram_Bade" } ]
Time zone of Mongo DB Installation
2023-01-05T10:42:38.379Z
Time zone of Mongo DB Installation
1,915
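One hedged way to support the use case raised above (querying by a user's local time without knowing the server's timezone) is to store the UTC date together with the writer's IANA zone and let the aggregation pipeline convert it; a mongosh sketch with made-up collection and field names:

```javascript
// Audit dates stay in UTC (as BSON always stores them); the zone is captured alongside.
db.audit.insertOne({
  updatedAt: new Date("2023-01-05T15:42:00Z"),
  updatedAtTz: "America/New_York"   // assumption: the application records the writer's zone
});

// "Documents updated on 2023-01-05 between 09:00 and 17:00 local time":
db.audit.aggregate([
  {
    $addFields: {
      localUpdatedAt: {
        $dateToString: {
          date: "$updatedAt",
          timezone: "$updatedAtTz",            // IANA zones handle daylight saving correctly
          format: "%Y-%m-%dT%H:%M:%S"
        }
      }
    }
  },
  { $match: { localUpdatedAt: { $gte: "2023-01-05T09:00:00", $lte: "2023-01-05T17:00:00" } } }
]);
```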
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "I want an email/password authentication system, but with a specialty that’s not possible in the basic email/password auth provider: I don’t want case sensitive emails allowed.If [email protected] is already registered, I don’t want [email protected] to be able to register.At the login, I want ANY casing of [email protected] to work (if the password matches of course).At the password reset, I want any casing to trigger the password reset email, BUT it has to be sent to the initial casing of the email address that was provided on registering. If user registered with [email protected], at the password reset someone puts [email protected] in the field, then the mail has to be sent to [email protected] .I guess that a custom function would be the best thing for this to work, but how exactly can I implement this?Do I have to salt, hash the passwords myself? Do I have to do the session key thing myself? Do I have to encrypt the communication between Realm Server and Client SDKs myself?", "username": "SirSwagon_N_A" }, { "code": "{\n_id : ... , \nuser : \"[email protected]\",\npassword : \"Abc123\",\norigCase : \" [email protected]\"\n}\nexports = async function (payload) {\n // 1. Parse the `payload` object, which holds data from the\n // FunctionCredential sent by the SDK.\n const { username, password } = payload;\n\n // 2. Get the user by lower case to ignore casing\n\n const userLogin = await context.services.get(\"<YOUR-CLUSTER-NAME>\").db(\"login\"). collection(\"users\").findOne({user: username.toLowerCase(), password : password})\n\n // 3. Return a unique identifier for the user. Typically this is the\n // user's ID in the external authentication system or the _id of a\n // stored MongoDB document that describes them.\n //\n // !!! This is NOT the user's internal account ID for your app !!!\n return userLogin._id;\n};\n", "text": "Hi @SirSwagon_N_A ,You have an example that might be altered a little to support what you described.When user registered with email “[email protected]” the registration will use an internal collection in login db to store this data with both lower case and origCase:Then in this example the auth function needs to run toLowerCase() on the user:This will find any case of this user. The reset password can still send back the origCase data while using the user field with lower case to find him.Make sure to set a unique index on user field to not allow duplicate usernames.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "const userLogin = await context.services.get(\"<YOUR-CLUSTER-NAME>\").db(\"login\"). collection(\"users\").findOne({user: username.toLowerCase(), password : password})Credentials customFunctionCredentials =\n Credentials.customFunction(\n new org.bson.Document(\"username\", \"bob\", \"password\", \"password123\"));\napp.loginAsync(customFunctionCredentials, it -> {\n ...\n});\nreturn userLogin._id;", "text": "Thank so much Pavel Duchovny for the answer!!!I have some more simple questions but can’t find the answers:In this line\nconst userLogin = await context.services.get(\"<YOUR-CLUSTER-NAME>\").db(\"login\"). collection(\"users\").findOne({user: username.toLowerCase(), password : password})\nthe db “login” and the collection “users” is that something I need to create myself or is that automatically created for every Realm App? 
And do I need to hash the password before saving it in this collection? If I call loginAsync with the customFunctionCredentials in the Java SDK, will it get encrypted automatically, or will the username and password be sent as plain text? Here it says\n“MongoDB Atlas uses encryption in transit from application client to server and within intra-cluster communications by setting a set of certificates for the servers”\nSo my question is, does this also apply before the user is authenticated? If you could answer these questions it would mean so much to me!", "username": "SirSwagon_N_A" }, { "code": "", "text": "Hi @SirSwagon_N_A ,The collection is something you need to create and manage; it will write new users on signup and check them on login. Yes, you have to provide the password in the payload as well. I believe no payload, especially an auth payload, is sent in clear text. We encrypt it using built-in SDK encryption; it has nothing to do with cluster encryption, it is separate TLS.Look for some guides on working with App Services auth on the MongoDB Developer Center.Ty", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to do Custom Function Authentication without external provider? How to do case insensitive email/password auth?
2023-01-03T22:15:58.074Z
How to do Custom Function Authentication without external provider? How to do case insensitive email/password auth?
2,104
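For completeness, a sketch of the signup side that the thread describes but never shows: store the lowercased address for lookups, keep the original casing for outgoing mail, and back the `user` field with a unique index. Database and field names follow the thread; the plain-text password is a simplification and should be a salted hash in a real system.

```javascript
exports = async function ({ username, password }) {
  const users = context.services.get("mongodb-atlas").db("login").collection("users");

  // Rejects "[email protected]" when "[email protected]" already exists;
  // a unique index on `user` should enforce the same rule at the database level.
  const existing = await users.findOne({ user: username.toLowerCase() });
  if (existing) {
    throw new Error("An account with this email address already exists");
  }

  const result = await users.insertOne({
    user: username.toLowerCase(),  // lookup key used by the login function above
    origCase: username,            // casing to use for verification / reset emails
    password: password             // assumption: replace with a salted hash
  });
  return result.insertedId.toString();
};
```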
null
[ "queries" ]
[ { "code": "", "text": "How does the change stream return events on the restored oplog collections.\nUsing DB version 4.2 when the oplog.rs is queried there are events returns but on the same oplog the change stream does not give the insert events which are existing in the oplog.rs collection.\nThe dump restore with oplog replay is done to have copy of production data on staging environment. Why are only insert events missing in change stream when a restore is done.", "username": "Sajida_Begum" }, { "code": "", "text": "Hi @Sajida_Begum welcome to the community!Are you missing events from the change stream, or does the restored oplog itself missing events?I tend to think that if the restored oplog is missing things, it’s either 1) events that happens between the backup and the restore would naturally be missing, and 2) maybe the backup was not properly made.More information would probably needed here:Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi, Thank you for your response.The restored oplog does have the all the events. nothing is missed.Before we start the backup. we do write lock on the replica set so that no new event is added at the time of backup and once the backup is complete the write lock is removed so that events get replicated correctly. so that when the next backup is taken from the timestamp it has all the events from the last time backup was done.We use mongodump and mongorestore with oplogReplay option. we do this on daily basis to have the data replicated on staging environment.To specify the oplog.rs does have the insert event after the restore is done. when the change stream is started with option startAtOpertionTime, and the MongoBSON Timestamp is in the range of oplog replication, except the insert event all other events are been notified. This happens on the data where the restore is done. But if the insert event is done real time the there is no issue.The DB version used is : 4.2.22\nHope have shared enough information on the issue.NOTE : Have tried to check this with upgraded version 4.4. There is no issue with any of the events listening on the restored data. The same oplogReplay process is done on 4.4 as well. So what could be the issue when the restore is done on version 4.2 .Thanks in Advance,\nSajeeda", "username": "Sajida_Begum" }, { "code": "startAtOpertionTime", "text": "Hi @Sajida_Begum sorry for the delayed response.If I understand correctly, the restored oplog has all the events, right? That means that the oplog was 100% restored with no missing data? However you’re missing some insert events during the backup event from the change stream’s point of view. Is this correct?One thing I can think of is that the startAtOpertionTime parameter somehow didn’t include the events of interest. Perhaps you can tweak this value to an earlier timestamp and see if this is still an issue?If this is still an issue for you, could you attach some reproduction script, or a reproduction method so we can replicate what you’re seeing exactly?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi,Thank you for response.\nYes, your understanding is correct. I did check with changing the timestamp for startAtOperationTime but that did not solve the problem.Anyway we got our database upgraded to 4.4 which has solved the problem.Steps i did to reproduce the issue on 4.2 :Best Regards,\nSajeeda Begum", "username": "Sajida_Begum" } ]
Change Stream On Restored Oplog
2022-12-14T12:06:07.201Z
Change Stream On Restored Oplog
1,512
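For reference, roughly how the change-stream call discussed above looks in mongosh; the timestamp is a hypothetical value that must fall inside the restored oplog window, and, as the thread concludes, replayed insert events only behave correctly on 4.4 and later.

```javascript
// Cluster time to resume from; the constructor form shown here is the modern
// mongosh/bson signature and may differ in older shells.
const startAt = new Timestamp({ t: 1670994000, i: 1 });

const cursor = db.getSiblingDB("mydb").orders.watch([], { startAtOperationTime: startAt });

// Drain whatever events are already available from the replayed oplog range.
let event;
while ((event = cursor.tryNext()) !== null) {
  printjson({ op: event.operationType, key: event.documentKey });
}
```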
null
[ "dot-net" ]
[ { "code": "</>", "text": "I am able to bind the Realm Object directly from the QueryProperty[QueryProperty(nameof(Owner), nameof(Owner))]<Entry Text=\"{Binding Owner.Name, Mode=TwoWay}\" FontSize=“Medium” />I haven’t been able to get this to work for a field with a To-Many relationship.Example: Owner.AuthorizationsHow do you recommend implementing this?", "username": "difschultz" }, { "code": "IValueConverter", "text": "That sounds more like a MAUI question than a Realm one. You probably need some sort of IValueConverter that converts between your collection and a string. I’m not sure how you want your entry to behave - do you plan to concatenate the authorizations and then split the string by some character to recreate the collection when the user edits the text?", "username": "nirinchev" }, { "code": "", "text": "I was trying to-do a simple Entry with something like Binding {Owner.Authorizations.FirstOrDefault().Role", "username": "difschultz" }, { "code": "public class AuthorizationsConverter : IValueConverter\n{\n public object Convert(object value, Type targetType, object parameter, CultureInfo culture)\n {\n if (value is IEnumerable<Authorization> authorizations)\n {\n return authorizations.FirstOrDefault()?.Role;\n }\n\n throw new NotSupportedException();\n }\n\n public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)\n {\n throw new NotImplementedException();\n }\n}\nText={Binding Owner.Authorizations, Converter={StaticResource AuthorizationConverter}}ConvertBackFirstAuthorizationOwnerpublic partial class Owner : IRealmObject\n{\n public IList<Authorization> Authorizations { get; }\n\n public Authorization? FirstAuthorization => Authorizations.FirstOrDefault();\n}\n{Binding Owner.FirstAuthorization.Role}AuthorizationsFirstAuthorizationpublic Owner()\n{\n (Authorizations as INotifyCollectionChanged).CollectionChanged += (s, e) =>\n {\n RaisePropertyChanged(nameof(FirstAuthorization));\n }\n}\n", "text": "Ah, I see - I don’t believe this is possible via databinding. Your options are either to use a converter which looks something like:Then you’ll databind to Text={Binding Owner.Authorizations, Converter={StaticResource AuthorizationConverter}}. If you want this to be a two-way databinding, you’ll need to implement ConvertBack. Alternatively, you can add a FirstAuthorization computed property on your Owner model:And then databind to {Binding Owner.FirstAuthorization.Role}. One thing to be aware of with the second approach is that you wouldn’t get binding updates if the first authorization changes. If you need that, you’ll need to subscribe for change notifications on Authorizations and raise property changed for FirstAuthorization:Disclaimer: I’m not very proficient with MAUI and there may be other ways to do it. I also didn’t test the code, so there may be typos.", "username": "nirinchev" }, { "code": "", "text": "Thanks for pointing me the right direction. What’s the chance you have Xamain.Forms sample application updated?", "username": "difschultz" } ]
.NET MAUI Databinding
2023-01-05T23:49:52.539Z
.NET MAUI Databinding
951
null
[ "atlas-cluster", "database-tools" ]
[ { "code": "", "text": "Hi All.I`m trying to import a document with thousand of lines generated by oracle dump “backup”, like below:“24/10/16 00:00:00.000000000”,4,\"Jovem \",1,“teste”,1,1,“BRANCA”,1,“MASCULINO”,“NORTE”,“RO”,“RONDONIA”,“COLORADO DO OESTE”,-13.1305638414553,-60.5550674630789,\n“22/01/17 00:00:00.000000000”,4,\"Jovem \",5,“teste”,2,1,“BRANCA”,1,“MASCULINO”,“NORTE”,“RO”,“RONDONIA”,“COLORADO DO OESTE”,-13.1305638414553,-60.5550674630789,The problem is even specifying the date_oracle white as the acceptable oracle date format I`m getting a transformation error during the import.mongoimport mongodb+srv://***********.mongodb.net -u ************ -p *********** --authenticationDatabase admin --db test --collection alteracao --drop --file sample.csv --type=csv --columnsHaveTypes --fields=“field1.date_oracle(DD/MM/YY HH24:MM:SS.FF9),field2.int32(),field3.string(),field4.int32(),field5.string(),field6.int32(),field7.int32(),field8.string(),field9.int32(),field10.string(),field11.string(),field12.string(),field13.string(),field14.string(),field15.decimal(),field16.decimal()”In my mind, considering mongoimport documentation, any date format accepted by Oracle is possible to be imported.Are my thoughts correct? Do we have any limitations regarding the format process acceptable by mongoimport?", "username": "Felipe_Cabral1" }, { "code": "mongoimport", "text": "Hi @Felipe_Cabral1 welcome to the community!In my mind, considering mongoimport documentation, any date format accepted by Oracle is possible to be imported.Unfortunately I don’t think this is true. The database tools are written in Go, so any compatibility with any other database products not under MongoDB’s control (such as Oracle) would be limited.The mongoimport manual gave an example of a compatible Oracle-style format:created.date_oracle(YYYY-MM-DD HH24:MI:SS)and specifically linked to Oracle’s TO_DATE format. So for best results, you may want to create the data dump using this particular format.Best regards\nKevin", "username": "kevinadi" } ]
Mongimport csv with milliseconds of 9 positions
2023-01-05T16:19:16.639Z
Mongimport csv with milliseconds of 9 positions
981
null
[]
[ { "code": " useEffect(() => {\n async function fetchData() {\n const realm_id = //id omitted\n const app = new Realm.App({ id: realm_id });\n const credentials = //credentials omitted\n\n try {\n const user = await app.logIn(credentials);\n debugger;\n await user.functions\n .getInfoById(id)\n .then((result) => setInfo(result));\n } catch (error) {\n console.log(\"This went wrong: \", error);\n }\n }\n fetchData();\n }, [id]);\n", "text": "Error: attribute d: Expected number, “MNaN,NaN\\n A32.5,…”. bundle.js:16703I seem to be getting this error after calling a function (getInfoById(id)).I have looked up this error and all I can find online is that it looks like it might be an issue with nested data. But the result is an object, as expected, so I can’t figure out what is happening. Everything is working like I expect, I’m just trying to clear out/ understand this error.", "username": "Jess_Dever" }, { "code": "getInfoById(id)", "text": "Expected numberI believe getInfoById(id) is your handcrafted function on the Atlas. So, now, what “id” is it you send from this app, and what does your function expect it to be?you might be missing a type conversion between number and string, or it might be something completely different than expected. put some printing on both sides and check what really is sent and then received.or it might be the result you are sending back from the function, and thus the “info” is not what you expect. check that too with printing before “setInfo”.", "username": "Yilmaz_Durmaz" }, { "code": "exports = function(arg){\n \n let collection = context.services.get(\"ClusterName\").db(\"DBName\").collection(\"Info\");\n \n return collection.findOne({id:(arg)});\n};\n", "text": "Thanks, Yilmaz!I can verify on the client side that the id being passed is a string. But I’m not sure how to tell what the function expects on the backend. This is the function in MongoDB:Since it’s so basic, and I’m not defining the type for the parameter, I’m not sure how it would know what to expect. Can you help me determine that in the function, or is there anything in this function that you’re going: “Oh you’re missing this”?Also, just want to clarify: the “info” is exactly what I expect; it returns an object just like I expect, and the app is working as expected. It’s just also throwing that error. So really just trying to clear out the error. Not sure if that helps.Thanks for taking a look at this, and any help you can provide!", "username": "Jess_Dever" }, { "code": "SVGD3.jsfindOnefind", "text": "My first instinct was to check these values and functions because I thought they could be responsible.It turns out this error comes from creating an SVG image. you are either directly trying to create one, or using some library like D3.js.The problem here is that you are either not storing correctly formatted data, or you are not using your data in the correct order/type required by that svg-creating function.it can be a number parsing error, or wrong object depth, or wrong array indexing. just check your returned info data and make sure you are using the correct names, indices and values. for example, findOne method gives you a single object, but find method gives an array of objects even when it has a single object.", "username": "Yilmaz_Durmaz" } ]
Error <path> attribute d: Expected number, "MNaN,NaN\n A32.5,…"
2023-01-05T17:09:54.233Z
Error &lt;path&gt; attribute d: Expected number, &ldquo;MNaN,NaN\n A32.5,…&rdquo;
2,496
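The shape difference the answer above points to, spelled out as a fragment (the `value` field is hypothetical), since this mismatch is the usual source of NaN coordinates in a generated SVG path:

```javascript
// Inside an Atlas Function or driver code with `collection` and `arg` already defined:
const one  = await collection.findOne({ id: arg });        // a single document, or null
const many = await collection.find({ id: arg }).toArray(); // always an array of documents

one.value;      // defined, assuming the document has a `value` field
many.value;     // undefined: the array itself has no such property
many[0].value;  // defined
// Feeding the undefined variant into a charting library produces "MNaN,NaN..." paths.
```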
null
[ "dot-net", "atlas-device-sync" ]
[ { "code": " [Required]\n public string _partitionKey { get; set; } /* user Id */\n\n [Required]\n [Indexed]\n public string Name { get; set; }", "text": "Hi, i have this document defined , and in Atlas only have indexes for _id and _partitionKey no index is created to Name, why?\nThis index will exist only in local sync realm ?\npublic class FavoritedVerbete : RealmObject\n{\n[PrimaryKey]\npublic string _id { get; set; } = Guid.NewGuid().ToString(“N”);", "username": "Sergio_Carbonete" }, { "code": "", "text": "Yes, indexes defined in your client-side models are applied only for the local database and will not affect your Atlas cluster. The reason is that you may have different client applications that have different query patterns and may have different local indexes.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why Index not exist in Atlas when defined in c# class
2023-01-05T20:04:38.998Z
Why Index not exist in Atlas when defined in c# class
1,026
null
[ "atlas-functions", "app-services-user-auth" ]
[ { "code": "", "text": "So basically I would like the email/password provider to be more flexible.\nI don’t want to write a Custom Authentication System where I’d have to hash the passwords myself etc.I want to add 2 things:on register: if another user is already existing with the same email address BUT different casing (e.g. [email protected] exists and new user tries to register with [email protected]) then the registration should be blockedon login: if the user enters his email address in another casing but with the correct password he/she should still get logged in to his/her accountWhat would be the best way to accomplish this?", "username": "SirSwagon_N_A" }, { "code": "", "text": "I would probably lowercase the email before calling the login or signup function. This is would avoid the problem of case sensitivity as emails are always considered case insensitive.", "username": "Mohit_Sharma" }, { "code": "", "text": "@Mohit_Sharma thanks for answering!The problem is there are actually some email services that give out the same emails but with different casing. So it could possibly be very dangerous to just lowercase any email address. Imagine [email protected] (“Example”) registers, but the registration email gets sent to [email protected] (“example”) because we lowercased it client-side. But those email addresses might be operated by completely different persons. Example can then click “verify” in the email he got. example just misses the fact that he didn’t click on the verify button, just tries to login anyhow and it works. example now uses the app for some time. At any point Example can trigger a password reset, gets the password reset link sent to his mail and can then without a problem steal the account which example maybe even paid for services etc.", "username": "SirSwagon_N_A" } ]
Atlas/Realm Email/Password provider custom register and login trigger functions?
2023-01-04T22:33:00.130Z
Atlas/Realm Email/Password provider custom register and login trigger functions?
1,166
null
[ "on-premises" ]
[ { "code": "", "text": "Can MongoDB Atlas be offered as managed service offering to MSPs hosting on-prem MongoDB on K8 for their end customers in regulated industries and/or govt sector.This is in context to the Sovereign Clouds MSP business. I work for VMware.", "username": "Neha_and_Manish_Arora" }, { "code": "", "text": "There are various options for MongoDB partners to offer or integrate with Atlas and/or partner offerings for MongoDB-as-a-service. I’m leading the Partner Ecosystem at MongoDB, please contact me at [email protected],:Lars", "username": "Lars_Herrmann" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas as managed service for on-prem Sovereign Clouds
2023-01-04T17:01:54.689Z
MongoDB Atlas as managed service for on-prem Sovereign Clouds
1,227
null
[ "atlas-functions" ]
[ { "code": "exports = async ({limit = 10, offset = 0}) => {\n const db = context.services.get('mongodb-atlas').db('DATABASE_NAME');\n const results = await db.collection('COLLECTION_NAME')\n .find({})\n .sort({id: 1}) // Sort by ID\n .skip(offset) // Skip `offset` results.\n .limit(limit) // Limit to `limit` results\n .toArray();\n\n console.log('results returned', results.length);\n return results;\n}\nlimitoffsetlimitlimitlimitskipfindfind", "text": "Hello! To implement GraphQL pagination, there appear to be two options: offset-based or cursor-based. I decided to try offset-based pagination to keep my test client a little simpler. This thread indicates pagination has to be done with a custom resolver, so I started with a simple pagination function.…or rather, I thought it would be simple. See the following function code:For a collection of size 100 with default limit 10 and offset 0:\nExpected number of results: 10 (matching limit)\nActual value printed: 90 – what?If you flip the order of operations and do the limit before the skip, the result is always 0.This seems like it should work. The API limitations guide does not list limit or skip as unsupported options for the find command. The Realm docs don’t mention limit/skip at all.The only docs I could find on skip/find in JS explicitly state:The order in which you call limit and sort does not matterThis is definitely not true in Realm Functions (although I’m not sure if these docs apply, which is confusing as this is MongoDB running in Node, but the find() API is a little different with projections in Realm vs. args in Node).Concrete suggestions:Whew! That ends my functions adventure for the evening. Thanks for all of your work on Realm, and I hope these suggestions are helpful for the future. ", "username": "mdierker" }, { "code": "skipfindskip find()limit() skip()", "text": "Hey Matthew -Sorry that you had to run through so many hoops while implementing pagination.The API limitations guide that you linked does show skip as an unsupported operation with find. It also doesn’t show skip as a supported operation at all. I’d love to get your feedback on how we can make this a more clear on our end and pass this onto the documentation team.The fact that limit() does not work when preceded by a skip() is a bug we’re investigating. I’m happy to keep this thread updated when it is resolved.In general (as pointed out in the thread you linked), we recommend using find() and limit() when implementing pagination due to the skip() limitation.This is also feedback I will pass onto the documentation team.", "username": "Sumedha_Mehta1" }, { "code": "skipfindskip", "text": "Thanks for the reply. You’re absolutely right – the limitations guide does list skip for find as unsupported. That is entirely my mistake!As another suggestion: a function returning bad results is worse to me than a function returning no results (i.e. calling skip should throw an error or not compile at all rather than returning something nondeterministic). That will make it very explicit that the operation isn’t supported [although I appreciate the docs too!].Thanks for passing suggestions along!", "username": "mdierker" }, { "code": "", "text": "Hey Matthew - just looked into this issue deeper and I realize I made some incorrect statements in the above post. skip() is not supported with find() in the Wireprotocol - this is not what you were using, therefore my links to the docs were not valid.What you ran into was a simple bug that should be resolved as of 2 weeks ago. 
If you see any errors in the behavior now (I just tried your snippet with limit 5 and offset 0 and it returned 5 documents as expected) let me know", "username": "Sumedha_Mehta1" }, { "code": "", "text": "It works for me. Seems to be resolved.", "username": "SirSwagon_N_A" } ]
Realm functions limit() and skip() are badly broken
2021-01-10T06:37:16.301Z
Realm functions limit() and skip() are badly broken
7,600
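A hedged sketch of the cursor-based alternative recommended in the thread: find() plus limit(), with no skip(). The DATABASE_NAME/COLLECTION_NAME placeholders follow the original snippet, and the client is assumed to send back the last _id it received.

```javascript
exports = async ({ limit = 10, lastId = null }) => {
  const coll = context.services
    .get("mongodb-atlas")
    .db("DATABASE_NAME")
    .collection("COLLECTION_NAME");

  // Page forward from the last _id the client saw, in a stable sort order.
  const query = lastId ? { _id: { $gt: lastId } } : {};
  const results = await coll.find(query).sort({ _id: 1 }).limit(limit).toArray();

  return {
    results,
    nextCursor: results.length ? results[results.length - 1]._id : null
  };
};
```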
null
[ "node-js", "python", "compass", "php" ]
[ { "code": "$cursor = $database->command([\n 'geoNear' => 'restos',\n 'near' => [\n 'type' => 'Point',\n 'coordinates' => [-74.0, 40.0],\n ],\n 'spherical' => 'true',\n 'num' => 3,\n]);\n", "text": "The tutorial titled “” on [this page](at Execute Database Commands — PHP Library Manual upcoming) includes the following suggested code:Which results in the following fatal error:Fatal error: Uncaught MongoDB\\Driver\\Exception\\CommandException: no such command: ‘geoNear’ in…Apparrently support for geoNear has been gone since MongoDB version 4.2 as per MongoDB Manual 6.0 .What am I missing?There are several other deprecated references in Atlas start-up guide and tutorials for use of PHP library. In fact, PHP library references are completely removed from this seminal “connect to your cluster” tutorial. Only access options described are for: PyMongo Driver, Node.js Driver, MongoDB Shell, and Compass.", "username": "G_Chase" }, { "code": "", "text": "Documentation issues can be reported here.You might also fill out the MongoDB Documentation Survey and maybe point them at this problem if it allows such.", "username": "Jack_Woehr" }, { "code": "", "text": "[NOT ALLOW URLS SO HAVE REMOVED FORM BELOW]Thanks. Have created a documentation issue ticket illustrating the following sloppiness. Consider myself an advanced programmer, though completely new to MongdoDB. Spent an entire day trying to configure PHP script access to Atlas test environment. Not successful. Documentation deficiencies not helping. Very surprised documentation not better; Mongo is a pretty mature company at this point. For example:On reference page MongoDBDatabase-command :$cursor = $database->command([‘listCollections’ => 1]);\nvar_dump($c->toArray());$c->toArray() should be $cursor->toArray()Function template is:function command($command, array $options = ): MongoDB\\Driver\\CursorYet actual and example call via object class is:$cursor = $database->command([‘ping’ => 1]);The passed parameter is an array, though the template displays a string followed by an array. Further the value in the array (“1”) is not explained.The exampled listCollections commands executed via command function appears to be comparable to the listCollections() function] (at MongoDBDatabase-listCollections). 
The similarities and differences between the two calling methods are not described for beginners.$serverApi = new ServerApi(ServerApi::V1);Triggers error: “Fatal error: Uncaught Error: Class ‘ServerApi’ not found in…”", "username": "G_Chase" }, { "code": "", "text": "You’ll get no argument from me, @G_Chase .\nGenerally in the programming field one is finding PHP to attract less attention than it once did.\nI work with legacy IBM systems (which are in fact quite modern, they have all the goodies) where PHP has a big presence.\nAll I can say is that I do use PHP with MongoDB and it works very well, even if the docs have problems.", "username": "Jack_Woehr" }, { "code": "<?php\nrequire_once 'vendor/autoload.php';\n$mongodb_client = new MongoDB\\Client('mongodb+srv://myid:[email protected]/test?retryWrites=true&w=majority');\n$db = $mongodb_client->selectDatabase('admin');\n$cursor = $db->command([\"serverStatus\" => 1]);\nvar_dump($cursor->toArray()[0]);\n?>\n", "text": "Anyway, here’s a silly little example, if this helps.Of course, you have to change id, password, and cluster name for it work.", "username": "Jack_Woehr" }, { "code": "", "text": "I assume the subject of this thread is a typo, as 1.15.0 is the most recent PHPLIB release.I followed up on all of the issues raised in PHPLIB-1055 but I’ll address some other points below.In fact, PHP library references are completely removed from this seminal “connect to your cluster” tutorial. Only access options described are for: PyMongo Driver, Node.js Driver, MongoDB Shell, and Compass.This wasn’t mentioned in PHPLIB-1055, so I’ll field this here.For language-specific examples within other product manuals (e.g. Atlas, MongoDB server), drivers teams generally get a produce example code, which then gets added to our test suite (for ongoing test coverage) and parsed by other teams to incorporate into their own docs. One example of this is PHPLIB-350 and the DocumentationExamplesTest.php file in the PHPLIB repository.I’m not familiar with that page but it appears to be maintained by the Atlas team. I don’t recall ever seeing a ticket related to the Atlas connection examples you referenced above, but I’ll follow up internally with that team to ask why only a subset of drivers are included (PHP is not the only one missing).There are several other deprecated references in Atlas start-up guide and tutorials for use of PHP library.I assume this is referring to something other than the missing connection examples above. Can you share references to these docs pages so I can look into this further?Documentation issues can be reported here.Slight correction. Most MongoDB manual pages should have a “Share Feedback” widget in the bottom right corner. If so, that’s the preferred way to report documentation issues since it will spawn a JIRA ticket with the correct template and context (also without requiring the user to have a JIRA account). In the absence of that, creating a DOCS ticket directly will still get the attention of the right people and can be triaged or moved to other teams as needed.PHP driver docs are handled entirely by the driver engineers, so reporting issues directly in either the PHPLIB or PHPC (for extension docs on PHP.net) is preferable.In @G_Chase’s case, one of the issues pertained to code examples within the Atlas UI itself. 
Since that’s a separate team, I created an internal ticket for them to look into that and cross-referenced it with PHPLIB-1055 for context.", "username": "jmikola" }, { "code": "", "text": "Thank you, have reprorted doc error as open sisue. And thank you @Jack_Woehr for confirmation of general php doc deficiencies.", "username": "G_Chase" }, { "code": "", "text": "As discussed above, the geonear command example is still provided in tutorial, but the command no longer exists. Has it been replaced by “nearSphere”? I have been completely unable to find a PHP example of searching the sample Atlas collections by lat/long. Can you provide a PHP example of applying nearShpere to the sample collection: sample_geospatial.shipwrecks?Thanks", "username": "G_Chase" }, { "code": "geoNear$geoNear$nearSphere$near$geoNear$nearSphere$near", "text": "the geonear command example is still provided in tutorialmongodb/mongo-php-library#1024 is the corresponding PR for PHPLIB-1055. It has yet to be merged and published, which is why the geoNear example is still visible in the Executing Database Commands tutorial.Has it been replaced by “nearSphere”?Since geoNear was deprecated and removed in MongoDB 4.0 and 4.2, respectively, I searched up the final copies of its documentation in the server manual:The suggested alternatives to the geoNear command are as follows:Note that the $geoNear stage and $nearSphere/$near query operators all return documents in a sorted order (nearest to farthest).Can you provide a PHP example of applying nearShpereI don’t have any PHP example handy, but the above operators would be no different than using any other aggregation stages or query operators in the PHP driver. The BSON documents in the server manual would just need to be translated to a PHP structure (e.g. associative array).", "username": "jmikola" }, { "code": "", "text": "Slight correction. Most MongoDB manual pages should have a “Share Feedback” widget in the bottom right corner. If so, that’s the preferred way to report documentation issues since it will spawn a JIRA ticket with the correct template and context (also without requiring the user to have a JIRA account). In the absence of that, creating a DOCS ticket directly will still get the attention of the right people and can be triaged or moved to other teams as needed.Thanks for clarifying the preferred procedurie, @jmikola", "username": "Jack_Woehr" } ]
PHP Library 1.15 seems to include outdated tutorials (e.g. references removed geoNear command)
2022-12-26T23:47:22.050Z
PHP Library 1.15 seems to include outdated tutorials (e.g. references removed geoNear command)
1,778
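Since the thread asks for a concrete replacement for the removed geoNear command against sample_geospatial.shipwrecks, here is the suggested $geoNear stage in mongosh; as the last reply notes, the same BSON translates directly into a PHP associative array. The coordinates and limit are arbitrary.

```javascript
const shipwrecks = db.getSiblingDB("sample_geospatial").shipwrecks;

// $geoNear requires a geospatial index; the sample data set ships with one on
// `coordinates`, and re-creating an identical index is harmless.
shipwrecks.createIndex({ coordinates: "2dsphere" });

shipwrecks.aggregate([
  {
    $geoNear: {
      near: { type: "Point", coordinates: [-80.0, 9.5] },  // [longitude, latitude]
      distanceField: "distanceMeters",
      spherical: true
    }
  },
  { $limit: 3 }
]);
```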
null
[]
[ { "code": "payload.body.text()", "text": "I am using AWS kinesis firehose to stream Cloudwatch logs to my Mongo Altas database. In doing so, I created an Altas API endpoint which is exposed to AWS kinese Firehose to call for sending over the log data. The log data will be processed first by the Altas Function, which decodes them and writes to MongoDB in blukwrite. However, during the process, I keep getting error about string formatting during JSON parsing. I looked into it and found that the body text that I got inside the ALtas function with payload.body.text() has been truncated to around 760 characters. When I log also, the request data, I can see that the full body (over 4000 characters) is there, just that somehow it was truncated when the data is accessed in the ALtas function. I searched over all documentation and forums but there is no mention of this problem. Just wonder if there is any hard limit to the length of the payload bodyReplication of this problem can be done via creating a https endpoint with a altas function, then send over a POST request with a large Body (over 3000 characters).Would be great if someone has experience using HTTPs endpoint with large payload can share your method", "username": "Henry_Yeung" }, { "code": "exports = function(payload, response) {\n\n /* Using Buffer in Realm causes a severe performance hit\n this function is ~6 times faster\n */\n const decodeBase64 = (s) => {\n var e={},i,b=0,c,x,l=0,a,r='',w=String.fromCharCode,L=s.length\n var A=\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"\n for(i=0;i<64;i++){e[A.charAt(i)]=i}\n for(x=0;x<L;x++){\n c=e[s.charAt(x)];b=(b<<6)+c;l+=6\n while(l>=8){((a=(b>>>(l-=8))&0xff)||(x<(L-2)))&&(r+=w(a))}\n }\n return r\n }\n\n console.log(\"data3\", payload.body.toBase64());\n console.log(\"data3\", payload.body.text());\n\n // Get AccessKey from Request Headers\n const firehoseAccessKey = payload.headers[\"X-Amz-Firehose-Access-Key\"]\n\n // Check shared secret is the same to validate Request source\n if(firehoseAccessKey == context.values.get(\"FIREHOSE_ACCESS_KEY\")) {\n \n // Payload body is a JSON string, convert into a JavaScript Object\n const data = JSON.parse(payload.body.text())\n\n // Each record is a Base64 encoded JSON string\n const documents = data.records.map((record) => {\n const document = JSON.parse(decodeBase64(record.data))\n return {\n ...document\n }\n })\n\n response.addHeader(\n \"Content-Type\",\n \"application/json\"\n )\n \n // Perform operations as a bulk\n context.services.get(\"mongodb-atlas\").db(\"monitors\").collection(\"firehose\").insertMany(documents).then(() => {\n // All operations completed successfully\n response.setStatusCode(200)\n response.setBody(JSON.stringify({\n requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],\n timestamp: (new Date()).getTime()\n }))\n return\n }).catch((error) => {\n // Catch any error with execution and return a 500 \n response.setStatusCode(500)\n response.setBody(JSON.stringify({\n requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],\n timestamp: (new Date()).getTime(),\n errorMessage: error\n }))\n return\n })\n } else {\n // Validation error with Access Key\n response.setStatusCode(401)\n response.setBody(JSON.stringify({\n requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],\n timestamp: (new Date()).getTime(),\n errorMessage: \"Invalid X-Amz-Firehose-Access-Key\"\n }))\n return\n }\n}\nconsole.log(\"data3\", payload.body.toBase64());\nconsole.log(\"data3\", payload.body.text());\n", "text": 
"This is my Altas function. Bothprints trucated output of my request body", "username": "Henry_Yeung" } ]
Altas Function Payload body truncated in the function
2023-01-05T16:25:06.810Z
Altas Function Payload body truncated in the function
1,273
null
[ "aggregation", "queries" ]
[ { "code": "I have two collections\ncollection1 => { userData : [\n{fieldId:ObjectId('63b422210b0d84048e4cc401'), fieldValue:\"123\"},\n{fieldId:ObjectId('63b422210b0d84048e4cc404'), fieldValue:\"value2\"}\n]}\n\ncollection2: fieldresult : [{\n _id: ObjectId('63b422210b0d84048e4cc401'),\n title: \"Field1\",\n fieldType:\"text\",\n dataType: \"String\"\n}, {\n _id: ObjectId('63b422210b0d84048e4cc404'),\n title: \"Field2\",\n fieldType:\"text\",\n dataType: \"String\"\n}]\n\nThe required output is: \n\noutput: [{\n _id: ObjectId('63b422210b0d84048e4cc401'),\n title: \"Field1\",\n fieldType:\"text\",\n dataType: \"String\",\n fieldValue:\"123\"\n },\n {\n _id: ObjectId('63b422210b0d84048e4cc404'),\n title: \"Field1\",\n fieldType:\"text\",\n dataType: \"String\",\n fieldValue:\"value2\"\n }]\n", "text": "", "username": "harsha_Khanwani" }, { "code": "", "text": "Please share what you have tried and how it failed to deliver the desired result.This will help us help you so that we avoid going in a direction that you already is wrong. Something just a little tweak of what you have is the solution.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation Pipeline(merge the collection if ids match)
2023-01-04T16:24:45.771Z
Aggregation Pipeline(merge the collection if ids match)
539
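Since the thread closed without a worked answer, here is one possible pipeline for the output shown in the question, sketched in mongosh; the names fieldresult and collection1 are taken from the post, everything else is an assumption.

```javascript
db.fieldresult.aggregate([
  {
    $lookup: {
      from: "collection1",
      let: { fid: "$_id" },
      pipeline: [
        { $unwind: "$userData" },                                   // one doc per {fieldId, fieldValue}
        { $match: { $expr: { $eq: ["$userData.fieldId", "$$fid"] } } },
        { $replaceWith: "$userData" }
      ],
      as: "matched"
    }
  },
  // Copy the matching value onto the field definition, then drop the helper array.
  { $set: { fieldValue: { $arrayElemAt: ["$matched.fieldValue", 0] } } },
  { $unset: "matched" }
]);
```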
null
[ "dot-net", "unity" ]
[ { "code": "Debug.Log($\"Awaiting connection to Realm App <color=green>censored</color>:\");\n_app = Realms.Sync.App.Create(RealmAppID);\n_user = await _app.LogInAsync(Credentials.EmailPassword(\"[email protected]\", \"password\"));\n", "text": "Hello! Just came here from the stream that Luce and @nraboy hosted, classic Unity experience (I teach Unity to first-timers a lot, so I always squirm when I see even seasoned software pros get stuck :))But here’s my question / issue:I have some code like this and it works great with Realm / Realm Sync. But it only works once, so I hit play, it works, then it fails. Once I have any code changes that trigger a recompile, it works.Quick repro video of what this feels like:It hangs on the await in the last statement of the code snippet above.This is caused by Unity not reloading the .NET Domain, which is an advanced option that can be disabled in Project Settings->Editor->Enter Play Mode Settings. Recompiles always trigger domain reloads, which is why that alleviates the problem temporarily. This setting is used a lot by devs around the world because it seriously speeds up iteration time as projects grow - normally, Unity reloads the Domain whenever the Play button is pressed, but that quickly takes 1, 2, or more seconds (especially in DOTS projects or mature production code bases). It does come with some caveats that require slightly cleaner coding practices.Usually, this sort of “run-once” behaviour is an indicator that some resources that live in a static context aren’t properly unloaded or closed. A acceptable fix would be to expose some functions that manually control the lifecycle of Realm’s runtime that does “nothing” on first run but will re-initialize all static fields and singletons with null, so they will be properly recreated as needed.", "username": "thygrrr" }, { "code": "", "text": "Background / Protip / How to shoot yourself in the foot:If you’ve ever been annoyed by the “Reloading Script Assemblies” progress popup, I have a treat for you and that’s Skipping the Domain Reload …\nThis shows the difference in iteration times (first the default out of the box behaviour, then with Enter Play Mode Options enabled), already very pronounced on a very small project (~70 lines of code). I had DOTS prototypes that had domain reloads of 30+ seconds, but also a medium size mobile game will quickly rack up 5+ seconds of “Reloading Script Assemblies” time.", "username": "thygrrr" }, { "code": "", "text": "@nirinchev, do you have any experience with Domain Reloads?cc @Luce_Carter", "username": "nraboy" }, { "code": "", "text": "Thanks for the quick reply and signal boost. Also thanks for loopin Luce in, I couldn’t find the right handle.Linking some supplemental documentation here:", "username": "thygrrr" }, { "code": "", "text": "Hey folks - to be honest, I don’t have a whole lot of large project experience with Unity and didn’t know about that setting. I’ll need to test it out and understand what we’re doing wrong there.", "username": "nirinchev" }, { "code": "NativeCommon.Initialize()", "text": "No problem, if I wanted to give it a concise issue summary:App.LogInAsync(…) only works once per Domain Reload in multiple Unity Play Mode sessionsMy lowkey suspicion is it may be NativeCommon.Initialize() and that it perhaps needs some of the goodness from the Domain Reloading article in the Unity Manual applied to it (or its friends). 
The attribute in question can be used in arbitrary classes, doesn’t have to be a UnityEngine.Object decendant.(kind of interesting that other .NET devs haven’t encountered this issue, Unity is an outlier with its frequent Domain Reloads by default; a behaviour they are actively working to get rid of btw., and that’s keeping them from going to .NET 6.0)", "username": "thygrrr" }, { "code": "", "text": "Yeah, domain reloading has been a bit of a pain to deal with when adding Unity support and it’s clear we didn’t handle all corner cases. Your detailed report and pointers will help a great deal here and I hope the fix won’t be too involved. I filed a Github issue that you can subscribe for updates. In interest of transparency though, we’re having an engineering offsite next week and the schedule is packed with discussions, so I don’t think we’ll be able to get to it until after that.", "username": "nirinchev" }, { "code": "", "text": "@thygrrr Thank you very much for this in-depth and insightful post!And thank you @nirinchev for providing a fix (Investigate Unity initialization when domain reloading is turned off · Issue #2898 · realm/realm-dotnet · GitHub). It’s a pre-release, and from the CHANGELOG file in the package there are quite a few changes in there since the last release (10.18.0), but I’m very happy to be using this version even if some of those changes might still require additional testing effort. The improvement in turn around time during development with domain reloading disabled is so welcome!", "username": "ebbe_brandstrup" } ]
Unity: Realm stalls when Enter Play Mode Options are active (depends on Domain Reload)
2022-03-31T17:13:42.662Z
Unity: Realm stalls when Enter Play Mode Options are active (depends on Domain Reload)
8,948
null
[ "atlas-device-sync", "app-services-user-auth" ]
[ { "code": "", "text": "Let me jump right in.I have set up the whole Sign-In-With-Apple Auth flow, and it works fine. Now I am wondering about a certain part:RealmSync requires the Apple Auth identity token for authentication, but this particular token expires after 24 hours. When signing up/in I get a unique secret user identifier which is used to check the authorizated credential state, but not to get a fresh identity token.\nMy research showed the only way to get a non-expired identity token is performing the Sign-In-With-Apple authentication again, meaning the user is presented with the login UI once again every single day.But requiring this every day seems unreasonable, especially from the UX perspective:\nAn edge-case where a person is signing up at 3pm, the identity token added to the secure storage. The person uses the app again the next day at around 2:59pm but requires a fresh sign-in at 3pm (which might be during actively working with the App).How do you keep Realm Sync Apple Auuth credentials valid for longer than a single day?", "username": "Philip_Niedertscheid" }, { "code": "", "text": "Following this topic, as I am looking into enabling Apple Auth as well.", "username": "Christian_Wagner" }, { "code": "", "text": "Hi Folks – In this case Realm’s authentication should just be respecting the exp claim of the token that we’re passed. I believe raising the exp should be possible on your end.", "username": "Drew_DiPalma" }, { "code": "", "text": "HI Drew, thanks for your answer and my late response.I looked into the Sign-in-with-Apple process and I can’t find a resource which allows me to refresh the ID token without showing the user an UI, or setting a higher expiration.After reading this blog post, it also seems like this would be bad practice and instead we should use a refresh token system with our own server, ergo MongoDB Realm.\nhttps://blog.curtisherbert.com/so-theyve-signed-in-with-apple-now-what/Can you please provide me with a link to the documentation explaining how to raise the exp?", "username": "Philip_Niedertscheid" }, { "code": "", "text": "@Drew_DiPalma any further ideas?", "username": "Philip_Niedertscheid" }, { "code": "let appId = \"myappid-sxwrg\"\nlet realmApp = RealmSwift.App(id: appId)\n\n[...]\n\n// Check if there is a currentUser\nif let currentUser = realmApp.currentUser{\n\n // Check if the currentUser is loggedIn\n if currentUser.isLoggedIn {\n // Current User is already loggedIn so you can sync Realm (Realm.asyncOpen(configuration: ...)\n startSync(user: currentUser)\n }\n else{\n // User is not loggedIn\n // Don't know exactly what to do here but you can do an Apple authentication\n }\n}\nelse {\n // there is no currentUser\n // Do apple authentication here\n}\n", "text": "For those who are still looking for a solution, you don’t need an apple authentication each time the app launch. I think MongoDB Realm manages itself the refreshToken stuff. You just need to check if there is a current RLMUser.Here is what I did :Hope it helps.", "username": "Ruben_Moha" } ]
Realm Sync Sign-In-With-Apple identity token expires too soon
2021-07-27T10:36:53.852Z
Realm Sync Sign-In-With-Apple identity token expires too soon
4,921
https://www.mongodb.com/…d6e86024b609.png
[ "mongodb-shell" ]
[ { "code": "", "text": "", "username": "Sujith_Basham" }, { "code": "mongosh 'mongodb://127.6.0.1/students' --quiet\nstudents> db.dropDatabase()\n{ ok: 1, dropped: 'students' }\nstudents> db.adminCommand({listDatabases:1}).databases.forEach(x => print(x.name))\nadmin\nconfig\nlocal\ntest\n\nstudents> use admin\nswitched to db admin\nadmin> show dbs\nadmin 40.00 KiB\nconfig 48.00 KiB\nlocal 72.00 KiB\ntest 72.00 KiB\n", "text": "Hi @Sujith_BashamThat is normal in mongosh. The database was dropped, but your database context still exists. The moment you create an object in the database it will be actually created.If you change db to admin and list the databases you will see that students does not exist. Or you can use the listDatabases command to the same effect. Example of both below:", "username": "chris" } ]
Hi guys, I am a beginner learning MongoDB. I dropped my database using dropDatabase() and when I checked the db list it is not there, but when I try to create a new database with the dropped db name it says it already exists. Why?
2023-01-05T12:55:53.701Z
Hi guys, I am a beginner learning MongoDB. I dropped my database using dropDatabase() and when I checked the db list it is not there, but when I try to create a new database with the dropped db name it says it already exists. Why?
852
https://www.mongodb.com/…e_2_1024x797.png
[]
[ { "code": "isIdleTimerDisabledintegrating changesets failed: retryable error while committing integrated changesets: failed to apply changesets: failed to apply to state store: error applying changes to the state store: error performing bulk write to MongoDB {table: \"xxx\", err: connection(xxx]) incomplete read of message header: context canceled} (ProtocolErrorCode=201)\n", "text": "I’m having about 30K of records totalling about 12mb being written to a realm in the background. They show up fine locally but can’t sync to Atlas in a reasonable time. I’m test driving a M10 cluster.The record comes from a data import routine and they’re written to disk as they’re parsed. I’m using the Swift SDK btw.\nScreenshot 2023-01-02 at 1.51.41 AM1492×1162 113 KB\nI retried with isIdleTimerDisabled disabled and left the device on for long and killed the app after some time. Turns out it failed a few minutes in with a different message:Any ideas how to approach this?", "username": "anh" }, { "code": "", "text": "Hi @anh ,You’ve obscured all the significant details that could help understanding the issue in depth: one obvious issue in what we can see is that you’re trying to insert ~30K objects in a single transaction. This is indeed an issue: a transaction should ideally be the minimal change consistent in itself, and while it may be convenient to group some insertions/updates together, doing the whole import in one go is not advisable, especially when manipulating more than a couple hundred objects.Can you split the import in more transactions, of no more than few hundred objects each, and check how it goes?", "username": "Paolo_Manna" }, { "code": "", "text": "Gotcha. Indeed I overlooked that details. I thought the 30K objects have been all written to disk and show up fine in the UI, and that meant the issue was with the connection between the device and the Atlas cluster somehow. It didn’t occur to me that the amount of writes within each realm.write block corresponds to the number of changesets being uploaded.Indeed this goes away when I separate them into 500-something writes a transaction.Thanks for the help Paolo.", "username": "anh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to sync about 10 MB worth of data?
2023-01-04T06:56:51.598Z
Unable to sync about 10 MB worth of data?
1,006
null
[]
[ { "code": "[\n {\n \"id\": \"1\",\n \"imageOrder\": 1,\n \"userId\":\"1\",\n \"imageUrl\": \"url1\"\n },\n {\n \"id\": \"2\",\n \"userId\":\"1\",\n \"imageOrder\": 2,\n \"imageUrl\": \"url2\"\n },\n {\n \"id\": \"3\",\n \"userId\":\"1\",\n \"imageOrder\": 3,\n \"imageUrl\": \"url3\"\n }\n]\n[{id:1,imageOrder:1},{id:3,imageOrder:2}][{id:2,imageOrder:1},{id:3,imageOrder:2}][{id:1,imageOrder:1},{id:3,imageOrder:2}][{id:3,imageOrder:1}]", "text": "I have photos collection like thishereI have an endpoint that allows user to delete his/her photo, logic roughly works like below\nlets say you want to delete 2nd image in the above example,I noticed an unpredictable behaviour, if 2 parallel calls are made to the delete api for the same user\nex:delete call to image 1 and 2 is triggered by the userboth delete 1 and 2 would have read all 3 images to memory using find callcall to delete image 1 would have [{id:2,imageOrder:1},{id:3,imageOrder:2}] in memorycall to delete image 2 would have [{id:1,imageOrder:1},{id:3,imageOrder:2}] in memorynow which even is updating the DB last will be persisted.the expected behaviour is that you have only [{id:3,imageOrder:1}] in the collection.\nI tried using transactions with sessions but dint seem to help.How do i Achieve this?PS:\nusing mongo atlas v5.0.14", "username": "MAHENDRA_HEGDE" }, { "code": "\"imageOrder\"\"id\"_id import pymongo\n from pymongo import MongoClient\n conn = pymongo.MongoClient(\"mongodb+srv://cluster0.sqm88.mongodb.net/test\")\n db = conn[\"test\"]\n collection = db[\"forum205166\"]\n\n from pymongo import MongoClient, InsertOne, DeleteOne, ReplaceOne, UpdateOne\n\n def callback(session):\n requests = [InsertOne({'_id': 1, \"imageOrder\": 1, \"userId\": 1, \"imageURL\": \"url1\" }),\n InsertOne({'_id': 2, \"imageOrder\": 2, \"userId\": 1, \"imageURL\": \"url2\" }),\n InsertOne({'_id': 3, \"imageOrder\": 3, \"userId\": 1, \"imageURL\": \"url3\" }),\n UpdateOne({'id': 3}, {'$set': {'imageOrder': 2}}),\n DeleteOne( { \"id\": 2})]\n result = mycollection.bulk_write(requests, session=session)\n \n\n with conn.start_session() as session:\n session.with_transaction( callback,read_concern=ReadConcern(\"local\"), write_concern=wc_majority, read_preference=ReadPreference.PRIMARY)\n\n", "text": "Hi @MAHENDRA_HEGDE and welcome to the MongoDB community forum!!The above condition mentioned sounds like a race condition situation where two parallel calls are made to update the \"imageOrder\" and delete the image with a specific \"id\".However, if updating the imageOrder is not significantly important for the application, the recommendation would be to avoid using the field value. The imageOrder field certainly causes race condition for your situation with the overhead of updating more documents inside the collection.Consider a scenario where, you delete an image for user 1 with imageOrder1 and your collection contains, 1 million images for user 1. In the operation mentioned above, one delete operation would include a million updates into the collection.\nFurther, if this field is only used to sort images, can this function be achieved using the _id field?I tried using transactions with sessions but dint seem to help.To understand the requirement with more understanding, could you help with the following information regarding using transactions for the same:However, the below sample code in Pymongo version using transaction could be helpful for the same:Let us know if you have any further queriesBest Regards\nAasawari", "username": "Aasawari" } ]
Handling concurrent updates across the collection
2022-12-20T13:48:21.161Z
Handling concurrent updates across the collection
922
null
[ "aggregation", "queries", "atlas-functions" ]
[ { "code": "", "text": "Hi Team,I have a dedicated cluster with M10 facing issue with server-less function unable to call a collation index with case insensitive:\ncontext.service.getDb(“”).getCollection(“”).find( {name: { $regex:“kish”, $options: “i” } }).collation({locale: “en_US”})Getting the following error:\n{“error”:“{\"message\":\"‘collation’ is not a function\",\"name\":\"TypeError\"}”,“error_code”:“FunctionExecutionError”,“link”:“App Services<>/apps/<>/logs?co_id=<>”}Please help what did i miss? Is the Collation index function does not work in Server-less function as well ?", "username": "RamaKishore_K" }, { "code": "collation(…)find(…)collation", "text": "Hi @RamaKishore_K ,Is the Collation index function does not work in Server-less function as well ?That’s indeed the case: collation(…) is a method for a cursor, and the find(…) version in Functions returns a limited version of it. Overall, the set of the API available to Functions is trimmed down, and the details are documented on the specific App Services’ page.You may be able to use a form of collation as query option (as opposed to a function) when running the function as System, however.", "username": "Paolo_Manna" }, { "code": "", "text": "Hi @Paolo_Manna ,I have used options in find method and applied collation index, it worked.Thank you.", "username": "RamaKishore_K" } ]
Dedicated Atlas instance :: collation is not a function error
2023-01-04T08:46:12.134Z
Dedicated Atlas instance :: collation is not a function error
1,099
https://www.mongodb.com/…db77d8166397.png
[ "indexes", "atlas-search" ]
[ { "code": "", "text": "Hi there,I’m having a mongodb atlas cluster with the following configuration,I’m referring this video for creating a index with field, which is an array of objects.\nFor creating and editing index with field mapping I’m using Visual Editor.\nSo in add field mapping for Data Type Configuration I’m not getting Data Type option for document & embeddedDocuments type.\nI’m just getting selection for String, Autocomplete, StringFacet, NumberFacet, DateFacet, Boolean, ObjectId, Date, Number, and Geo.Please suggest what should I do for getting document & embeddedDocuments data type?\nadd_field_mapping459×571 43.6 KB\n", "username": "Brijesh_Darji1" }, { "code": "embeddedDocumentsembeddedDocuments", "text": "Hello Brijesh,I can see from this documentation link that embeddedDocuments is not supported to be created using the Atlas UI Visual Index Builder.To make it simpler for now you can do the following:I hope you find this information helpful.Regards,\nMohamed Elshafey", "username": "Mohamed_Elshafey" }, { "code": "", "text": "It worked! Thank you @Mohamed_Elshafey", "username": "Brijesh_Darji1" } ]
For Atlas Search, not getting document & embeddedDocuments data types in the Edit With Visual Editor option
2023-01-03T13:43:54.652Z
For Atlas Search, not getting document &amp; embeddedDocuments data types in the Edit With Visual Editor option
1,069
null
[]
[ { "code": "", "text": "Hello team, i’m facing error from MongoDB database side when i use celery with MongoDB. Database give error like this …FAILED SQL: (‘UPDATE “django_celery_results_taskresult” SET “date_created” = “django_celery_results_taskresult”.“date_done”’,)when i try to migrate give this error.anyone have any idea about this error, so please give me solution", "username": "Devansh_Patel" }, { "code": "", "text": "Hi @Devansh_Patel,Can you share more details on how MongoDB fits into this deployment scenario? Django, Celery, and SQL updates are all external to MongoDB (as is the “Failed SQL update” error).Regards,\nStennie", "username": "Stennie_X" } ]
Facing error from the MongoDB database side when I use Celery with MongoDB
2023-01-05T05:38:58.401Z
Facing error from the MongoDB database side when I use Celery with MongoDB
1,330
null
[ "installation" ]
[ { "code": "mongodb-community", "text": "I am using macOS Big Sur. I have installed [email protected] server on mac recently with help of MongoDB Homebrew Tap. After successful installation, I run “brew services start mongodb/brew/mongodb-community” on terminal. I receive below result :Successfully started mongodb-community (label: homebrew.mxcl.mongodb-community)But, when I check for running server with “brew services list”, I get below status :\nmongodb-community error nileshpanhale /Users/nileshpanhale/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistAnd if I try to connect server with “mongo” shell the I get below error :Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:374:17As I am new to macOS and MongoDB. Request you to please help me on an urgent basis with the stepwise solution. Due to this error, my development is completely stopped.Thanks in advance.", "username": "Nilesh_Panhale" }, { "code": "", "text": "brew services start mongodb/brew/mongodb-communityIs the command to start services correct\nIt should be something like\nbrew services start mongodb/brew/[email protected]\nPlease check documentationUnless your mongod is up you cannot connect to mongodbCan you check mongod.log.It will give more details on why it failed to start\nCould be permissions issue or misssing dbpath dir etc", "username": "Ramachandra_Tummala" }, { "code": "", "text": "if we do not mention the community version then the command will take default stable version of MongoDB, mostly latest one.\nTo check logs, I can not run mongod.log command, as mongod is not working with macOS.\nIf there is permissions issue then how can I know that?\nI tried to set/create dbpath but macOS Big Sur won’t allow to modify root directory.", "username": "Nilesh_Panhale" }, { "code": "", "text": "I tried to set/create dbpath but macOS Big Sur won’t allow to modify root directory.Yes this is a known issue.They removed access to root dirFirst identify your mongod.conf.It should be under /usr/local\nIf you see the contents of config file it will show your mongo.log path\nIf it is failing due to dbpath issue simply create a directory under your home directory and update your config file\nRestart the service.It should start mongod on default port 27017You can start your own mongod on another port say 28000 by below command\nmongod --port 28000 --dbpath your_dbpath --logpath your_logpath --fork\nGive valid full path for dbpath and logpath where the owner of mongod can writeCheck mongodb documentation and this linkIn this reading I will be helping you step-by-step on setting up MongoDB and make it running…\nReading time: 4 min read\n", "username": "Ramachandra_Tummala" }, { "code": "", "text": "A post was split to a new topic: Facing error from MongoDB database side when i use celery with MongoDB", "username": "Stennie_X" } ]
Brew mongodb-community server error
2021-07-06T13:03:01.507Z
Brew mongodb-community server error
33,904
null
[ "data-modeling" ]
[ { "code": "", "text": "Requirement: MongoDB Version 4.2 collections are holding data (imported via JSON files). For one column having String datatype and format as : 08/12/1977 09:45:34 AM, which need to be converted into DATE data type and to the format : ISODate : (“1977-08-12T09:45:34.000+0000”)Either via PyMongo or directly via Mongo aggregation/conversion utilities usage can serve the purpose. Any help is highly appreciated.Best Regards\nKesav", "username": "ramgkliye" }, { "code": "db.nameOfYourCollection.updateMany( \n {whateverFilterParamsYouWantHere}, \n [\n {\n $set: { \n date: {\n $convert: { \n input: \"$date\",\n to: \"date\" \n } \n }\n }\n }\n ]\n)\n", "text": "One option is to write a script that loops toRetrieve the document you want to updateConvert that date to ISO (could use Python’s datetime package doing something like:from datetime import datetime\nmydate = datetime.strptime(‘08/12/1977 09:45:34 AM’, ‘%d/%m/%Y %I:%M:%S %p’)\nmydate = mydate.isoformat()Update the document with the new dateFor details on how to retrieve and update documents, see this Python Quick Start.If you’re using 4.2 or later, another option is to use the aggregation pipeline for update operations. You could run something like:See https://docs.mongodb.com/manual/tutorial/update-documents-with-aggregation-pipeline and https://docs.mongodb.com/manual/reference/operator/aggregation/convert/#convert-to-date for more details.", "username": "Lauren_Schaefer" }, { "code": "", "text": "Thanks a lot for the response and will give a try.", "username": "ramgkliye" }, { "code": "", "text": "Did it correctly convert time to utc by reducing 5 and a half hours?", "username": "Stuart_S" } ]
String datatype to Date datatype conversion
2020-09-16T13:39:17.225Z
String datatype to Date datatype conversion
4,611
null
[ "node-js", "mongoose-odm", "transactions", "storage" ]
[ { "code": "", "text": "This is what pops up when trying to run this command in the console. Im using a Mac. I have mongodb installed in my users folder and I have the mongodata folder which houses the information created after running the command. What should happen after running the command is that my new database should be up and running according to video in my udemy course for my notes app. I would appreciate any help or guidance. Thank you.bs@BSs-MacBook-Air ~ % ls\nApplications\tDesktop\t\tDocuments\tDownloads\tLibrary\t\tMovies\t\tMusic\t\tPictures\tProjects\tPublic\t\tmongodata\tmongodb\nbs@BSs-MacBook-Air ~ % /Users/bs//mongodb/bin/mongod --dbpath=/Users/bs/mongodata\n{“t”:{\"$date\":“2023-01-03T17:38:39.477-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:\"-\",“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{\"$date\":“2023-01-03T17:38:39.479-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“thread1”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:17},“isInternalClient”:true}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.481-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648602, “ctx”:“thread1”,“msg”:“Implicit TCP FastOpen in use.”}\n{“t”:{\"$date\":“2023-01-03T17:38:39.482-06:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread1”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“namespace”:“config.tenantMigrationDonors”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.482-06:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread1”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“namespace”:“config.tenantMigrationRecipients”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.482-06:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread1”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“ShardSplitDonorService”,“namespace”:“config.tenantSplitDonors”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.482-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“thread1”,“msg”:“Multi threading initialized”}\n{“t”:{\"$date\":“2023-01-03T17:38:39.482-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:10341,“port”:27017,“dbPath”:\"/Users/bs/mongodata\",“architecture”:“64-bit”,“host”:“BSs-MacBook-Air”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.482-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“6.0.3”,“gitVersion”:“f803681c3ae19817d31958965850193de067c516”,“modules”:,“allocator”:“system”,“environment”:{“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.482-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Mac OS X”,“version”:“22.1.0”}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.482-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“storage”:{“dbPath”:\"/Users/bs/mongodata\"}}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.483-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:5693100, “ctx”:“initandlisten”,“msg”:“Asio socket.set_option failed with std::system_error”,“attr”:{“note”:“acceptor TCP fast open”,“option”:{“level”:6,“name”:261,“data”:“00 04 00 00”},“error”:{“what”:“set_option: Invalid 
argument”,“message”:“Invalid argument”,“category”:“asio.system”,“value”:22}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.484-06:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=3584M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.631-06:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795906, “ctx”:“initandlisten”,“msg”:“WiredTiger opened”,“attr”:{“durationMillis”:147}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.631-06:00”},“s”:“I”, “c”:“RECOVERY”, “id”:23987, “ctx”:“initandlisten”,“msg”:“WiredTiger recoveryTimestamp”,“attr”:{“recoveryTimestamp”:{\"$timestamp\":{“t”:0,“i”:0}}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.660-06:00”},“s”:“W”, “c”:“CONTROL”, “id”:22120, “ctx”:“initandlisten”,“msg”:“Access control is not enabled for the database. Read and write access to data and configuration is unrestricted”,“tags”:[“startupWarnings”]}\n{“t”:{\"$date\":“2023-01-03T17:38:39.660-06:00”},“s”:“W”, “c”:“CONTROL”, “id”:22140, “ctx”:“initandlisten”,“msg”:“This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning”,“tags”:[“startupWarnings”]}\n{“t”:{\"$date\":“2023-01-03T17:38:39.660-06:00”},“s”:“W”, “c”:“CONTROL”, “id”:22184, “ctx”:“initandlisten”,“msg”:“Soft rlimits for open file descriptors too low”,“attr”:{“currentValue”:256,“recommendedMinimum”:64000},“tags”:[“startupWarnings”]}\n{“t”:{\"$date\":“2023-01-03T17:38:39.662-06:00”},“s”:“I”, “c”:“STORAGE”, “id”:20320, “ctx”:“initandlisten”,“msg”:“createCollection”,“attr”:{“namespace”:“admin.system.version”,“uuidDisposition”:“provided”,“uuid”:{“uuid”:{\"$uuid\":“40ffca7b-f52b-4984-a120-fb62260dce44”}},“options”:{“uuid”:{\"$uuid\":“40ffca7b-f52b-4984-a120-fb62260dce44”}}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.681-06:00”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“initandlisten”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{\"$uuid\":“40ffca7b-f52b-4984-a120-fb62260dce44”}},“namespace”:“admin.system.version”,“index”:“id”,“ident”:“index-1-6747955549794507941”,“collectionIdent”:“collection-0-6747955549794507941”,“commitTimestamp”:null}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.682-06:00”},“s”:“I”, “c”:“REPL”, “id”:20459, “ctx”:“initandlisten”,“msg”:“Setting featureCompatibilityVersion”,“attr”:{“newVersion”:“6.0”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.682-06:00”},“s”:“I”, “c”:“REPL”, “id”:5853300, “ctx”:“initandlisten”,“msg”:“current featureCompatibilityVersion value”,“attr”:{“featureCompatibilityVersion”:“6.0”,“context”:“setFCV”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.682-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915702, “ctx”:“initandlisten”,“msg”:“Updated wire specification”,“attr”:{“oldSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:17},“isInternalClient”:true},“newSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:17,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:17,“maxWireVersion”:17},“isInternalClient”:true}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.682-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915702, “ctx”:“initandlisten”,“msg”:“Updated wire specification”,“attr”:{“oldSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:17,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:17,“maxWireVersion”:17},“isInternalClient”:true},“newSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:17,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:17,“maxWireVersion”:17},“isInternalClient”:true}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.682-06:00”},“s”:“I”, “c”:“REPL”, “id”:5853300, “ctx”:“initandlisten”,“msg”:“current featureCompatibilityVersion value”,“attr”:{“featureCompatibilityVersion”:“6.0”,“context”:“startup”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.682-06:00”},“s”:“I”, “c”:“STORAGE”, “id”:5071100, “ctx”:“initandlisten”,“msg”:“Clearing temp directory”}\n{“t”:{\"$date\":“2023-01-03T17:38:39.682-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:20536, “ctx”:“initandlisten”,“msg”:“Flow Control is enabled on this deployment”}\n{“t”:{\"$date\":“2023-01-03T17:38:39.682-06:00”},“s”:“I”, “c”:“FTDC”, “id”:20625, “ctx”:“initandlisten”,“msg”:“Initializing full-time diagnostic data capture”,“attr”:{“dataDirectory”:\"/Users/bs/mongodata/diagnostic.data\"}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.684-06:00”},“s”:“I”, “c”:“STORAGE”, “id”:20320, 
“ctx”:“initandlisten”,“msg”:“createCollection”,“attr”:{“namespace”:“local.startup_log”,“uuidDisposition”:“generated”,“uuid”:{“uuid”:{\"$uuid\":“eb49e501-c198-4345-a85d-1743412f9799”}},“options”:{“capped”:true,“size”:10485760}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.703-06:00”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“initandlisten”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{\"$uuid\":“eb49e501-c198-4345-a85d-1743412f9799”}},“namespace”:“local.startup_log”,“index”:“id”,“ident”:“index-3-6747955549794507941”,“collectionIdent”:“collection-2-6747955549794507941”,“commitTimestamp”:null}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.704-06:00”},“s”:“I”, “c”:“REPL”, “id”:6015317, “ctx”:“initandlisten”,“msg”:“Setting new configuration state”,“attr”:{“newState”:“ConfigReplicationDisabled”,“oldState”:“ConfigPreStart”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.704-06:00”},“s”:“I”, “c”:“STORAGE”, “id”:22262, “ctx”:“initandlisten”,“msg”:“Timestamp monitor starting”}\n{“t”:{\"$date\":“2023-01-03T17:38:39.704-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:\"/tmp/mongodb-27017.sock\"}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.704-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“127.0.0.1”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.704-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.712-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:20712, “ctx”:“LogicalSessionCacheReap”,“msg”:“Sessions collection is not set up; waiting until next sessions reap interval”,“attr”:{“error”:“NamespaceNotFound: config.system.sessions does not exist”}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.712-06:00”},“s”:“I”, “c”:“STORAGE”, “id”:20320, “ctx”:“LogicalSessionCacheRefresh”,“msg”:“createCollection”,“attr”:{“namespace”:“config.system.sessions”,“uuidDisposition”:“generated”,“uuid”:{“uuid”:{\"$uuid\":“007ebf5e-c700-4387-ae02-e7a7244bc9be”}},“options”:{}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.740-06:00”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“LogicalSessionCacheRefresh”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{\"$uuid\":“007ebf5e-c700-4387-ae02-e7a7244bc9be”}},“namespace”:“config.system.sessions”,“index”:“id”,“ident”:“index-5-6747955549794507941”,“collectionIdent”:“collection-4-6747955549794507941”,“commitTimestamp”:null}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.740-06:00”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“LogicalSessionCacheRefresh”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{\"$uuid\":“007ebf5e-c700-4387-ae02-e7a7244bc9be”}},“namespace”:“config.system.sessions”,“index”:“lsidTTLIndex”,“ident”:“index-6-6747955549794507941”,“collectionIdent”:“collection-4-6747955549794507941”,“commitTimestamp”:null}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.916-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:56993”,“uuid”:“45d19c77-d0b4-447c-831c-440f5d560246”,“connectionId”:1,“connectionCount”:1}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.924-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn1”,“msg”:“client metadata”,“attr”:{“remote”:“127.0.0.1:56993”,“client”:“conn1”,“doc”:{“driver”:{“name”:“nodejs|Mongoose”,“version”:“4.12.1”},“os”:{“type”:“Darwin”,“name”:“darwin”,“architecture”:“arm64”,“version”:“22.1.0”},“platform”:“Node.js v18.12.1, LE 
(unified)”,“version”:“4.12.1|6.8.2”}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.934-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:56994”,“uuid”:“0a47c1a2-e5e1-4a51-b1aa-222b32a6ec85”,“connectionId”:2,“connectionCount”:2}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.934-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn2”,“msg”:“client metadata”,“attr”:{“remote”:“127.0.0.1:56994”,“client”:“conn2”,“doc”:{“driver”:{“name”:“nodejs|Mongoose”,“version”:“4.12.1”},“os”:{“type”:“Darwin”,“name”:“darwin”,“architecture”:“arm64”,“version”:“22.1.0”},“platform”:“Node.js v18.12.1, LE (unified)”,“version”:“4.12.1|6.8.2”}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.938-06:00”},“s”:“I”, “c”:“STORAGE”, “id”:20320, “ctx”:“conn2”,“msg”:“createCollection”,“attr”:{“namespace”:“notes-api.notes”,“uuidDisposition”:“generated”,“uuid”:{“uuid”:{\"$uuid\":“a95f0bb1-681d-44b7-be5e-c597b60ce533”}},“options”:{}}}\n{“t”:{\"$date\":“2023-01-03T17:38:39.958-06:00”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“conn2”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{\"$uuid\":“a95f0bb1-681d-44b7-be5e-c597b60ce533”}},“namespace”:“notes-api.notes”,“index”:“id”,“ident”:“index-8-6747955549794507941”,“collectionIdent”:“collection-7-6747955549794507941”,“commitTimestamp”:null}}\n{“t”:{\"$date\":“2023-01-03T17:38:50.434-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:56995”,“uuid”:“1f944881-0102-4b49-8832-83a2f049109f”,“connectionId”:3,“connectionCount”:3}}\n{“t”:{\"$date\":“2023-01-03T17:38:50.436-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn3”,“msg”:“client metadata”,“attr”:{“remote”:“127.0.0.1:56995”,“client”:“conn3”,“doc”:{“driver”:{“name”:“nodejs|Mongoose”,“version”:“4.12.1”},“os”:{“type”:“Darwin”,“name”:“darwin”,“architecture”:“arm64”,“version”:“22.1.0”},“platform”:“Node.js v18.12.1, LE (unified)”,“version”:“4.12.1|6.8.2”}}}", "username": "Brandon_S" }, { "code": "", "text": "This is what pops up when trying to run this command in the console. Im using a Mac. I have mongodb installed in my users folder and I have the mongodata folder which houses the information created after running the command. What should happen after running the command is that my new database should be up and running according to video in my udemy course for my notes app. I would appreciate any help or guidance. Thank you.It looks like MongoDB started successfully, however, the startup command does not have the --fork option. This causes your terminal to “get stuck”.Two ways to see if everything is ok:", "username": "Leandro_Domingues" }, { "code": "", "text": "I believe this what you said to do. Can you make sense of all this for me? Thank you. 
@Leandro_Dominguesbs@BSs-MacBook-Air ~ % /Users/bs//mongodb/bin/mongod --dbpath --=/Users/bs/mongodata\n{“t”:{\"$date\":“2023-01-03T19:18:45.246-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“thread2”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.248-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“thread2”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:17},“isInternalClient”:true}}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.259-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648602, “ctx”:“thread2”,“msg”:“Implicit TCP FastOpen in use.”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.262-06:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread2”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“namespace”:“config.tenantMigrationDonors”}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.263-06:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread2”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“namespace”:“config.tenantMigrationRecipients”}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.263-06:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread2”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“ShardSplitDonorService”,“namespace”:“config.tenantSplitDonors”}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.263-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“thread2”,“msg”:“Multi threading initialized”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.263-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:10533,“port”:27017,“dbPath”:\"–=/Users/bs/mongodata\",“architecture”:“64-bit”,“host”:“BSs-MacBook-Air”}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.263-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“6.0.3”,“gitVersion”:“f803681c3ae19817d31958965850193de067c516”,“modules”:,“allocator”:“system”,“environment”:{“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.263-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Mac OS X”,“version”:“22.1.0”}}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.263-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“storage”:{“dbPath”:\"–=/Users/bs/mongodata\"}}}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.267-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:5693100, “ctx”:“initandlisten”,“msg”:“Asio socket.set_option failed with std::system_error”,“attr”:{“note”:“acceptor TCP fast open”,“option”:{“level”:6,“name”:261,“data”:“00 04 00 00”},“error”:{“what”:“set_option: Invalid argument”,“message”:“Invalid argument”,“category”:“asio.system”,“value”:22}}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.268-06:00”},“s”:“E”, “c”:“CONTROL”, “id”:20568, “ctx”:“initandlisten”,“msg”:“Error setting up listener”,“attr”:{“error”:{“code”:9001,“codeName”:“SocketException”,“errmsg”:“Address already in use”}}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.269-06:00”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“initandlisten”,“msg”:“Stepping down the ReplicationCoordinator for shutdown”,“attr”:{“waitTimeMillis”:15000}}\n{“t”:{\"$date\":“2023-01-03T19:18:45.270-06:00”},“s”:“I”, 
“c”:“REPL”, “id”:4794602, “ctx”:“initandlisten”,“msg”:“Attempting to enter quiesce mode”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.270-06:00”},“s”:“I”, “c”:\"-\", “id”:6371601, “ctx”:“initandlisten”,“msg”:“Shutting down the FLE Crud thread pool”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.270-06:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“initandlisten”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.270-06:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“initandlisten”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.270-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“initandlisten”,“msg”:“Shutting down the global connection pool”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784918, “ctx”:“initandlisten”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“initandlisten”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“initandlisten”,“msg”:“Shutting down the ServiceEntryPoint”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, “ctx”:“initandlisten”,“msg”:“Shutting down free monitoring”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“initandlisten”,“msg”:“Shutting down the HealthLog”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“initandlisten”,“msg”:“Shutting down the TTL monitor”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:6278511, “ctx”:“initandlisten”,“msg”:“Shutting down the Change Stream Expired Pre-images Remover”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“initandlisten”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:\"-\", “id”:4784931, “ctx”:“initandlisten”,“msg”:“Dropping the scope cache for shutdown”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.271-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“initandlisten”,“msg”:“Now exiting”}\n{“t”:{\"$date\":“2023-01-03T19:18:45.272-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“initandlisten”,“msg”:“Shutting down”,“attr”:{“exitCode”:48}}", "username": "Brandon_S" }, { "code": "", "text": "{“code”:9001,“codeName”:“SocketException”,“errmsg”:“Address already in use”}}}See this line:\n{“code”:9001,“codeName”:“SocketException”,“errmsg”:“Address already in use”}}}I believe that as I said mongodb is already running and when you run this same command the error returns saying that the address is already in use. As you didn’t change the port, the default port is 27017", "username": "Leandro_Domingues" }, { "code": "", "text": "What would you suggest I do? Im new to this so I don’t know the commands or how to change the port. I know how to view them in the console with the command lsof -i:3000 and how to kill a specific one with kill -9 “PID number”. For this course I’m using mongoose, studio 3T, postman, VSC with JS files. I also can’t find my localhost:27017 Mongodb connection In studio 3T. 
Thank you.@Leandro_Domingues", "username": "Brandon_S" }, { "code": "", "text": "Your mongod is already up and running on port 27017\nAll you have to do is open another session and try to connect using mongoshYou have to shutdown current running mongod and restart with --fork so that it runs in background and your terminal where you started mongod will not be locked or appear to hung\nor\nIf you want to start another mongod while default one is running you have to give different port_number and dbpath\nExample:\nmongod --port 28000 --dbpath newpath --logpath newpath/mongodb.log --fork", "username": "Ramachandra_Tummala" } ]
Need assistance getting MongoDB to run. I'm new to this and following a Udemy course. The instructor and I are unsure how to continue. He says he doesn't see a specific issue
2023-01-04T00:26:15.470Z
Need assistance getting MongoDB to run. I&rsquo;m new to this and following a Udemy course. The instructor and I are unsure how to continue. He says he doesn&rsquo;t see a specific issue
2,461
https://www.mongodb.com/…3_2_471x1024.png
[ "react-native", "installation" ]
[ { "code": "", "text": "Hello,\nI am getting an error that says “Missing Realm constructor. Did you run 'pod install”? I look at everywhere but could not find anything.Could someone please help me. I have been working on this the past 3 days\n\nScreen Shot 2022-02-09 at 8.19.11 PM524×1139 112 KB\n", "username": "Furkan_Ayilmaz" }, { "code": "", "text": "Same probleme here i use Ignite-cli to start an Managed project using expo… updated expo sdk to 44… i still have the same issue.", "username": "valentin_cournee" }, { "code": "", "text": "Hello,Same issue here, but it’s random. If I reload the app it’s working as expected.", "username": "GuillaumeAp" }, { "code": "", "text": "Hi, i have the same error, but mi react native version is 0.68.1 and run the new arquitecture fabric and Hermes engine, when will this version or update of the sdk be supported? Thanks", "username": "Didier_Restrepo" }, { "code": "", "text": "i have the same issue. It’s been driving me nuts all day.", "username": "Brandon_McHugh" } ]
Missing Realm constructor
2022-02-10T03:19:36.502Z
Missing Realm constructor
6,384
null
[ "atlas-cluster", "php" ]
[ { "code": "Preformatted text#Following two lines commented out since call to ServerApi results in error: \"Fatal error: Uncaught Error: Class 'ServerApi' not found i$\n#$serverApi = new ServerApi(ServerApi::V1);\n#$client = new MongoDB\\Client('mongodb+srv://username:[email protected]/?retryWrites=true&w=majority', [], ['server$\n$client = new MongoDB\\Client('mongodb+srv://username:[email protected]/?retryWrites=true&w=majority');\n \n$database = $client->test;\n \n$cursor = $database->command(['ping' => 1]);\n\n$cursor = $database->command(['listCollections' => 1]);\nforeach ($cursor as $collection) {\n echo $collection['name'], \"\\n\";\n}\n \nforeach ($database->listCollections() as $collectionInfo) {\n var_dump($collectionInfo);\n}\n", "text": "Preformatted textSpent all day wrestling with poor documentation in as of now failed attempt to test MongdoDB for an application. Finally appear to be able to establish a connection with Atlas sample database from external Linux server with latest PHP Library and Extension, yet listCollections call yields empty response.Complete code:var_dump after $database= $client->test:[“databaseName”]=>\nstring(4) “test”\n[“manager”]=>\nobject(MongoDB\\Driver\\Manager)#2 (2) {\n[“uri”]=>\nstring(88) “mongodb+srv:/username:[email protected]/?retryWrites=true&w=majority”\n[“cluster”]=>\narray(0) {\n}\n}\n[“readConcern”]=>\nobject(MongoDB\\Driver\\ReadConcern)#8 (0) {\n}\n[“readPreference”]=>\nobject(MongoDB\\Driver\\ReadPreference)#9 (1) {\n[“mode”]=>\nstring(7) “primary”\n}\n[“typeMap”]=>\narray(3) {\n[“array”]=>\nstring(23) “MongoDB\\Model\\BSONArray”\n[“document”]=>\nstring(26) “MongoDB\\Model\\BSONDocument”\n[“root”]=>\nstring(26) “MongoDB\\Model\\BSONDocument”\n}\n[“writeConcern”]=>\nobject(MongoDB\\Driver\\WriteConcern)#10 (1) {\n[“w”]=>\nstring(8) “majority”\n}\n}No collections returned???", "username": "G_Chase" }, { "code": "$client = new MongoDB\\Client('mongodb+srv://username:[email protected]/...');", "text": "listCollections call yields empty responseMay be there is no collection in your database. Can you connect with Compass and share a screenshot of the collections you have?It looks like there is a discrepancy between the code$client = new MongoDB\\Client('mongodb+srv://username:[email protected]/...');and the following output (looks like a missing /)string(88) “mongodb+srv:/username:[email protected]/…“May be the missing slash was removed while redacting the URI.", "username": "steevej" }, { "code": "$database = $client->test;test", "text": "$database = $client->test;You appear to be connecting to your test database. Do you have any collections in that database?I think your code will probably work if you connect to a database with collections.", "username": "Justin_Jenkins" }, { "code": "", "text": "Have been able to connect and access now that collectoins are in database.Still surprised by basic error in documentation. For instance, the documentation describes Mongo Compass on the Mac as being installed in /Applications/MongoDB while it is actually installed as /Applications/MongoDB Compass. Small difference, but confusing to beginners. Why can’t the company keep documentation current and accurate?", "username": "G_Chase" }, { "code": "MongoDB Compass/Applications/MongoDB\\ Compass.app/Contents/MacOS/MongoDB\\ Compass/Applications/MongoDB Compass.appMongoDB\\ Compass.appMongoDB Compass.app", "text": "Glad you were able to connect!I’m not quite following this however:Still surprised by basic error in documentation. 
For instance, the documentation describes Mongo Compass on the Mac as being installed in /Applications/MongoDB while it is actually installed as /Applications/MongoDB Compass. Small difference, but confusing to beginners. Why can’t the company keep documentation current and accurate?In the documentation it says:The executable is called MongoDB Compass . The installer installs it under the Applications folder:/Applications/MongoDB\\ Compass.app/Contents/MacOS/MongoDB\\ CompassThe app is indeed called “MongoDB Compass” and it is in the /Applications/ folder (as the MongoDB Compass.app) .MongoDB\\ Compass.app == MongoDB Compass.app as the backslash is to escape the space.Seems like everything is correct?", "username": "Justin_Jenkins" } ]
Atlas Test Database Beginner's Terrible Experience
2022-12-27T02:13:34.840Z
Atlas Test Database Beginner&rsquo;s Terrible Experience
1,902
null
[ "node-js", "crud", "mongoose-odm" ]
[ { "code": "router.put(\"/skooliva/student/:id\", async (req, res) => {\n const student = {\n fullname: req.body.fullname,\n email: req.body.email,\n mobile: req.body.mobile,\n Date_of_birth: req.body.Date_of_birth,\n ID_number: req.body.ID_number,\n address: req.body.address,\n place_of_birth: req.body.place_of_birth\n }\n \n \n studentPre.findByIdAndUpdate((req.params.id).trim(), { $set: student }, { new: false }, (err,data) => {\n if(!err){\n res.status(200).json({code: 200, message: 'Updated successfully', updateStudent: data})\n // res.send(data);\n }else{\n console.log(err);\n }\n })\n})\n messageFormat: undefined,\n stringValue: '\"[object Object]\"',\n kind: 'ObjectId',\n value: '[object Object]',\n path: '_id',\n reason: BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer\n at new BSONTypeError (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\bson\\lib\\error.js:41:28)\n at new ObjectId (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\bson\\lib\\objectid.js:67:23)\n at castObjectId (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\cast\\objectid.js:25:12)\n at ObjectId.cast (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schema\\objectid.js:246:12)\n at ObjectId.SchemaType.applySetters (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schematype.js:1201:12)\n at ObjectId.SchemaType._castForQuery (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schematype.js:1648:15)\n at ObjectId.SchemaType.castForQuery (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schematype.js:1636:15)\n at ObjectId.SchemaType.castForQueryWrapper (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schematype.js:1612:20)\n at cast (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\cast.js:347:32)\n at model.Query.Query.cast (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\query.js:5312:12),\n valueType: 'string'\n}\n[nodemon] restarting due to changes...\n[nodemon] starting `node index.js`\nserver running\nCastError: Cast to ObjectId failed for value \"[object Object]\" (type string) at path \"_id\" for model \"studentPre\"\n at model.Query.exec (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\query.js:4884:21)\n at model.Query.Query.findOneAndUpdate (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\query.js:3444:8)\n at Function.Model.findOneAndUpdate (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\model.js:2635:13)\n at Function.Model.findByIdAndUpdate (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\model.js:2749:32)\n at C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\routes\\index2.js:73:16\n at Layer.handle [as handle_request] (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\express\\lib\\router\\layer.js:95:5)\n at next (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\express\\lib\\router\\route.js:144:13)\n at Route.dispatch (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\express\\lib\\router\\route.js:114:3)\n at Layer.handle [as handle_request] 
(C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\express\\lib\\router\\layer.js:95:5)\n at C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\express\\lib\\router\\index.js:284:15 {\n messageFormat: undefined,\n stringValue: '\"[object Object]\"',\n kind: 'ObjectId',\n value: '[object Object]',\n path: '_id',\n reason: BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer\n at new BSONTypeError (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\bson\\lib\\error.js:41:28)\n at new ObjectId (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\bson\\lib\\objectid.js:67:23)\n at castObjectId (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\cast\\objectid.js:25:12)\n at ObjectId.cast (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schema\\objectid.js:246:12)\n at ObjectId.SchemaType.applySetters (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schematype.js:1201:12)\n at ObjectId.SchemaType._castForQuery (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schematype.js:1648:15)\n at ObjectId.SchemaType.castForQuery (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schematype.js:1636:15)\n at ObjectId.SchemaType.castForQueryWrapper (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\schematype.js:1612:20)\n at cast (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\cast.js:347:32)\n at model.Query.Query.cast (C:\\Users\\user\\OneDrive\\Bureau\\SKOOLIVA_u\\backend\\node_modules\\mongoose\\lib\\query.js:5312:12),\n valueType: 'string'\n}\nconnected successfully\n", "text": "Hello i have an issue with my code here. Initially my crud on chrome does every work i want but then my update query releases some errors i know nothing of. I have googled for about 4 hours now but nothing. Please i need some help.This is the error i am gettingWhat am trying to say is this i am able to create a new user, read and also delete but the update button works only on my browser and when i console log i get the data i updated but simply doesnt display on my db.", "username": "Chi_Samuel" }, { "code": "ObjectIdObjectId()DB> ObjectId(\"6348acd2e1a47ca32e79f46f\") /// <--- Valid ObjectId\nObjectId(\"6348acd2e1a47ca32e79f46f\") /// <--- No error\n\nDB> ObjectId(\"6348acd2e1a47ca32e79f46fX\") /// <--- Invalid ObjectId\nBSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer\nreq.params.idObjectId()", "text": "Hi @Chi_Samuel,BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integerHopefully you’re not experiencing this error still but based off the error, are any of the ObjectId’s invalid in your code? When an invalid ObjectId() is used, this error generally occurs. E.g.:From a quick initial glance, I can see you trim() the req.params.id value, is this to form it into an ObjectId()? I’m not sure where else the issue could occur as they are from the request itself. 
Perhaps you could print the values (specifically for ObjectId() fields) that are used when you receive this error and verify they are valid ObjectId()'sRegards,\nJason", "username": "Jason_Tran" }, { "code": "const getUser = (req, res, id) => {\n try {\n ObjectID(id)\n User\n .findById(id)\n .exec((error, user) => {\n if (error) {\n return res.send(error)\n }\n if (user) {\n return res.json(user)\n } else {\n return res.send(`couldnt find user with id ${id}`)\n }\n }) \n } catch(err) {\n return res.send(`couldnt find user with id ${id}`)\n }\n}\n", "text": "thanks for your answer @Jason_Tran! Can you give some examples on how to conveniently check if this ObjectID is valid?\nI’m currently checking it like that, but I don’t really like this answer", "username": "Matthias_Bartholomaus" }, { "code": "", "text": "Can you give some examples on how to conveniently check if this ObjectID is valid?Just to clarify, when you state “check if the ObjectID is valid?“, do you mean that the document with a specific ObjectId value exists? Based off your code snippet, I cannot see where an ObjectId validity check is being done. I.e., by “valid” do you mean:Additionally, could you advise if you’re getting the same error message as the OP?I don’t have any specific examples handy but depending on your driver and driver version (I am assuming node in this case but please correct me if I am wrong here), you may wish to use the isValid() method for ObjectId.Note: docs linked above is for version 4.13 of the Node.JS Driver for MongoDBRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
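A minimal guard along the lines of the advice above, assuming Mongoose and the same route and model names as the earlier snippet; the route path and response shapes are illustrative:

```javascript
const mongoose = require("mongoose");

router.put("/skooliva/student/:id", async (req, res) => {
  const id = req.params.id.trim();

  // Reject values that cannot be cast to an ObjectId before querying
  if (!mongoose.Types.ObjectId.isValid(id)) {
    return res.status(400).json({ code: 400, message: `"${id}" is not a valid ObjectId` });
  }

  try {
    const data = await studentPre.findByIdAndUpdate(id, { $set: req.body }, { new: true });
    return res.status(200).json({ code: 200, message: "Updated successfully", updateStudent: data });
  } catch (err) {
    return res.status(500).json({ code: 500, message: err.message });
  }
});
```

Logging the raw `req.params.id` before the guard usually reveals whether an object (hence the literal string "[object Object]") is being passed where a hex string was expected.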
BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer
2022-09-20T08:39:30.225Z
BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer
39,400
null
[ "node-js" ]
[ { "code": "", "text": "I am working a project where I create database for every organization.I mean we have a core database for our important details and every organization has a database for their data.If we will have 100 organization we will have 100 database. We did this for our client requirments.So,Is there any solution to get all collection data of a database in a single query??", "username": "Moyen_Islam" }, { "code": "listCollections", "text": "Hi @Moyen_Islam,We did this for our client requirments.So,Is there any solution to get all collection data of a database in a single query??I believe there isn’t a directly a single command available currently that would retrieve all documents from all collections in a single database.One idea could be to possibly use listCollections and then pass the resulting collection names to a script which runs the query for the data you require.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "I think we’ve connected in the past Moyen, so pardon any repetition, but you will be able to do this shortly and easily using Atlas Data Federation.Is use case to basically have something like:\nDb1.CollEvents\nDb2.CollEvents\nDb3.CollEventsAnd be able to query GlobalDB.CollEvents and have this represent data from each of the 3 CollEvents above?", "username": "Benjamin_Flast" }, { "code": "", "text": "Could you suggest any course or documentation about this. I have learn more about this.", "username": "Moyen_Islam" }, { "code": "{\n \"databases\": [\n\n \"name\": \"GlobalVirtualDB\",\n {\n \"collections\": [\n {\n \"name\": \"user_feedback\",\n \"dataSources\": [\n {\n \"collection\": \"user_feedback\",\n \"databaseRegex\": \".*\",\n \"provenanceFieldName\": \"_provenance_data\",\n \"storeName\": \"Storename\"\n }\n ]\n }\n ],\n \"views\": []\n }\n ],\n \"stores\": [\n {\n \"clusterName\": \"<CLUSTER_NAME>\",\n \"name\": \"StoreName\",\n \"provider\": \"atlas\",\n \"readPreference\": {\n \"mode\": \"secondary\",\n \"tagSets\": []\n }\n }\n ]\n}\n", "text": "Hey @Moyen_Islam , we just released this actually. You can read up on the documentation here:To make it a bit easier though, this example shows how you can get collections named “user_feedback” across multiple different databases combined into one collection in a single database using data federation. One bonus is that this will also use our new provenance feature to include the source in a new field in your documents called “_provenance_data”. In this case that source is the source db, source collection, source cluster where the data is coming from.", "username": "Benjamin_Flast" } ]
Can I get all collection's data in a single request?
2022-11-21T08:49:08.838Z
Can I get all collection&rsquo;s data in a single request?
2,107
null
[ "sharding" ]
[ { "code": "", "text": "We want to see how we can migrateThanks\nGiri", "username": "giribabu_venugopal" }, { "code": "", "text": "If you are on MongoDB 6.0 their new tool Cluster to Cluster sync could be your friend here. This is what the tool was made for it can sync data between two clusters even on prem to cloud.", "username": "tapiocaPENGUIN" } ]
How to Migrate Windows (Mongo Sharded/Rs - on premises) to AWS Ubuntu
2023-01-04T18:19:08.905Z
How to Migrate Windows (Mongo Sharded/Rs - on premises) to AWS Ubuntu
714
null
[ "data-modeling" ]
[ { "code": "", "text": "If you have a Users collection and there is a many to many relationship, such as subscriber and subscribed, then you have 2 fields to hold these 2 distinct participants of this relationship.However, there is no particular relationship between the 2 friends, how do you name the fields? It’s not predictable in what field to look for a particular member?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "// Profiles\n{ \n_id : \"A\",\nname : \"aaa\",\n\"subscribedFriends\" : [ { \"_id\" : \"B\", name : \"bbb\" } , { \"_id\" : \"C\", name : \"ccc\"}],\n\"followingFriends\" : [{\"_id\" : \"D\", name : \"ddd\" }]\n}\n\n{ \n_id : \"B\",\nname : \"bbb\",\n\"subscribedFriends\" : [ { \"_id\" : \"D\", name : \"ddd\" } , { \"_id\" : \"C\", name : \"ccc\"}]\n\"followingFriends\" : [ { \"_id\" : \"A\", name : \"aaa\" } ]\n}\n...\n", "text": "Hi @Big_Cat_Public_Safety_Act ,Not sure what exactly your question is?Usually many to many will be stored as reference arrays inside the related documents:To avoid outliers where users have thousands of followers or subscribers we use the outlier pattern:The Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data setDoes that answer your question?Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "subscribedFriendsfollowingFriendsfriends", "text": "The question is in the event of the outlier where the data needs to be stored in a separate collection. And also, there is no distinction between the members of the relationship. In your example, you have subscribedFriends and followingFriends. In my case, it’s just friends. Both members are just friends.", "username": "Big_Cat_Public_Safety_Act" }, { "code": "\n{ \n_id : \"A\",\nname : \"aaa\",\n\"friends\" : [ { \"_id\" : \"B\", name : \"bbb\" } , { \"_id\" : \"C\", name : \"ccc\"} ... 500 friends],\nbucketFriends : 500\n}\n\n{ \n_id : \"A-outlier-1\",\nname : \"aaa\",\n\"friends\" : [ { \"_id\" : \"X\" ... 400 friends],\nbucketFriends : 400\n}\n\n{ \n_id : \"B\",\nname : \"bbb\",\n\"friends\" : [ { \"_id\" : \"A\", name : \"aaa\" } ]\n}\ndb.collection.find({ _id : /^A/ })\n", "text": "Ok so they will apear in each others lists:Here the outlier sit in the same collection so _id : “A” , and its outlier : “A-outlier-a” are still in the same collection to get all friends of “A” run:Thanks\nPavel", "username": "Pavel_Duchovny" } ]
How to design a collection to store friendships?
2023-01-03T17:59:19.780Z
How to design a collection to store friendships?
1,225
null
[]
[ { "code": "", "text": "Hi I am trying to store a bcrypt hashed passwd in atlas, in a binary field but when i compare with one generated locally they dont match, I dont think it’s the code, it’s straight out of the python brcypt how to. But binary fields seem to do something to the data.thanks, and apologies if this has been raised before, blame impatient newbie.", "username": "Danny_corgan1" }, { "code": "", "text": "You should really provide both the code for the client side as well as the code that runs on the server so someone can help you with this", "username": "SirSwagon_N_A" } ]
Storing password hash values in atlas for recalculating locally
2020-12-07T22:35:32.953Z
Storing password hash values in atlas for recalculating locally
1,578
null
[ "atlas-functions", "app-services-data-access" ]
[ { "code": "", "text": "Mongodb realm rules not working with functions even when just returning truei have used mongodb function rules on a lot of project and never had issues my original function returned true but didn’t work so I made one that was just true and it still doesn’t work\nhttps://csc.maksv.me/qZfrDQ\nhttps://csc.maksv.me/M8OTjo", "username": "Maks_V" }, { "code": "", "text": "It looks that tech support of MongoDB Atlas that is not reachable by another way is not interested at least to answer you.", "username": "kulXtreme_N_A" } ]
Mongodb realm rules not working
2021-11-27T18:56:37.964Z
Mongodb realm rules not working
3,689
null
[ "swift" ]
[ { "code": "app.emailPasswordAuth.registerUser(email: email!, password: password!, completion: { [weak self] (error) in\n DispatchQueue.main.async {\n guard error == nil else {\n self!.signUpFailed(with: error!)\n return\n }\n self!.signIn(with: self!.email!, and: self!.password!)\n }\n })\napp.login(credentials: Credentials.emailPassword(email: email, password: password)) { [weak self] (result) in\n DispatchQueue.main.async {\n switch result {\n case .failure(let error):\n self!.signInFailed(with: error)\n return\n case .success(let user):\n self!.continueLoggingIn()\n }\n }\n }\n", "text": "I’m using MongoDB’s Atlas Device Sync (until recently it was called Realm Sync) to handle login for my iOS app, coded in Swift.I am UK based, and the app works fine for users in the UK. However, I recently sent the app to contacts in Eastern Europe (Poland, Belarus, potentially other countries as well. One person also tried logging in using a French VPN apparently) and they’ve all received the same error when creating an account or logging in with an already created account.The localised description of this error is “cannot parse response”.Unfortunately I am based in the UK so I can’t replicate it on my own device. However, I know that the error when creating an account is being thrown from the below code:And I know that the error when logging in to an already created account is being thrown from the below code:I’m at a bit of a loss here. I have no idea why the response can be parsed in the UK but not other countries. I assume it’s an issue with Mongo/Realm but I could be wrong. If anyone can shed any light it would be greatly appreciated.", "username": "Laurence_Collingwood" }, { "code": "https://realm.mongodb.com/groups/${GROUP_ID}/apps/${APP_ID}", "text": "Hi, can you send your app_id (it is the value in the url https://realm.mongodb.com/groups/${GROUP_ID}/apps/${APP_ID}). This is safe to send but if you prefer to email you can send it to me at [email protected]", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks for your response Tyler - the app_id is 6054d83e21b232a4b4a685b1.One other thing to note since I wrote my initial post is I checked the logs at the time at which one of the users in Poland tried and failed to log in/create an account, and no error logs showed up, nor any other logs relating to that users usage of the app.", "username": "Laurence_Collingwood" } ]
"Cannot Parse Response" error when logging in from certain countries using Atlas Device Sync (prev. Realm Sync)
2023-01-04T10:32:07.848Z
“Cannot Parse Response” error when logging in from certain countries using Atlas Device Sync (prev. Realm Sync)
1,196
null
[ "react-js" ]
[ { "code": "", "text": "I want to implement rest password by entering an email then getting the reset link in email, but I couldn’t is there any examples for that using realm app services?", "username": "deem_moh" }, { "code": "", "text": "Hi deem_mohWelcome to the MongoDB Community forums! We do have the following articles in our Developer Center that might be helpful to you:Additionally, our React Native SDK documentation also provides you with the specific methods you should use in order to implement Reset Password.I hope this helps.Regards,\nMar", "username": "Mar_Cabrera" } ]
Reset user password
2023-01-04T13:17:16.716Z
Reset user password
1,532
null
[ "aggregation", "node-js", "compass" ]
[ { "code": "", "text": "Hi, I need to create a query which will return documents where all the elements in an array must be present in an array field.I know how to do so using aggregation and the $all operator, I’m just nervous about performance.Both my input array and the array on the documents can have up to 256 elements, each being a 6-character long string (HTML color hex codes). I currently have over 2000 documents, but\nkeep adding more, could easily reach 10s of thousands some day. Seems like this could easily cost millions of operations to find a match (though i dont have much knowledge of how mongo works under the hood).My questions:", "username": "Jim_Bridger" }, { "code": "", "text": "Is the performance of this query something I should worry about?You should always worry about performance. But you should not spend time optimizing before you have performance issue. Make it right then make if fast.Should I find another solution such as keeping all data of all my documents in memory?The server is doing that for you as it tries to keep the working set in memory.If I do implement it, should I avoid doing it on page requests and only use it on the back end to cache things?Do not complicate your code with early optimization.Could indexing the field help? Would that be a bad idea on a field like this?Indexing fields used in queries always help. If you need that field, and that field is a frequent use-case, then yes index it. You may always remove the index later if you find it detrimental to your other use-cases.each being a 6-character long string (HTML color hex codes)Strings take more space and are slower to compare (a string a compared character per character while a number is compared in 1 operation). In your case I would keep the colours as a number. An hex code is a number after all.One smart technique I never thought before, presented in Atlas Search to return all the applicable filters for a given search query without specifying - #2 by Erik_Hatcher in a different context might be used here. You could, for example, have frequent colour schemes (the facet_attributes) represented by a single number, an _id in another collection. So you would query with $all in a much smaller table and then $lookup using the single number in the huge collection.", "username": "steevej" } ]
Question about performance with the $all operator
2023-01-03T01:25:47.863Z
Question about performance with the $all operator
1,001
null
[ "aggregation" ]
[ { "code": "{\n $lookup: {\n from: \"metadata\",\n let: {\n a_local: \"$a\",\n b_local: \"$b\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $eq: [\"$aEqv\", \"$$a_local\"] },\n { $eq: [\"$bEqv\", \"$$b_local\"] },\n ],\n },\n },\n },\n ],\n as: \"metadata\",\n },\n },\n{\"metadata.bEqv\": 1, \"metadata.aEqv\": 1, metadata.fieldOfInterest: 1}\nbEqv", "text": "The MongoDB docs at https://www.mongodb.com/docs/manual/reference/operator/aggregation/lookup/#perform-multiple-joins-and-a-correlated-subquery-with--lookup make the following statement (for 6.0):We have the following $lookup query in our pipeline:along with a compound index,This lookup will be used over a massive number of Documents so it must use an index. In this particular case it is not strictly critical that it uses the complete compound index - just the metadata.bEqv part of it would be ok as that has high cardinality, though if it could cover the whole query using the complete compound index that would be ideal.Unfortunately, the documentation does not provide any examples of more than 1 field path, so it is unclear whether an index will be used in the above query. Enlightenment would be much appreciated.", "username": "Jonathan_Goodwin" }, { "code": "", "text": "Hi @Jonathan_Goodwin,Apologies for the late response.it is unclear whether an index will be used in the above query. Enlightenment would be much appreciated.You can use the explain method to see a document with the query plan and, which index it’s utilizing.Although, a general rule of thumb one considers when dealing with compound indexes is that you don’t need an additional index if your query can be covered by the prefix of the existent compound index.I would highly suggest you read the following to cement your knowledge about indexes.Please let us know if there’s any confusion in this. Feel free to reach out for anything else as well.~ Kushagra", "username": "Kushagra_Kesav" } ]
Clarity required around use of indexes in uncorrelated sub-query using $lookup
2022-08-30T05:50:46.817Z
Clarity required around use of indexes in uncorrelated sub-query using $lookup
1,371
https://www.mongodb.com/…6_2_1024x800.png
[ "java" ]
[ { "code": "", "text": "How much I scored?\nimage2046×1600 247 KB\n", "username": "Jaikrat_Singh" }, { "code": "", "text": "is it sum of all divided by number of categories?", "username": "Jaikrat_Singh" }, { "code": "pass/fail", "text": "Hi @Jaikrat_Singh,Welcome to the MongoDB Community Forums In keeping with certification industry best practices, MongoDB has opted not to publish exam scores. We will continue to offer examinees a pass/fail result and topic-level performance percentages.Please reach out to [email protected] if you have further questions.Thank you,\nKushagra Kesav", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the cutoff to clear MongoDB Java Developer exam
2023-01-02T12:29:59.459Z
What is the cutoff to clear MongoDB Java Developer exam
1,849
https://www.mongodb.com/…46_2_1024x74.png
[]
[ { "code": "", "text": "Hi,I am using an apikey to show the status of a cluster in Atlas which i have paused.\nThe status is IDLE . Is it the correct one?\nimage1076×78 6.3 KB\n", "username": "Mugurel_Frumuselu" }, { "code": "atlas clusters pause dba-test2STATE", "text": "Hi @Mugurel_Frumuselu , could you share which command did you use to pause the cluster?\nIf you used atlas clusters pause dba-test2 then it’s the right STATE you see. To double check, you can try running the command again. If the cluster is already paused you’ll receive an error message informing you about it.", "username": "Jakub_Lazinski" }, { "code": "", "text": "@Jakub_Lazinski Regardless if the cluster is paused or not the status show “IDLE” in both cases.Is that intended?", "username": "Mugurel_Frumuselu" } ]
Atlas apikey not showing correct status
2022-12-21T11:58:24.165Z
Atlas apikey not showing correct status
1,111
null
[ "python" ]
[ { "code": "", "text": "Hi\nI have recently completed the learning path for MongoDB python developer , its mentioned in the course that once the learning path is completed we’ll be able to avail offer in the associate developer certification fee, but the same is not reflecting.\nPlease guide how to avail the above waiver for certification fee.\nThanks…", "username": "Ciddhesh_Sathasivam" }, { "code": "", "text": "Hi @Ciddhesh_Sathasivam,Welcome to the MongoDB Community Forums Please email our MongoDB certification team at [email protected]. They will be glad to help you out.Thanks,\nKushagra Kesav", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to avail mongodb associate developer certification waiver?
2022-12-29T14:47:24.082Z
How to avail mongodb associate developer certification waiver?
1,928
null
[ "aggregation", "dot-net" ]
[ { "code": "public class Instance\n{\n public long IId { get; set; }\n public string Name { get; set; }\n public long TemplateId { get; set; }\n public Template Template { get; set; }\n}\n\npublic class Template\n{\n public long TId { get; set; }\n public string Name { get; set; } \n public List<Sample> Samples { get; set; }\n}\n\npublic class Sample\n{\n public long SId { get; set; }\n public long TemplateId { get; set; }\n public long SampleData { get; set; }\n public string Name { get; set; } \n}\nvar data = _mongoDatabase.GetCollection<Instance>(\"Instance\").Aggregate()\n .Lookup(\"Templates\", \"TemplateId\", \"TId\", @as: \"Template\")\n .Lookup(\"Sample\", \"TemplateId\", \"TemplateId\", @as: \"Template.Samples\")\n .Unwind(\"Template\")\n .Unwind(\"Template.Samples\")\n .As<Instance>() \n .ToList();\n", "text": "Please help me out by letting me know how to write the syntax for joining two collections in MongoDB. I want all the entries of the left joined the collection. If there is no matching entry in the right collection then it should be populated as a blank field.see below code,I use the below code to join collections,Here is what happening, if there is a document in Instance collection and no relevant matching document is present in Template or Sample collection then Instance document is also not coming to the list.I want all Instance documents even if there is no matching Template or Sample document available.", "username": "Anonumose_Jack" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Join two documents in mongoDB even if there is no entry in the right joined collection
2022-12-29T08:54:32.363Z
Join two documents in mongoDB even if there is no entry in the right joined collection
969
null
[ "aggregation" ]
[ { "code": "", "text": "Hello,\nIf you can help me please \nI have array of strings like this: [“61edd59612d7aa0045a5e829”, “61edd59612d7aa0045a5e830”] for example. and want simple to get new array by appending each string to object contain those properties, like so:\n[ { “61edd59612d7aa0045a5e829”: { “lastMonth” : 0, “total” : 0 } },\n{ “61edd59612d7aa0045a5e830”: { “lastMonth” : 0, “total” : 0 } }\n]\nnotice that i can’t know the length of the input array,TNX ", "username": "mybs2323" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Way to chain a string and object with aggregation
2022-12-28T11:33:55.691Z
Way to chain a string and object with aggregation
912
null
[ "data-modeling" ]
[ { "code": "", "text": "I’m seeking advice for building a database.\ncurrently we have a database for a project which is mongodb, we are going to have another 2 systems almost with same domain goal just different data but almost same models with slight differences just like different types added for example.the need is the 3 systems will need to communicate data with each other, for example creating a user on a system should reflect on other 2 systems so whenever you are going to search for that user from any system you will find it, and also insertion should make sure that data is already there while inserting from system 1 a record that already exists in system 2.my recommendation is having multi tenancy, shared core , that has the needed data across all 3 systems, but despite having same structure per database i think it is better to separate them each system has its own data ( except for the users here ) .suggestions for designing databases and how systems will communicate ?", "username": "Yahya_Adel" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Multiple projects with almost same database structure best practice
2022-12-27T21:49:26.560Z
Multiple projects with almost same database structure best practice
1,135
https://www.mongodb.com/…68574bd7a92a.png
[ "dot-net" ]
[ { "code": "", "text": "Hello,\nI am having a weird DNS issue with the MongoDB.Driver.Apple M1 chip\nmacOS Ventura 13.1.0\nVisual Studio for Mac 17.4.2\nAll packages are up to date, MongoDB drivers & .NET packages\nScreenshot 2022-12-20 at 10.58.23 AM988×194 25.2 KB\nI have already seen this other thread, but am not sure what the outcome here was:\nSimilar ThreadI have also tried clearing and resetting the DNS settings on my machine, but still am getting these same errors. It seems like some other .NET driver got updated and now the MongoDB.Driver is not working.", "username": "George_Ely" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
‘The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception.’
2022-12-20T16:03:45.491Z
‘The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception.’
996
https://www.mongodb.com/…9_2_1024x576.png
[ "php" ]
[ { "code": "", "text": "I wrote an application (in PHP, JS, HTML and CSS) that allows you to administrate a MongoDB database via a Web browser. Project name is: MongoDB PHP GUI. It is open-source and free. Its features are:\nmpg-database-query1920×1080 98.7 KB\n", "username": "Samuel_Tallet" }, { "code": "", "text": "v1.0.5 introduces a new feature: You can import documents from a JSON file.new-mpg-collection-import1920×1080 64.1 KB", "username": "Samuel_Tallet" }, { "code": "", "text": "Since 1.0.6 version you can:newest-mpg-database-visualize1920×1080 90.7 KB", "username": "Samuel_Tallet" }, { "code": "", "text": "Since 1.0.7 version, you can create and drop users.mpg-manage-users1920×1080 51.9 KB", "username": "Samuel_Tallet" }, { "code": "", "text": "MongoDB PHP GUI is available at Docker Hub ", "username": "Samuel_Tallet" }, { "code": "{ \"firstname\": \"Samuel\" }\nfirstname: Samuel\n", "text": "Before 1.1.6 version, you had to query database with a strict JSON syntax. Example:Since 1.1.6 version, you can query database with a relaxed JSON syntax. Example:", "username": "Samuel_Tallet" }, { "code": "", "text": "MongoDB PHP GUI supports now advanced options such as Replica Set in “URI” connection mode. You can switch connection mode on login page:", "username": "Samuel_Tallet" }, { "code": "", "text": "MongoDB PHP GUI switched to dark theme. I hope you will like it! \nMongoDB PHP GUI - Introducing Dark Theme1920×1080 93 KB\n", "username": "Samuel_Tallet" }, { "code": "", "text": "@Samuel_Tallet, while creating an index through the MongoDB PHP UI does it, will create in background or in the foreground?", "username": "Rabikumar" }, { "code": "", "text": "Hi @Rabikumar,All currently supported versions of MongoDB server (4.2+) use an optimised index build process that removes the need for foreground vs background index builds:Previous versions of MongoDB supported building indexes either in the foreground or background. Foreground index builds were fast and produced more efficient index data structures, but required blocking all read-write access to the parent database of the collection being indexed for the duration of the build. Background index builds were slower and had less efficient results, but allowed read-write access to the database and its collections during the build process.Starting in MongoDB 4.2, index builds obtain an exclusive lock on only the collection being indexed during the start and end of the build process to protect metadata changes. The rest of the build process uses the yielding behavior of background index builds to maximize read-write access to the collection during the build. 4.2 index builds still produce efficient index data structures despite the more permissive locking behavior.For more information, see Index Builds on Populated Collections.Regards,\nStennie", "username": "Stennie_X" } ]
MongoDB PHP GUI – A Web interface for MongoDB
2020-08-15T12:28:13.143Z
MongoDB PHP GUI – A Web interface for MongoDB
25,429
null
[]
[ { "code": "\"Query\": [\n {\n \"name\": \"owner\",\n \"applyWhen\": {},\n \"read\": false,\n \"write\": {\n \"$or\": [\n {\n \"user1Id\": \"%%user.id\"\n },\n {\n \"user2Id\": \"%%user.id\"\n }\n ]\n }\n }\n ],\n", "text": "Hello, when trying to add a document to a collection over a realm function (Application Authentication) I get following error:role “owner” in “myApp.Query” does not have insert permission: cannot use ‘user1Id’ in expression; only %% like expansions and top-level operators may be used as top-level fieldsMy rule looks like this (I insert something into this collection with either user1Id or user2Id being set):", "username": "Thomas_Anderl" }, { "code": "", "text": "@ Thomas_Anderl did you end up finding a solution to your problem? I am facing a similar issue when using $or.", "username": "BenJ" }, { "code": "", "text": "I unfortunately did not, I changed the function to run with system priviliges as this functon does not work with sensitive data.", "username": "Thomas_Anderl" } ]
Missing insert permission
2022-11-09T09:06:22.538Z
Missing insert permission
1,635
https://www.mongodb.com/…e_2_1024x512.png
[]
[ { "code": "", "text": "I have a client project, which is deployed with “GLOBAL” deployment model. Preferred location is “us-east-1” by default. Customer has initiated a replica readonly node in “Mumbai” region to help us to validate the actual response time of realm-sdk function calls. (MongoDB instances are upgraded to M20 - 6.0 latest stable version).\nWithout changing any config, request are still going to US only which still have same response time as earlier with huge latency. I have tried to point “preferred_region” in real_config.json file as Mumbai then i see the request are flowing to Mumbai node.I was going through the following link from MongoDB:How is the set up of load balancer in AWS for MongoDB atlas which support multi-region request?\nCan you help me understand, how the request if flowing from different regions to target region?\nHow can i make the “preferred_region” config value be dynamically pointing to required region?\nWith a give config stated above context, how can i point realm-sdk functions to call specified region (Mumbai)?Thank you in advance.", "username": "RamaKishore_K" }, { "code": "", "text": "Hi @RamaKishore_K ,As far as I know with global deployment your application components (values, functions, config , rules etc) are duplicated across our global infrastructure and based on the client call will be served from the nearest region.Now you have to remember that any write operation will be performed from the preferred region which for optimization reasons should be as close to your primary as possible , best in the same reason.When you alter the preferred region to mumbai any function or operation with writes will start to run from there but it will slow your writes.You should do another configuration to help only reads get to mumbai without harming the writes.Its done in the linked data sources when editing your cluster settings and pointing read preference to nearest\n\nScreenshot 2023-01-04 at 08.57.512252×738 78.4 KB\nIf you have different components with different preference and region deployment requirments consider splitting them :\n1- Two different data links with different configuration (its possible to have more than one link to the same cluster and dynamically pick one for the specific use case).\n2 - Two different applications which split the responsibilities of your logic based on topology best fit for each.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,Thank you for the response, after changing the read_preferences to nearest i am able to see the hits to the node. Comparing the response Time and revert back if any questions.Thank you.", "username": "RamaKishore_K" } ]
How are the server less function request flows from the source requester to the cluster given GLOBAL Deployment?
2023-01-04T05:37:19.413Z
How are the server less function request flows from the source requester to the cluster given GLOBAL Deployment?
750
null
[ "aggregation" ]
[ { "code": "", "text": "I have two field name, email in an array format\nemail:[[email protected],[email protected]]\nname:[abc,123]I want to write it in this format\n[abc:[email protected],123:[email protected]]How can I use aggregation for that ,pls help", "username": "Abhishek_Upadhyay1" }, { "code": "$mapemail$indexOfArrayemail$arrayElemAtname$arrayToObject$replceRootdb.collection.aggregate([\n {\n $project: {\n items: {\n $map: {\n input: \"$email\",\n in: {\n k: {\n $arrayElemAt: [\n \"$name\",\n { $indexOfArray: [\"$email\", \"$$this\"] }\n ]\n },\n v: \"$$this\"\n }\n }\n }\n }\n },\n {\n $replaceRoot: {\n newRoot: { $arrayToObject: \"$items\" }\n }\n }\n])\n", "text": "Hello @Abhishek_Upadhyay1, Welcome to the MongoDB community forum,You can use something like this pipeline,", "username": "turivishal" } ]
Regarding Aggregation
2023-01-04T02:52:37.345Z
Regarding Aggregation
889
null
[ "aggregation" ]
[ { "code": "", "text": "Hi,We have several collections based on n tenants and want the kafka connector to only watch for specific collections.Below is my mongosource.properties file where I have added the pipeline filter to listen only to specific collections.It workspipeline=[{$match:{“ns.coll”:{\"$in\":[“ecom-tesla-cms-instance”,“ca-tesla-cms-instance”,“ecom-tesla-cms-page”,“ca-tesla-cms-page”]}}}]the collections will grow in the future to maybe 200 collections which have to be watched, wanted to know the below three thingsThanks\nHarinder Singh", "username": "Harinder_Singh" }, { "code": "", "text": "Can we also specify regex lookup , that watch all collections specific to some regex.\n@Robert_Walters", "username": "Harinder_Singh" }, { "code": "\"pipeline\" : \"[ { $match: { \\\"ns.coll\\\": { \\\"$in\\\": [\\\"<collection_1>\\\", \\\"<collection_2>\\\", \\\"<collection_3>\\\" ] } } } ]\",", "text": "You can accomplish that by setting the database property then use a pipeline to match the collection name, something like:\"pipeline\" : \"[ { $match: { \\\"ns.coll\\\": { \\\"$in\\\": [\\\"<collection_1>\\\", \\\"<collection_2>\\\", \\\"<collection_3>\\\" ] } } } ]\",", "username": "Robert_Walters" }, { "code": "", "text": "is their any limitation on the number of collections one connector can listen to ?", "username": "Harinder_Singh" }, { "code": "", "text": "No limit you’ll eventually run into a query size limit if you make your pipeline too long but outside of that there is no predefined limit", "username": "Robert_Walters" }, { "code": "", "text": "What configuration is needed in Sink connector to listen to multiple database changes? In source connector i can use pipeline filter to select databases and collections name and make database field empty, but in sink connector database field is mandatory. How can we sync changes in multiple databases(more than 4k count) to the respective database in another cluster?? It is impossible to add seperate connectors for each databases. As per the documentation sink connector can listen to multiple topics, but how it will write to different databases.?", "username": "Suraj_Santhosh" } ]
Mongo Kafka Connector Collection Listen Limitations
2022-05-12T05:18:25.013Z
Mongo Kafka Connector Collection Listen Limitations
2,688
null
[ "aggregation", "indexes" ]
[ { "code": "const match = {\n\t\tstatus: 1,\n\t\tcustomer: new Types.ObjectId(customer),\n\t};\n\n\t// add filters to match query if they exist in request payload\n\tif (userFilter) match.assignee = userFilter.assignee;\n\tif (clientFilter) match.client = clientFilter.client;\n\tif (typeFilter) match.type = typeFilter.type;\n\nBooking.aggregate([\n\t\t\t{\n\t\t\t\t$match: {\n\t\t\t\t\t...match,\n\t\t\t\t\tdays: {\n\t\t\t\t\t\t$elemMatch: {\n\t\t\t\t\t\t\t// filter bookings by start and end date\n\t\t\t\t\t\t\tstart: {\n\t\t\t\t\t\t\t\t$gte: new Date(start).toISOString(),\n\t\t\t\t\t\t\t\t$lte: new Date(end).toISOString(),\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\t$sort: { createdAt: 1 },\n\t\t\t}\n])\n", "text": "Hello,I have an aggregation query like the below on a collection of bookings, bookings have a days property which is an array of days, each day has a start and end date. The aggregation query returns all bookings where some of the days fall between a start and end date (start, end dates come from request payload). Every time this query is ran however I receieve an alert that Query Targeting: Scanned Objects / Returned has gone above 1000 but I canno’t seem to get the right index’s to help prevent this, I have also tried lots of variations of the below query all with no success.Any help would be greatly appreciated!", "username": "Tim_Horwood" }, { "code": "db.collection.explain(\"executionStats\")db.collection.getIndexes()db.collection.explain(\"executionStats\")", "text": "Hi @Tim_Horwood,Every time this query is ran however I receieve an alert that Query Targeting: Scanned Objects / Returned has gone above 1000 but I canno’t seem to get the right index’s to help prevent this, I have also tried lots of variations of the below query all with no success.On top this, have you also run the db.collection.explain(\"executionStats\") against the query(s) to verify that these are the offending queries? If there are other queries running at the same time, the alert may be caused by another query but this would help round it down.// add filters to match query if they exist in request payloadI’m curious about this particular section. Depending on the request payload contents, the most appropriate index could change. Do you have a few indexes on this collection already? If so, can you provide the output of db.collection.getIndexes()?I have also tried lots of variations of the below query all with no success.In addition to the requested information above, can you please provide:In addition to the above, perhaps the following may help:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Help with correct index's on collection to prevent Query Targeting: Scanned Objects / Returned has gone above 1000 alert
2022-12-17T14:01:41.611Z
Help with correct index’s on collection to prevent Query Targeting: Scanned Objects / Returned has gone above 1000 alert
1,404
null
[ "queries", "java", "atlas-cluster", "spring-data-odm" ]
[ { "code": "Exception in monitor thread while connecting to server ac-plamogs-shard-00-02.gxccyl8.mongodb.net:27017\n\ncom.mongodb.MongoSocketReadException: Prematurely reached end of stream\n\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:115) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:138) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:716) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:574) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:413) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:340) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:104) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:48) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:134) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.startHandshake(InternalStreamConnectionInitializer.java:76) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:185) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:198) ~[mongodb-driver-core-4.8.1.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:158) ~[mongodb-driver-core-4.8.1.jar:na]\nspring.data.mongodb.uri=mongodb+srv://<username>:<pwd>@trial.gxccyl8.mongodb.net/?retryWrites=true&w=majority\nspring.data.mongodb.database=trial\n", "text": "I am getting below exception while connecting to mongo db atlas from my spring boot applicationmy application.properties looks like thisplease anyone help here, getting this while deploying spring boot on local machine\njava version : 11\nspring boot version : 3.0.1", "username": "the_coding_camp" }, { "code": "", "text": "Hello @the_coding_camp ,Welcome to The MongoDB Community Forums! Please take a look at this thread, as a similar issue was resolved in this. In case you need more help, can you please share below details?Regards,\nTarun", "username": "Tarun_Gaur" } ]
Please help , i am getting exception while connecting to mongo atlas
2022-12-30T18:08:12.971Z
Please help , i am getting exception while connecting to mongo atlas
3,281
null
[ "golang" ]
[ { "code": "", "text": "I notice that we havepackage uuid // import “go.mongodb.org/mongo-driver/x/mongo/driver/uuid”But I’m not use whether this can be import from our app?go: finding module for package go.mongodb.org/mongo-driver/x/mongo/driver/uuid\ngo: downloading go.mongodb.org/mongo-driver v1.11.1\naccelbyte.net/customer-shared-justice-inventory-service/pkg/inventory/domain/models imports\ngo.mongodb.org/mongo-driver/x/mongo/driver/uuid: module go.mongodb.org/mongo-driver@latest found (v1.11.1), but does not contain package go.mongodb.org/mongo-driver/x/mongo/driver/uuid", "username": "Altiano_Gerung" }, { "code": "x/mongo/driver/uuidinternal/", "text": "Hey @Altiano_Gerung thanks for the question! The x/mongo/driver/uuid package was moved to an internal/ directory in the Go Driver v1.10.0 release and is not intended for use outside of the Go Driver. You should use another UUID library instead, like google/uuid or gofrs/uuid.", "username": "Matt_Dale" }, { "code": "", "text": "But it would be stored as string instead of UUID type wouldn’t it? I was thinking of using the same type as you have when using UUID(),it will result in better storage efficiency and possibly better performance, am I right?", "username": "Altiano_Gerung" }, { "code": "UUIDUUID[16]byte", "text": "The underlying type for the internal UUID type in the Go driver and the UUID types in the google/uuid and gofrs/uuid libraries are all [16]byte, so the BSON marshaling behavior should be the same for all 3 (they are marshaled as the BSON “binary” data type). There is currently no UUID type provided by the Go driver or Go BSON library that has any special marshaling/unmarshaling behavior.Check out an example of the default marshaling behavior on the Go Playground here.", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using build in UUID function from the driver in golang mongodb
2022-12-29T04:25:12.148Z
Using build in UUID function from the driver in golang mongodb
3,056
null
[ "node-js", "data-modeling" ]
[ { "code": "{\n _id: 10293812\n for_sale: [ 1000, 1001, 1002, ..., 1100 ]\n sold: []\n}\n{\n _id: 120938,\n employee: {\n _id: 102398734\n name: Huge Jackman\n },\n date: 2023/01/01,\n books:{\n book1: {\n _id: 7987234,\n serie: \"AA\",\n for_sale: [1000, 1001, ..., 1100],\n sold: []\n }\n },\n book2: {\n _id: 76598,\n seria: \"AA\",\n for_sale: [1200, 1201, ..., 1300],\n sold: []\n }\n}\n", "text": "Hi! I’m new to NoSQL Db’s and I’m having a seriously hard time figuring some stuff out. I’ve been watching some of the vidos en the MongoDB youtube channel and now I have a few guideliness to go with, however, I’m having a seriously hard time thinking what is the “correct” or “most efficient” way of building my documents for a specific project.I have several little “books” (really tiny) with 100 tickets each and the books can have different prices per ticket. They all have a specic series and a correlative. for example, AA 10290 to AA 10390, exactly 100.I need to prepare the “service” and give away a plastic bag with several of those books for them to go and sell each ticket. It should like something like this:01 Huge JackmanBook | N tickets\nAA 1000 - 1100 | 100\nAA 1200 - 1300 | 100\nAA 1400 - 1500 | 100When they come back, they could have sold the entire first book with 100 tickets. and come back with half of another.AA 1000 - 1100 | sold out\nAA 1200 - 1250 | 50\nAA 1400 - 1500 | 100Next “service” I prepare I will use the half sold book:AA 1250 - 1300 | 50\nAA 1400 - 1500 | 100\nAA 1600 - 1700 | 100And the process repeats.This is where the problem arrises. How do I keep track of all those tickets? should I create a document with 2 arrays for each book like this:assign the object id to the service document (or should i embed the entire book document in the service document?) and when the employee returns update the arrays values from for_sale to sold?According to some videos i watched, i should embed everything that can be embedded into just one document. 
So it should look like this:Each ticket of each book is unique with serie+correlative, I can’t have a repeated book with serie “AA” and correlative 1000 in any other service.Thank you so very much for your guidance!", "username": "Mauricio_Ramirez" }, { "code": "{\n _id: \"AA 1000 - 1100\",\n book: \"wordy\",\n for_sale: [ 1000, 1001, 1002, ..., 1100 ],\n sold: []\n}\n\n{\n _id: \"AA 1200 - 1300\",\n book: \"wordy\",\n for_sale: [ 1200, 1201, 1202, ..., 1300 ],\n sold: []\n}\n\n{\n _id: \"AA 1400 - 1500\",\n book: \"wordy\",\n for_sale: [ 1400, 1401, 1402, ..., 1500 ],\n sold: [],\npricePerUnit: 24.99\n}\n{\n _id: \"01 Huge Jackman\",\n booksToSell: [ \"AA 1000 - 1100\", \"AA 1200 - 1300\", \"AA 1400 - 1500\" ]\n}\n{\n \"_id\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\"),\n \"name\": \"Wordy\",\n \"pricePerTicket\": 24.99\n}\n{\n \"_id\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\"),\n \"seriesStart\": 1000,\n \"seriesEnd\": 1100,\n \"availableQuantity\": 10,\n \"books\": {\n \"BookId\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\")\n },\n \"employeeDetails\": {\n \"_id\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\"),\n \"name\": \"John Doe\"\n }\n},\n{\n \"_id\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\"),\n \"seriesStart\": 1100,\n \"seriesEnd\": 1200,\n \"availableQuantity\": 100,\n \"books\": {\n \"BookId\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\")\n },\n \"employeeDetails\": {\n \"_id\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\"),\n \"name\": \"John Doe\"\n }\n}\n", "text": "Hi @Mauricio_RamirezSo the schema you need can be determined only by the access pattern of your application, for example do you need to update each sale and track what exact tickets were sold in real time? per employee or per book?Or on the other hand do you only update the system once the employee comes back from his “shift”/“service”?In MongoDB the same dataset can look totatly different for each one of the use cases.For example, if you use the first approach and you need to query specific progress of a specific series as they progress with the sale you can end up with the following schema:If you need to have a more traditional schema where you will you only care about the total amounts sold per series and preparing on a start/end service basis. The following schema can be a good start:Let me know if that helps?Thanls\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you very much for your reply!Since it is very different depending on each use case, let me elaborate a bit more. (I’m really having a hard time getting outside the mindset of a sql relational db schema).I think I’m confusing you a bit by using the word “book”, I just really don’t know what they are called lol. It’s like a tiny ticket book used as a, kind of, receipt when someone takes the public transportation in my country. When someone gets on, they pay X depending the route distance’s fee, $0.25, $0.35, $0.45 … $1.50. Depending on that, they hand a single ticket with that cost from the ticketbook for that price/route. The person on shift can have several ticketbooks with the same price per ticket or have a mix of ticketbooks with different prices.I don’t think they need exact control over what number of ticket has been given (it leads me to believe the design should be like the second example you gave me), they just need the starting correlative and end correlative, so we will know that if the ticketbook starts with 1100 and comes back with 1125, it means the ticketbook has 75 tickets left. 
Ticket amount in each ticketbook is 100 fixed, never varies.I would assume that using your second schema example, for the second service I will create, when I enter the seriesStart: 1125 I will have to look for a service collection where 1125 is between (inclusive) seriesStart and seriesEnd, to automatically get the seriesEnd and the available quantity of 75. Would that be the proper approach?Thank you!!", "username": "Mauricio_Ramirez" }, { "code": "", "text": "Perhaps you can generate some sample document based on the second approach and show some CRUD that you need so we can verify it fits.Thanks", "username": "Pavel_Duchovny" }, { "code": "", "text": "Alrighty! I will just have to build part of the app that does that and then come back with the data, I was just planning upfront haha but I think it is better to have some data and then make corrections. Thank you!", "username": "Mauricio_Ramirez" }, { "code": "{\n \"_id\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\"),\ncurrentBookTickets: [{\n \"seriesStart\": 1000,\n \"seriesEnd\": 1100,\n \"availableQuantity\": 10,\n \"price\" : 0.75\n \"book\": {\n \"BookId\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\")\n }},\n{\n \"seriesStart\": 1100,\n \"seriesEnd\": 1200,\n \"availableQuantity\": 90,\n \"price\" : 0.25,\n \"book\": {\n \"BookId\": ObjectId(\"5f4d0c1234f6b8d7e98a1235\")\n }}],\n \"employeeDetails\": {\n \"_id\": ObjectId(\"5f4d0c1234f6b8d7e98a1234\"),\n \"name\": \"John Doe\"\n }\n}\n", "text": "Hi @Mauricio_Ramirez ,One alteration that you can do to the assignment document:This way a specific person can have different series in an array of assigned bookings …I recommend indexing predict like seriesStart and seriesEnd if you query by them as well as the user assigned (maybe even combined)I recommend the following reading and courses for uBest practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!Ty", "username": "Pavel_Duchovny" }, { "code": "", "text": "oohh okay! Pavel! I think that is the design I will go with! makes a lot of sense!And thank you a lot for the linked documentation, I will definitely read it!", "username": "Mauricio_Ramirez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
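A hedged mongosh sketch of closing out a shift against the last schema above: it advances seriesStart and decrements availableQuantity for the one matching book in currentBookTickets. The collection name, the filter, and the numbers are illustrative assumptions:

```javascript
// The employee came back having sold tickets 1100-1124 from one assigned book
db.assignments.updateOne(
  { "employeeDetails.name": "John Doe" },
  {
    $set: { "currentBookTickets.$[b].seriesStart": 1125 },
    $inc: { "currentBookTickets.$[b].availableQuantity": -25 }
  },
  {
    // arrayFilters selects the array element whose range contains the returned ticket number
    arrayFilters: [ { "b.seriesStart": { $lte: 1125 }, "b.seriesEnd": { $gte: 1125 } } ]
  }
);
```

Indexing "currentBookTickets.seriesStart" and "currentBookTickets.seriesEnd" (or the employee field, depending on how the lookup is done) would keep this query from scanning the whole collection, as suggested in the reply above.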
New to NoSQL/MongoDB and needing advice with schema
2023-01-01T19:09:14.557Z
New to NoSQL/MongoDB and needing advice with schema
1,643
null
[ "aggregation", "golang" ]
[ { "code": "", "text": "Hi, I am using Mongo shell version v5.0.8I am trying to count the documents, using aggregation pipeline with 3 lookups, in which the count is taking around 50s.My question is, when we use the count we are not fetching the documents, then why it takes so much time and how can we reduce it?Looking forward for your reply.", "username": "Sahildeep_Kaur" }, { "code": "", "text": "As requested by @K.V in your related thread pleaseshare your queries & indices details?In addition, I am worry about a use-case where you need 3 lookups to perform a count. A little bit more explanation might help us figure out what is the issue. Samples documents from your collections might also be useful.Size of data compared to system characteristics is also important to know.", "username": "steevej" }, { "code": "/* 1 */\n{\n \"explainVersion\" : \"1\",\n \"stages\" : [ \n {\n \"$cursor\" : {\n \"queryPlanner\" : {\n \"namespace\" : \"demo.bookings\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"arrival_date_time\" : {\n \"$lte\" : 1672223546.0\n }\n }, \n {\n \"status\" : {\n \"$not\" : {\n \"$in\" : [ \n 3.0, \n 6.0, \n 7.0, \n 8.0, \n 9.0\n ]\n }\n }\n }\n ]\n },\n \"queryHash\" : \"FDA0AD43\",\n \"planCacheKey\" : \"D5670236\",\n \"maxIndexedOrSolutionsReached\" : false,\n \"maxIndexedAndSolutionsReached\" : false,\n \"maxScansToExplodeReached\" : false,\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n \"_id\" : 1,\n \"address.address\" : 1,\n \"address.apt\" : 1,\n \"address_id\" : 1,\n \"checklist_pictures.booking_id\" : 1,\n \"checklist_pictures.tasks.subtask.photo_urls\" : 1,\n \"job_pictures.booking_id\" : 1,\n \"provider_ids\" : 1,\n \"uid\" : 1\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"SORT\",\n \"sortPattern\" : {\n \"arrival_date_time\" : -1\n },\n \"memLimit\" : 104857600,\n \"type\" : \"default\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"status\" : 1,\n \"arrival_date_time\" : 1,\n \"end_date_timestamp\" : 1,\n \"is_visible\" : 1\n },\n \"indexName\" : \"status_1_arrival_date_time_1_end_date_timestamp_1_is_visible_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"status\" : [],\n \"arrival_date_time\" : [],\n \"end_date_timestamp\" : [],\n \"is_visible\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"status\" : [ \n \"[MinKey, 3.0)\", \n \"(3.0, 6.0)\", \n \"(6.0, 7.0)\", \n \"(7.0, 8.0)\", \n \"(8.0, 9.0)\", \n \"(9.0, MaxKey]\"\n ],\n \"arrival_date_time\" : [ \n \"[-inf.0, 1672223546.0]\"\n ],\n \"end_date_timestamp\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"is_visible\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n \"_id\" : 1,\n \"address.address\" : 1,\n \"address.apt\" : 1,\n \"address_id\" : 1,\n \"checklist_pictures.booking_id\" : 1,\n \"checklist_pictures.tasks.subtask.photo_urls\" : 1,\n \"job_pictures.booking_id\" : 1,\n \"provider_ids\" : 1,\n \"uid\" : 1\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"SORT\",\n \"sortPattern\" : {\n \"arrival_date_time\" : -1\n },\n \"memLimit\" : 104857600,\n \"type\" : \"default\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"status\" : 1,\n \"arrival_date_time\" : -1\n },\n \"indexName\" : 
\"status_1_arrival_date_time_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"status\" : [],\n \"arrival_date_time\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"status\" : [ \n \"[MinKey, 3.0)\", \n \"(3.0, 6.0)\", \n \"(6.0, 7.0)\", \n \"(7.0, 8.0)\", \n \"(8.0, 9.0)\", \n \"(9.0, MaxKey]\"\n ],\n \"arrival_date_time\" : [ \n \"[1672223546.0, -inf.0]\"\n ]\n }\n }\n }\n }\n }\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 7960,\n \"executionTimeMillis\" : 1406,\n \"totalKeysExamined\" : 7966,\n \"totalDocsExamined\" : 7960,\n \"executionStages\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"nReturned\" : 7960,\n \"executionTimeMillisEstimate\" : 46,\n \"works\" : 15928,\n \"advanced\" : 7960,\n \"needTime\" : 7967,\n \"needYield\" : 0,\n \"saveState\" : 25,\n \"restoreState\" : 25,\n \"isEOF\" : 1,\n \"transformBy\" : {\n \"_id\" : 1,\n \"address.address\" : 1,\n \"address.apt\" : 1,\n \"address_id\" : 1,\n \"checklist_pictures.booking_id\" : 1,\n \"checklist_pictures.tasks.subtask.photo_urls\" : 1,\n \"job_pictures.booking_id\" : 1,\n \"provider_ids\" : 1,\n \"uid\" : 1\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 7960,\n \"executionTimeMillisEstimate\" : 24,\n \"works\" : 15928,\n \"advanced\" : 7960,\n \"needTime\" : 7967,\n \"needYield\" : 0,\n \"saveState\" : 25,\n \"restoreState\" : 25,\n \"isEOF\" : 1,\n \"docsExamined\" : 7960,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"SORT\",\n \"nReturned\" : 7960,\n \"executionTimeMillisEstimate\" : 6,\n \"works\" : 15928,\n \"advanced\" : 7960,\n \"needTime\" : 7967,\n \"needYield\" : 0,\n \"saveState\" : 25,\n \"restoreState\" : 25,\n \"isEOF\" : 1,\n \"sortPattern\" : {\n \"arrival_date_time\" : -1\n },\n \"memLimit\" : 104857600,\n \"type\" : \"default\",\n \"totalDataSizeSorted\" : 525360,\n \"usedDisk\" : false,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 7960,\n \"executionTimeMillisEstimate\" : 3,\n \"works\" : 7967,\n \"advanced\" : 7960,\n \"needTime\" : 6,\n \"needYield\" : 0,\n \"saveState\" : 25,\n \"restoreState\" : 25,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"status\" : 1,\n \"arrival_date_time\" : 1,\n \"end_date_timestamp\" : 1,\n \"is_visible\" : 1\n },\n \"indexName\" : \"status_1_arrival_date_time_1_end_date_timestamp_1_is_visible_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"status\" : [],\n \"arrival_date_time\" : [],\n \"end_date_timestamp\" : [],\n \"is_visible\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"status\" : [ \n \"[MinKey, 3.0)\", \n \"(3.0, 6.0)\", \n \"(6.0, 7.0)\", \n \"(7.0, 8.0)\", \n \"(8.0, 9.0)\", \n \"(9.0, MaxKey]\"\n ],\n \"arrival_date_time\" : [ \n \"[-inf.0, 1672223546.0]\"\n ],\n \"end_date_timestamp\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"is_visible\" : [ \n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\" : 7966,\n \"seeks\" : 7,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n }\n }\n }\n },\n \"nReturned\" : NumberLong(7960),\n \"executionTimeMillisEstimate\" : NumberLong(73)\n }, \n {\n \"$lookup\" : {\n \"from\" : \"users\",\n \"as\" : \"customer_info\",\n \"localField\" : \"uid\",\n \"foreignField\" : \"_id\",\n \"unwinding\" : {\n \"preserveNullAndEmptyArrays\" : false\n }\n },\n \"totalDocsExamined\" : NumberLong(7960),\n 
\"totalKeysExamined\" : NumberLong(7960),\n \"collectionScans\" : NumberLong(0),\n \"indexesUsed\" : [ \n \"_id_\"\n ],\n \"nReturned\" : NumberLong(7960),\n \"executionTimeMillisEstimate\" : NumberLong(319)\n }, \n {\n \"$lookup\" : {\n \"from\" : \"users\",\n \"as\" : \"provider_info\",\n \"localField\" : \"provider_ids\",\n \"foreignField\" : \"_id\"\n },\n \"totalDocsExamined\" : NumberLong(7897),\n \"totalKeysExamined\" : NumberLong(8356),\n \"collectionScans\" : NumberLong(0),\n \"indexesUsed\" : [ \n \"_id_\"\n ],\n \"nReturned\" : NumberLong(7960),\n \"executionTimeMillisEstimate\" : NumberLong(669)\n }, \n {\n \"$lookup\" : {\n \"from\" : \"users_address\",\n \"as\" : \"address\",\n \"localField\" : \"address_id\",\n \"foreignField\" : \"_id\",\n \"unwinding\" : {\n \"preserveNullAndEmptyArrays\" : true\n }\n },\n \"totalDocsExamined\" : NumberLong(0),\n \"totalKeysExamined\" : NumberLong(0),\n \"collectionScans\" : NumberLong(0),\n \"indexesUsed\" : [],\n \"nReturned\" : NumberLong(7960),\n \"executionTimeMillisEstimate\" : NumberLong(905)\n }, \n {\n \"$addFields\" : {\n \"full_address\" : {\n \"$toLower\" : [ \n {\n \"$concat\" : [ \n \"$address.apt\", \n {\n \"$const\" : \" \"\n }, \n \"$address.address\"\n ]\n }\n ]\n }\n },\n \"nReturned\" : NumberLong(7960),\n \"executionTimeMillisEstimate\" : NumberLong(905)\n }, \n {\n \"$lookup\" : {\n \"from\" : \"booking_job_pictures\",\n \"as\" : \"job_pictures\",\n \"localField\" : \"_id\",\n \"foreignField\" : \"booking_id\"\n },\n \"totalDocsExamined\" : NumberLong(5),\n \"totalKeysExamined\" : NumberLong(5),\n \"collectionScans\" : NumberLong(0),\n \"indexesUsed\" : [ \n \"booking_id\"\n ],\n \"nReturned\" : NumberLong(7960),\n \"executionTimeMillisEstimate\" : NumberLong(1148)\n }, \n {\n \"$lookup\" : {\n \"from\" : \"checklist_pictures\",\n \"as\" : \"checklist_pictures\",\n \"localField\" : \"_id\",\n \"foreignField\" : \"booking_id\",\n \"unwinding\" : {\n \"preserveNullAndEmptyArrays\" : true\n }\n },\n \"totalDocsExamined\" : NumberLong(0),\n \"totalKeysExamined\" : NumberLong(0),\n \"collectionScans\" : NumberLong(0),\n \"indexesUsed\" : [],\n \"nReturned\" : NumberLong(7960),\n \"executionTimeMillisEstimate\" : NumberLong(1350)\n }, \n {\n \"$match\" : {\n \"$or\" : [ \n {\n \"job_pictures.booking_id\" : {\n \"$exists\" : true\n }\n }, \n {\n \"$and\" : [ \n {\n \"checklist_pictures.booking_id\" : {\n \"$exists\" : true\n }\n }, \n {\n \"checklist_pictures.tasks.subtask.photo_urls\" : {\n \"$exists\" : true\n }\n }, \n {\n \"checklist_pictures.tasks.subtask.photo_urls.photo_url\" : {\n \"$exists\" : true\n }\n }, \n {\n \"checklist_pictures.tasks.subtask.photo_urls.photo_url\" : {\n \"$not\" : {\n \"$eq\" : \"\"\n }\n }\n }\n ]\n }\n ]\n },\n \"nReturned\" : NumberLong(5),\n \"executionTimeMillisEstimate\" : NumberLong(1350)\n }, \n {\n \"$group\" : {\n \"_id\" : {\n \"$const\" : null\n },\n \"count\" : {\n \"$sum\" : {\n \"$const\" : 1.0\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\" : {\n \"count\" : NumberLong(72)\n },\n \"totalOutputDataSizeBytes\" : NumberLong(229),\n \"usedDisk\" : false,\n \"nReturned\" : NumberLong(1),\n \"executionTimeMillisEstimate\" : NumberLong(1350)\n }\n ],\n \"serverInfo\" : {\n \"host\" : \"iron-System-Product-Name\",\n \"port\" : 27017,\n \"version\" : \"5.0.8\",\n \"gitVersion\" : \"c87e1c23421bf79614baf500fda6622bd90f674e\"\n },\n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600,\n 
\"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\" : 0,\n \"internalQueryMaxAddToSetBytes\" : 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600\n },\n \"command\" : {\n \"aggregate\" : \"bookings\",\n \"pipeline\" : [ \n {\n \"$match\" : {\n \"status\" : {\n \"$nin\" : [ \n 3.0, \n 9.0, \n 6.0, \n 7.0, \n 8.0\n ]\n },\n \"arrival_date_time\" : {\n \"$lte\" : 1672223546.0\n }\n }\n }, \n {\n \"$sort\" : {\n \"arrival_date_time\" : -1.0\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"users\",\n \"localField\" : \"uid\",\n \"foreignField\" : \"_id\",\n \"as\" : \"customer_info\"\n }\n }, \n {\n \"$unwind\" : \"$customer_info\"\n }, \n {\n \"$lookup\" : {\n \"from\" : \"users\",\n \"localField\" : \"provider_ids\",\n \"foreignField\" : \"_id\",\n \"as\" : \"provider_info\"\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"users_address\",\n \"localField\" : \"address_id\",\n \"foreignField\" : \"_id\",\n \"as\" : \"address\"\n }\n }, \n {\n \"$unwind\" : {\n \"path\" : \"$address\",\n \"preserveNullAndEmptyArrays\" : true\n }\n }, \n {\n \"$addFields\" : {\n \"full_address\" : {\n \"$toLower\" : {\n \"$concat\" : [ \n \"$address.apt\", \n \" \", \n \"$address.address\"\n ]\n }\n }\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"booking_job_pictures\",\n \"localField\" : \"_id\",\n \"foreignField\" : \"booking_id\",\n \"as\" : \"job_pictures\"\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"checklist_pictures\",\n \"localField\" : \"_id\",\n \"foreignField\" : \"booking_id\",\n \"as\" : \"checklist_pictures\"\n }\n }, \n {\n \"$unwind\" : {\n \"path\" : \"$checklist_pictures\",\n \"preserveNullAndEmptyArrays\" : true\n }\n }, \n {\n \"$match\" : {\n \"$or\" : [ \n {\n \"job_pictures.booking_id\" : {\n \"$exists\" : true\n }\n }, \n {\n \"$and\" : [ \n {\n \"checklist_pictures.booking_id\" : {\n \"$exists\" : true\n }\n }, \n {\n \"checklist_pictures.tasks.subtask.photo_urls\" : {\n \"$exists\" : true\n }\n }, \n {\n \"checklist_pictures.tasks.subtask.photo_urls.photo_url\" : {\n \"$exists\" : true,\n \"$ne\" : \"\"\n }\n }\n ]\n }\n ]\n }\n }, \n {\n \"$group\" : {\n \"_id\" : null,\n \"count\" : {\n \"$sum\" : 1.0\n }\n }\n }\n ],\n \"cursor\" : {},\n \"$db\" : \"demo\"\n },\n \"ok\" : 1.0\n}\n", "text": "Thank you @steevej for reply.The explain stats for query is:This query is executed on the sample data with comparatively less docs and it is taking 1.5s to execute.", "username": "Sahildeep_Kaur" }, { "code": "", "text": "As already mentioned, pleaseshare your queriesindices detailsuse-case … explanationdocuments from your (3) collectionsSize of dataandsystem characteristics", "username": "steevej" }, { "code": "executionTimeMillisEstimate$lookup$match$lookup$lookup\"bookings\"$lookup", "text": "My question is, when we use the count we are not fetching the documents, then why it takes so much time and how can we reduce it?The vast majority of the execution time estimated by the explain plan output (looking at the executionTimeMillisEstimate fields) is running the $lookup stages. While counting documents doesn’t return the documents, MongoDB does still have to access all of those documents to evaluate the $match or other filter stages. 
In that case, counting is not going to be significantly faster than returning all matching documents since most of the execution time is spent running the $lookup stages. You may be able to speed up your query by filtering the “joined” dataset as early as possible to reduce the execution time of the subsequent $lookup stages or by moving some of the “joined” documents into the \"bookings\" documents so you don’t have to run a $lookup at all.", "username": "Matt_Dale" } ]
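One way to apply the advice in the last reply is to push the picture checks into the $lookup itself. The sketch below is hypothetical mongosh that reuses the collection and field names from the explain output; the checklist condition is simplified, and the status/date values are the ones shown in the thread. With a $limit: 1 inside the sub-pipeline, far fewer joined documents are materialised before $count runs.

```js
db.bookings.aggregate([
  { $match: {
      status: { $nin: [3, 9, 6, 7, 8] },
      arrival_date_time: { $lte: 1672223546 }
  } },
  { $lookup: {
      from: "booking_job_pictures",
      let: { bid: "$_id" },
      pipeline: [
        { $match: { $expr: { $eq: ["$booking_id", "$$bid"] } } },
        { $limit: 1 },               // one match is enough to know pictures exist
        { $project: { _id: 1 } }
      ],
      as: "job_pictures"
  } },
  // the same pattern would apply to the checklist_pictures lookup
  { $match: { "job_pictures.0": { $exists: true } } },
  { $count: "count" }
])
```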
Count with aggregation lookup takes too long
2022-12-22T11:41:31.677Z
Count with aggregation lookup takes too long
2,497
null
[ "queries", "swift" ]
[ { "code": "collection.findimport MongoSwiftSync\n...\n\n func test() {\n \n // my holidays collection has a bunch of entries like:\n // _id: 639dacae733952abc2be7c8e\n // date_no_dash: 20220101\n // holiday: \"New Year's Day\"\n \n struct holiday: Codable {\n let _id: BSONObjectID\n let date_no_dash: Int\n let holiday: String\n }\n \n let year = 2022 // the year to query holidays for\n let buffer = 2 // the amount of years to buffer on either side\n \n let collection = mongoDB.collection(\"holidays\", withType: holiday.self)\n \n let queryFilter: BSONDocument = [\n \"date_no_dash\": [\n \"$gte\": BSON((year - buffer) * 10_000),\n \"$lte\": BSON(((year + buffer) * 10_000) - 1)\n ]\n ]\n \n do {\n let tmp = try collection.find(queryFilter) { result in\n switch result {\n case .failure(let error):\n print(\"error:\\(error)\")\n case .success(let documents):\n for doc in documents {\n print(\"document \\(doc)\")\n }\n }\n }\n } catch {\n print(\"Error trying to find on collection: \\(error)\")\n }\n \n }\nqueryFilter", "text": "Hi, I wanted to kick the tires on the mongodb driver in swift. I am making a simple mac application and I can’t seem to query the database.I tried following the code examples in the documentation, but they seem to be a bit out of date.I have tried the following code, but get a compiler error on the collection.find line. “Type of expression is ambiguous without more context”I am also not sure if the queryFilter will work as I intend. But in general I find the result fetching with a switch statement fairly cumbersome. Isn’t there a more simple way?\nThanks", "username": "Joseph" }, { "code": "for result in try collection.find(queryFilter) {\n switch result {\n case .success(let doc):\n print(doc)\n case .failure(let error):\n print(\"error: \\(error)\")\n }\n}\nswitchResult.get()for result in try collection.find(queryFilter) {\n print(try result.get())\n}\nResult.get()", "text": "Hi @Joseph,Thank you for getting in touch. The documentation examples you link to are for the Realm Swift SDK, not the MongoDB Swift driver.To iterate through the results in the Swift driver, the code would look like this:In general, to avoid a switch statement on a Result, you can call .get() instead, like:See the docs for Result.get() here. This method will either return the contained value on success, or throw the contained error on failure.", "username": "kmahar" } ]
Having difficulty filtering a collection
2022-12-27T07:17:27.241Z
Having difficulty filtering a collection
1,584
https://www.mongodb.com/…2_2_1024x346.png
[ "node-js", "mongoose-odm" ]
[ { "code": "totalPriceprice of specific productPOSTtotalPriceconst OrderSchema = new mongoose.Schema({\n userId: {type: mongoose.Schema.Types.ObjectId, ref: 'User'},\n products: [\n {\n productId:{\n type: mongoose.Schema.Types.ObjectId, ref: 'Product'\n },\n quantity: {\n type: Number,\n default: 1,\n },\n sellerId: {\n type: mongoose.Schema.Types.ObjectId, ref: 'User' \n },\n totalPrice: {\n type: Number,\n default: 0,\n }\n\n }\n ],\n\n}, {timestamps: true}\n)\n\nexport default mongoose.model('Order', OrderSchema)\nconst handleSubmit = async (e) =>{\n \n e.preventDefault()\n if(orderSummary?.location === \"\"){\n toast.error(\"Please input location..\")\n }else{\n try {\n await userRequest.post(`/order`,{\n userId: currentUser._id,\n products: cart.products.map((item) =>({\n productId: item._id,\n quantity: item.quantity,\n sellerId: item.seller_id._id,\n totalPrice: Number(item.quantity * item._id.price)\n \n })),\n }) \n\n } catch (error) {\n toast.error(\"Please put a Location and Time!\")\n }\n }\n \n}\n369 {\n \"_id\": \"63b1b4de5a95bd4df7f9443b\",\n \"userId\": {\n \"_id\": \"63b18af8363f51fa50801dd0\",\n \"studentId\": \"1234567892\"\n },\n \"products\": [\n {\n \"productId\": {\n \"_id\": \"63b16fc58fe585c7b81c748d\",\n \"title\": \"asd\",\n \"price\": \"123\"\n },\n \"quantity\": 3,\n \"sellerId\": {\n \"_id\": \"63b160689f50f852e056afaf\",\n \"studentId\": \"1234567890\"\n },\n \"_id\": \"63b1b4de5a95bd4df7f9443c\"\n },\n {\n \"productId\": {\n \"_id\": \"63b16ff08fe585c7b81c7496\",\n \"title\": \"asd21\",\n \"price\": \"213\"\n },\n \"quantity\": 3,\n \"sellerId\": {\n \"_id\": \"63b160689f50f852e056afaf\",\n \"studentId\": \"1234567890\"\n },\n \"_id\": \"63b1b4de5a95bd4df7f9443d\"\n }\n ],\n\n }\n{\n \"_id\": \"63b1b4de5a95bd4df7f9443b\",\n \"userId\": {\n \"_id\": \"63b18af8363f51fa50801dd0\",\n \"studentId\": \"1234567892\"\n },\n \"products\": [\n {\n \"productId\": {\n \"_id\": \"63b16fc58fe585c7b81c748d\",\n \"title\": \"asd\",\n \"price\": \"123\"\n },\n \"quantity\": 3,\n \"sellerId\": {\n \"_id\": \"63b160689f50f852e056afaf\",\n \"studentId\": \"1234567890\"\n },\n \"totalPrice\": \"369\"\n },\n {\n \"productId\": {\n \"_id\": \"63b16ff08fe585c7b81c7496\",\n \"title\": \"asd21\",\n \"price\": \"213\"\n },\n \"quantity\": 3,\n \"sellerId\": {\n \"_id\": \"63b160689f50f852e056afaf\",\n \"studentId\": \"1234567890\"\n },\n \"totalPrice\": \"639\"\n }\n ],\n\n }\n", "text": "How can I compute the totalPrice of specific item, I’m trying to multiply the quantity and the price of specific product and then POST it, but I’m having a hard time on how can I compute the totalPrice of specific itemOrderSchema.jsOrder.jsin this image, I want to get the total amount of specific product, so the first product should have a 369 .image1630×552 31.1 KBBut when I call my get all orders, this is what I getWhat I’m trying to get here when I call order.js is the totalPrice, like this example.", "username": "EMMANUEL_JOSEPH_CRUZ" }, { "code": "", "text": "How are you getting the data currently ?", "username": "santimir" } ]
Is there anyway I can compute the totalPrice of specific product?
2023-01-01T16:58:19.683Z
Is there anyway I can compute the totalPrice of specific product?
1,300
null
[]
[ { "code": "operation was interrupted because a client disconnected\nerrMsg:\"Encountered non-retryable error during query :: caused by :: operation was interrupted\" errName:ClientDisconnect errCode:279\n2021-08-12T11:58:58.690+0800 I CONNPOOL [TaskExecutorPool-0] Ending connection to host 10.10.12.137:27002 due to bad connection status: InternalError: Connection is in an unknown state; 16 connections to that host remain open\n2021-08-12T11:59:09.693+0800 I NETWORK [conn11062415] Marking host 10.10.12.137:27002 as failed :: caused by :: NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit\n", "text": "mongs version: 4.2.0\ntaskExecutorPoolSize=1,\nShardingTaskExecutorPoolMaxConnecting=2,\nShardingTaskExecutorPoolMinSize=1\nClient set socketTimeout=10\nI got a lot of errors as below, even after restart mongos.Socket connections from client to mongos increase, but connections from mongos to sharding server keeps.I want to knowI am new to mongodb, and may describe not very clear. If other info is needed to solove it, please tell me.", "username": "dust_dn" }, { "code": "", "text": "socketTimeoutsocketTimeout is 10 seconds, sorry to mistake it .", "username": "dust_dn" }, { "code": "", "text": "any fix?i have same issue now", "username": "abinas_roy" }, { "code": "", "text": "did you solve this problem\nwe are encountering in 4.4.13we still not resolve this issue", "username": "giribabu_venugopal1" } ]
Error: Encountered non-retryable error during query :: caused by :: operation was interrupted" errName:ClientDisconnect errCode:279
2021-08-19T03:07:45.400Z
Error: Encountered non-retryable error during query :: caused by :: operation was interrupted” errName:ClientDisconnect errCode:279
4,017
null
[ "data-modeling", "realm-web" ]
[ { "code": "", "text": "My problem is that in an “endless” feed, there can be thousands upon thousands of elements. The client obviously shouldn’t (and doesn’t in apps like Twitter) download them all at once. Only small chunks get downloaded, like 30 elements, and then, IF the user scrolls far enough down, the next chunk loads.How could I achieve this functionality in realm? It is also very important to keep in mind the steady growing of new elements that get added to the server, so just making like a partition-1 for the first 30 elements, partition-2, partition-3 and so one will not work (or does it?).This is such a common theme I think there clearly has to be some kind of solution. I also haven’t really looked into flexible syncing, maybe that’s the solution? Excited to read your comments.", "username": "SirSwagon_N_A" }, { "code": "", "text": "I was wondering the same. I am currently developing (with flexible sync) an iOS app which contains user-generated content. From what I understand, for flexible sync, each subscription will locally download all the data based on its query, so having one subscription with a broad query won’t be scalable.\nI haven’t been able to come up with an elegant solution yet, it would be great if someone could chime in…", "username": "Sonisan" }, { "code": "", "text": "My approach, for now, is just doing it with functions.I have a function ‘loadNextDataChunk(int offset, queryParameters, startTime)’ which will return the next x amounts of data every time the user scrolled far enough down.", "username": "SirSwagon_N_A" }, { "code": "", "text": "Hello, I’m trying to achieve this exact functionality. Would you mind elaborating how you used the function to pass the results back to the realm app? (i.e the App service you used and the architecture) Thank you!", "username": "Timothy_Tati" }, { "code": "exports = function(query, skipAmount, limitAmount){\n let col = context.services.get(\"mongodb-atlas\").db(\"Database name\").collection(\"collection name\");\n \n let result = await col.find({'title': {'$regex': query, '$options': 'i'}}).skip(skipAmount).limit(limitAmount);\n if(result.modifiedCount == 0){\n console.log(\"Could not get results\");\n return false;\n }\n \n return JSON.stringify(result);\n};\n", "text": "Hello, my Realm function code looks like this:This starts at the newest data and goes further down to older posts as the user scrolls. The parameters skipAmount has to be maintained from client app and is just a a whole number of how many posts already got loaded, limitAmount is the amount to load each time the user reaches bottom.", "username": "SirSwagon_N_A" }, { "code": "", "text": "Hello, Thank you very much. This helped a lot!", "username": "Timothy_Tati" } ]
How to partition an endless feed? (for example reddit, twitter, facebook, youtube comments...)
2022-09-04T20:15:56.774Z
How to partition an endless feed? (for example reddit, twitter, facebook, youtube comments…)
3,434
null
[ "production", "c-driver" ]
[ { "code": "", "text": "Announcing 1.23.2 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.No changes since 1.23.1. Version incremented to match the libmongoc version.Bug fixes:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C Driver 1.23.2 Released
2023-01-03T16:05:56.884Z
MongoDB C Driver 1.23.2 Released
1,532
null
[ "queries", "node-js", "transactions" ]
[ { "code": "MongoServerError: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.\nexports.setArticleView = async (req, res) => {\n const userId = req.user._id;\n const { article_id } = req.body;\n\n // Start a session\n const session = await mongoose.startSession();\n\n try {\n // Start a transaction\n session.startTransaction();\n\n await ArticleView.create([{ articleId: article_id, user: userId }], { session });\n\n await Article.updateOne({ _id: article_id }, { $inc: { views: 1 } }, { session });\n\n // Commit the transaction\n await session.commitTransaction();\n\n return res.status(200).json({ success: true });\n } catch (error) {\n return res.status(500).json({ success: false });\n } finally {\n // End session\n session.endSession();\n }\n};\nmaxTransactionLockRequestTimeoutMillismaxTransactionLockRequestTimeoutMillismaxTransactionLockRequestTimeoutMillis", "text": "I am getting the the following error when I try to update only 2 documents with the transaction:From all the docs I read, this can be an issue when modifying more than 1000 documents, but since I am updating only 2, it should not throw any error.That being said, this error only happens sometimes, not always. Usually, the logic will pass without the error, but sometimes it is thrown randomly.MongoDB version: 5.0.14Here is the code:I think it’s the following scenario when the error happens:I assume this based on the following paragraph from the docs:If a multi-document transaction is in progress, new DDL operations that affect the same database(s) or collection(s) wait behind the transaction. While these pending DDL operations exist, new transactions that access the same database(s) or collection(s) as the pending DDL operations cannot obtain the required locks and and will abort after waiting maxTransactionLockRequestTimeoutMillis (which defaults to 5ms).I think it can probably be solved by increasing the maxTransactionLockRequestTimeoutMillis , but I wanted to check with the community. If I would increase maxTransactionLockRequestTimeoutMillis config, would that have some further implications?", "username": "NeNaD" }, { "code": "", "text": "Just a guess, but I’d think you’d have to commit the created document before you try to update it.", "username": "Jack_Woehr" }, { "code": "", "text": "Hi @Jack_Woehr, thanks for the answer!I am not sure what do you mean? I am creating document in one collection, and then updating document from other collection. It’s not like I am updating the same document that I created (even though I think that should work too).That being said, this error only happens sometimes, not always. Usually, the logic will pass without the error, but sometimes it is thrown randomly. (I just figured I didn’t mention this in the question itself and I think it’s important. Will updated the question).", "username": "NeNaD" }, { "code": "", "text": "is the system actually in production? Is other activity taking place?", "username": "Jack_Woehr" }, { "code": "", "text": "@Jack_Woehr Yup, it’s in production. 
No, this is the whole business logic in that endpoint.", "username": "NeNaD" }, { "code": "", "text": "Well, there certainly are a lot of caveats about transactions!\nUnless someone posts with an answer, I suggest you read carefully both of the following:If you don’t find anything there, you may need to file an issue.", "username": "Jack_Woehr" }, { "code": "maxTransactionLockRequestTimeoutMillismaxTransactionLockRequestTimeoutMillis", "text": "Thanks for the references! I already read them, but I couldn’t figure out what could be the exact issue.I assume it’s maybe the following scenario:I assume this based on the following paragraph from the docs:If a multi-document transaction is in progress, new DDL operations that affect the same database(s) or collection(s) wait behind the transaction. While these pending DDL operations exist, new transactions that access the same database(s) or collection(s) as the pending DDL operations cannot obtain the required locks and and will abort after waiting maxTransactionLockRequestTimeoutMillis (which defaults to 5ms).I think it can probably be solved by increasing the maxTransactionLockRequestTimeoutMillis, but I wanted to check with the community too.", "username": "NeNaD" }, { "code": "", "text": "@steevej do you know the answer to @NeNaD 's question?", "username": "Jack_Woehr" }, { "code": "", "text": "Thanks for tagging me. I have seen the post but have nothing to add at this moment. I still have not enough experience with transactions to have experimented this issue. In addition, it is with mongoose which I avoid.The only thing I would do it to try without mongoose. The issue might be mongoose.And as I write, I think that this code can should be done outside a transaction. You just create the ArticleView and if successful you $inc the Article/views. $inc is already atomic so if 2 ArticleView are created at the same time, which is also atomic, the count in $inc twice. The issue with transaction in this case, I think, is that you have 2 transactions starting at the more or less the same time. Both transaction take a snapshot of Article. One of the transaction terminate correctly. But now the snapshot of Article of the 2nd transaction is stale as it has been updated by the first transaction.I thought I had nothing to write. Thanks for letting me know I had.", "username": "steevej" }, { "code": "MongoServerErroruniqueArticleView", "text": "Hi @steevej,Thanks for the response, and @Jack_Woehr thanks for tagging him.I don’t think the issue is related to Mongoose at all, since Error clearly say it’s MongoServerError. Also, I don’t want do it without a transaction because I am tracking unique user views, so it could happen that ArticleView will be created and later it would fail to increase the article’s view count, which would lead to data inconsistency. Thank you for the answer though. ", "username": "NeNaD" }, { "code": "", "text": "@Stennie_X, @MaBeuLux88, @Pavel_Duchovny, @Jason_TranCan you please let me know your opinions on this if you don’t mind.Thank you in advance! 
", "username": "NeNaD" }, { "code": "Please retry your operation or multi-document transaction.", "text": "The only wayit would fail to increase the article’s view countis that if the code crashes between the ArticleView creation and the Article.views increment.But if you want to stick with transaction you have to do what the error message tells you to doPlease retry your operation or multi-document transaction.", "username": "steevej" }, { "code": "retrymaxTransactionLockRequestTimeoutMillis", "text": "Yup, I will implement retry logic, it’s just this error should not be thrown in the first place in my opinion, and that is why I raised the question.I am trying to figure out if the maxTransactionLockRequestTimeoutMillis config is the reason for the error, and if the error can be prevented by increase it (if yes, would that have some further implications…). ", "username": "NeNaD" }, { "code": "", "text": "it’s just this error should not be thrown in the first place in my opinionI beg to differ. Under load you are most likely to have 2 transactions started at almost the same time. Especially if you run multiple instances of your code. This is what the error is about. You have 2 transactions wanting to update the same resources, one have to wait so that the data is consistent at end.You decrease the timeout you increase the chance of having the error but you must retry more operations increasing the load on the server and reducing the capacity of the client.You increase the timeout you decrease the chance of having the error but you have more transactions awaiting which increases the load on the server and reduce the capacity of the client.My conclusion is the same. Doing it without transactions will provide the same consistency and better throughput.", "username": "steevej" } ]
MongoServerError: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction
2022-12-29T19:58:08.672Z
MongoServerError: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction
11,005
null
[ "aggregation", "compass", "connector-for-bi" ]
[ { "code": "\"CalculationDetails\": [\n {\n \"Code\": \"STEP_1_LABEL\",\n \"Parameters\": {\n \"PARAM_1_LABEL\": {\n \"string\": \"5.88\"\n },\n \"PARAM_2_LABEL\": {\n \"string\": \"5.0\"\n },\n \"PARAM_3_LABEL\": {\n \"string\": \"0.005\"\n }\n },\n \"Value\": 0.004900000058114529\n },\n {\n \"Code\": \"STEP_2_LABEL\",\n \"Parameters\": {\"PARAM_1_LABEL\": {\n \"string\": \"5.88\"\n },\n \"PARAM_4_LABEL\": {\n \"string\": \"0.001234\"},\n \"Value\": 5.881234\n },\n\"steps\":\n {\n \"STEP_1_LABEL\": 0.004900000058114529,\n \"STEP_2_LABEL\": 5.881234,\n }\n\"params\":\n {\n \"PARAM_1_LABEL\": \"5.88\",\n \"PARAM_2_LABEL\": \"5.00\",\n \"PARAM_3_LABEL\": \"0.005\",\n \"PARAM_4_LABEL\": \"0.001234\",\n }\n", "text": "Hello community,Despite the few hours spent searching here and there for a solution, I cannot get around. Here is my document structure:Main problematic: using the aggregation framework, I aim to build the following strucuture:I tried my way around $ObjectToArray, $ArrayToObject and $map, but couldn’t get to a conclusive result because my original document doesn’t have a “k”, “v” shape. I tried renaming “Code” into “k” and “Value” into “v”, but unsuccessfully.Other information: I’m trying to get to my result without using $unwind, because my collection is already 2M documents large, and each document may contain up to 25 steps. It’s not a problem for MongoDB per se, but it is for PowerBI when I export data using the BI connector (very poor performance).Also, I’m trying to avoid using $unwind + $group, because Compass can’t deal with such an amount of data with $group, at least with an M30 cluster.Any idea how to reach the desired output given the constraints above?Bonus question: ideally, I also wish my output to contain a second object:Parameters may appear multiple times (up to one time per step), however I require to output them only once.For the same reason as stated above, I wish to avoid using an $unwind + $group (even though it would do the job) because the amount of data crashes CompassThank you very much for your help and happy new year everyone!", "username": "Antoine_Delequeuche" }, { "code": "map = { \"$map\" : {\n \"input\" : \"$CalculationDetails\" ,\n \"in\" : { \"k\" : \"$$this.Code\" , \"v\" : \"$$this.Value\" }\n} }\nset_1 = { \"$set\" : { \"_steps\" : map } }\nset_2 = { \"$set\" : { \"steps\" : { \"$arrayToObject\" : \"$_steps\" } } }\n", "text": "$ArrayToObject and $map,Is the way to go.First aggregation stage is a $set with a $map to extract the appropriate values from CalculationDetailsThe second stage is another $set that uses $arrayToObject on the temp. _steps to produce the final steps object.:I will have to think a little bit more for the params part.You may of course do set_1 and set_2 in a single stage by replacing $_steps by the map but it is easier to understand, to debug and to modify when the stages are kept simple.", "username": "steevej" }, { "code": "set_params = { \"$set\" : {\n \"params\" : { \"$reduce\" : {\n \"input\" : \"$CalculationDetails\" ,\n \"initialValue\" : { } ,\n \"in\" : { \"$mergeObjects\" : [ \"$$value\" , \"$$this.Parameters\" ] }\n } }\n} }\n", "text": "As for params, I would use $reduce in a $set stage with $mergeObjects to assembles the Parameters in the params result. Something like:The field Parameters should be an array. 
Dynamic field names like PARAM_1_LABEL, … is never a good idea.", "username": "steevej" }, { "code": "[\n {\n $addFields: {\n results: {\n $map: {\n input:\n \"$CalculationDetails\",\n in: {\n k: \"$$this.Code\",\n v: \"$$this.Value\",\n },\n },\n },\n },\n },\n {\n $addFields: {\n results: {\n $arrayToObject: \"$results\",\n },\n },\n }\n]\n", "text": "Hello @steevej,Thank you for taking the time to look into my issue! Your solution works perfecty, thanks a lot! As the question was originally about an aggregation, I’ll repost your answer using that syntax, in case it could benefit other people:I decided to use the same variable in both aggregation stages, knowing that the result is saved as a view and that I can return to its aggregation builder in case of debug", "username": "Antoine_Delequeuche" }, { "code": "params: {\n \"PARAM_1_LABEL\": {\n string: \"5.88\"\n },\n \"PARAM_2_LABEL\": {\n string: \"5.00\"\n },\n(...)\n[\n {\n $addFields: {\n params: {\n $reduce: {\n input:\n \"$CalculationDetails\",\n initialValue: {},\n in: {\n $mergeObjects: [\"$$value\", \"$$this.Parameters\"],\n }\n }\n }\n }\n },\n {\n $addFields: {\n params: {\n $objectToArray: \"$params\",\n }\n }\n },\n {\n $addFields: {\n params: {\n $map: {\n input: \"$params\",\n in: {\n k: \"$$this.k\",\n v: \"$$this.v.string\",\n }\n }\n }\n }\n },\n {\n $addFields: {\n params: {\n $arrayToObject: \"$params\",\n }\n }\n }\n]\n\"params\":\n {\n \"PARAM_1_LABEL\": \"5.88\",\n \"PARAM_2_LABEL\": \"5.00\",\n }\n", "text": "Thank you @steevej for this insight as well. It gives me almost the output I’m looking for:To get the expected result, I added a few tweaks to your proposal:Output:", "username": "Antoine_Delequeuche" }, { "code": "", "text": "I always leave cosmetic stage as an exercise to the reader.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
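The same result can also be produced in a single $set stage by nesting the expressions from the two answers above; a sketch with a hypothetical collection name:

```js
db.calculations.aggregate([
  { $set: {
      steps: { $arrayToObject: { $map: {
        input: "$CalculationDetails",
        in: { k: "$$this.Code", v: "$$this.Value" }
      } } },
      params: { $arrayToObject: { $map: {
        input: { $objectToArray: { $reduce: {
          input: "$CalculationDetails",
          initialValue: {},
          in: { $mergeObjects: ["$$value", "$$this.Parameters"] }
        } } },
        in: { k: "$$this.k", v: "$$this.v.string" }
      } } }
  } }
])
```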
Getting lost with $ObjectToArray
2022-12-31T17:06:17.679Z
Getting lost with $ObjectToArray
1,989
null
[ "dot-net", "crud" ]
[ { "code": " at MongoDB.Bson.ObjectId.Parse(String s)\n at MongoDB.Bson.Serialization.Serializers.StringSerializer.SerializeValue(BsonSerializationContext context, BsonSerializationArgs args, String value)\n at MongoDB.Bson.Serialization.Serializers.SealedClassSerializerBase`1.Serialize(BsonSerializationContext context, BsonSerializationArgs args, TValue value)\n at MongoDB.Bson.Serialization.Serializers.SerializerBase`1.MongoDB.Bson.Serialization.IBsonSerializer.Serialize(BsonSerializationContext context, BsonSerializationArgs args, Object value)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Serialize(IBsonSerializer serializer, BsonSerializationContext context, Object value)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.SerializeNormalMember(BsonSerializationContext context, Object obj, BsonMemberMap memberMap)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.SerializeClass(BsonSerializationContext context, BsonSerializationArgs args, TClass document)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.Serialize(BsonSerializationContext context, BsonSerializationArgs args, TClass value)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Serialize[TValue](IBsonSerializer`1 serializer, BsonSerializationContext context, TValue value)\n at MongoDB.Driver.OperatorUpdateDefinition`2.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.CombinedUpdateDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.CombinedUpdateDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.CombinedUpdateDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.CombinedUpdateDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.CombinedUpdateDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.MongoCollectionImpl`1.ConvertWriteModelToWriteRequest(WriteModel`1 model, Int32 index)\n at System.Linq.Enumerable.<SelectIterator>d__174`2.MoveNext()\n at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)\n at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation..ctor(CollectionNamespace collectionNamespace, IEnumerable`1 requests, MessageEncoderSettings messageEncoderSettings)\n at MongoDB.Driver.MongoCollectionImpl`1.CreateBulkWriteOperation(IClientSessionHandle session, IEnumerable`1 requests, BulkWriteOptions options)\n at MongoDB.Driver.MongoCollectionImpl`1.<BulkWriteAsync>d__30.MoveNext()\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()\n at MongoDB.Driver.MongoCollectionImpl`1.<UsingImplicitSessionAsync>d__106`1.MoveNext()\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at 
System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()\n at MongoDB.Driver.MongoCollectionBase`1.<UpdateOneAsync>d__107.MoveNext()\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\n at Core.User.Infrastructure.NoSql.MongoDb.User.Repositories.UserRepository.<AddAsync>d__4.MoveNext() in\n", "text": "Hi Team, Please find below error for inserting document from .Net to MongoDb Collection.‘69f2cd42-b6a4-418f-a4c2-ece3d719ed6f’ is not a valid 24 digit hex string.Id Property is string GUID which is accepting when I am inserting document from MongoDbCompass or web Collection Explorer but whenever we insert the same from .net code, it generating exception.Details Exception:", "username": "Pravin_Yadav" }, { "code": "[BsonRepresentation(BsonType.ObjectId)]", "text": "is not a valid 24 digit hex string.Decorate your id property field with the following annotation.", "username": "Sudhesh_Gnanasekaran" }, { "code": " [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n [JsonProperty(\"id\")]\n public string Id { get; set; }\n {\n \"_id\": \"69f2cd42-b6a4-418f-a4c2-ece3d719ed6f\",\n \"UserName\": \"SC1005\",\n \"EmailID\": \"[email protected]\",\n \"Password\": \"APt/zqxocK9haWvu5Iv8+NgWSUuWzep8HrFFRcSkrOveYsBVQDYn7YfxFRVE99R8Mw==\",\n \"FullName\": \"Pravin Yadav\",\n \"LastPwdChangedDateTime\": \"2021-08-09T12:54:05.62+05:30\",\n \"UserGroup\": {\n \"_id\": \"69f2cd42-b6a4-418f-a4c2-ece3d719ed6f\",\n \"GroupName\": \"Administrator\"\n },\n \"UserVisibility\": {\n \"_id\": \"69f2cd42-b6a4-418f-a4c2-ece3d719ed6f\",\n \"GroupName\": \"Administrator\"\n },\n \"ShowLocalLanguage\": false,\n \"CreatedDetails\": {\n \"UserId\": \"69f2cd42-b6a4-418f-a4c2-ece3d719ed6f\",\n \"SessionId\": \"69f2cd42-b6a4-418f-a4c2-ece3d719ed6f\",\n \"UserName\": \"SC1005\",\n \"Timestamp\": \"2021-08-09T12:54:05.62+05:30\"\n }\n }\n", "text": "Still not worked:Property:JSON working when inserting from MongoDbCompass:", "username": "Pravin_Yadav" }, { "code": "", "text": "Could you please share the data model?", "username": "Sudhesh_Gnanasekaran" }, { "code": "public class UserResponse\n {\n private UserResponse()\n {\n }\n\n public UserResponse(string id, string userName, string emailID, string password, string fullName, UserGroupResponse userGroup, UserVisibilityResponse userVisibility, bool showLocalLanguage, DateTimeOffset? 
lastPwdChangedDateTime, bool active, bool isDeleted, ActivityDetails createdDetails, ActivityDetails lastUpdatedDetails)\n {\n Id = id;\n UserName = userName;\n EmailID = emailID;\n Password = password;\n FullName = fullName;\n UserGroup = userGroup;\n UserVisibility = userVisibility;\n ShowLocalLanguage = showLocalLanguage;\n LastPwdChangedDateTime = lastPwdChangedDateTime;\n Active = active;\n IsDeleted = isDeleted;\n CreatedDetails = createdDetails;\n LastUpdatedDetails = lastUpdatedDetails;\n }\n\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n [JsonProperty(\"id\")]\n public string Id { get; set; }\n public string UserName { get; set; }\n public string EmailID { get; set; }\n public string Password { get; set; }\n public string FullName { get; set; }\n public UserGroupResponse UserGroup { get; set; }\n public UserVisibilityResponse UserVisibility { get; set; }\n public bool ShowLocalLanguage { get; set; }\n public DateTimeOffset? LastPwdChangedDateTime { get; set; }\n public bool Active { get; set; }\n public bool IsDeleted { get; set; }\n public ActivityDetails CreatedDetails { get; set; }\n public ActivityDetails LastUpdatedDetails { get; set; }\n }", "text": "This is DataModel:", "username": "Pravin_Yadav" }, { "code": "", "text": "@Sudhesh_Gnanasekaran any luck on this or workaround. Please suggest !!", "username": "Pravin_Yadav" }, { "code": "[BsonRepresentation(BsonType.ObjectId)] public class UserResponse\n {\n [BsonId]\n public string Id { get; set; }\n\n public string UserName { get; set; }\n }\n\n static void Main(string[] args)\n {\n MongoClient dbClient = new MongoClient(\"mongodb://localhost:27017\");\n var database = dbClient.GetDatabase(\"test\");\n database.GetCollection<UserResponse>(nameof(UserResponse)).InsertOne(new UserResponse { Id = Guid.NewGuid().ToString(), UserName = \"Test User\" }); \n }", "text": "[BsonRepresentation(BsonType.ObjectId)]@Pravin_Yadav you need to decorate the model with this attribute [BsonId].", "username": "Sudhesh_Gnanasekaran" }, { "code": "", "text": "Hi Sir,I have tried the same but still getting same error i.e. ‘69f2cd42-b6a4-418f-a4c2-ece3d719ed6f’ is not a valid 24 digit hex string.", "username": "Pravin_Yadav" }, { "code": "", "text": "@Pravin_Yadav did you try to the run the code that I replied? By the way what is your c# driver and mongodb version.?", "username": "Sudhesh_Gnanasekaran" }, { "code": " private readonly IMongoDatabase _db;\n private readonly IMongoCollection<Application.User.Responses.UserResponse> _collection;\n private readonly MongoClient _client;\n public UserRepository(MongoClient client)\n {\n _client = client;\n _db = _client.GetDatabase(\"UserDatabase\");\n _collection = _db.GetCollection<UserResponse>(\"Users\");\n }\n\n public void Add(UserResponse item, CancellationToken cancellationToken)\n {\n _collection.InsertOne(item);\n }\n", "text": "Yes, tried the same. Below is MongoDb Driver:\n\nand DotNet 5", "username": "Pravin_Yadav" }, { "code": "", "text": "@Sudhesh_Gnanasekaran sir, any update or work arround !!", "username": "Pravin_Yadav" }, { "code": "", "text": "Were we able to solve this issue? I am getting the same error message.", "username": "Vishal_Vaibhav" }, { "code": " [BsonId]\n [JsonProperty(\"id\")]\n public string? Id { get; set; }\n", "text": "Declare like this works for me !!", "username": "Pravin_Yadav" } ]
Guid in _id in collection
2021-11-21T11:16:50.007Z
Guid in _id in collection
18,901
null
[]
[ { "code": "", "text": "Hi everyone,I am Heba and I am a Tech Lead, Technical Content Writer, and Data Scientist. I heard about MongoDB a year ago during my data science learning journey. I am glad to join the community and hope to share knowledge and gain motivation for further growth in my career.", "username": "Heba_Ahmed" }, { "code": "", "text": " Welcome to the MongoDB Community Forums @Heba_Ahmed – great to see you here!What data science tools or libraries are you using with MongoDB? We don’t have a specific forum category for data science (yet!) but I suspect there could be much more discussion around tools, analysis, and use cases.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "data science tools or libraries are you using with MongoDBThanks, @Stennie_X for your reply. I agree with you that there is a lot to be discussed around data extraction, data quality, analyzing, and visualizing data.\nCurrently, I am practicing data wrangling with MongoDB, I am using pymongo, Mongo Compass, and Atlas.", "username": "Heba_Ahmed" } ]
I am Heba from Egypt
2021-11-23T19:01:12.123Z
I am Heba from Egypt
4,361
null
[]
[ { "code": "", "text": "Hello All,\nCan anyone help me with process of implementing VAPT (Vulnerability assessment and penetration testing) report of my MongoDB atlas database.\nThankyou in advance.", "username": "Ashutosh_Mishra1" }, { "code": "", "text": "@Ashutosh_Mishra1 are you looking to obtain penetration testing report of Atlas or looking to pen test your MongoDB cluster in Atlas yourself? The cluster implements secure defaults such as always on TLS / authn / authz, and is not accessible from the Internet by default.", "username": "Salman_Baset" }, { "code": "", "text": "Hi @Salman_Baset , how can we do pentest of our own cluster?", "username": "Ashutosh_Mishra1" }, { "code": "", "text": "Hi @Ashutosh_Mishra1 , you can implement tests to ensure that your cluster configuration through Atlas is in line with your desired policies. For example, if your desired TLS level is 1.2, you can run simple nmap tests to check the desired TLS level on your cluster. The default TLS level on a cluster is 1.2", "username": "Salman_Baset" } ]
VAPT report of Atlas Database
2022-12-04T11:55:50.435Z
VAPT report of Atlas Database
1,481
null
[ "app-services-hosting" ]
[ { "code": "", "text": "Hi,How to restrict App Services / function access by IP?(I did a search before asking but only found an unanswered question here)Thanks", "username": "MBee" }, { "code": "", "text": "Hello,If I understand correctly you want to restrict clients that make requests to your app services e.g. calling functions etc. You can manage this from your IP Access List in your app services configuration.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Thanks, that looks good, just that when I gave it a try after your suggestion, it is not working for me –\nimage638×517 29.7 KB\nI.e., I allow only my IP, but when I visit the service from two other different sites, they’re still fine. Given that:Edits to this page will be updated in your application immediatelyI’m thinking that something is wrong. Hmm… Maybe …OK, here is more details – I hosted my web page at different places/sites, this is what I was hoping to control. However, my web page connect to app services directly via its javascript code. Does that mean that the connection is made from my IP to the app service (which then can explain why my above trials failed)? If so, then my question becomes, since I cannot list every single person’s IP in IP Access List, is there any better solution, like restrict by hosted sites / reference host etc?Or, thinking out of the box, can I apply rate control, since now it is every single person’s individual IP is concerned?", "username": "MBee" }, { "code": "%%request{\n \"%%request.httpReferrer\": { \"$in\": \"%%values.referrers\" }\n}\n", "text": "It sounds like you’re using the Realm Web SDK and calling functions this way.Does that mean that the connection is made from my IP to the app serviceYes it would be the ip of the person visiting the website.since I cannot list every single person’s IP in IP Access ListThe ip access list entry can be a CIDR notation including a range of addresses. Could you target the multiple addresses this way?Alternatively you can specific an Authorization Express in the function itself which has to evaluate to TRUE before running the function. Please see options for Expression Evaluations for e.g. you could use the %%request expansion to evaluate the referrer with a list specified in a value called “referrers”.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Thanks, I’ll take a look.FTA, as to the rate control, I took a further look, and have found this question, with a reply pointing to here, and I quite agree with all comments there, like Justin’s comment:I agree that if MongoDB App Services is to be used at scale and we are signing up for the potential of inflated costs due to unwanted requests, we should have the ability to protect ourselves via rate limiting of some kind. Otherwise, we just become very vulnerable.I upvoted it and hope it will be picked up.", "username": "MBee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Restrict function access by ip
2023-01-02T04:55:59.950Z
Restrict function access by ip
2,180
null
[ "compass" ]
[ { "code": "", "text": "Mongodb compass UI unresponsive when a collection size is more than 30MB, and average size of document could more than 5MB.Is there any reason why it is slow, or is there any limitations on collection or document sizes.", "username": "Naveen_K2" }, { "code": "", "text": "The slowness of Compass will have more to do with where your server is running, the capacity of your and how you are connected to your server.It will also depend on what else is running on the machine where Compass is running.", "username": "steevej" }, { "code": "", "text": "@Naveen_K2 can you share some more details about what’s slow?\nThere is a known issue about switching back and forth between JSON view and other views which we are planning to release a fix for soon.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Hi Marcon,Our Mongodb server was located in USA, I was working from India. So, there is a collection which has more than 200md in size and the average doc size is 4mb. While I’m trying to view that collection through MongoDB Compass its taking a while to load and also some times the entrie screen going to blank(white screen) which show only top left 3 options(connect, view , help)", "username": "Naveen_K2" }, { "code": "", "text": "Serverlocated in USAand Compassfrom Indiawithaverage doc size is 4mbseem to confirmThe slowness of Compass will have more to do with where your server is running, the capacity of your server and how you are connected to your server.", "username": "steevej" } ]
Mongodb compass UI unresponsive when a collection size is more than 30MB
2023-01-01T18:55:55.763Z
Mongodb compass UI unresponsive when a collection size is more than 30MB
1,474
null
[ "node-js", "app-services-user-auth" ]
[ { "code": "google-auth-librarygoogle-auth-library{\n token: {\n clientId: GoogleClientId,\n credential: GoogleIdToken\n select_by: 'btn'\n },\n custom_data:{\n occupation: \"Teacher\"\n }\n}\nconst express = require(\"express\")\nconst cookieParser = require(\"cookie-parser\");\nconst cors = require(\"cors\");\nconst app = express();\n\n//Middleware\napp.use(express.json());\napp.use(cookieParser());\napp.use(cors());\n\n//Google Auth\nconst PORT = process.env.PORT || 5000\napp.post(\"/login\", async (req, res) => {\n const payload = req.body\n const { token, custom_data} = payload\n const { OAuth2Client } = require(\"google-auth-library\");\n const clientId = process.env.GOOGLE_CLIENT_ID\n const client = new OAuth2Client(clientId);\n\n //verify google token \n async function verify() {\n const ticket = await client.verifyIdToken({\n audience: clientId,\n idToken: token.credential\n });\n const payload = ticket.getPayload();\n return payload;\n }\n const verifiedToken = await verify();\n res.send(verifiedToken);\n});\n\napp.listen(PORT, () => {\n console.log(`Server is running on Port ${PORT}`)\n})\nexports = async function(payload){\n const { \n token,\n custom_data\n } = payload;\n const { OAuth2Client } = require(\"google-auth-library\");\n const clientId = context.values.get(\"GOOGLE_CLIENT_ID\");\n const client = new OAuth2Client(clientId);\n\n //verify google token \n async function verify() {\n const ticket = await client.verifyIdToken({\n audience: clientId,\n idToken: token.credential\n });\n const payload = ticket.getPayload();\n return payload;\n }\n const verifiedToken = await verify();\n return verifiedToken;\n};\n", "text": "So my app requires a custom implementation of Google Authentication, since I pass additional parameters that the app gathers from users, upon sign up, and can even prevent an account creation based on some additional security parameters.Therefore, as per the documentation, I’m using the Custom Function provider to login my users.To implement Google Auth, there’s an essential step - to verify the Id token received from Google. The best way to do this, is to use google-auth-library which is provided directly from Google.I initially thought the problem lied in the library, but I setup a test server using node and express, and tested this verification step. There was no error, and I received the verified and decoded token. I setup the same exact function on MongoDB Realm, and received an error.Due to this, I’m wondering if Realm does not support some of google-auth-library packages. If this the case, would this be a new feature request, or would this be a bug, given how essential this is for any custom authentication that relies on Google?Example Payload:Test Express Server code:MongoDB Realm Error:TypeError: ‘end’ is not a functionMongoDB Realm Function:", "username": "Arky_Asmal" }, { "code": "", "text": "I am encountering this same issue.", "username": "Jeffrey_Pinyan1" }, { "code": "whatwg-url", "text": "Same problem here, it is obviously caused by a dependency called whatwg-urlMore details:", "username": "Khaled_Elyamany" }, { "code": "", "text": "I am encountering a similar issue (TypeError: ‘end’ is not a function) with the @google-cloud/storage package.", "username": "Thomas_Andersen" } ]
Custom Google Authentication Potential Bug
2022-03-29T13:56:13.341Z
Custom Google Authentication Potential Bug
3,952
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "strictQueryfalsemongoose.set('strictQuery', false);mongoose.set('strictQuery', true);node --trace-deprecation ...", "text": "Hello everyone,\nI am running into this error: MongoParseError: Protocol and host list are required in “mongodb+srv://admin:Mitchelle@@[email protected]/?retryWrites=true&w=majority//3001” did not\nconnect\n(node:7724) [MONGOOSE] DeprecationWarning: Mongoose: the strictQuery option will be switched back to false by default in Mongoose 7. Use mongoose.set('strictQuery', false); if you want to prepare for this change. Or use mongoose.set('strictQuery', true); to suppress this warning.\n(Use node --trace-deprecation ... to show where the warning was created)\nkindly help me to work about it because it is a blocker to me", "username": "Nalukoola_Allan_Ndaula" }, { "code": "", "text": "Could be due to special character in your password\nCan you connect by shell?\nUse a simple password or you have to escape the special characters from URI", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Yes. Source below for reference @Nalukoola_Allan_Ndaula :OP you could also see this post for how to generate simple and secure pwd:", "username": "santimir" } ]
Connecting mongoose to node js
2023-01-03T07:46:44.567Z
Connecting mongoose to node js
3,056
null
[ "aggregation" ]
[ { "code": "{\n\t\"_id\": {\n\t\t\"AccountID\": \"920d145d\",\n\t\t\"EntityName\": \"ABILITIES\"\n\t},\n\t\"EntityName\": \"ABILITIES\",\n\t\"Fields\": [{\n\t\t\"fieldName\": \"ABILITYCODE\",\n\t\t\"SourceDataType\": \"Edm.String\",\n\t\t\"KeyFlag\": true\n\t}, {\n\t\t\"fieldName\": \"ABILITYDES\",\n\t\t\"SourceDataType\": \"Edm.String\",\n\t\t\"KeyFlag\": false\n\t} ]\n}\n\n{\n\t\"_id\": {\n\t\t\"$oid\": \"63b3eab829615d432868f67e\"\n\t},\n\t\"SourceSystem\": \"Priority\",\n\t\"SourceDataType\": \"Edm.String\",\n\t\"MySQLDatatype\": \"TINYTEXT\"\n},\n{\n\t\"_id\": {\n\t\t\"$oid\": \"63b3ed1e29615d432868f67f\"\n\t},\n\t\"SourceSystem\": \"Priority\",\n\t\"SourceDataType\": \"Edm.Int64\",\n\t\"MySQLDatatype\": \"BIGINT(64)\"\n}\n{\n\t\"_id\": {\n\t\t\"AccountID\": \"920d145d\",\n\t\t\"EntityName\": \"ABILITIES\"\n\t},\n\t\"EntityName\": \"ABILITIES\",\n\t\"Fields\": [{\n\t\t\"fieldName\": \"ABILITYCODE\",\n\t\t\"SourceDataType\": \"Edm.Int64\",\n \"MySQLDatatype\": \"BIGINT(64)\",\n\t\t\"KeyFlag\": true\n\t}, {\n\t\t\"fieldName\": \"ABILITYDES\",\n\t\t\"SourceDataType\": \"Edm.String\",\n \"MySQLDatatype\": \"TINYTEXT\",\n\t\t\"KeyFlag\": false\n\t} ]\n}\n\n", "text": "Hi! I need help with $lookup, aggregate and set.\nI have 2 collections and I want to combine both of them. The issue is my document model\na lost of dictionaries…\nAppreciate the help, Thank! TalThe first collectionThe second collection:I want to turn the first collection to this:", "username": "Tal_Cohen" }, { "code": "", "text": "I think if you read the $lookup documentation carefully, you’ll see that the joins there resemble your problem rather closely. All you have to do then is upsert the join into a collection.", "username": "Jack_Woehr" } ]
Using $lookup, aggregate and set
2023-01-03T12:09:25.101Z
Using $lookup, aggregate and set
654
null
[ "data-modeling", "capacity-planning" ]
[ { "code": "", "text": "Hi good people ,I wanted to know is there any limit for creating databases in a single mongodb cluster ?If so what is the maximum number of databases that are allowed ?If the cluster can handle as many databases as I add, is there any issue for scalability with this design ?Assuming I have 5000 databases and 50 collections on each database, can a cluster handle this setup efficiently or is it even possible?", "username": "Riad_Hossain" }, { "code": "", "text": "Hello @Riad_Hossain!I am not aware of a particular “limit” in database quantity … as for scalability and efficiency, a bigger concern would be what is in your collections, etc.This post has a good breakdown of those concepts:", "username": "Justin_Jenkins" }, { "code": "", "text": "Hello @Justin_Jenkins,Thanks for the resource. This will help.", "username": "Riad_Hossain" } ]
How many databases can be created in a single mongodb cluster?
2023-01-02T22:17:39.447Z
How many databases can be created in a single mongodb cluster?
1,886
null
[ "replication", "sharding" ]
[ { "code": "2022-12-28T08:25:54.559+0000 I - [repl writer worker 12] k2db.cvis collection clone progress: 28273932/277825445 10% (documents copied)\n2022-12-28T08:26:18.757+0000 I ASIO [ShardRegistry] Connecting to 172.31.2.161:27001\n2022-12-28T08:26:56.983+0000 I - [repl writer worker 12] k2db.cvis collection clone progress: 34701253/277825445 12% (documents copied)\n2022-12-28T08:27:18.740+0000 I NETWORK [listener] connection accepted from 127.0.0.1:36958 #67 (13 connections now open)\n2022-12-28T08:27:18.741+0000 I NETWORK [conn67] received client metadata from 127.0.0.1:36958 conn67: { application: { name: \"MongoDB Shell\" }, driver: { name: \"MongoDB Internal Client\", version: \"4.0.1-rc0-2-g54f1582fc6\" }, os: { type: \"Linux\", name: \"CentOS Linux release 7.6.1810 (Core) \", architecture: \"x86_64\", version: \"Kernel 3.10.0-1160.81.1.el7.x86_64\" } }\n2022-12-28T08:27:56.240+0000 I - [repl writer worker 12] k2db.cvis collection clone progress: 39948059/277825445 14% (documents copied)\n2022-12-28T08:28:59.657+0000 I - [repl writer worker 12] k2db.cvis collection clone progress: 46496982/277825445 16% (documents copied)\n2022-12-28T08:29:06.112+0000 I NETWORK [listener] connection accepted from 127.0.0.1:36960 #68 (14 connections now open)\n2022-12-28T08:29:06.113+0000 I NETWORK [conn68] received client metadata from 127.0.0.1:36960 conn68: { application: { name: \"MongoDB Shell\" }, driver: { name: \"MongoDB Internal Client\", version: \"4.0.1-rc0-2-g54f1582fc6\" }, os: { type: \"Linux\", name: \"CentOS Linux release 7.6.1810 (Core) \", architecture: \"x86_64\", version: \"Kernel 3.10.0-1160.81.1.el7.x86_64\" } }\n2022-12-28T08:29:06.113+0000 I NETWORK [conn68] end connection 127.0.0.1:36960 (13 connections now open)\n2022-12-28T08:29:06.115+0000 I NETWORK [conn55] end connection 127.0.0.1:36934 (12 connections now open)\n2022-12-28T08:30:08.769+0000 I - [repl writer worker 12] k2db.cvis collection clone progress: 53216848/277825445 19% (documents copied)\n2022-12-28T08:30:18.754+0000 I SHARDING [LogicalSessionCacheReap] Refreshing cached database entry for config; current cached database info is {}\n2022-12-28T08:30:48.754+0000 I NETWORK [ShardRegistry] Marking host 172.31.2.161:27001 as failed :: caused by :: NetworkInterfaceExceededTimeLimit: timed out\n2022-12-28T08:30:48.754+0000 I SHARDING [ShardServerCatalogCacheLoader-1] Operation timed out with status NetworkInterfaceExceededTimeLimit: timed out\n2022-12-28T08:30:48.754+0000 I ASIO [ShardRegistry] Ending connection to host 172.31.2.161:27001 due to bad connection status; 9 connections to that host remain open\n2022-12-28T08:30:48.754+0000 I SHARDING [ShardServerCatalogCacheLoader-1] Refresh for database config took 30000 ms and failed :: caused by :: NetworkInterfaceExceededTimeLimit: timed out\n2022-12-28T08:30:48.754+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: timed out\n2022-12-28T08:30:48.754+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: timed out\n2022-12-28T08:31:13.205+0000 I - [repl writer worker 12] k2db.cvis collection clone progress: 60086778/277825445 21% (documents copied)\n2022-12-28T08:31:18.760+0000 I ASIO [ShardRegistry] Connecting to 172.31.2.161:27001\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 
172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:31:20.107+0000 I ASIO [ShardRegistry] Connecting to 172.31.80.10:27005\n2022-12-28T08:32:14.946+0000 I - [repl writer worker 12] k2db.cvis collection clone progress: 65520518/277825445 23% (documents copied)\n2022-12-28T08:33:23.297+0000 I - [repl writer worker 12] k2db.cvis collection clone progress: 72149091/277825445 25% (documents copied)\n", "text": "Hi,I’m using mongo 4.0.1-rc0-2-g54f1582fc6. I have a cluster on 2 shards and 1 config server, each of them is a 3 node replica set(standard cluster). The primary shard is the first shard where most of the data is present.When I’m trying to start secondary replica of the first shard, I’m getting a timeout error.This the secondary replica of first shard:1:STARTUP2> rs.printSlaveReplicationInfo()\nsource: 172.31.61.174:27001\nsyncedTo: Thu Jan 01 1970 00:00:00 GMT+0000 (UTC)\n1672216775 secs (464504.66 hrs) behind the primaryAttaching the sample log below:I have gone through the link here but this could not resolve my error.Any help would be much appreciated ", "username": "Vijay_Rajpurohit" }, { "code": "rs.status()sh.status()", "text": "Hi @Vijay_Rajpurohit and welcome to the MongoDB community forum!!I’m using mongo 4.0.1-rc0-2-g54f1582fc6.The MongoDB version 4.0 is a very old version and RC indicating Release Candidate (but not final release). I would recommend you to upgrade to the latest version 4.0 release which is 4.0.28 or to a supported version 4.2 or newer for major bug fixes and improvements.\nAlso, please note that, the minor upgrade (4.0.x) do not introduce any backward compatibility changes.I have a cluster on 2 shards and 1 config server,Config servers store essential metadata for a sharded cluster, including which shards own ranges of data for sharded collections. If something happens to their single config server their recovery path will likely be restoring the entire sharded cluster from a backup. For a production environment specifically, the recommendation would be to have multiple config servers to handle failures if any.Since it is possible that you’re seeing the effect of a fixed issue due to the version you’re using, upgrading the server to the latest supported version would be my first step. After the upgrade, if the issue persists, could you share the output for rs.status() and sh.status() from the primary members of the replica set and mongos respectively.Best Regards\nAasawari", "username": "Aasawari" } ]
Issue with Secondary replica set syncing: Network Interface Exceeded Time Limit: timed out
2022-12-28T08:46:12.124Z
Issue with Secondary replica set syncing: Network Interface Exceeded Time Limit: timed out
1,736
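For the initial-sync thread above, when gathering the requested output it helps to trim rs.status() down to member state and optime, and to confirm the syncing node can actually reach the config server member that keeps timing out. A rough mongosh sketch; the host and port are the ones from the posted log, everything else is illustrative.

    // On the primary of the affected shard replica set:
    rs.status().members.map(m => ({ name: m.name, state: m.stateStr, optime: m.optimeDate }))

    // On a mongos:
    sh.status()

    // From the stuck secondary's host, verify the config server member is reachable:
    //   mongo --host 172.31.2.161 --port 27001 --eval 'db.adminCommand({ ping: 1 })'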
https://www.mongodb.com/…8_2_1023x209.png
[ "dot-net" ]
[ { "code": "_idcollection.ReplaceOneUpsertcollection.ReplaceOne_IdAAAAAAAAAAAAAAAAAA==InsertOneGuidpublic class ContactModel\n{\n [BsonId]\n public Guid Id { get; set; }\n\n public string FirstName { get; set; }\n public string LastName { get; set; }\n public List<EmailAddressModel> EmailAddresses { get; set; } = new();\n public List<PhoneNumberModel> PhoneNumbers { get; set; } = new();\n}\npublic class MongoDBDataAccess\n{\n private static IMongoDatabase db;\n static string tableName = \"Contacts\";\n\n public void InsertRecord<T>(string table, T record)\n {\n var collection = db.GetCollection<T>(table);\n collection.InsertOne(record);\n }\n\n public void UpsertRecord<T>(string table, Guid id, T record)\n {\n var collection = db.GetCollection<T>(table);\n \n var result = collection.ReplaceOne(\n new BsonDocument(\"_id\", id),\n record,\n new ReplaceOptions { IsUpsert = true });\n }\n\n}\n//Program.cs\nMongoDBDataAccess db = new(\"MongoContactsDB\", GetConnectionString());\nstring tableName = \"Contacts\";\n\nContactModel user = new() { FirstName = \"Ceehaoi\", LastName = \"Corey\" };\nuser.PhoneNumbers.Add(new PhoneNumberModel(){ PhoneNumber = \"552115-555-5555\"});\nuser.PhoneNumbers.Add(new PhoneNumberModel(){ PhoneNumber = \"91wqqe239123\"});\nuser.EmailAddresses.Add(new EmailAddressModel(){ EmailAddress = \"[email protected]\"});\nuser.EmailAddresses.Add(new EmailAddressModel(){ EmailAddress = \"[email protected]\"});\n\ndb.UpsertRecord(tableName, user.Id, user);\n", "text": "TLDR: The same _id is always used when using collection.ReplaceOne on a newly instantiated object.Every time I try to Upsert a new unique record using collection.ReplaceOne , it just overwrites my previous record. As you can see in the screenshot the _Id for the Upsert is AAAAAAAAAAAAAAAAAA== . I’m not sure what is happening. I would like to be able to insert/update in 1 action.When using InsertOne they are created in the database with a unique Guid.This is my first experience with MongoDB so I’m not sure where I’m going wrong. I have been following a tutorial and my code is the same as the tutorials, however I’m having issues.\nimage1151×235 26.9 KB\n", "username": "austen_elam" }, { "code": "", "text": "You’re not setting a value for GUID or rather the an implicit empty Guid has been created, all 0’sA new Guid would have to be instantiated when you create the user object.", "username": "chris" }, { "code": "collection.InsertOne(record)", "text": "Thanks for the reply! Shouldn’t the database automatically generate the new guid so there are no duplicates? Like I said, my collection.InsertOne(record) is creating a Guid automatically.I’m not sure the correct way to implement this and any insight you have would be greatly appreciated.", "username": "austen_elam" } ]
Upsert Not Creating Unique GUID
2023-01-01T16:50:05.437Z
Upsert Not Creating Unique GUID
971
https://www.mongodb.com/…e_2_1024x512.png
[]
[ { "code": "", "text": "Found a few nice tutorials to learn Realm. It is great.I’m following this one atm:Just a little suggestion maybe: It would be great if we could access the Request/Response object interfaces for the Endpoint Functions. As described here..Would be nice to hear others’ experience or comments around this tutorials or Realm basics.Have a good day!", "username": "santimir" }, { "code": "", "text": "Hi @santimir,I think you are referring to Atlas App Services (formerly known as MongoDB Realm).Just a little suggestion maybe: It would be great if we could access the Request/Response object interfaces for the Endpoint Functions.Can you share more details on what you are looking for in terms of accessing the Request/Response objects?Regards,\nStennie", "username": "Stennie_X" } ]
Learning Realm is cool
2022-12-28T17:28:12.332Z
Learning Realm is cool
1,488
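For the endpoint-functions suggestion above: the Request/Response objects being asked about already surface in custom HTTPS endpoint functions as a payload argument and a response argument. The sketch below shows the general shape as I understand the App Services docs; treat the exact field names as assumptions and verify against the current documentation.

    // Sketch of an Atlas App Services HTTPS endpoint function.
    exports = async function ({ query, headers, body }, response) {
      // query, headers, and body describe the incoming request;
      // body arrives as binary, so convert it to text before parsing (assumes a JSON payload).
      const payload = body ? JSON.parse(body.text()) : {};

      response.setStatusCode(200);
      response.setHeader("Content-Type", "application/json");
      response.setBody(JSON.stringify({ received: payload, echoedQuery: query }));
    };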
https://www.mongodb.com/…6a9f0cd90c64.png
[ "aggregation", "queries", "node-js", "data-modeling", "serverless" ]
[ { "code": "", "text": "Currently, I have an Atlas serverless cluster with 1 million documents, roughly 2GB.If a query takes 10 seconds to run with my current environment, what kind of performance can I expect with each tier here?:\nScreenshot from 2022-12-29 03-14-401022×813 67.4 KB\n", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "Hi,This question isn’t directly answerable as it depends on your overall concurrent workload and resources. If you are expecting changes in performance (for better or worse), I would start by trying to understand how your query is limited by current resources.For example:Have you profiled time spent in different areas of your application’s request lifecyle (eg database processing, network transfer, application processing, and frontend rendering)?Are there significant memory, storage, or network limitations affecting your database throughput?Typically performance improvements are going to involve tuning across all areas involved in the request lifecycle.If the most limiting factor happens to be available RAM for query processing, upgrading to a larger database cluster tier would likely help. If the main performance challenge is time spent converting results from BSON (returned by the server) to native objects in your programming language or rendering in the frontend, changing your cluster resources is unlikely to improve that significantly.There may also be some performance wins that would avoid scaling up database resources such as checking:Your common queries efficiently indexedYou are optimising the data model and results returned. For example, Reducing the Size of Large Documents.Hope that helps!Regards,\nStennie", "username": "Stennie_X" } ]
What kind of performance can be expected at each tier?
2022-12-29T09:16:30.435Z
What kind of performance can be expected at each tier?
1,617
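For the performance question above, the first thing worth checking before paying for a bigger tier is whether the slow query is scanning the whole collection. A generic mongosh sketch; the collection name, filter, and index are placeholders, not details from the thread.

    // Inspect the winning plan and the work done for a representative slow query.
    const stats = db.orders.find({ status: "shipped", customerId: 123 })
                           .explain("executionStats");
    printjson({
      stage: stats.queryPlanner.winningPlan.stage,      // COLLSCAN vs IXSCAN (may be nested one level down)
      docsExamined: stats.executionStats.totalDocsExamined,
      returned: stats.executionStats.nReturned,
      ms: stats.executionStats.executionTimeMillis
    });

    // If the plan is a collection scan, an index that matches the predicate usually
    // buys more than a larger cluster tier would.
    db.orders.createIndex({ customerId: 1, status: 1 })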
null
[ "queries", "python", "database-tools" ]
[ { "code": "[\n {\n\n \"build_info\": {\n\n \"os_version\": \"Ubuntu-20.04\",\n\n \"scheduler\": \"bng-dev-icesch\",\n\n \"start_time\": {\n\n \"$date\": \"2022-12-22T12:18:06.844+0530\"\n\n },\n\n \"stop_time\": {\n\n \"$date\": \"2022-12-22T12:18:48.944+0530\"\n\n },\n\n \"user\": \"rhadagali\",\n\n \"build_status\": \"Passed\"\n\n \n\n }\n\n }\n]\n start_time: ISODate(\"2022-12-22T06:48:06.844Z\"),\n stop_time: ISODate(\"2022-12-22T06:48:48.944Z\"),\nstart_time: { '$date': '2022-12-22T12:18:06.844+0530' },\nstop_time: { '$date': '2022-12-22T12:18:48.944+0530' },\n", "text": "Json file exampleusing mongoimport (inside DB)using insert_one from pymongoShould i change pymongos insert_one to make it to take it as date datetype ?", "username": "Stuart_S" }, { "code": "import datetime\n\nstart_time = datetime.datetime(2022, 12, 22, 6, 48, 6, 844000)\nstop_time = datetime.datetime(2022, 12, 22, 6, 48, 48, 944000)\n\ndocument = {\n 'start_time': start_time,\n 'stop_time': stop_time\n}\n\nresult = collection.insert_one(document)\n", "text": "Hi @Stuart_S ,you can use the datetime module in Python to create datetime objects, and then pass those directly to insert_one as the values for the start_time and stop_time fields. pymongo will automatically convert the datetime objects to BSON date types when inserting them into the collection.For example:Will that work as expected.Thanks", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for the reply @Pavel_Duchovny , Should i change my json format ?", "username": "Stuart_S" }, { "code": "tart_time: ISODate(\"2022-12-22T06:48:06.844Z\"),\n stop_time: ISODate(\"2022-12-22T06:48:48.944Z\"),\n", "text": "I thought you wanted paymongo to produce:This is a natural way of storing it in MongoDB.If so you need to adjust the python code as suggested.Pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks @Pavel_Duchovny , i was able to import those fields as date datatype in DB", "username": "Stuart_S" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Inserting field with date datatype works with mongoimport but not with insert_one in pymongo
2022-12-27T07:06:34.807Z
Inserting field with date datatype works with mongoimport but not with insert_one in pymongo
4,178
null
[ "aggregation", "atlas-search" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"6397dc537457f02b0ca50a6c\"\n },\n \"route\": \"nonstop\",\n \"cities\": [\n {\n \"name\": \"Ontario\",\n \"layover\": {\n \"duration\": \"long\"\n }\n }\n ]\n}\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"cities\": {\n \"dynamic\": true,\n \"fields\": {\n \"layover.duration\": {\n \"type\": \"autocomplete\"\n },\n \"name\": {\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"embeddedDocuments\"\n },\n \"route\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n[{\n $search: {\n index: 'TravelSearch',\n embeddedDocument: {\n path: 'cities',\n operator: {\n autocomplete: {\n query: 'long',\n path: 'cities.layover.duration'\n }\n }\n }\n }\n}, {\n $addFields: {\n score: {\n $meta: 'searchScore'\n }\n }\n}]\n", "text": "Suppose I have a collection with documents that are structured like this.I created the following Atlas Search index for this collection.I am able to search based on the “name” attribute that is in each element of the “cities” array.Is it possible to search based on the “layover.duration” attribute?I tried doing so, but my pipeline is not returning any results.Please see my pipeline below.", "username": "Suray_T" }, { "code": " \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"cities\": {\n \"dynamic\": true,\n \"fields\": {\n \"layover\": {\n \"fields\": {\n \"duration\": {\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"document\"\n },\n \"name\": {\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"embeddedDocuments\"\n },\n \"route\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n[\n {\n '$search': {\n 'index': 'travel', \n 'embeddedDocument': {\n 'path': 'cities', \n 'operator': {\n 'compound': {\n 'must': [\n {\n 'autocomplete': {\n 'path': 'cities.layover.duration', \n 'query': 'long'\n }\n }\n ], \n 'should': [\n {\n 'autocomplete': {\n 'path': 'cities.layover.duration', \n 'query': 'long'\n }\n }\n ]\n }\n }\n }\n }\n }\n]\nembedded", "text": "Hi @Suray_T and welcome to the MongoDB community forum!!Based on the sample data shared above, I tried to reproduce this in my own atlas environment and below is the index definition and the query used for the data.Index definition:and the search query :Please note that, the above query is based on the sample data shared, hence I would recommend through testing for the complete dataset.If you believe this index definition and query do not fit your use case, please provide further information regarding the use case.As of the time of this message, the Atlas Search embeddedDocuments index option, embeddedDocument operator, and embedded scoring option are in preview.Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search with Embedded Documents and Subdocuments
2022-12-13T02:23:39.191Z
Atlas Search with Embedded Documents and Subdocuments
1,907
null
[ "serverless" ]
[ { "code": "", "text": "I’m implementing Device Sync on my mobile app on thr Shared plan but quite confused about the next steps.Does the Serverless plan support Device Sync, or only the Dedicated one do? I read conflicting info in different places in the UI/documentation.", "username": "anh" }, { "code": "", "text": "Welcome to the MongoDB Community @anh !Atlas Serverless instances currently do not support Atlas Device Sync – only shared or dedicated Atlas clusters.Per Get Started with Atlas Device Sync:You cannot use sync with a serverless instance or Federated database instance.Atlas Device Sync is also listed as an unsupported feature on Serverless Instance Limitations.If there is conflicting info in UI/docs, can you please share some links so those can be corrected?Regards,\nStennie", "username": "Stennie_X" }, { "code": "Serverless sync architectureFeature overviewServerless", "text": "Thanks for the reply, Stennie! The bit of confusion is at Atlas Device Sync | MongoDB where it says Serverless sync architecture under the Feature overview section. It kinda throws me off a bit as the plan name for serverless is also named Serverless.It’d be nice if this is spelt out in the feature comparison chart as IMO it’s pretty important. I expect to grow from the Shared plan to Serverless but now it means I have to skip that and go straight to Dedicated.", "username": "anh" }, { "code": "Serverless", "text": "It kinda throws me off a bit as the plan name for serverless is also named Serverless .Hi @anh,Thanks for the follow up info… I see how that could be confusing! The description in this context is intended to apply to device sync infrastructure being serverless.I’ll share this discussion and your feedback with the product team.Regards,\nStennie", "username": "Stennie_X" } ]
Does serverless plan support Atlas Device Sync?
2022-12-29T15:35:10.411Z
Does serverless plan support Atlas Device Sync?
1,732
null
[ "aggregation", "dot-net" ]
[ { "code": "{\n \"mappings\": {\n \"fields\": {\n \"Name\": {\n \"analyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n }\n }\n }\n}\npublic class UserEntity\n {\n [BsonElement(\"Id\")]\n public string Id{ get; set; }\n [BsonElement(\"Age\")]\n public string Age { get; set; }\n [BsonElement(\"Name\")]\n public string Name { get; set; }\n [BsonElement(\"NickName\")]\n public string NickName { get; set; }\n }\n\npublic async Task<List<UserEntity>> SearchUser(string name)\n {\n try\n {\n List<UserEntity> result = Collection.Aggregate()\n .Search(\n SearchBuilders<UserEntity>.Search.Regex($\"({name})\", x => x.Name, true))\n .Sort(\"{ score: { $meta: \\\"textScore\\\" } } \")\n .Project(x => new UserEntity { Id = x.Id, Age = x.Age, Name = x.Name, NickName = x.NickName})\n .ToList();\n return result;\n }\n catch (Exception ex)\n {\n throw ex;\n }\n }\n\n", "text": "Hi,\nCan someone help me implement sort using textScore and fuzzy on my C# code using regex please??\nCurrently i have this Search Index:And this is how my code looks like:", "username": "Henrique_Shoji" }, { "code": "textScoresearchScoretextScoresearchScoresearchScore", "text": "Hi @Henrique_Shoji,Can someone help me implement sort using textScore and fuzzy on my C# code using regex please??Just to clarify, are you wanting to sort using textScore (and not searchScore)? If so, what is the use case for this as textScore is related to native text search where as searchScore is used in Atlas Search.Documents in the result set are returned in order from highest score to lowest (searchScore) for Atlas Search by default.Regards,\nJason", "username": "Jason_Tran" } ]
textScore and Fuzzy options using Regex c#
2022-12-15T18:42:02.958Z
textScore and Fuzzy options using Regex c#
1,468
null
[]
[ { "code": "Synchronization between Atlas and Device Sync has been stopped, due to error: non-recoverable error processing event: dropping namespace ns='xxx' being synchronized with Device Sync is not supported. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning.", "text": "I keep seeing Synchronization between Atlas and Device Sync has been stopped, due to error: non-recoverable error processing event: dropping namespace ns='xxx' being synchronized with Device Sync is not supported. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning. in my dashboard, yet my devices keep syncing just fine (I’m using the Shared plan)I wonder if this is related to me using the free plan as clearly the sync is working.Also, when I try to view Collections, none of the data shows up. It’s pretty weird.", "username": "anh" }, { "code": "", "text": "Hi. You are running into the issue that the error describes. Sync really exists in 2 places:When you drop collections, the sync to/from mongodb (as described in the error message) stops because we cannot support that. Sync between devices will continue to work, but as you point out the sync to MongoDB will not work anymore until you terminate and re-enable sync (this has nothing to do with the free plan)Let me know if you have any other questions.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Right now I’m prototyping and playing a lot with the schemas so that kinda explains. Not sure what’s the best way to ‘restart’ everything. So far I use the web interface to delete the schema, drop all the tables, kill all sessions from my devices, even disable sync/delete users from the App Users page after any significant changes, but the errors keep coming back, sometimes instantly.How do I, in Dev mode, make sure this error doesn’t come back? If you can point me to any resources for migration in production that’d be very useful too.Thanks.", "username": "anh" }, { "code": "readAndWriteAllEnabling Sync...copying data Errors...xxx: \"encountered error when flushing batch: error in onBatchComplete callback: error updating resumable progress: context canceled\", xxx: \"encountered error when flushing batch: error in onBatchComplete callback: error updating resumable progress: connection(xxx) incomplete read of message header: context canceled\",xxx: \"encountered error when flushing batch: error in onBatchComplete callback: error updating resumable progress: connection(xxx) incomplete read of message header: context canceled\"db.dropDatabase()__realm_sync", "text": "I realized I have removed all the Rules, so I set them to readAndWriteAll and restart the sync. I get this red warnings:Enabling Sync...copying data Errors...xxx: \"encountered error when flushing batch: error in onBatchComplete callback: error updating resumable progress: context canceled\", xxx: \"encountered error when flushing batch: error in onBatchComplete callback: error updating resumable progress: connection(xxx) incomplete read of message header: context canceled\",xxx: \"encountered error when flushing batch: error in onBatchComplete callback: error updating resumable progress: connection(xxx) incomplete read of message header: context canceled\"And again the sync process failed within seconds of restarting. I’m confused as how to exactly fix this issue. 
I’m in Dev mode.EDIT: This ended up working for me.I guess my main concerns are:", "username": "anh" }, { "code": "", "text": "Hi Anh,As per Tyler’s comment here are some conditions that require a Termination of Sync including the dropping of a collection.Clicking restart sync in the banner will not work in these cases.Note that the steps I mentioned in the link you provided are only required in the context if terminating sync by itself does not resolve the issue for some reason. You should not need to drop __realm_sync every time since the termination process should do this automatically.Regards\nManny", "username": "Mansoor_Omar" } ]
`Synchronization between Atlas and Device Sync has been stopped` in dev mode?
2022-12-30T01:55:20.579Z
`Synchronization between Atlas and Device Sync has been stopped` in dev mode?
2,597
null
[ "atlas-functions", "realm-web" ]
[ { "code": "", "text": "I have tons of functions in my Realm Application and I’d really love a feature to quickly see which of those I already completed, tested, updated, whatever. Like a simple tagging system so that we could add a tag to a function just for visibility to quickly see which functions need which work.", "username": "SirSwagon_N_A" }, { "code": "", "text": "Hello,We have a feedback portal where you may want to add this idea/request to.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Feature Suggestion: Tag / Categorize functions of Realm App Services
2023-01-01T14:36:31.160Z
Feature Suggestion: Tag / Categorize functions of Realm App Services
1,350