Columns:
image_url: string (113-131 chars)
tags: list
discussion: list
title: string (8-254 chars)
created_at: string (24 chars)
fancy_title: string (8-396 chars)
views: int64 (73-422k)
null
[ "swift" ]
[ { "code": "where: ( { some query } )@ObservedResultsstruct Example: View {\n @EnvironmentObject var state: AppState\n \n @ObservedResults(Task.self,\n where: ( { $0.creatorId == state.userID! } )) var tasks\n\n var body: some View {\n Text(\"\\(tasks.count)\")\n }\n}\nstate", "text": "Using where: ( { some query } ) one can filter the @ObservedResults of an implicitly opened realm. I need to filter these results dynamically, so for example using some variable that is available only after the init of the view has run. How would I achieve this?Example:would throw an error: “Cannot use instance member ‘state’ within property initializer; property initializers run before ‘self’ is available” which makes sense… but how would I use something in state to filter the data?cc @Jason_Flax", "username": "David_Kessler" }, { "code": "struct Example: View {\n @EnvironmentObject var state: AppState\n \n @ObservedResults(Task.self) var tasks\n\n var body: some View {\n if let filteredTasks = tasks.where({ $0.creatorId == state.userID! }) {\n Text(\"\\(filteredTasks.count)\")\n }\n}\n", "text": "You just have to filter those down below rather than in the initializer.Try it like this:", "username": "Kurt_Libby1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filter @ObservedResults dynamically
2022-07-21T10:04:31.137Z
Filter @ObservedResults dynamically
2,439
null
[ "queries" ]
[ { "code": "db.movies.find({title: \"Red\"})^$", "text": "Is it possible to use search to match on an exact phrase?For example, say I want to query a movie database for the title of “Red”. There are lots of movies I’d have to parse through that contain the word “red”. I just want the exact phrase.I could just do something like db.movies.find({title: \"Red\"}) but suppose there is no index on title, but there is a search index. Can I make use of the search index with an exact match like that?I would assume I could do it with regex, but I noticed that ^ and $ are not supported.", "username": "djedi" }, { "code": "lucene.keyword{\n $search: {\n \"index\": \"movies_search_index\"\n \"phrase\": {\n \"query\": \"Red Robin\",\n \"path\": \"title\"\n }\n }\n}\n", "text": "Hi @djedi,Welcome to the MongoDB Atlas Search forum. I’d like to recommend a lucene.keyword analyzer for single-word exact match queries and phrase query for multi-word exact match queries.Of course, you need to create an index, and then you can do something like:", "username": "Marcus" } ]
Exact word only on Atlas Search
2022-07-20T19:38:12.695Z
Exact word only on Atlas Search
3,084
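For the exact-match thread above, here is a minimal PyMongo sketch of how the phrase query from the reply could be run. The connection string, database, and collection names are placeholders, and the "movies_search_index" Atlas Search index from the reply is assumed to already exist; the $search stage only works against Atlas clusters with a search index defined.

```python
# Hypothetical sketch: running the $search phrase query suggested in the reply
# via PyMongo. URI, database, and collection names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net")
movies = client["sample_mflix"]["movies"]

pipeline = [
    # $search must be the first stage of the pipeline
    {
        "$search": {
            "index": "movies_search_index",
            "phrase": {"query": "Red Robin", "path": "title"},
        }
    },
    {"$project": {"_id": 0, "title": 1}},
    {"$limit": 5},
]

for doc in movies.aggregate(pipeline):
    print(doc)
```

For single-word exact matches, the same pipeline shape applies with an index built on the lucene.keyword analyzer, as the reply notes.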
https://www.mongodb.com/…0_2_1024x575.png
[ "aggregation", "atlas", "sanfrancisco-mug" ]
[ { "code": "Manager, Solutions Architecture, Bay AreaSenior Solutions ArchitectSolutions ArchitectDirector of Community", "text": "San Francisco, US MongoDB User Group is excited to launch and announce their first meetup after the pandemic.The event will start with an Introduction to MongoDB the Developer Data Platform and followed up by a demo-based deep dive into knowing what’s new in MongoDB 6.0 and recent announcements made at MongoDB World 2022. Topics will include MongoDB as a Developer Data Platform, Atlas Data Federation, Analytics, Queryable Encryption, Relational Migrator, and more!Later we will have a simple quick fun exercise - Where’s the Bug? - where you need to identify the bug in the shown code snippets & queries to win some MongoDB Swag !After this, we will have a session by @Stennie_X on How to Repro MongoDB Issues using his trifecta toolkit.We will also have fun Networking Time to meet some of the MongoDB developers, customers, architects, and experts in the region. Not to forget there will also be, Swags, and Pizzas. Looking forward to seeing you all!Doors will open at 5:30 PM and the event starts at 6:00 PM. Join us before 6:00 PM if you want to meet some of the MongoDB Staff to learn more about MongoDB and ask any questions you have.Event Type: In-Person\n Location: MongoDB Office, San Francisco.\n 88 Kearny, Suite 500 San Francisco, CA 94108 United StatesTo RSVP - please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Thomas Luckenbach\nManager, Solutions Architecture, Bay Area–Julie Mayhew\nSenior Solutions Architect–Julia Guenther\nSolutions Architect–Join the San Francisco group to stay updated with upcoming meetups and discussions.", "username": "Harshit" }, { "code": "", "text": "Hey Everyone,Gentle Reminder, MongoDB User Group San Francisco Meetup is tomorrow and we are excited to see you all!Here are a few things to note:Please reply on this thread in case you have any questions.Looking forward to seeing most of you tomorrow!Thanks\nHarshit", "username": "Harshit" }, { "code": "", "text": "Thanks to everyone who attended! We will announce the next meetup date soon. In the meantime join the San Francisco group to stay updated with upcoming meetups and discussions.\n2022 SF MUG (1)1920×1217 347 KB\nCheers!\nHarshit", "username": "Harshit" } ]
San Francisco MUG: MongoDB the Developer Data Platform & What's new in MongoDB 6.0!
2022-07-01T22:56:58.770Z
San Francisco MUG: MongoDB the Developer Data Platform & What’s new in MongoDB 6.0!
3,789
null
[ "python", "server" ]
[ { "code": "", "text": "Hello MongoDB-community,I hope you can help me with a simple question. There is the possibility to insert DBref in a document so the document that contains the DBref is referencing another document in the same and/or different collection. This is awsome but I have a few little questions you may help me with.If the $ is necessary, how should i deal with it in a python dataclass because there I’m not allowed to name a dataclass field with a beginning $-sign.Thanks a lot for you help beforehand!", "username": "Marco_Fischer" }, { "code": "import pymongo\nfrom pymongo import MongoClient\nfrom bson.dbref import DBRef\n\nclient = pymongo.MongoClient(\"mongodb://localhost:27017\")\ndb = client['test']\ndb.authors.drop()\ndb.books.drop()\n\nauthors = [\n {'_id': 0, 'name': 'Conan Doyle'},\n {'_id': 1, 'name': 'Homer'}\n]\ndb.authors.insert_many(authors)\n\nbooks = [\n {'_id': 0 ,'title': 'Ilyad', 'author': DBRef('author', 1)},\n {'_id': 1 ,'title': 'Adventures', 'author': DBRef('author', 0)},\n {'_id': 2 ,'title': 'Odyssey', 'author': DBRef('author', 1)}\n]\ndb.books.insert_many(books)\n\nbooks_by_conan = list(db.books.find({'author.$id': 0}))\nbooks_by_homer = list(db.books.find({'author.$id': 1}))\n\nprint(f'Books by Conan: {books_by_conan}')\nprint(f'Books by Homer: {books_by_homer}')\nauthor.$id_id_id", "text": "Hi @Marco_Fischer and welcome to the community!!The following sample code below would give an explanation on how to use the DBRefs in MongoDB using PythonThe result of which would look like:Books by Conan: [{‘_id’: 1, ‘title’: ‘Adventures’, ‘author’: DBRef(‘author’, 0)}]Books by Homer: [{‘_id’: 0, ‘title’: ‘Ilyad’, ‘author’: DBRef(‘author’, 1)}, {‘_id’: 2, ‘title’: ‘Odyssey’, ‘author’: DBRef(‘author’, 1)}]In the example above, author.$id is referencing the author’s _id values.\nHere using DBRef provides a slight advantage of making it explicit that the field is referencing another collection.However, one main disadvantage is that DBRef does not work with $lookup.Please note that as mentioned in the page Database References, unless you have a specific & compelling reason to use DBRef, it’s strongly recommended to use manual references instead (for example, putting the value of the author’s _id directly in the author field, in the example above)I would also suggest against using DBRef or manual references to replicate a tabular database design and using it like a foreign key.Please see the recommended MongoDB Schema Design Best Practices | MongoDB to learn more.Thanks\nAasawari", "username": "Aasawari" }, { "code": "from bson.dbref import DBRef", "text": "from bson.dbref import DBRefThank you a lot Aasawari! Now it works and the hint i needed was the “from bson.dbref import DBRef”. Now I’m creating an object out of a dataclass and transpose it to a dict. Afterwards I’m able to create an array for a field with DBRefs and it also passes the validation.You made my day!! ", "username": "Marco_Fischer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to deal with DBref in schema and dataclass in python
2022-07-12T11:10:52.657Z
How to deal with DBref in schema and dataclass in python
3,544
null
[]
[ { "code": "2022-07-21T00:08:11.365+0000 I REPL [replication-248] Initial sync attempt finishing up.\n2022-07-21T00:08:34.070+0000 E - [replication-248] Assertion: 13548:BufBuilder attempted to grow() to 67108869 bytes, past the 64MB limit. src/mongo/bson/util/builder.h 350\n2022-07-21T00:08:34.269+0000 I REPL [replication-248] Error creating initial sync progress object: Location13548: BufBuilder attempted to grow() to 67108869 bytes, past the 64MB limit.\n2022-07-21T00:08:34.270+0000 I REPL [replication-248] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 0, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1658226358314), initialSyncAttempts: [], fetchedMissingDocs: 12452, appliedOps: 11672766, initialSyncOplogStart: Timestamp(1658226358, 117), initialSyncOplogEnd: Timestamp(1658345698, 17) }\n2022-07-21T00:08:34.270+0000 E REPL [replication-248] Initial sync attempt failed -- attempts left: 9 cause: InternalError: error fetching oplog during initial sync :: caused by :: error in fetcher batch callback: quick oplog start location had error...?\n\n", "text": "I’m setting up a secondary database. During the syncing process it failed right after it finished applying oplog. I do researchs about this logs but have no idea where to go from this.If anyone can point me in the right direction that would be great.", "username": "Huy" }, { "code": "", "text": "Can anyone help me with this. If you need more information please asked, I’ll provide them.", "username": "Huy" }, { "code": "rs.status()rs.conf()rs.printReplicationInfo()", "text": "Hi @HuyThis is a peculiar message. Could you help us with some details:Best regards\nKevin", "username": "kevinadi" }, { "code": "What is your MongoDB version\nHow many databases & collections you have in the deployment?\nWhat’s the timeline of the incident, and have you tried rerunning the initial sync?\nWhat’s the output of rs.status(), rs.conf(), and rs.printReplicationInfo()\nproductiondb:PRIMARY> rs.status()\n{\n \"set\" : \"productiondb\",\n \"date\" : ISODate(\"2022-07-21T08:28:09.793Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(22),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1658392089, 1),\n \"t\" : NumberLong(22)\n },\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1658392089, 1),\n \"t\" : NumberLong(22)\n },\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1658392089, 38),\n \"t\" : NumberLong(22)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1658392089, 1),\n \"t\" : NumberLong(22)\n }\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"DB1:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 893212,\n \"optime\" : {\n \"ts\" : Timestamp(1658392089, 38),\n \"t\" : NumberLong(22)\n },\n \"optimeDate\" : ISODate(\"2022-07-21T08:28:09Z\"),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1657498982, 1),\n \"electionDate\" : ISODate(\"2022-07-11T00:23:02Z\"),\n \"configVersion\" : 105936,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 1,\n \"name\" : \"DB2:27017\",\n \"health\" : 1,\n \"state\" : 5,\n \"stateStr\" : \"STARTUP2\",\n \"uptime\" : 165731,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n 
\"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2022-07-21T08:28:09.043Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2022-07-21T08:28:08.783Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"DB1:27017\",\n \"syncSourceHost\" : \"DB1:27017\",\n \"syncSourceId\" : 0,\n \"infoMessage\" : \"\",\n \"configVersion\" : 105936\n }\n ],\n \"ok\" : 1,\n \"operationTime\" : Timestamp(1658392089, 38),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1658392089, 39),\n \"signature\" : {\n \"hash\" : BinData(0,\"mWGu9CvEYng72WLP+5MzZW859JU=\"),\n \"keyId\" : NumberLong(\"7083073276435496961\")\n }\n }\n}\n\nproductiondb:PRIMARY> rs.conf()\n{\n \"_id\" : \"productiondb\",\n \"version\" : 105936,\n \"protocolVersion\" : NumberLong(1),\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"DB1:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 2,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 1,\n \"host\" : \"DB2:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : true,\n \"priority\" : 0,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 0\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"5c1088d3efa5a1bd60427c1f\")\n }\n}\n\nproductiondb:PRIMARY> rs.printReplicationInfo()\nconfigured oplog size: 512000.00009155273MB\nlog length start to end: 17068secs (4.74hrs)\noplog first event time: Thu Jul 21 2022 12:52:10 GMT+0900 (JST)\noplog last event time: Thu Jul 21 2022 17:36:38 GMT+0900 (JST)\nnow: Thu Jul 21 2022 17:36:38 GMT+0900 (JST)\n\n", "text": "Thanks for responding. Regarding your questions:I have retried several times, still the same error.I ran those on the primary database, we currently only have two database in replica set, one this primary and the other is the one failing.rs.status():rs.conf()rs.printReplicationInfo()", "username": "Huy" }, { "code": "", "text": "Hi @Huy310 database, with more than 3 million collection.This could be the issue, with such an enormous number of collections. I noticed that you also seem to have only 2 members in the replica set. Are you trying to convert a standalone to a replica set? Note that it’s not recommended to run an even number of nodes in a replica set. It’s strongly recommended to have at least 3 nodes.Regarding the initial sync issue, perhaps you can try the procedure outlined in Sync by Copying Data Files from Another Member if the automatic sync method is not working out for you.As with anything that is operationally risky, I would suggest you take backups & practice the method before doing it in production Furthermore:log length start to end: 17068secs (4.74hrs)How long was the initial sync run? If it’s more than 4 hours, you might want to consider a larger oplog size.Note that MongoDB 3.6 series is not supported anymore per April 2021 so it won’t receive any new updates. You might want to upgrade to a supported MongoDB version.Best regards\nKevin", "username": "kevinadi" } ]
Mongo replication failed when initial sync is finishing up
2022-07-21T02:07:51.182Z
Mongo replication failed when initial sync is finishing up
2,663
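Related to the initial-sync thread above, a small PyMongo sketch that computes the oplog window the same way rs.printReplicationInfo does, which helps judge whether the oplog covers the full duration of an initial sync. The host name is a placeholder; run it against the primary.

```python
# Hypothetical sketch: measure the oplog window (first to last oplog entry)
# on the primary, similar to rs.printReplicationInfo(). Host is a placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb://DB1:27017")
oplog = client["local"]["oplog.rs"]

first = oplog.find().sort("$natural", 1).limit(1).next()
last = oplog.find().sort("$natural", -1).limit(1).next()

# "ts" is a BSON Timestamp; .time is its seconds-since-epoch component
window = last["ts"].time - first["ts"].time
print(f"oplog window: {window} seconds ({window / 3600:.2f} hours)")
```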
null
[ "queries" ]
[ { "code": "{\n \"candidate_id\": 202,\n \"name\": \"Chandan Testc\",\n \"email\": \"[email protected]\",\n \"category\": \"Daily\",\n \"zipcode\": \"41150\",\n \"candidate_sub_organizations\": [\n {\n \"sub_organization_id\": 539,\n \"twoway_text_subs\": true,\n \"is_blocked\": false\n }\n ],\n}\n", "text": "", "username": "susanta_kumar_pradhan" }, { "code": "", "text": "This is a support forum for MongoDB, not Azure Cosmos DB. The Cosmos MongoDB API is an emulator that is not fully compatible with, or supported by MongoDB.", "username": "tomhollander" } ]
Azure Cosmos DB Sorting Issue using Mongo API
2022-07-21T07:32:14.661Z
Azure Cosmos DB Sorting Issue using Mongo API
1,441
https://www.mongodb.com/…dc18aa063d12.png
[ "aggregation", "queries" ]
[ { "code": "", "text": "I have following document:\n\nimage706×732 52.1 KB\nBelow layouts.nodes, there are many node fields(no limit), and I want to find the maximum x value in these nodes. How can I achieve this? (this specific document format is a configuration of a frontend library)", "username": "1224084650" }, { "code": "\"nodes\" : [\n { x : 0 , y : 1 } ,\n { x : 100 , y : 60 }\n]\n", "text": "The major problem with your issue is that nodes is an object rather than an array with field names that look like array index but are not array index.The following:would be more efficient in terms of space and in terms of processing. It would be simple to implement your use-case as you could use \"$max\":\"$nodes.x\" in a projection to do it.With your schema you still have to use \"$max\", but you have to perform another step in order to transform your object nodes into an array with $objectToArray.", "username": "steevej" }, { "code": "", "text": "yeah, you are right. If nodes are an array, then everything should be fine. Does the performance difference between storing an array and storing them as objects separately large? Should I store it as an array and transform it into objects when the front-end requests it?", "username": "1224084650" }, { "code": "mongosh> db.nodes.find()\n{ _id: ObjectId(\"62d802a681fed875f28c70ec\"),\n node0: { x: 123, y: 456 },\n node1: { x: 123, y: 456 },\n node2: { x: 123, y: 456 },\n node3: { x: 123, y: 456 },\n node4: { x: 123, y: 456 },\n node5: { x: 123, y: 456 },\n node6: { x: 123, y: 456 },\n node7: { x: 123, y: 456 },\n node8: { x: 123, y: 456 },\n node9: { x: 123, y: 456 },\n node10: { x: 123, y: 456 },\n node11: { x: 123, y: 456 },\n node12: { x: 123, y: 456 } }\nmongosh> db.nodes.stats().avgObjSize\n375 /* bytes */\nmongosh> db.nodes_array.find()\n{ _id: ObjectId(\"62d8040181fed875f28c70ee\"),\n nodes: \n [ { x: 123, y: 456 },\n { x: 123, y: 456 },\n { x: 123, y: 456 },\n { x: 123, y: 456 },\n { x: 123, y: 456 },\n { x: 123, y: 456 },\n { x: 123, y: 456 },\n { x: 123, y: 456 },\n { x: 123, y: 456 },\n { x: 123, y: 456 }, \n { x: 123, y: 456 },\n { x: 123, y: 456 } ] }\nmongosh> db.nodes_array.stats()\n323 /* bytes */\npaths : {\n path0 : [ \"node0\" , \"node3\" , \"node4\" ] ,\n path1 : [ \"node2\" , \"node12\" , \"node11\" ]\n}\npaths : [\n [ 0 , 3 , 4 ] ,\n [ 2 , 12 , 11 ]\n]\n", "text": "Does the performance difference between storing an array and storing them as objects separately large?Lets do a little test, with 2 collections of 1 document with 12 nodes:A difference of 52 bytes does not seem a lot, but that is 1 document with only 12 nodes. But it all adds up, you need bigger permanent storage, you need bigger RAM to hold your working set in cache, you need more bandwidth to download. Plus you need extra processing on the server to do $objectToArray in order to implement your use-case. It is easier to distribute this extra processing (which might not be needed) if you dotransform it into objects when the front-end requests itBecause only the front-end that requests it is impacted by the conversion. Everyone is when done on the server.I suspect that your Blue Train has much more that 12 nodes so 52 bytes is very low compared to your real data.I also suspect that your other top level objects like *nodes, edges and paths have the same structure. 
I even suspect that paths might refers to layouts.nodes by names likeHaving the following will have an even bigger positive impact.", "username": "steevej" }, { "code": "", "text": "I will ask the library author whether we can change the data structure design of it. Thank you for the elaborated explanation and experiments.", "username": "1224084650" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I get the maximum value of a document with many different fields through aggregation? The sub-field names are the same
2022-07-19T13:53:09.521Z
How can I get the maximum value of a document with many different fields through aggregation? The sub-field names are the same
2,173
null
[]
[ { "code": "", "text": "Hi All,May be this a redundant question, we are planning to replicate RDBMS data into MongoDB, just want to check the design options that are possible to replicate data successfully with a minimal effort into document ModelRDBMS\nOrder\nOrderDetailsMongoDb\nOrder { …,OrderDetails[{}]}OptionsHave independent collections for Order and Order Detail, and perform a join through lookup and present the data, if requestedHave independent Staging Collections (for a parent and child queue), upon data arriving on the Parent and/or Child Document build the final document and upsert through Mongo data event to the actual Order-Detail collectionMerge two topics using Kafka Stream and send the merged topic to a kafka connectorPlease let me know, which of the options should be chosen, pro & cons if possible, If there is a better solution, please guide me to the right approach", "username": "Balaji_Mohandas" }, { "code": "", "text": "If you’d like MongoDB help check out the MongoDB MigratorMongoDB Relational Migrator simplifies the process of migrating workloads from relational databases to MongoDB", "username": "Robert_Walters" }, { "code": "", "text": "Thanks for your response Robert, we are planning to migrate the data from Db2 z/OS. We not only want to load the data to Mongo, but also we want to keep the data in sync with legacy Db through Change Data Capture. My apologizes, I did not state that clearly earlier", "username": "Balaji_Mohandas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sink Parent and Child Kafka Topics into Single Collection
2022-07-19T19:32:27.923Z
Sink Parent and Child Kafka Topics into Single Collection
1,550
null
[ "server", "field-encryption" ]
[ { "code": "mongod --dbpath /Users/myuser/data/db\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.733-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.734-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.734-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.736-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.736-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.736-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.736-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.736-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":14351,\"port\":27017,\"dbPath\":\"/Users/clackson/mongodb\",\"architecture\":\"64-bit\",\"host\":\"iMac.local\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.736-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.0\",\"gitVersion\":\"e61bf27c2f6a83fed36e5a13c008a32d563babe2\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.736-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.5.0\"}}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.736-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"/Users/clackson/mongodb\"}}}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, 
terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /Users/clackson/mongodb not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.737-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.738-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.738-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.738-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.738-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for 
shutdown\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.738-05:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.738-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-07-20T14:07:54.738-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\niMac:mongodb clackson$ mongod --dbpath /Users/clackson/data/db\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.253-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.254-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.256-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.258-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.258-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.258-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.258-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.258-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":19545,\"port\":27017,\"dbPath\":\"/Users/clackson/data/db\",\"architecture\":\"64-bit\",\"host\":\"iMac.local\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.258-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.0\",\"gitVersion\":\"e61bf27c2f6a83fed36e5a13c008a32d563babe2\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.258-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.5.0\"}}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.258-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"/Users/clackson/data/db\"}}}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.259-05:00\"},\"s\":\"I\", 
\"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /Users/clackson/data/db\"}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the 
TTL monitor\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n", "text": "I have been trying to sort this out all afternoon and I’m not getting anywhere between Stack Overflow and this forum.I have installed mongodb-community via homebrew using the guide. I have created a folder in my user directory for it to write to but when running the commandThis is the output I get, I have tried to google everything in this but have gotten nowhere.If anyone can point me in the right direction that would be great.", "username": "Carter_Clackson" }, { "code": "{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /Users/clackson/data/db\"ls -ld /Users/clackson/data/db\n", "text": "The last error is:{\"t\":{\"$date\":\"2022-07-20T14:12:46.260-05:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /Users/clackson/data/db\"which seems to indicate that you cannot write into the given directory.Please share the output of", "username": "steevej" }, { "code": "/System/Volumes/Data/data/db", "text": "Thanks Steeve, I actually just figured this out with the help of a YouTube video.It turned out that I needed to use the path /System/Volumes/Data/data/db, nothing else would work no matter how hard I tried.", "username": "Carter_Clackson" } ]
Problem installing MongoDB on Mac OS 12.4
2022-07-20T19:14:52.399Z
Problem installing MongoDB on Mac OS 12.4
3,239
null
[]
[ { "code": "", "text": "Hi team, myself Pallaav Sethi, currently holding the position of Manager in Piramal Finance. Feel free to connect with me on Linkedin: https://www.linkedin.com/in/pallaav-sethi", "username": "Pallaav_Sethi" }, { "code": "", "text": "Hello @Pallaav_Sethi,\nWelcome to the MongoDB Community Glad to see you join and introduce yourself. Equally excited to know that you would be leading the Chennai, MongoDB User Group soon.If you can share we would love to know how you use MongoDB in your day job. Thanks\nHarshit", "username": "Harshit" } ]
Introduction about myself
2022-07-20T16:21:27.880Z
Introduction about myself
2,108
https://www.mongodb.com/…375b47ab02b3.png
[ "atlas" ]
[ { "code": "", "text": "Can someone please help me with the belowQ1. What is the difference between Atlas Pro and Atlas in SaaS, ? (I see Pro has 24/7 support, does without Pro does not have 24/7 support, for enterprise which one will you suggest.)\n\nimage861×219 13.8 KB\n\nI went ahead to create a free subscription in Azure Mkt place but I find a message as below, I am not able to understand the message.Q2. What is the difference between having an instance on Atlas on Azure Market Place and Atlas in Mongo website ? ( other than from Billing Purpose, I understand the fact that MDB atlas instance in azure marketplace is just used for billing purposes )Q3. Why do we have AWS IAM when I pick up Azure as a cloud while creating a cluster in Atlas in Mongo Website. ( I should use Azure Authentication, why should I use AWS when I choose Azure cloud )Q4. How to get my azure account’s mongodb Atlas connected with Atlas on mongodb website, ( I find the below : [https://docs.atlas.mongodb.com/security/federated-auth-azure-ad/] is there any blog with some detailed explanation or steps of implementation. )Thanks and Regards,\nRayaguru", "username": "Rayaguru_S_Dash" }, { "code": "", "text": "Q1. Pro is recommended for Enterprises as it comes with 24hrs support (2 hrs SLA).\nThe message is basically to intimate you that just choosing the plan doesn’t complete the process. You need to create your Atlas account and that is all detailed in the mail you receive post the subscribing.Q2. Billing is the primary benefit along with burn down MACC for enterprisesQ3. All available options are shown in the UI. You can use Azure AD for authentication and authorization (database users). Details hereQ4. Technically both are same as clusters as created from Atlas portal only. Can you elaborate what you mean by connecting both of them ? Moving data between clusters or querying them together?", "username": "Diana_Annie_Jenosh" }, { "code": "", "text": "MACCThanks Diana for your clarification\nNeed a few more, please", "username": "Rayaguru_S_Dash" }, { "code": "", "text": "Thanks for your interest and efforts.Sorry for not detailing, MACC is Microsoft Azure Commit to Consume - refer hereDetailed steps for Vnet Peering are here and for Private end points , refer here .\nThe diagram shows AWS VPC endpoint/ Private Link. It is exactly same for Azure, with Vnet, Azure Private Endpoint and Azure Private Link.I thought that was answered. You can very well use Azure AD. Please send details/screenshot of where you are seeing the AWS mandated.Hope it helps.", "username": "Diana_Annie_Jenosh" }, { "code": "", "text": "Hello Diana,Yes it really helped, slowing i am able to establish my footprints in Atlas.Thanks for sharing the link for Pvt End Point\n\nimage941×645 147 KB\nI am looking for a diagrammatic illustration for Vnet Peering for connecting to Atlas Pro by the Application team.\nI find the below instances where AWS is referred even though I am putting my Atlas on Azure cloud.\n\nimage1177×441 25.5 KB\n\n\nimage1616×696 48.7 KB\n\n\nimage1360×872 91.4 KB\nRegards,\nRayaguru", "username": "Rayaguru_S_Dash" }, { "code": "", "text": "Data Federation is relatively new and is only supported for AWS today. Happy to have a call so that we can help you with your use-case. Thanks!", "username": "Diana_Annie_Jenosh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ATLAS Pro in Azure Market Place
2022-07-15T05:00:37.987Z
ATLAS Pro in Azure Market Place
2,628
https://www.mongodb.com/…5_2_1024x575.png
[ "queries", "vscode" ]
[ { "code": "Read One Document\n\nTo read one document, use the following syntax in your Playground:\n\ndb.collection.findOne(\n { <query> },\n { <projection> }\n)\n\n// The current database to use.\nuse('poem');\n\n// Search for documents in the current collection.\ndb.getCollection('chinese')\n .find(\n {\n /*\n * Filter\n * fieldA: value or expression\n */\n },\n {\n /*\n * Projection\n * _id: 0, // exclude _id\n * fieldA: 1 // include field\n */\n }\n )\n .sort({\n /*\n * fieldA: 1 // ascending\n * fieldB: -1 // descending\n */\n });\n\n", "text": "Hi everyone, Thanks for reading this issue.The query for querying document in the Vscode’s playground is as belowhereBut I tried it on the Vsode with no response.\nimage2830×1590 272 KB\nHowever, when I clicked the choice “Search For Documents” on the left, the coming up query works wellDid I make any mistake(。 ́︿ ̀。)? Thanks.", "username": "Tony_Cheng" }, { "code": "", "text": "Most likely you do not have a document with id:1 in the collection chinese of the database poem.Please share the documents you get whenclicked the choice “Search For Documents” on the left, the coming up query works well", "username": "steevej" }, { "code": "use('poem');use poem;", "text": "Thanks for your reply. ^ ^I just query the document successfully.Seems caused by syntax error. When I typed use('poem'); instead of use poem;. The query works fine.Sorry for this. I should’ve be more careful when reading document.", "username": "Tony_Cheng" }, { "code": "Atlas rent-shard-0 [primary] test> db.chinese.find() \n/* no output */\nAtlas rent-shard-0 [primary] test> use poem\nswitched to db poem\nAtlas rent-shard-0 [primary] poem> db.chinese.find()\n[ { _id: ObjectId(\"62d831294569ddb647f8d87e\"), x: 369 } ]\nAtlas rent-shard-0 [primary] poem> use test\nswitched to db test\nAtlas rent-shard-0 [primary] test> db.chinese.find()\n/* no output */\nAtlas rent-shard-0 [primary] test> use( 'poem' )\nswitched to db poem\nAtlas rent-shard-0 [primary] poem> db.chinese.find()\n[ { _id: ObjectId(\"62d831294569ddb647f8d87e\"), x: 369 } ]\nuse( 'poem' ) ;\ndb.chinese.find( { \"id\" : 1 } ) ;\nuse poem ;\ndb.chinese.find() ;\n", "text": "Seems caused by syntax error.You did not mentioned syntax error in the first post, but indicated that you did not get any results. Your screenshot also indicate no result rather than syntax error. As far as I know, at least for mongosh, use Database and use( ‘Database’ ) both produces the same result:I still think thatyou do not have a document with id:1 in the collection chinese of the database poem .Please post a screenshot withand", "username": "steevej" } ]
The recommended query for finding a document in VS Code doesn't work
2022-07-20T13:27:13.851Z
The recommended query for finding a document in VS Code doesn&rsquo;t work
2,187
null
[ "python", "beta" ]
[ { "code": "4.2.0b0PyMongopip install \"pymongo@git+ssh://[email protected]/mongodb/[email protected]\"", "text": "The Python Driver team is pleased to announce the 4.2.0b0 version of PyMongo. Due to technical limitations we cannot release on PyPI, but it can be installed from the tagged version as:pip install \"pymongo@git+ssh://[email protected]/mongodb/[email protected]\"To see the full set of changes, check out the release notes.If you run into any issues, please file an issue on JIRA or GitHub.", "username": "Steve_Silvester" }, { "code": "", "text": "I get error using the pip install command as following:Collecting pymongo@ git+ssh://[email protected]/mongodb/[email protected]\nCloning ssh://@github.com/mongodb/mongo-python-driver.git (to revision 4.2.0b0) to /tmp/pip-install-sm3q8b_2/pymongo_b5fd34b4fda24d418051342d53dcb804\nRunning command git clone --filter=blob:none --quiet 'ssh://@github.com/mongodb/mongo-python-driver.git’ /tmp/pip-install-sm3q8b_2/pymongo_b5fd34b4fda24d418051342d53dcb804\nHost key verification failed.\nfatal: Could not read from remote repository.Please make sure you have the correct access rights\nand the repository exists.\nerror: subprocess-exited-with-error× git clone --filter=blob:none --quiet ‘ssh://****@github.com/mongodb/mongo-python-driver.git’ /tmp/pip-install-sm3q8b_2/pymongo_b5fd34b4fda24d418051342d53dcb804 did not run successfully.\n│ exit code: 128\n╰─> See above for output.note: This error originates from a subprocess, and is likely not a problem with pip.\nerror: subprocess-exited-with-error× git clone --filter=blob:none --quiet ‘ssh://****@github.com/mongodb/mongo-python-driver.git’ /tmp/pip-install-sm3q8b_2/pymongo_b5fd34b4fda24d418051342d53dcb804 did not run successfully.\n│ exit code: 128\n╰─> See above for output.note: This error originates from a subprocess, and is likely not a problem with pip.\nWARNING: You are using pip version 22.0.4; however, version 22.1.2 is available.\nYou should consider upgrading via the ‘/usr/local/bin/python -m pip install --upgrade pip’ command.", "username": "Yang_Hyejin" }, { "code": "pip install https://github.com/mongodb/mongo-python-driver/archive/4.2.0b0.zippip install --upgrade pymongo==4.2.0", "text": "You can use https instead:\npip install https://github.com/mongodb/mongo-python-driver/archive/4.2.0b0.zipActually we just released PyMongo 4.2.0 today so you can install via:\npip install --upgrade pymongo==4.2.0", "username": "Shane" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Python Driver 4.2.0 Beta Available
2022-06-08T00:29:16.863Z
Python Driver 4.2.0 Beta Available
3,484
null
[ "production", "rust" ]
[ { "code": "v2.3.0mongodb", "text": "The MongoDB Rust driver team is pleased to announce the v2.3.0 release of the mongodb crate. This release contains a number of new features, bug fixes, and improvements, most notably support for MongoDB 6.0.To see the full set of changes, check out the release notes. If you run into any issues, please file an issue on JIRA or GitHub.", "username": "kmahar" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Rust driver 2.3.0 released
2022-07-19T15:49:35.533Z
Rust driver 2.3.0 released
2,310
null
[ "aggregation", "sharding", "production", "ruby" ]
[ { "code": "", "text": "The Ruby driver team is please to announce the release of version 2.18.0. This feature release of the Ruby driver supports MongoDB version 6.0. It includes the following new features:The following minor improvements were made:The following issues were addressed:", "username": "Dmitry_Rybakov" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Ruby driver 2.18.0
2022-07-20T15:07:28.931Z
Ruby driver 2.18.0
2,600
null
[ "replication" ]
[ { "code": "", "text": "We have 3 envs, the lowest env is a single instance and the devs run a single instance mongodb as well.\nOn the higher envs we use replicasetsWe have a feature that uses change streams, which are reliant on the server being a replicaset. We want to toggle the feature depending on MongoDBs capabilites dynamically on startup of the application.\nTo run replSet.getStatus the user needs admin permissions though, which we obviously dont want.How can we find out if the server is a replicaset without having admin permissions on the user?", "username": "Michael_Niemand" }, { "code": "", "text": "You’re looking for the hello command.Although drivers would be aware of the topolgy and can provide information indicating a replicaset or not.Also you can just create a replicaset of one.", "username": "chris" }, { "code": "", "text": "great answer, very helpful. Thanks.", "username": "Michael_Niemand" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Finding out if a mongodb instance is a replicaset without admin permissions
2022-07-20T11:40:51.403Z
Finding out if a mongodb instance is a replicaset without admin permissions
1,191
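For the replica-set detection thread above, a minimal PyMongo sketch of the suggested approach: the hello command needs no admin privileges, and its reply contains a setName field only when the node is part of a replica set, which is enough to decide whether change streams can be used. The connection string is a placeholder; older servers expose the same reply under the legacy ismaster command.

```python
# Hypothetical sketch: detect a replica set without admin privileges by
# inspecting the `hello` command reply. Connection string is a placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
reply = client.admin.command("hello")  # older servers: client.admin.command("ismaster")

# setName is only present when the node is a replica set member
is_replica_set = "setName" in reply
print("replica set:", is_replica_set)
print("change streams usable:", is_replica_set)
```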
null
[]
[ { "code": "", "text": "We need to retrieve the service ID for a MongoDB Atlas cluster to be used in the App Services Admin API. We used to be able to get this by going to the Rules section of a Realm Application and retrieving it from the URL. There must be a better way. That hack is no longer supported in the new UI. Is there any way to retrieve this service ID from the UI of either the Atlas section or the App Services section. Any help would be appreciated.", "username": "Richard_Krueger" }, { "code": "", "text": "Yep – in the upper left corner of yoru App Services project, you’ll see the project name with an icon to copy the ID.\n", "username": "Caleb_Thompson" }, { "code": "", "text": "Caleb that is the App id of the MongoDB Realm App. What we need is the service Id of the Atlas Cluster to create a Trigger programmatically through the Admin API functionsAPI Create a TriggerThis is the service_id under the config object inside the payload of the API call.Richard", "username": "Richard_Krueger" }, { "code": ".../services/6237f5669fdc39c84d37575d/config", "text": "Hi Richard,I believe what you want is the id of the Linked Data Source which is a module in the UI.\nThe default service is usually named “mongodb-atlas” and if you open a data source, the service_id will be shown in the URL.Example:\n.../services/6237f5669fdc39c84d37575d/configAlternatively you can retrieve all service ids on the app using the admin api below:https://www.mongodb.com/docs/atlas/app-services/admin/api/v3/#tag/servicesRegards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Manny thank you so much for that piece of information. It works like a champ.Richard Krueger", "username": "Richard_Krueger" } ]
Retrieving service Id for Atlas Cluster
2022-07-18T15:14:41.669Z
Retrieving service Id for Atlas Cluster
2,513
null
[ "python", "spark-connector" ]
[ { "code": "", "text": "HiHow to pass “allowDiskUse” parameter to aggregate pipeline in Pyspark?", "username": "Gaurav_Gupta4" }, { "code": "spark.mongodb.read.aggregation.allowDiskUse=true", "text": "You would specify it as part of the Spark confwithin your pyspark set as follows -spark.mongodb.read.aggregation.allowDiskUse=trueNote that in V10 of the spark connector, we have a ticket to address an issue with this setting https://jira.mongodb.org/browse/SPARK-355. If anyone is reading this response be sure to check the status of the ticket before using the configuration property.", "username": "Robert_Walters" } ]
How to pass "allowDiskUse" parameter to aggregate pipeline in Pyspark
2021-10-26T11:09:36.225Z
How to pass &ldquo;allowDiskUse&rdquo; parameter to aggregate pipeline in Pyspark
3,472
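For the allowDiskUse thread above, a hedged PySpark sketch of how the configuration property from the reply could be set when building the session, assuming MongoDB Spark Connector v10.x; the URI, database, collection, and pipeline are placeholders. As the reply cautions, check the status of SPARK-355 before relying on this property.

```python
# Hypothetical sketch: enable allowDiskUse for aggregations run through the
# MongoDB Spark Connector (v10.x assumed). URI/database/collection are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("mongo-allow-disk-use")
    .config("spark.mongodb.read.connection.uri", "mongodb://localhost:27017")
    .config("spark.mongodb.read.database", "test")
    .config("spark.mongodb.read.collection", "events")
    .config("spark.mongodb.read.aggregation.allowDiskUse", "true")
    .getOrCreate()
)

# Read with a server-side aggregation pipeline; sorting large data is what
# typically needs allowDiskUse.
df = (
    spark.read.format("mongodb")
    .option("aggregation.pipeline", '[{"$sort": {"ts": 1}}]')
    .load()
)
df.show()
```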
null
[ "aggregation" ]
[ { "code": "{\n group: {\n code: 1\n },\n customerID: 11111\n}\n{\n groupCustomerRelation: [\n group: {\n code: 1\n },\n customerID: [\n 11111,\n 22222,\n 33333\n ]\n ]\n}\n", "text": "I need help with aggregation.\nI have a document like this:and I need to create new document grouped by group code with related customer IDs as arrayhow can I achieve this?", "username": "Daniel_Sorkin" }, { "code": "use('test')\ndb.foo.drop();\ndb.foo.insertMany([\n{ group: { code: 1 }, customerID: 11111 },\n{ group: { code: 1 }, customerID: 22222 },\n{ group: { code: 1 }, customerID: 33333 }\n]);\ndb.foo.aggregate([\n { $group: {\n _id: \"$group.code\",\n customerID: { $push: \"$customerID\"}\n }},\n { $project: { \n _id: 0,\n groupCustomerRelation: {\n group: { code: \"$_id\" },\n customerID: \"$customerID\"\n } \n }}\n])\n", "text": "Hi @Daniel_Sorkin ,and I need to create new document grouped by group code with related customer IDs as array\nhow can I achieve this?You should be able to do this as follows:", "username": "alexbevi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation group with nested array
2022-07-20T13:00:33.528Z
Aggregation group with nested array
1,284
null
[]
[ { "code": "", "text": "Hi I have a Kafka topic that I’d like to persist in a MongoDB using the official connector.However, I struggle to get it to work with the current version. The topic we like to persist has a String-Key and a JSON payload. The id for the MongoDB is part of the JSON payload.So far, everything is fine. However, when we receive a tombstone event, we’d like to delete all records that have the key of the topic in a certain field. Now, this causes currently 2 issues:Is there any way around this and if not, would it be possible to get this PR merged? Support setting a custom deletewritemode.strategy by ArneKlein · Pull Request #108 · mongodb/mongo-kafka · GitHubKind regards", "username": "Arne_Klein" }, { "code": "", "text": "Thanks @Arne_Klein for your PR, we will review it within the next quarter as part of our 1.8 release. Can you provide an example ideally sample/pseudo code of how you would use this custom strategy?", "username": "Robert_Walters" }, { "code": "document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy\ndocument.id.strategy.partial.value.projection.list=<comma-separated field names>\ndocument.id.strategy.partial.value.projection.type=AllowList\nwritemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneTimestampsStrategy\ndeletewritemodel.strategy: arneklein.connector.strategy.DeleteManyWriteStrategy\npackage arneklein.connector.strategy\n\nimport ...\n\nclass DeleteManyWriteStrategy : WriteModelStrategy {\n override fun createWriteModel(sinkDocument: SinkDocument): WriteModel<BsonDocument> {\n val keyDoc = sinkDocument.keyDoc\n val filter = Filters.eq(\n SHARED_ID_FIELD_NAME, keyDoc\n )\n return DeleteManyModel(filter)\n }\n\n companion object {\n private const val SHARED_ID_FIELD_NAME = \"some_field\"\n }\n}\n", "text": "Thank you @Robert_Walters for the fast response. The basic idea is, I want to save all versions of a Document until the document gets deleted. My setup would roughly look as follows:Rough configuration of the sink:And the delete many write strategyPlease take this just as a code sample, I can try to give you some real code as well, but depending on how fast the next version gets published I might need a little longer, since I’ll have to publish my own version of the MongoDB connector then.", "username": "Arne_Klein" }, { "code": "", "text": "Great! thanks for the details. I created https://jira.mongodb.org/browse/KAFKA-320 to track the work item for this request. As far as timeline goes to set your expectations it will be at least 3-6 months out before we can address this ticket.", "username": "Robert_Walters" } ]
Support for custom deletewritemode strategies in Kafka Sink
2022-07-18T18:13:09.045Z
Support for custom deletewritemode strategies in Kafka Sink
1,818
https://www.mongodb.com/…_2_1024x576.jpeg
[ "node-js" ]
[ { "code": "", "text": "\n[image: IMG_20220720_132613, 1920×1080, 115 KB]\nI have been trying to fix this since Saturday, and I’ve gone through MongoDB’s documentation and stuff… but still… I need help please, thanks!", "username": "norbert_madojemu1" }, { "code": "", "text": "Hi, it’s hard to help find a problem without any code snippets. Could you share the code that you are using to connect?\nI would also check that the DB is up and running correctly and that you can connect through the mongo shell.\nHere is a document that could possibly help:\nHere is another forum post that could also help:", "username": "tapiocaPENGUIN" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb QuerySrv ECONNREFUSED
2022-07-20T12:46:34.546Z
Mongodb QuerySrv ECONNREFUSED
2,730
null
[ "dot-net", "atlas-cluster", "containers" ]
[ { "code": "collection.FindAll()", "text": "I am using a rather old C# driver1.11.0 , but until 3 days ago hadn’t any issues with basic query operations, i.e. retrieval of all documents in a collection. Now collection.FindAll() returns a cursor with 0 documents all the time, although I can see there are documents.The current version of the free cluster is 5.0.9 and I’ve tried locally to setup a 5.0.9 instance in a Docker container (single server) and the same old driver works fine with it.I don’t have a clue what might be the problem and how to detect what has changed or what’s special about MongoDB Atlas deployment. Could it be something cluster-specific?The connection string in the .NET code looks like this:mongodb://user:**********@samplecluster-shard-00-00-2oaxj.mongodb.net:27017,samplecluster-shard-00-01-2oaxj.mongodb.net:27017,samplecluster-shard-00-02-2oaxj.mongodb.net:27017/admin?ssl=true&sslverifycertificate=false&connectTimeoutMS=20000&uuidRepresentation=StandardThe only thing that I found in the compatibility notes that sounds relevant was:Starting in MongoDB 5.0, certain database commands raise an error if passed a parameter not explicitly accepted by the command. In MongoDB 4.4 and earlier, unrecognized parameters are silently ignored.But this doesn’t explain why I don’t have such an issue with 5.0.9 outside MongoDB Atlas context.Any ideas what might be wrong or how to narrow down the problem further?", "username": "Ivan_Mitev" }, { "code": "", "text": "Just for the record, newer versions of the driver successfully retrieve the documents, but the 1.11 doesn’t. It correctly returns the number of documents in a collection, but the documents are not returned in the cursor. This happens silently - no indication of an error.", "username": "Ivan_Mitev" } ]
Can no longer query MongoDB Atlas documents in a cluster using an old .NET Driver 1.11
2022-07-19T11:31:00.434Z
Can no longer query MongoDB Atlas documents in a cluster using an old .NET Driver 1.11
1,440
null
[ "replication" ]
[ { "code": "", "text": "Hi experts,\nwe are on mongodb version: 2.6.12\nwe have a mongodb replicaset with 3 nodes. 1 primary 2 secondary instances.Primary is working and both secondary nodes are stale in mongodb replicaset.\nSecondary nodes are stuck in recovering state and many days behind primary.\ndbpath directory has 5TB data on each secondary node.Please advice how can make atlleast one secondary node replicating with primary.", "username": "Abhinav_Avanisa" }, { "code": "dbPath", "text": "Hi,\nI would like to recommend you to upgrade your MongoDB database, version 2.6.12 is very old and out of support.\nRegarding you replica set question - the replica set member becomes “stale” when its replication process falls so far behind that the primary overwrites oplog entries the member has not yet replicated. When this occurs, you must completely resynchronize the member by removing its data and performing an initial sync.please read Resync a Member of a Replica Set", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "Thank you very much for your quick response.\nThis worked in a lower environment where dbpath is less size.\nBut incase of production, we have 5.2TB of data to replicate from primary if we perform initial sync.Do we any other options. ?Thanks Again,", "username": "Abhinav_Avanisa" }, { "code": "", "text": "you can also Sync by Copying Data Files from Another Member, I think a snapshot can be a good solution.", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "We started resync process on one the secondary node. It was 50 percent complete and stuck for long time and not progressing further. It was in startup2 mode.\nSo we restarted the mongod service on that node. Then mongod service came and started syncup from the beginning again. I could not see the earlier synced up files.\nCan you please suggest on this issue.", "username": "Abhinav_Avanisa" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Both secondary nodes are stale in mongodb replicaset
2022-07-15T21:01:20.855Z
Both secondary nodes are stale in mongodb replicaset
2,148
null
[]
[ { "code": "", "text": "I am following this tutorial [MERN Stack Course - ALSO: Convert Backend to Serverless with MongoDB Realm - YouTube]I get 204 errors for all my requests when I test them in my react web app or from Insomnia. The functions produce the correct results in the Realms editor.The endpoints are set to use System Authentication and I am allowing access from all IP addresses.The result is the same regardless of the body of the function. For example:exports = async function({ query, headers, body}, response) {\nconst collection = context.services.get(“mongodb-atlas”).db(“sample_restaurants”).collection(“restaurants”);\nconst cuisines = await collection.distinct(“cuisine”);\nreturn cuisines\n};\nproduces the same result as\nexports = async function({ query, headers, body}, response) {\nreturn “Hello world”;\n};The call from the realm console is just\nexports({})Result from the realm console isran at 1658043458424\ntook 402.199783ms\nresult:\n[\n“Afghan”,\n“African”,\n“American”,\n“Armenian”,\n“Asian”,\n“Australian”,\n“Bagels/Pretzels”,\n“Bakery”,\n“Bangladeshi”,\n“Barbecue”,\n“Bottled beverages, including water, sodas, juices, etc.”,\n“Brazilian”,\n“Café/Coffee/Tea”,\n“Café/Coffee/Tea”,\n“Cajun”,\n“Californian”,\n“Caribbean”,\n“Chicken”,\n“Chilean”,\n“Chinese”,\n“Chinese/Cuban”,\n“Chinese/Japanese”,\n“Continental”,\n“Creole”,\n“Creole/Cajun”,\n“Czech”,\n“Delicatessen”,\n“Donuts”,\n“Eastern European”,\n“Egyptian”,\n“English”,\n“Ethiopian”,\n“Filipino”,\n“French”,\n“Fruits/Vegetables”,\n“German”,\n“Greek”,\n“Hamburgers”,\n“Hawaiian”,\n“Hotdogs”,\n“Hotdogs/Pretzels”,\n“Ice Cream, Gelato, Yogurt, Ices”,\n“Indian”,\n“Indonesian”,\n“Iranian”,\n“Irish”,\n“Italian”,\n“Japanese”,\n“Jewish/Kosher”,\n“Juice, Smoothies, Fruit Salads”,\n“Korean”,\n“Latin (Cuban, Dominican, Puerto Rican, South & Central American)”,\n“Mediterranean”,\n“Mexican”,\n“Middle Eastern”,\n“Moroccan”,\n“Not Listed/Not Applicable”,\n“Nuts/Confectionary”,\n“Other”,\n“Pakistani”,\n“Pancakes/Waffles”,\n“Peruvian”,\n“Pizza”,\n“Pizza/Italian”,\n“Polish”,\n“Polynesian”,\n“Portuguese”,\n“Russian”,\n“Salads”,\n“Sandwiches”,\n“Sandwiches/Salads/Mixed Buffet”,\n“Scandinavian”,\n“Seafood”,\n“Soul Food”,\n“Soups”,\n“Soups & Sandwiches”,\n“Southwestern”,\n“Spanish”,\n“Steak”,\n“Tapas”,\n“Tex-Mex”,\n“Thai”,\n“Turkish”,\n“Vegetarian”,\n“Vietnamese/Cambodian/Malaysia”\n]\nresult (JavaScript):\nEJSON.parse(‘[“Afghan”,“African”,“American”,“Armenian”,“Asian”,“Australian”,“Bagels/Pretzels”,“Bakery”,“Bangladeshi”,“Barbecue”,“Bottled beverages, including water, sodas, juices, etc.”,“Brazilian”,“Café/Coffee/Tea”,“Café/Coffee/Tea”,“Cajun”,“Californian”,“Caribbean”,“Chicken”,“Chilean”,“Chinese”,“Chinese/Cuban”,“Chinese/Japanese”,“Continental”,“Creole”,“Creole/Cajun”,“Czech”,“Delicatessen”,“Donuts”,“Eastern European”,“Egyptian”,“English”,“Ethiopian”,“Filipino”,“French”,“Fruits/Vegetables”,“German”,“Greek”,“Hamburgers”,“Hawaiian”,“Hotdogs”,“Hotdogs/Pretzels”,“Ice Cream, Gelato, Yogurt, Ices”,“Indian”,“Indonesian”,“Iranian”,“Irish”,“Italian”,“Japanese”,“Jewish/Kosher”,“Juice, Smoothies, Fruit Salads”,“Korean”,“Latin (Cuban, Dominican, Puerto Rican, South & Central American)”,“Mediterranean”,“Mexican”,“Middle Eastern”,“Moroccan”,“Not Listed/Not Applicable”,“Nuts/Confectionary”,“Other”,“Pakistani”,“Pancakes/Waffles”,“Peruvian”,“Pizza”,“Pizza/Italian”,“Polish”,“Polynesian”,“Portuguese”,“Russian”,“Salads”,“Sandwiches”,“Sandwiches/Salads/Mixed Buffet”,“Scandinavian”,“Seafood”,“Soul Food”,“Soups”,“Soups & 
Sandwiches”,“Southwestern”,“Spanish”,“Steak”,“Tapas”,“Tex-Mex”,“Thai”,“Turkish”,“Vegetarian”,“Vietnamese/Cambodian/Malaysia”]’)", "username": "Paul_Wilkinson" }, { "code": "", "text": "So I did figure this out. “Respond With Result” was set to false by default. After setting it to true I started getting 200’s and responses again.", "username": "Paul_Wilkinson" } ]
204 Errors from Endpoints regardless of function
2022-07-17T07:58:02.785Z
204 Errors from Endpoints regardless of function
1,720
null
[ "app-services-user-auth", "containers" ]
[ { "code": " mongodb:\n image: mongo:5.0\n container_name: mongo\n environment:\n MONGO_INITDB_DATABASE: test\n ports:\n - 27018:27017\n volumes:\n - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo-js:ro\n - './volumes/mongo:/data/db'\ndb.createUser({\n user: 'admin',\n password: 'admin',\n roles: [\n {\n role: 'readWrite',\n db: 'admin',\n },\n ],\n});\n", "text": "this is the docker compose fileand this is the init scriptI tried opening the container with auth but it failed mesrably.it works without auth… coupld someone point out where did I go wrong with this ?", "username": "Ahmed_Elbarqy" }, { "code": "command: [--auth]\nenvironment:\n - MONGO_INITDB_ROOT_USERNAME=userAdmin\n - MONGO_INITDB_ROOT_PASSWORD=userPassword\n", "text": "Hi @Ahmed_Elbarqy and welcome to the community!!In order to enable authentication using the docker-compose.yaml, you would need to perform the following procedure:docker-compose up -ddocker-compose exec mongodb /bin/shmongosh -u userAdmin -p userPassword --authenticationDatabase adminOnce you are logged in using the shell, you can create the user on the required database using the required roles and authentication.For more information, there are further documentations available regarding the configuration in DockerIf you are still facing issue, please provide the complete step by step process to reproduce the issue with the received error response.Please note that the official MongoDB image is maintained by docker and not MongoDB.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Docker compose authentication doesn't register
2022-07-14T14:14:17.718Z
Docker compose authentication doesn’t register
20,715
null
[ "aggregation" ]
[ { "code": "", "text": "So, on our internal network, I have added my MongoDB Server as server IP:27017 in forwarding destinations in IBM QRadar hosted on a remote server. So basically, IBM QRadar will be forwarding a JSON payload (array of JSON objects) over TCP to my MongoDB Server on server IP:27017I need to write this data into a MongoDB collection. What is the best way to achieve this? This needs to be done in the way I have mentioned. I don’t want to manually export JSON files from QRadar and then import them into MongoDB.Currently, I’ve added the remote server IP in the bind-IP list in mongod.cfg file so that MongoDB listens to connections coming from remote clients. I can currently see -Blockquote 2110 61.498272 IP1 IP2 TCP 54 27017 → 53964 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0where IP2 is the MongoDB Server and IP1 is the QRadar Console.I need to use MongoDB to dump raw data logs generated by QRadar and then aggregate it based on requirements. I have mistakenly copied the Wireshark log without PSH.", "username": "Vikram_Tatke" }, { "code": "", "text": "Hi @Vikram_TatkeI’m not an expert on QRadar and their capabilities, but I think the most straightforward way to do this is to put an API layer between QRadar and the MongoDB server. Basically the API layer would capture the incoming JSON object, then turn it into an insert statement that goes into MongoDB.There are many community REST API layer providers, such as restheart for Java, or you can roll your own using any popular REST server (such as Express for Node) in combination with the official MongoDB Driver for the corresponding language. Of course, going this route would require you to maintain said server (uptime, resources, availability, security, etc.).Alternatively if you’re using Atlas, you can use some custom HTTPS endpoint or even the Atlas Data API.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Ingest JSON data into MongoDB received over a port from a remote server
2022-07-19T10:25:12.510Z
Ingest JSON data into MongoDB received over a port from a remote server
1,443
null
[ "node-js", "mongoose-odm" ]
[ { "code": "require('dotenv').config();\nconst mongoose = require(\"mongoose\")\n\nmongoose.connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true });\n\nlet personSchema = new mongoose.Schema({\n name: String,\n age: Number,\n favoriteFoods: [String]\n});\n\n/** 3) Create and Save a Person */\nlet Person = mongoose.model('Person', personSchema);\n\n/** 6) Use `Model.findOne()` */\nlet findOneByFood = function (food, done) {\n Person.findOne({ favoriteFoods: food }, function (err, data) {\n if (err) return console.log(err);\n done(null, data);\n });\n};\n", "text": "Hi, MongoDB community. You folks must be aware of freeCodeCamp. They have a track known as Back End Development and APIs. Inside that track, there is a module for MongoDB and Mongoose.Before I go into describing the problem, I would like to put in some context first.Now let’s come to my problem. There are almost 12 exercises in that module and I’m stuck at 5th one. The problem statement is:Use model.findOne() to Return a Single Matching Document from Your DatabaseThis is the minimal code I am using to pass this test.Somehow I am failing the test for this exercise.If you have passed this exercise, please enlighten me on what’s wrong with my code.", "username": "sntshk" }, { "code": "if (err) return console.log(err);", "text": "if (err) return console.log(err);This might be the issue.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi Santosh,Just wondering if you ever found a solution. I’m having the exact same issue ^^’Cheers,\nAlex", "username": "Alexis_Drai" } ]
freeCodeCamp: Use model.findOne() to Return a Single Document fails
2022-05-02T03:53:13.967Z
freeCodeCamp: Use model.findOne() to Return a Single Document fails
5,448
null
[ "golang" ]
[ { "code": "", "text": "We are using several connection to mongoDB (Mainly Atlas, but sometimes self-hosted) due to our multi-tenant application (actually one per customer). The app runs through Golang official driver. For that, we cache each connection in a “map Like” (internal dev that take care of concurrency problem). We were wondering how can we delete a connection from this cache in case of connection problem for example if the server is down for some reason.Currently the flow is as follow:I was thinking about using the PoolMonitor and/or ServerMonitor which looks promising for our use case. In case of a connection problem (ServerHeartbeatFailed seems to be the more appropriate), we delete the connection from the cache. The next time a user from this tenant will try to access his database, it will try to build the connection (not cached anymore) and fail if the server is still down or success if the server is up.The idea behind it is to delete the invalid connection from our cache without the need to restart the app if one of our customer has problem with his connection.Does the ServerHeartbeatFailed event is the more appropriate ?", "username": "Sebastien_Tachier" }, { "code": "ClientClientClient", "text": "@Sebastien_Tachier thanks for the question! A Client will manage its own pool of connections and create new connections when there is any issue connecting to the database, so if everything about the connection parameters are the same, there should be no need to drop and re-create a Client.", "username": "Matt_Dale" } ]
Best approach for deleting a cached connection
2022-07-18T07:57:31.230Z
Best approach for deleting a cached connection
2,318
null
[ "queries" ]
[ { "code": "", "text": "Hello guys, I’m new with mongodb and I’m stucked with a stupid (for you) problem.\nI try to explain the problem.\nI have a personal_feed document with user_id and an array of feed. The feed document has a name and an array of profile_id.\nHow I can find the personal_feed of a specific user (by user_id), getting a specific feed (by name) and return the first 10 element in profile_id?personal_feed:{\nuser_id: 1,\nfeed:[\n{\nname: “feed1”,\nprofiles_id: […]\n},\n{\nname: “feed2”,\nprofiles_id: […]\n},\n…\n]\n}For instance, I want retrieve the first 10 profiles_id of the feed with name “feed2” of the personal_feed with user_id=1Thanks for every hint", "username": "Mauro_Cerone" }, { "code": "", "text": "You want to study MongoDB aggregation. MongoDB Courses and Trainings | MongoDB University\nReally Aggregation is the answer to life, the universe, and everything in MongoDB outside of server admin.", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to retrieve portion of collection?
2022-07-19T16:35:16.474Z
How to retrieve portion of collection?
746
null
[ "golang" ]
[ { "code": "", "text": "case 1\nif err.Error() == “value is nil”\ncase 2\nif err == mongo.ErrNoDocuments\nMethods like these don’t seem right because you can’t predict what kind of errors will occur. How do I check the error?", "username": "kindcode" }, { "code": "", "text": "You can use the package variables defined by the mongo module.\nPackage mongo provides a MongoDB Driver API for Go.", "username": "Sebastien_Tachier" } ]
How can I check the error of Mongodb in Golang?
2022-06-23T01:57:55.635Z
How can I check the error of Mongodb in Golang?
4,764
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "sponsoredsponsored: true[...]\n\nconst matchFilter: { approved: boolean, type?: QueryOptions, format?: QueryOptions, difficulty?: QueryOptions, language?: QueryOptions }\n = { approved: true }\n\nif (typeFilter) matchFilter.type = { $in: typeFilter };\nif (formatFilter) matchFilter.format = { $in: [...formatFilter, 'videoandtext'] };\nif (difficultyFilter) matchFilter.difficulty = { $in: difficultyFilter };\nif (languageFilter) matchFilter.language = { $in: languageFilter };\n\nconst aggregationResult = await Resource.aggregate()\n .search({\n compound: {\n must: [\n [...]\n ],\n should: [\n [...]\n ]\n }\n })\n [...]\n .sort(\n {\n sponsored: -1,\n _id: 1\n }\n )\n .facet({\n results: [\n { $match: matchFilter },\n { $skip: (page - 1) * pageSize },\n { $limit: pageSize },\n ],\n totalResultCount: [\n { $match: matchFilter },\n { $group: { _id: null, count: { $sum: 1 } } }\n ],\n [...]\n })\n .exec();\n\n[...]\n", "text": "My app can search through a database of resources using MongoDB’s aggregation pipeline. Some of these resources are marked as sponsored via a property.I want to show these sponsored entries first (already done) but I want to show only one of them.What I have now is this:What I want:Below is my aggregation code (with Mongoose syntax). How can I skip elements with sponsored: true except for the first one?", "username": "Florian_Walther" }, { "code": "addFieldsselectedForSponsoredSlotsponsored: true selectedForSponsoredSlot", "text": "I just realized that I don’t actually want to filter out these other sponsored elements. Instead, I would like to treat them like organic results, meaning that I don’t want to sort them to the top. Is it possible to only move the first sponsored element to the top of the results?I assume the solution has to do with addFields, i.e. \"add field selectedForSponsoredSlot but only for the first element with sponsored: true. Then sort by selectedForSponsoredSlot.", "username": "Florian_Walther" } ]
Apply match filter but ignore first element
2022-07-19T09:49:01.717Z
Apply match filter but ignore first element
1,127
null
[ "realm-web" ]
[ { "code": "import * as Realm from 'realm-web'", "text": "New here & to MongoDB in general, so not sure whether I should be replying to this post or creating a new one, but I have a similar problem.\nI followed the docs to set up the Web SDK, and when importing realm-web into my Svelte project using:\nimport * as Realm from 'realm-web'\nI get the following error:\nUncaught TypeError: Class extends value # is not a constructor or null\noriginating from: bundle.dom.es.js:1664:26", "username": "Marcio_Rodrigues1" }, { "code": "", "text": "Same error when using the CDN … originating from bundle.iife.js:9160:30 this time", "username": "Marcio_Rodrigues1" }, { "code": "", "text": "Not sure exactly why, but I was trying to integrate into an existing Svelte app that was still using Rollup; after switching over to Vite the error no longer appears.", "username": "Marcio_Rodrigues1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Web Error: Class extends value # is not a constructor or null in Svelte App
2022-07-14T15:30:02.144Z
Realm Web Error: Class extends value # is not a constructor or null in Svelte App
5,859
null
[]
[ { "code": "{\n name: \"Joe\",\n events: {\n \"51a9436e-b038-...\": {\n startTime: ...,\n endTime: ...,\n ...\n },\n \"4d3ab18f-ad40-...\": {\n startTime: ...,\n endTime: ...,\n ...\n },\n ... (many more events, probably 100-1000)\n }\n}\n", "text": "Hello. I am wondering what document structure I should use to solve the problem below:The data that I want to store maps a user ID to a User account. Each user account can have many “events”, and those events are stored by ID under each account (not globally). I would like to be able to quickly lookup an event by UUID without enumerating each one on every request.Here is an example JSON User account:How best can I map this to MongoDB, or is there an alternative database that I should use?Any help would be greatly appreciated!", "username": "Nate_Levin" }, { "code": "{\n _id: <ObjectId>,\n name: <string>,\n events: [\n { id: <ObjectId>, startTime: <Date>, endTime: <Date> , ... },\n { id: <ObjectId>, startTime: <Date>, endTime: <Date>, ... },\n //...\n]\nObjectIdinsertOnefindOneupdateOnedeleteOneidmongosh// Create a user with two events\ndb.users.insertOne ({ name: \"Tom\", events: [ { id: 1, desc: \"first event\" }, { id: 2, desc: \"second event\" } ] })\n\n// Query an event by its id, 2 for the user \"Tom\"\ndb.users.findOne( { name: \"Tom\", \"events.id\": 2 }, { \"events.$\": 1 )\nevents", "text": "Hello @Nate_Levin, welcome to the MongoDB Community forum!The user’s events can be stored as an array of events. Each event is identified by a unique id field. For example,In MongoDB, most commonly, ObjectId is used to represent a unique identifier (like, the user id or event id). You can also use the UUID or any other types.You can perform CRUD operations on the user as well as the user event data. The typical collection methods are the insertOne, findOne, updateOne and the deleteOne. There are specific operators to work with the array fields within these operations. For example, to query a specific event by its id, you can try something like this from mongosh:To query by array fields efficiently, you can index the array field. Indexes on array fields are referred as Multikey Indexes.MongoDB provides specific operators to work with array type fields, in this case the events. There are Query Operators, Projection Operators, Update Operators and Aggregation Operators which can be used with array fields. Refer the following links for usage and examples:", "username": "Prasad_Saya" } ]
Data Modeling — User -> Many UUIDs -> JSON
2022-07-19T02:31:55.828Z
Data Modeling — User -> Many UUIDs -> JSON
1,052
null
[ "queries", "python" ]
[ { "code": "", "text": "Hello! Can you help me with problem?\nI use connection like this:\nuri = ‘mongodb://myUsername:myPassword@hostname/?tls=True’\nconnect = MongoClient(uri)\ndb = connect.сallсenter\ncollection = db.JournalAnd next after any request i have an error.\nFor example:collection = db.Journal\nc = collection.find()\nfor r in c:\nprint(r)An error:\nOperationFailure: not authorized on сallсenter to execute command…At the same time I can connet to this database using MongoDBCompass. So connection data is correct. Why I cant connect using python?", "username": "Eugene" }, { "code": "", "text": "The error message tells you that thus user is not authorised to do read operations on the database callcenter. Check the users roles and privileges.", "username": "steevej" }, { "code": "", "text": "Thank you. But I can connect to database using MongoDBCompass. I use the same username and password. So I can read data\nJust now I was able to connect to the database using clicksense. But i still can’t connect using python", "username": "Eugene" }, { "code": "", "text": "Post screenshots of doing so.Same user with same password on same database of the same server should be able to do the same thing.", "username": "steevej" }, { "code": "", "text": "Here it is\nScreenshot1048×1670 83 KB\n", "username": "Eugene" }, { "code": "", "text": "This is a but different from your original post. You are not connecting to a standalone instance like hinted by your original URI. You are connecting to a replica set. The issue might simply be because you are not specifying the replica set name in python. You seem to do it with Compass.Check syntax at https://www.mongodb.com/docs/manual/reference/connection-string/", "username": "steevej" }, { "code": "", "text": "If your credentials are correct, then try restarting Jupyter’s python kernel. It is possible you are forgetting to execute the connection cell after you edit URI, or variables just got stuck somehow and won’t change without a restart.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I have added replicaSet but nothing changed.#Connection\nuri = ‘mongodb://callcenter:password@example6:27017,example7:27017,example8:27017/callcenter?readPreference=secondary&authSource=admin&replicaSet=rs0’\nconnect = MongoClient(uri)\ndb = connect.сallсenterI still have an error: not authorized on сallсenter to execute commandMaybe you know another reasons why i cant connect?", "username": "Eugene" }, { "code": "", "text": "Thank you. But it didnt help", "username": "Eugene" }, { "code": "authSourcecallcenterauthSource=callcenteruse admin\ndb.getUser(\"callcenter\")\nuse callcenter\ndb.getUser(\"callcenter\")\n", "text": "here is the thing: you are either using the wrong authSource, connecting to wrong database or this “callcenter” user does not have access to callcenter database.first, make sure you are not having a typo in your connection string.use authSource=callcenter and try again.do you have admin right on that database? check the output of these commands to see what access rights this “callcenter” user has", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I checked the connection string. It`s correct. With this conection string I can connect to db using QlikSense. And I connected to db using MongoDBCompass using this stringJust discovered that if I specify only one server, not three, then everything works fine. So, the problem is that I wrote three servers. How can I specify three servers in one connection string? 
Maybe you know some features?", "username": "Eugene" }, { "code": "", "text": "if there is no typo, your above URI string should have no problem connecting to a replica set.The error you are getting is about the auth. It is possible your user is not set up correctly and one or more replica members do not know how to deal with it. or maybe the replica set is not set up correctly.You said you can connect to one of them if you use single hostname. Can you try to connect each of the members separately?", "username": "Yilmaz_Durmaz" }, { "code": "use admin\ndb.getUser(\"callcenter\")\nuse callcenter\ndb.getUser(\"callcenter\")\n", "text": "Try removingreadPreference=secondaryDo the following on each host individually:One more thing is to ensure you are connecting to the same machines. You seem to be running off a 10.* network. I have seen many times that the same 10.* IPs are not referring to the same machines. When connection from a test gateway you reach test machines and from the prod. gateway you reach prod. machine with the exact same IPs.BecauseSame user with same password on same database of the same server should be able to do the same thing.In principal the same URI refers to the same user, same password, same database and same server/cluster. With all the redacting you are doing with the URI you share it is very hard to spot what differs from one application to the other. Unless there is a major bug in one of the driver where it sends the wrong credentials, you are the same user in the server with the same provileges.", "username": "steevej" } ]
Cant connect to Mongodb using python
2022-07-15T11:38:23.635Z
Cant connect to Mongodb using python
5,344
null
[ "node-js", "atlas-cluster" ]
[ { "code": "MongoServerSelectionError: Server selection timed out after 30000 ms\n at Timeout._onTimeout (<path>\\node_modules\\mongodb\\src\\sdam\\topology.ts:570:30)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n '<username>-shard-00-00.xocdc.mongodb.net:27017' => [ServerDescription],\n '<username>-shard-00-02.xocdc.mongodb.net:27017' => [ServerDescription],\n '<username>-shard-00-01.xocdc.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: '<set-name>',\n logicalSessionTimeoutMinutes: undefined\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n", "text": "Hello,I’m using MongoDB Atlas since a few days in my Node.js API with mongodb version 4.8.Till now I haven’t encountered any issues connection to the mongo server, but all of a sudden I keep getting this error:I’m sure my connections are setup correctly. I’ve whitelisted all IPs, so the problem can’t be there. Does anyone have a clue what causes this error?", "username": "Inbar_Azulay" }, { "code": "", "text": "Are the error messages really includes <username> and <set-name> like you shared or is this some redaction you made to hide some details?If that is the case, then it looks funny because you should get real values. It looks like some configuration issues. It might be some code errors that manipulate the URI to inject username and password.", "username": "steevej" } ]
MongoServerSelectionError: Server selection timed out after 30000 ms
2022-07-17T13:35:19.610Z
MongoServerSelectionError: Server selection timed out after 30000 ms
2,545
null
[]
[ { "code": "", "text": "Hello,\nI created my first iOS app with a realm database\nThe database is used offline on the Device\nThe app runs fine with the xcode simulatorNow I want to test the app on my ipad (plugged on the Mac)\nThe app is launching but the database is not reachable\nThe console of Xcode says :\nRealDB is located: file:///var/mobile/Containers/Data/Application/FD338F70-94BF-4592-AF9A-A7429DF1/Documents/default.realm\nShould I transfer the database manually ? in which folder ?\nThanks a lot for your answers !\nMat", "username": "Mat" }, { "code": "", "text": "Before going too far into an answer, how did the database ‘get into the simulator’?e.g. Is this a pre-packaged (bundled) database or is it created as the user enters data?", "username": "Jay" }, { "code": "", "text": "Thanks for your answer\nI created the realm database by myself\nThe user isn’t allowed to modify it.\nFor each device I want to simulate in Xcode, I need to copy the dataBase in the respective folder.\nXcode give me the way of the folder like this:\nRealmDB is located: file:///Users/Platypus/Library/Developer/CoreSimulator/Devices/0850306E-00AC-4EEA-AB14-C1ACA0/data/Containers/Data/Application/3F14D0BF-C230-4C56-8EAF-4DC36E6/Documents/default.realm\nBut when I want to install the app on my iPad with Xcode simulator, the console give me this way:\nRealmDB is located: file:///var/mobile/Containers/Data/Application/FD338F70-94BF-4592-AF9A-A7429DF1/Documents/default.realm\nI don’t know where the folder “var” is and I don’t know where I have to copy the dataBase.\nI don’t even know if this folder is on the mac or on the iPad! ", "username": "Mat" }, { "code": "", "text": "Perhaps my question was not clear or detailed enough.Are you BUNDLING the Realm Database with your app? So for example it’s distributed when the app is?…or…Does your code BUILD the Realm database when it’s first run?…or…Something else? e.g. how did the Realm file and data get to that location to start with?The answer will be affected by the above.", "username": "Jay" }, { "code": "", "text": "ok sorry,\nI have populated the data base and it will be bundled and distributed with the app\nAfter this the database won’t be modified by the user", "username": "Mat" }, { "code": "", "text": "If you’re Bundling your Realm database with the app, then it’s not stored on disk as a separate file.It’s a read only database that exists within the app bundle. In that case it wouldn’t need to be moved or copied as your code reads it directly from the bundle.", "username": "Jay" }, { "code": "", "text": "ok thank you !\nDo you know a tutorial for bundling a realm DataBase in a swiftUI project ?", "username": "Mat" }, { "code": "", "text": "I do! There are some right on the Swift SDK Tutorial site. There’s a quick startand then a SwiftUI Guide", "username": "Jay" }, { "code": "", "text": "Hello Jay !\nI can’t find an easy way to load the Realm DataBase from the bundle\nCould you help me ?\n(I’m new in app coding )Thank you again", "username": "Mat" }, { "code": "", "text": "ok I just tryed:import SwiftUI\nimport RealmSwiftlet realmUrl = Bundle.main.url(forResource: “default”, withExtension: “.realm”)\nlet realm = try ! Realm(fileURL: realmUrl!)It seems to work…", "username": "Mat" }, { "code": "let config = Realm.Configuration(\n fileURL: Bundle.main.url(forResource: \"BundledRealm\", withExtension: \"realm\"),\n readOnly: true) //bundled Realms are read-only\n\nlet realm = try! 
Realm(configuration: config)\n\nlet results = realm.objects(DogClass.self).where { $0.name == \"Spot\" }\n", "text": "Great! That looks really close. You may want to remove the . from the realm extension as it’s extraneous.Here’s how we do it for local only Realms", "username": "Jay" }, { "code": "let config = Realm.Configuration(\n fileURL: Bundle.main.url(forResource: \"BundledRealm\", withExtension: \"realm\"),\n readOnly: true) //bundled Realms are read-only\n\nlet realm = try! Realm(configuration: config)\n", "text": "Thanks to you, it’s now working on my iPad!\nThank you very much!!Kind regards", "username": "Mat" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm dataBase transfert on iOS for test
2022-07-12T09:26:37.329Z
Realm dataBase transfert on iOS for test
1,581
https://www.mongodb.com/…4_2_1024x512.png
[ "aggregation", "dot-net", "production" ]
[ { "code": "IMongoQueryable.AppendStage()$topN$groupEstimatedDocumentCountcountcountEstimatedDocumentCountEstimatedDocumentCountstrict: falseServerApi", "text": "This is the general availability release for the 2.17.0 version of the driver.The main new features in 2.17.0 include:EstimatedDocumentCount is implemented using the count server command. Due to an oversight in versions 5.0.0-5.0.8 of MongoDB, the count command, which EstimatedDocumentCount uses in its implementation, was not included in v1 of the Stable API. If you are using the Stable API with EstimatedDocumentCount, you must upgrade to server version 5.0.9+ or set strict: false when configuring ServerApi to avoid encountering errors.For more information about the Stable API see:https://mongodb.github.io/mongo-csharp-driver/2.17/reference/driver/stable_api/The full list of JIRA issues resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.17.0%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
.NET Driver 2.17.0 Released
2022-07-18T21:58:12.510Z
.NET Driver 2.17.0 Released
1,986
null
[]
[ { "code": "", "text": "Not sure if this is possible, I hope someone from the team is reading this as a feature request.I would like to create different data api access keys for our microservice and scope each api key to a specific collection, so key A can only read from collection1, while key B can read/write from collection2. Is this possible? I like the simplicity of the regular mongodb user access permissions, but they are not available for the data api.", "username": "Florian_Bischoff" }, { "code": "{\n \"%%user.id\": *<ID ASSOCIATED WITH API KEY>*\n}\n", "text": "Hey Florian -Yes you can actually do this today, but it will require some extra configuration. To set this up, go into ‘Advanced Settings’ in your Data API app and go to ‘Rules’ in the sidenav\n\nimage2359×101 16.9 KB\ntoday, you should have a set of ‘Default Rules’ that are set to Read & Write = True\nimage1405×119 9.48 KB\nYou can actually delete this configuration from the menu, and click into each collection and set up rules separately. For each one, you can set a different ''Apply When\" for a different api key.i.e. you can click into Collection A, set up read only rules, and then set the apply when to beThe ID can be found in the API key settings in the Data API page\n\nimage2033×303 21.6 KB\nmore examples for apply when expressions are here", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data API: Restrict access to read/write to collection
2022-07-16T06:58:33.930Z
Data API: Restrict access to read/write to collection
2,516
null
[ "ruby", "mongoid-odm" ]
[ { "code": "", "text": "Hey.\nI’ve made a webcrawler using a Ruby on Rails API and MongoDB/Mongoid that returns JSON. Is there a Mongo module I can use for caching the data returned as JSON? How would I implement it?", "username": "Victor_Lacerda" }, { "code": "", "text": "Hi @Victor_Lacerda,\nThe Mongoid Query Cache may provide the functionality you’re looking for. When enabled, the query cache saves the results of previously executed find and aggregation queries and reuses them in the future instead of performing the queries again, thus increasing application performance and reducing database load.", "username": "alexbevi" } ]
Caching data in Ruby on Rails using Mongoid
2021-12-01T16:21:55.454Z
Caching data in Ruby on Rails using Mongoid
4,445
https://www.mongodb.com/…4_2_1024x512.png
[ "ruby" ]
[ { "code": "", "text": "Hello,I noticed during failover, the ping command doesn’t return immediately as indicated by the documentation:Typically, it will take 60s before it finally fails with NoServerAvailable: No primary server is available in cluster.I was looking at the source code to determine if there was a way to set a timeout:\ndb.command(:ping => 1)However I didn’t find anything. Any suggestions or guidance on how to force it to timeout sooner?Thanks!", "username": "Joanne_Polsky" }, { "code": "pingserver_selection_timeoutrequire 'bundler/inline'\n\ngemfile do\n source 'https://rubygems.org'\n gem 'mongo'\nend\n\nclient = Mongo::Client.new('mongodb://missing:27017/test', server_selection_timeout: 5)\nclient.command(ping: 1)\n", "text": "Hi @Joanne_Polsky,The documentation for the ping command only indicates that the command will return immediately even if the server is write-locked. This is not the same thing as a server being unavailable; which is what would happen during a failover.However I didn’t find anything. Any suggestions or guidance on how to force it to timeout sooner?What I believe you’re looking to adjust is the server_selection_timeout, which defaults to 30s.For example:", "username": "alexbevi" } ]
Ruby-mongo-driver ping command does not return immediately when NoServerAvailable
2022-03-09T22:19:49.601Z
Ruby-mongo-driver ping command does not return immediately when NoServerAvailable
3,071
null
[ "aggregation", "replication", "sharding", "database-tools", "backup" ]
[ { "code": "", "text": "I have a replicaset with 3 nodes(Primary-Secondary-Secondary) on version 4.2.21. The storage size is around 2TB. The data is expected to grow further in the coming months. To cater to that we are planning to move to a sharded setup.\nI am considering using zstd compression but since the collection is already created I am unable to update the block compressor.\nAny ideas around how can I update the compressor to zstd ?One possible approach is to:But this does not seem achievable for 2TBs of data. Is there any other way of achieving this ?\nCan mongodump and restore help here ? I am doubtful if with dump and restore we can change the compressor because it also copies the collection metadata.", "username": "Ishrat_Jahan" }, { "code": "", "text": "Hi @Ishrat_Jahan ,Usually we recommend to setup on the instance level and initial sync each replica set member one by one with zstd setup on.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny Instead of doing step by step initial sync on each of the instances, can we setup a new replicaset and use mongomirror to migrate the data?", "username": "Ishrat_Jahan" }, { "code": "", "text": "@Ishrat_Jahan Potentially you can if the target is an Atlas cluster deployment.But it requires your application to change your connection string and restart it most probably, so doing this via a resync is the recommended way in this scenario.", "username": "Pavel_Duchovny" }, { "code": "", "text": "Would the mongo mirror work if the both the source and target are not Atlas cluster deployment ?", "username": "Ishrat_Jahan" }, { "code": "", "text": "It might work BUT it’s not supported and was not design for non atlas migrations…So it’s at your own risk…", "username": "Pavel_Duchovny" } ]
Mongo Update compressor for an existing collection
2022-07-13T08:40:59.393Z
Mongo Update compressor for an existing collection
2,286
null
[ "indexes" ]
[ { "code": "", "text": "Is there a way to view the content of an index? I would like to see if an entry for a document is created or not created into an index.", "username": "Bluetoba" }, { "code": "explain()explain(\"executionStats)COLLSCANIXSCAN> use test\nswitched to db test\n> db.test.drop()\nfalse\n> db.test.insertOne({\"hello\" : \"world\"})\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"5f5f38cee83a779d9f5a4410\")\n}\n> db.test.find({\"hello\" : \"world\"})\n{ \"_id\" : ObjectId(\"5f5f38cee83a779d9f5a4410\"), \"hello\" : \"world\" }\n> db.test.find({\"hello\" : \"world\"}).explain(\"executionStats\")\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"test.test\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"hello\" : {\n\t\t\t\t\"$eq\" : \"world\"\n\t\t\t}\n\t\t},\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"COLLSCAN\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"hello\" : {\n\t\t\t\t\t\"$eq\" : \"world\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"direction\" : \"forward\"\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 1,\n\t\t\"executionTimeMillis\" : 0,\n\t\t\"totalKeysExamined\" : 0,\n\t\t\"totalDocsExamined\" : 1,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"COLLSCAN\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"hello\" : {\n\t\t\t\t\t\"$eq\" : \"world\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : 1,\n\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\"works\" : 3,\n\t\t\t\"advanced\" : 1,\n\t\t\t\"needTime\" : 1,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 0,\n\t\t\t\"restoreState\" : 0,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"direction\" : \"forward\",\n\t\t\t\"docsExamined\" : 1\n\t\t}\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"JD10Gen.local\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.2.7\",\n\t\t\"gitVersion\" : \"51d9fe12b5d19720e72dcd7db0f2f17dd9a19212\"\n\t},\n\t\"ok\" : 1\n}\n> db.test.createIndex({\"hello\" : 1 })\n{\n\t\"createdCollectionAutomatically\" : false,\n\t\"numIndexesBefore\" : 1,\n\t\"numIndexesAfter\" : 2,\n\t\"ok\" : 1\n}\n> db.test.find({\"hello\" : \"world\"}).explain(\"executionStats\")\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"test.test\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"hello\" : {\n\t\t\t\t\"$eq\" : \"world\"\n\t\t\t}\n\t\t},\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"hello\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"hello_1\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"hello\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"hello\" : [\n\t\t\t\t\t\t\"[\\\"world\\\", \\\"world\\\"]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 1,\n\t\t\"executionTimeMillis\" : 0,\n\t\t\"totalKeysExamined\" : 1,\n\t\t\"totalDocsExamined\" : 1,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"nReturned\" : 1,\n\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\"works\" : 2,\n\t\t\t\"advanced\" : 1,\n\t\t\t\"needTime\" : 0,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 0,\n\t\t\t\"restoreState\" : 0,\n\t\t\t\"isEOF\" : 
1,\n\t\t\t\"docsExamined\" : 1,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"nReturned\" : 1,\n\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\"works\" : 2,\n\t\t\t\t\"advanced\" : 1,\n\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"hello\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"hello_1\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"hello\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"hello\" : [\n\t\t\t\t\t\t\"[\\\"world\\\", \\\"world\\\"]\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 1,\n\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\"dupsDropped\" : 0\n\t\t\t}\n\t\t}\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"JD10Gen.local\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.2.7\",\n\t\t\"gitVersion\" : \"51d9fe12b5d19720e72dcd7db0f2f17dd9a19212\"\n\t},\n\t\"ok\" : 1\n}\n>\n", "text": "Do A query using explain() it will tell you if the query is using an index. In the example below we create a document. Check for its existence. Then run explain(\"executionStats) on the same query. You will see that that executiion stage and winning stage are COLLSCAN. Indicating a collection scan. MongoDB does a collection scan when there is no index.Now we create an index and rerun the query. Now we will see the stage is IXSCAN. This represents an index scan, i.e. the query is using the index. IXSCAN indicates the document can be found in the index.", "username": "Joe_Drumgoole" }, { "code": "", "text": "Thank you for the explanation.", "username": "Bluetoba" }, { "code": "", "text": "Hi @Joe_Drumgoole ,\nIs it possible to view the contents of the index in general? I want to explore the raw data stored in indexes to get a better idea of the backend. I can see some encrypted wiredtiger files with the name index-xxxxx.wt in the data path. Is it possible to access them through mongo shell or parse the files in some human readable format ?", "username": "Manas_Joshi1" }, { "code": "", "text": "The index files are stored in an internal format that is not parseable by the driver unfortunately.", "username": "Joe_Drumgoole" }, { "code": "", "text": "Hi friends I have been working on this topic, and found cursor.returnKey() makes exactly what you need.", "username": "santiago_quevedo" }, { "code": "", "text": "But in aggregation it looks like that it was deprecated, and you can do something like that with $meta, but only for debugging purposes, not for logic ones.\nI think that the possibility of returning only the index data, will be a great idea for index that have subdocuments, making incredible faster the querys in this case.\nI am suggesting to have something like $returnKey available in the pipelines.", "username": "santiago_quevedo" } ]
Viewing content of an index
2020-09-13T11:03:30.094Z
Viewing content of an index
3,147
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 5.0.10-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.9. The next stable release 5.0.10 will be a recommended upgrade for all 5.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.10-rc0 is Released
2022-07-18T15:58:15.219Z
MongoDB 5.0.10-rc0 is Released
2,272
null
[]
[ { "code": "", "text": "Hi there,I have the same problem running on flutter in release mode. (realm: ^0.3.1+beta)\nI/flutter (10201): [ERROR] Realm: Failed to resolve ‘ws.ap-southeast-2.aws.realm.mongodb.com:443’: Host\nnot found (authoritative)\nI/flutter (10201): [ERROR] Realm: SyncError message: Host not found (authoritative) category: SyncErrorWeirdly, in debug it can be resolved:\nI/flutter ( 9398): [INFO] Realm: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\nI/flutter ( 9398): [INFO] Realm: Connected to endpoint ‘13.54.209.90:443’ (from ‘192.168.x.x:52892’)Thanks", "username": "Allan_Clempe" }, { "code": "<uses-permission android:name=\"android.permission.INTERNET\"/>", "text": "it turned out to be a internet permission, nothing to do with realm libraryjust adding the following permission in the androidmanisfest.xml file for release solved my problem\n<uses-permission android:name=\"android.permission.INTERNET\"/>\nCheers", "username": "Allan_Clempe" }, { "code": "", "text": "Hello @Allan_Clempe ,Welcome to the Community Glad to know, this got resolved. I have moved this to a separate topic.Please do create a new topic if any tag or category is different. It helps in the separation of content and issues I will close this post now.Happy Coding Cheers, ", "username": "henna.s" }, { "code": "", "text": "", "username": "henna.s" } ]
I/flutter: [ERROR] Realm: Failed to resolve': Host not found (authoritative)
2022-07-11T05:11:18.425Z
I/flutter: [ERROR] Realm: Failed to resolve’: Host not found (authoritative)
2,492
null
[ "aggregation", "java", "spring-data-odm" ]
[ { "code": "targetCollection@Aggregationpublic interface PersonRepository extends MongoRepository<Person, String> {\n\n\n@Meta(allowDiskUse = true)\n@Aggregation(pipeline = {\"{$group: { .... } }\", \"{$out: 'targetCollection'}\" })\nStream<Person> deduplicateToTargetCollection();\n\n}\ndeduplicateToTargetCollection()StreamStream", "text": "I want to execute a mongodb aggregation pipeline with Spring Data Mongodb. The last step in that pipeline is writing out all documents to the collection targetCollection in the same database.I implemented the pipeline with an @Aggregation annotation in my MongoRepository:As the Collection has a significant number of Documents I don’t want the Documents of the new Collection I am writing to with this aggregation pipeline to be transferred from mongodb to the Java application.However, putting a void return type on the method signature of deduplicateToTargetCollection() does not work. So, the best I could come up with was setting the return type to Stream .Is the assumption correct that if I just close that Stream and don’t query it for its elements there isn’t any transfer of the Documents in the newly created Collection to the Spring Data Mongodb Java application that executes the aggregation? Does anyone know whether this is documented anywhere to confirm?Much appreciate any help here.", "username": "uli" }, { "code": "", "text": "There is no mechanism for this yet. The issue is being tracked in the Spring Data MongoDB Github project here: Kindly add skipOutput functionality to @Meta for use with @Aggregation · Issue #4088 · spring-projects/spring-data-mongodb · GitHub.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Return value of mongodb aggregation with $out as the last stage in aggregation pipeline
2022-07-18T02:46:17.205Z
Return value of mongodb aggregation with $out as the last stage in aggregation pipeline
2,755
null
[ "golang" ]
[ { "code": "", "text": "If can , Do you have example project , I really need to learn and use it with production. thx in advance", "username": "Re3v3s_N_A" }, { "code": "", "text": "Have you read the Quickstart?", "username": "Jack_Woehr" }, { "code": "// Use interface (cleaner)\ntype DbStore interface {\n\tGetDb() *mongo.Database\n\tGetClient() (*mongo.Client, error)\n\tColl(name string, opts ...*options.CollectionOptions) *mongo.Collection\n\tDisconnect() error\n}\ntype dbStore struct {\n\tdb *mongo.Database\n\tclient *mongo.Client\n}\nfunc NewDbStore(opts *options.ClientOptions, dbName string) (DbStore, error) {\n\tclient, db, err := connect(opts, dbName)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &dbStore{db: db, Client: client}, nil\n}\n\nfunc (md *dbStore) GetDb() *mongo.Database {\n\treturn md.db\n}\n\nfunc (md *dbStore) GetClient() (*mongo.Client, error) {\n\tif md.client != nil {\n\t\treturn md.client, nil\n\t}\n\treturn nil, errors.New(\"client is missing (nil) in Mongo Data Store\")\n}\n\nfunc (md *dbStore) Coll(name string, opts ...*options.CollectionOptions) Collection {\n\treturn NewCollection(md.db, name, opts...)\n}\n\nfunc (md *dbStore) Disconnect() error {\n\terr := md.client.Disconnect(ctx())\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\nfunc connect(opts *options.ClientOptions, dbName string) (*mongo.Client, *mongo.Database, error) {\n\tvar connectOnce sync.Once\n\tvar db *mongo.Database\n\tvar client *mongo.Client\n\tvar err error\n\tconnectOnce.Do(func() {\n\t\tclient, db, err = connectToMongo(opts, dbName)\n\t})\n\n\treturn client, db, err\n}\n\nfunc connectToMongo(opts *options.ClientOptions, dbName string) (*mongo.Client, *mongo.Database, error) {\n\tvar err error\n\tclient, err := mongo.NewClient(opts)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tctx, cancel := context.WithTimeout(context.Background(), defaultConfig.ctxTimeout)\n\tdefer cancel()\n\terr = client.Connect(ctx)\n\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\terr = client.Ping(ctx, readpref.Primary())\n\tif err != nil {\n\t\t_ = client.Disconnect(ctx)\n\t\treturn nil, nil, errors.New(fmt.Sprintf(\"Cannot connect do db. Error: %v\", err))\n\t}\n\tvar db = client.Database(dbName)\n\n\treturn client, db, nil\n}\ntype MyRepo interface{\n Find(ctx context.Context, filters interface{}) []SomeType, error\n}\n\ntype myRepo struct {\n store DbStore \n}\nfunc NewMyRepo(store DbStore) MyRepo {\n return myRepo{ store: store }\n}\n\n\nfunc (r *myRepo) Find(ctx context.Context , filters interface{}) []Document, error{\n // Query\n}\n\n\nvar mystore DbStore \nvar myRepo MyRepo \nfunc main() {\n mystore = NewDbStore(yourOptions, \"yourDbName\")\n myRepo = NewMyRepo(mystore)\n \n}\nmyRepo.Find(ctx, filter)\n", "text": "Sample code that I used many times… Feel free to use it. 
It is just a draft, it could be not valid as I don’t have a machine with go installed with me today A simple repoSome file (like main.go or any init file that runs on startup)in any file", "username": "Sebastien_Tachier" }, { "code": "import (\n \"context\"\n \"time\"\n\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n \"go.mongodb.org/mongo-driver/mongo/readpref\"\n)\n\n// Global var\nvar client *mongo.Client\n\nfunc main() {\n \n ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\n defer cancel()\n client, err := mongo.Connect(ctx, options.Client().ApplyURI(\"mongodb://localhost:27017\"))\n \n}\n\nfunc PerformSomethingWithClient() {\n client.XXXXX()\n}\n", "text": "Or if you want something more straightforwardIn some file", "username": "Sebastien_Tachier" } ]
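One caveat about the shorter snippet above: inside main, `client, err := mongo.Connect(...)` declares a new local `client` that shadows the package-level variable, so other functions would still see a nil client. A rough corrected sketch (connection string and collection names are just examples):

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// Package-level client, created once in main and reused everywhere else.
var client *mongo.Client

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	var err error
	// Plain "=" assigns to the package-level client instead of shadowing it with ":=".
	client, err = mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer func() {
		if err := client.Disconnect(context.Background()); err != nil {
			log.Fatal(err)
		}
	}()

	insertExample(ctx)
}

func insertExample(ctx context.Context) {
	// Any function in the package can now reuse the single connected client.
	coll := client.Database("test").Collection("items")
	if _, err := coll.InsertOne(ctx, map[string]string{"hello": "world"}); err != nil {
		log.Fatal(err)
	}
}
```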
How can I connect to MongoDB once and reuse the connection? Go + Mongo
2022-07-05T17:44:32.257Z
How can I connect to MongoDB once and reuse the connection? Go + Mongo
3,036
null
[ "aggregation" ]
[ { "code": "", "text": "here is data model, i have nested array(orderList) where I have objects related to different collections e.g [companies, featuredProductsList, FeaturedBannner etc]. i have to send complete object to fe. how i can lookup from diffrent tables at the same request.here is data model[{\n“_id”: “62541d653d00378feb2fc117”,\n“orderList”: [\n{\n“type”: “featured-companies”,\n“id”: [\n“624c0a89b9298a2a0fa21941”\n]\n},\n{\n“type”: “banner”,\n“id”: [\n“624c0d25b9298a2a0fa21988”\n],\n“size”: 5\n},\n{\n“type”: “featured-products-list”,\n“id”: [\n“624c0b1ab9298a2a0fa21964”\n]\n}\n]\n}]", "username": "Ali_Haider1" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and publish sample documents from all input collections and sample of documents of the expected result. Make sure it is JSON text that we can easily cut-n-paste into our server instance.Also publish what you have tried and indicate how it fails to provide the desired results. This will save us time by not pursuing a solution you already rejected.", "username": "steevej" } ]
How to fetch data from multiple collections using $lookup in the same request?
2022-05-28T04:49:27.759Z
How to fetch data from multiple collections using $lookup in the same request?
4,204
null
[ "queries" ]
[ { "code": "", "text": "I created an Index like below.db.product.createIndex({“name”:“text”,“description”:“text”}Then I want to perform the find operation in the created Index like thisdb.product.find({$text: {$search: “Laptop”}}, {score: {$meta: “textScore”}}).sort({score:{$meta:“textScore”}})I got some results also. But how should I modify the find query to get the field name/names(in my case name and description) also in the results where the find query search actually matched?", "username": "pradeep_t1" }, { "code": "", "text": "Did you find any solutions for this ?", "username": "Ertugrul_Saruhan" }, { "code": "", "text": "No…! Not yet. Still searching for a solution.", "username": "pradeep_t1" } ]
How to find the field name where the search term was found
2021-11-08T07:07:52.422Z
How to find the field name where the search term was found
1,917
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hi,\nI get below error while using the example on mongodb Realm.05-04 16:44:13.056 5525-5551/m.realmjava E/REALM_SYNC: Connection[1]: Failed to resolve ‘ws.ap-south-1.aws.realm.mongodb.com:443’: Host not found (authoritative)As per mongodb webstie mongodbp atlas is supported on ap-south-1A similar error for another AWS site got resolved as per another discussion in this forum by updated settings by MongoDB team. Does this error also needs similar fix?\n-sandeep", "username": "Sandeep_R.K" }, { "code": "", "text": "Hi, I’m encountering the same issue when trying to use Realm Sync offline. As soon as I turn off wifi and reopen the app, I get this error message. I’m really struggling to meet the offline-first capability here. Did you resolve your issue? Could anyone help?", "username": "Laekipia" }, { "code": "", "text": "Hi,\nI am also having this issue with the .NET SDK.\nAny progress with this issue?", "username": "ScribeDev" }, { "code": "", "text": "Same issue with Android SDK, App logs full of errors\nE/REALM_SYNC: Connection[2]: Failed to resolve ‘ws.ap-south-1.aws.realm.mongodb.com:443’: Host not found (authoritative)", "username": "Chandan_Thakur" }, { "code": "", "text": "Hello @Chandan_Thakur,Welcome to the community!There was an issue internally with a deployment but it is resolved. Could you confirm this is working for you now?I look forward to your response.Kind Regards,\nHenna", "username": "henna.s" }, { "code": "", "text": "3 posts were split to a new topic: I/flutter: [ERROR] Realm: Failed to resolve’: Host not found (authoritative)", "username": "henna.s" }, { "code": "", "text": "", "username": "henna.s" } ]
Realm SYNC Host not found
2021-05-08T19:29:04.234Z
Realm SYNC Host not found
5,637
https://www.mongodb.com/…a9828c2f059b.png
[ "atlas-functions" ]
[ { "code": "", "text": "My organization uses an Atlas App Services backend instance for their production web application. How can I apply rate limits / throttles to calls to Functions? For example, apply a hard limit to 200 Function calls or apply a rate limit of 3 Function calls per minute.I understand the question has been asked before here.I’d like to know how to create strategies for implementing these features at this moment? This is a strategy I have come up with so far:\nScreenshot 2022-07-04 150029623×675 20.3 KB\nHowever, is there anything I can do within Atlas App Services to achieve similarly?", "username": "njt" }, { "code": "", "text": "Hello, bumping up my post to get some replies, hopefully.", "username": "njt" }, { "code": "", "text": "Without knowing what is intended for the data, how static it is, how much data is it, budget, and what the queries are doing I’m going to put down an idea that I’ve often used to overcome system with no simple rate limiting. The main reason for this is often users will still find ways around rate limiting so lets explore another possibility but it has some assumptions that could not make this a possible solution. Hopefully it helps.1 Word: CachingInstead of rate limiting if possible think of it in a way of, how can I remove the desire for someone to spam requests. Looking at some options think of something like Cloudflare where the data is served from a host pipped through Cloudflare and let them handle the requests. Services like Cloudflare often have automatic rate limiting (that will ban IPs that spam too much too quickly if wanted) but mostly they also have nice caching services without much effort. A Cloudflare worker perhaps could query your data and cache it for quite a long time.This is just one quick example but you would need to open your mind to many possibilities and explore. If you can cache it, cache it and don’t rate limit. Because if you can cache it you have so many more options available to you. You could also have some service like Google Cloud Functions/Amazon Lambdas/App Service Triggers automatically on timers to put these results a user often calls for to a cache/storage/hosting that doesn’t directly require you to open the gates to direct user interaction that requires rate limiting. If you can go this caching way and done correctly it wouldn’t matter if they do 1 request or 1000 request in 10 minutes because your cache says 10 minutes, its gonna be 10 minutes before they see new data. People don’t really spam if they gonna get the same answer no matter what.Good luck!", "username": "CloudServer" }, { "code": "", "text": "Thank you for the suggestions!I shall look into caching as well as configuring App Service Triggers and other serverless functions on timers to put the data to a cache/storage/hosting service.", "username": "njt" } ]
Create rate limiting / throttling strategies for Atlas App Services
2022-07-04T07:01:55.746Z
Create rate limiting / throttling strategies for Atlas App Services
3,039
null
[ "compass", "connecting" ]
[ { "code": "mongodb+srv://sam:<password>@cluster0.tpvpc.mongodb.net/test", "text": "mongodb+srv://sam:<password>@cluster0.tpvpc.mongodb.net/test\nAfter changing the password and the ‘test’ name to my password and database, while connecting to my database from compass it throws this error.\nNot just compass, also when am trying to connect from my application with its corresponding connection string, am having the same error.\nThe error which keeps on coming is :\nquerySrv EREFUSED _mongodb._tcp.cluster0.tpvpc.mongodb.netThanks A2A ", "username": "Swarnab_Mukherjee" }, { "code": "", "text": "Is your IP address correctly whitelisted?", "username": "MaBeuLux88" }, { "code": "", "text": "I faced a similar issue while using MongoDB on Windows. What worked for me simply using a different internet connection. Like on switching to mobile hotspot it worked. Maybe there are some ISP level related", "username": "Aman_Mishra" }, { "code": "const express = require(\"express\");\n\nconst mongodb = require(\"mongodb\");\n\nconst router = express.Router();\n\n// Get Posts\n\nrouter.get(\"/\", async (req, res) => {\n\n try {\n\n const posts = await loadPostsCollection();\n\n res.send(await posts.find({}).toArray());\n\n } catch (e) {\n\n console.log(e);\n\n }\n\n});\n\nasync function loadPostsCollection() {\n\n const client = await mongodb.MongoClient.connect(\n\n \"mongodb://abc123:<password>@vue-expess-mongodb-shard-00-00.cc1pu.mongodb.net:27017,vue-expess-mongodb-shard-00-01.cc1pu.mongodb.net:27017,vue-expess-mongodb-shard-00-02.cc1pu.mongodb.net:27017/posts_db?ssl=true&replicaSet=atlas-zk87tt-shard-0&authSource=admin&retryWrites=true&w=majority\",\n\n {\n\n useNewUrlParser: true,\n\n useUnifiedTopology: true,\n\n },\n\n );\n\n return client.db(\"posts_db\").collection(\"posts\");\n\n}\n\nmodule.exports = router;\n", "text": "Hello,I have been experience the same issue on my project but I managed to resolve by changing node version see below image.Capture746×261 12.1 KBFilename - Posts.jsLemme know if this help!Thanks", "username": "Ambrose_Sulley" }, { "code": "", "text": "Thanks man. That was exactly the case with me. I was trying to connect with my collage wifi. But that gave an error.But still didn’t got why my clg wifi restricts connecting to mongodb and what can I do for it?Any help would be appreciated!", "username": "Hotel_Client" }, { "code": "", "text": "It’s possible that your college is blocking outbound connection towards port 27017. Please check with a 4G/5G connection maybe to see if this unlocks the situation. If this is the case, then you need to talk to the network admin.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Make sure to change the node version to 2.2.12 , this was my probleme\nURI should look like this: mongodb://ayoubmongo:@ayoubkhatouri-shard-00-00.dopzz.mongodb.net:27017,ayoubkhatouri-shard-00-01.dopzz.mongodb.net:27017,ayoubkhatouri-shard-00-02.dopzz.mongodb.net:27017/?ssl=true&replicaSet=atlas-2ichwf-shard-0&authSource=admin&retryWrites=true&w=majority", "username": "AYOUB_KHATOURI" } ]
querySrv EREFUSED _mongodb._tcp.cluster0.tpvpc.mongodb.net
2020-08-31T17:34:19.982Z
querySrv EREFUSED _mongodb._tcp.cluster0.tpvpc.mongodb.net
27,814
null
[ "production", "php", "field-encryption" ]
[ { "code": "encryptedFieldsMapbypassQueryAnalysisMongoDB\\Driver\\Manager::__construct()queryTypeMongoDB\\Driver\\ClientEncryption::encrypt()mongocryptdMongoDB\\Driver\\BulkWriteMongoDB\\Driver\\Queryletcommentcommentpecl install mongodb-1.14.0\npecl upgrade mongodb-1.14.0\n", "text": "The PHP team is happy to announce that version 1.14.0 of the mongodb PHP extension is now available on PECL. This release introduces support for MongoDB 6.0 and Queryable Encryption.Release HighlightsTo support Queryable Encryption, encryptedFieldsMap and bypassQueryAnalysis auto encryption options have been added to MongoDB\\Driver\\Manager::__construct() . Additionally, new algorithms and a queryType option have been added to MongoDB\\Driver\\ClientEncryption::encrypt() . Support for the Automatic Encryption Shared Library, an alternative to mongocryptd , has also been introduced. MongoDB\\Driver\\BulkWrite and MongoDB\\Driver\\Query support a let option for defining variables that can be accessed within query filters and updates. Additionally, both classes now support a comment option of any type (previously a string comment was only supported for queries).This release upgrades our libbson and libmongoc dependencies to 1.22.0. The libmongocrypt dependency has been upgraded to 1.5.0.A complete list of resolved issues in this release may be found in JIRA.DocumentationDocumentation is available on PHP.net.InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL.", "username": "jmikola" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB PHP Extension 1.14.0 Released
2022-07-16T21:06:50.438Z
MongoDB PHP Extension 1.14.0 Released
4,744
null
[ "aggregation", "production", "php" ]
[ { "code": "MongoDB\\Database::createCollectionMongoDB\\Database::dropCollectionMongoDB\\Collection::dropencryptedFieldsfindfindAndModifydeleteupdateletcommentwatch()fullDocumentfullDocumentBeforeChangefullDocumentBeforeChangeMongoDB\\Database::createCollection()MongoDB\\Database::modifyCollection()changeStreamPreAndPostImageswatch()showExpandedEventsMongoDB\\Database::createCollection()createMongoDB\\Collection::estimatedDocumentCount()countaggregate$collStatscountestimatedDocumentCount()countcountestimatedDocumentCountMongoDB\\Driver\\ServerApimongodbcomposer require mongodb/mongodb:1.13.0\nmongodb", "text": "The PHP team is happy to announce that version 1.13.0 of the MongoDB PHP library is now available. This release introduces support for MongoDB 6.0 and Queryable Encryption.Release Highlights MongoDB\\Database::createCollection , MongoDB\\Database::dropCollection , and MongoDB\\Collection::drop now support an encryptedFields option. This is used by the library to manage internal collections used for queryable encryption.Helper methods for find , findAndModify , delete , and update commands now support a let option, which can be used to define variables that can be accessed within query filters and updates. Additionally, all helpers now support a comment option of any type (previously a string comment was only supported for queries).Change Streams with Document Pre- and Post-Images are now supported. Change stream watch() helpers now accept “whenAvailable” and “required” for the fullDocument option and support a new fullDocumentBeforeChange option, which accepts “whenAvailable” and “required”. Change events may now include a fullDocumentBeforeChange response field. Additionally, MongoDB\\Database::createCollection() and MongoDB\\Database::modifyCollection() now support a changeStreamPreAndPostImages option to enable this feature on collections. Lastly, change stream watch() helpers now accept a showExpandedEvents option to enable the server to return additional events for DDL operations (e.g. creating indexes and collections) in the change stream. MongoDB\\Database::createCollection() now supports creating clustered indexes and views. Clustered indexes were introduced in MongoDB 5.3. Views date back to MongoDB 3.4 but the corresponding options for the create command were never added to the library’s helper method. MongoDB\\Collection::estimatedDocumentCount() has been changed to always use the count command. In a previous release (1.9.0), the method was changed to use aggregate with a $collStats stage instead of the count command, which did not work on views. Reverting estimatedDocumentCount() to always use the count command addresses the incompatibility with views. Due to an oversight, the count command was omitted from the Stable API in server versions 5.0.0–5.0.8 and 5.1.0–5.3.1. Users of the Stable API with estimatedDocumentCount are advised to upgrade their MongoDB clusters to 5.0.9+ or 5.3.2+ (if on Atlas) or disable strict mode when using MongoDB\\Driver\\ServerApi .This release upgrades the mongodb extension requirement to 1.14.0.A complete list of resolved issues in this release may be found in JIRADocumentationDocumentation for this library may be found in the PHP Library Manual.InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "jmikola" }, { "code": "", "text": "This topic was automatically closed after 90 days. 
New replies are no longer allowed.", "username": "system" } ]
MongoDB PHP Library 1.13.0 Released
2022-07-16T21:16:36.916Z
MongoDB PHP Library 1.13.0 Released
2,164
null
[ "swift", "react-native" ]
[ { "code": "let realm = try! Realm()Realm()2022-01-26 09:09:48.817027+1300 essyncer[11028:2186300] [boringssl] boringssl_metrics_log_metric_block_invoke(151) Failed to log metrics\n2022-01-26 09:09:48.826227+1300 essyncer[11028:2186307] [boringssl] boringssl_metrics_log_metric_block_invoke(151) Failed to log metrics\nRealm file is currently open in another process which cannot share access with this process. All processes sharing a single file must be the same architecture.pod 'RealmSwift', '10.22.0'", "text": "I’m working on a React Native app that calls native iOS (Swift) and Android (Java) code for Realm.\nI’ve got Andoird working where I can call a function from React Native → hit Java code → make a request → store the response data in local Realm. For iOS I’ve followed the docs https://docs.mongodb.com/realm/sdk/swift/\nbut when the code gets to let realm = try! Realm() it never returns from Realm(), the next line of code isnt reached.The only output I get isThe following is created\ndefault.realm.managment folder\ndefault.realm.lock\ndefault.realmWhen I open default.realm in Realm Studio I get\nRealm file is currently open in another process which cannot share access with this process. All processes sharing a single file must be the same architecture.\nIf I exit the simulator I can view the default.realm in Realm Studio, there is no schema which is expected.Installed RealmSwift with pods\npod 'RealmSwift', '10.22.0'\nXcode Version 13.2.1\nTarget is iOS 11.0", "username": "Nate_Fort" }, { "code": "let realm = try! Realm()Realm()trydo {…} catch {…}", "text": "Hi @Nate_Fort,but when the code gets to let realm = try! Realm() it never returns from Realm() , the next line of code isnt reached.", "username": "Paolo_Manna" }, { "code": "", "text": "Welcome to the forumsCan you post a code snippet showing the code you are using including the surrounding code? e.g. if it’s within a function, can you include that so we have a better understanding of when that code is being called?", "username": "Jay" }, { "code": "", "text": "Seems related to this", "username": "Jay" } ]
Try! Realm() never returns
2022-01-25T20:29:19.710Z
Try! Realm() never returns
4,349
null
[ "node-js" ]
[ { "code": "function user_delete(req, res, next) {\n User.findOne({ _id: req.params.id })\n .then(user => {\n if (!user) {\n return next('The user you requested could not be found.')\n }\n\n Child.remove({ userId: user._id }).exec();\n user.remove();\n return res.status(200).send('User deleted');\n\n }).catch(err => {\n console.log(err)\n if (err.kind === 'ObjectId') {\n return next(res.status(404).send({\n success: false,\n message: \"User not found with id \"\n }));\n }\n return next(res.status(500).send({\n success: false,\n message: \"Error retrieving User with id \"\n }));\n });\n};\nrouter.delete('/delete/:id', user_delete);\nfunction deleteFileStream(fileKey, next) {\n const deleteParams = {\n Key: fileKey,\n Bucket: bucket_name,\n }\n s3.deleteObject(deleteParams, (error, data) => {\n next(error, data)\n })\n}\nexports.deleteFileStream = deleteFileStream;\nfunction delete_child(req, res, next) {\n Child.findById(req.params.id)\n .then(child => {\n if (!child) {\n return next(res.status(404).send({\n success: false,\n message: \"child not found with id \" + req.params.id\n }));\n }\n\n // deleting the images of questions also if it has image\n if(question.file !== '') {\n const url_parts = url.parse(question.file, true);\n const datas = url_parts.pathname.split('getImage/')\n const filekey = datas.pop();\n console.log(filekey);\n deleteFileStream(filekey); // calling the delete function\n }\n child.remove()\n return res.send({\n success: true,\n message: \"child successfully deleted!\"\n });\n }).catch(err => {\n if (err.kind === 'ObjectId' || err.name === 'NotFound') {\n return res.status(404).send({\n success: false,\n message: \"child not found with id \" + req.params.id\n });\n }\n return res.status(500).send({\n success: false,\n message: \"Could not delete question with id \" + req.params.id\n });\n });\n}\nrouter.delete('/delete/:id', delete_child);\n", "text": "I have a database in which the user is a parent and it has some child documents child document has image data too and those images are stored in the AWS s3 bucket. I used MongoDB middleware remove to perform cascade delete. If I delete parents then the data from the child table is also deleted but the image data remains in the s3 bucket. How can I implement the logic that image data should also be deleted from the server on deleting the parent? I also wrote AWS SDK delete APIs but how can I connect them to the parent document?// This is the parent delete API// Delete function for aws SDK delete a file from s3// Child delete document APIIf I call the child API the image is also deleted from the server as I am deleting it but if I delete the parent the child is deleted but not the image. Can anybody tell me, please? 
I am struggling with this use case.", "username": "Naila_Nosheen" }, { "code": "name: {\n\n type: String,\n\n required: true\n\n},\n\nemail: {\n\n type: String,\n\n required: true,\n},\n\npassword: {\n\n type: String,\n\n required: true,\n\n},\n\nprofilePictureURL: {\n\n type: String\n\n},\nquestion: {\n type: String,\n required: true\n},\nimageUrl: {\n type: String,\n require: false\n},\nuserId: {\n type: Schema.Types.ObjectId,\n required: true,\n ref: \"user\"\n},\n", "text": "@Kushagra_Kesav this is my user model(parent)const User = mongoose.model(‘User’, new mongoose.Schema({}));//this is the child modelconst question = new mongoose.Schema({})", "username": "Naila_Nosheen" }, { "code": "userModel{\n_id: ObjectId(...),\nname: \"Naila_Nosheen\",\nprofilePictureURL: \"https://s3.us-west-2.amazonaws.com/mybucket/image01.jpg\",\n...\n}\nchildModel{\n_id: ObjectId(...),\nquestion: \"What is your hobby?\"\nimageUrl: \"https://s3.us-west-2.amazonaws.com/mybucket/question_image01.jpg\"\nuserId: ObjectId(...)\n}\nvar path = require(\"path\")\n\nfunction user_delete(req, res) {\n const user = User.findById(req.params.id);\n const child = Child.find({ userId: user._id });\n\n const fileName = path.basename(child.imageUrl)\n const keyName = path.basename(user.profilePictureURL)\n\n // var user.profilePictureURL = \"https://s3.us-west-2.amazonaws.com/mybucket/image01.jpg\"\n // const keyName = path.basename(user.profilePictureURL) // \"image01.jpg\"\n\n //Deleting the user from the DB\n User.findByIdAndRemove(req.params.id)\n .then(data => {\n if (!data) {\n console.log({ message: \"User not found with id \" + req.params.id });\n return;\n }\n\n //Deleting the Image from the S3 bucket\n deleteFileStream(keyName, (error, data) => {\n if (error) {\n console.log({message: error.message });\n return;\n }\n console.log({ message: \"<Message>\" });\n })\n }).then(() => {\n\n //Deleting the child from the DB\n Child.findByIdAndRemove(child._id)\n .then(data => {\n if (!data) {\n console.log({ message: \"Child not found with id \" + child._id });\n return;\n }\n\n //Deleting the Image of child from the S3 bucket\n deleteFileStream(fileName, (error, data) => {\n if (error) {\n console.log({ message: error.message });\n return;\n }\n console.log({ message: \"<Message>\" });\n })\n });\n });\n}\n", "text": "Hi @Naila_Nosheen,The userModel I’m assuming here:Similarly, the childModel:Here, I’m suggesting the single function using multiple promises, which will delete the user and child image data from S3 after it gets deleted from the MongoDB databases.Note that this is an untested example and may not work in all cases. Please do the test any code thoroughly with your use case so that there are no surprises.I hope it answers your questions. Please let us know if you have any follow-up questions.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you so much @Kushagra_Kesav for your solution. I want to ask one thing if the parent has more than one child then the same process would be used for all children?", "username": "Naila_Nosheen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to delete child data and its image data on the server when deleting the parent in Node.js and MongoDB
2022-07-04T22:51:20.942Z
How to delete child data and its image data on the server when deleting the parent in Node.js and MongoDB
3,760
null
[ "node-js", "replication", "devops" ]
[ { "code": "version: \"3\"\nservices:\n mongo1:\n hostname: mongo1\n image: mongo\n expose:\n - 27017\n ports:\n - 27017:27017\n restart: always\n command: --replSet rs0 --bind_ip_all\n volumes:\n - ../DB/localMongoData/db:/data/db\n healthcheck:\n test: test $$(echo \"rs.initiate().ok || rs.status().ok\" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1\n interval: 10s\n # logging:\n # driver: none\n\n mongo2:\n hostname: mongo2\n image: mongo\n expose:\n - 27017\n ports:\n - 27018:27017\n restart: always\n command: --replSet rs0 --bind_ip_all\n # logging:\n # driver: none\n\n mongo3:\n hostname: mongo3\n image: mongo\n expose:\n - 27017\n ports:\n - 27019:27017\n restart: always\n command: --replSet rs0 --bind_ip_all\n # logging:\n # driver: none\n\nrs.initiate(\n {\n _id : 'rs0',\n members: [\n { _id : 0, host : \"mongo1:27017\" },\n { _id : 1, host : \"mongo2:27017\" },\n { _id : 2, host : \"mongo3:27017\", arbiterOnly: true }\n ]\n }\n)\n", "text": "Hey,I am using docker-compose with a couple of container to create a local replica set.\nAs soon as I moved to Mongodb5 the replica set started breaking.my docker-compose file looks like this:inside the 1st container I ranto create the replica set and it worked on mongo 4.x.xI have tried searching for a solution and I couldn’t find anything that really works, I need a replica set for transactions.\nany suggestions and ideas are welcome, I also tried using one container with mongo 5 and standalone replica set but I couldn’t connect to it", "username": "Eitan_Kats" }, { "code": "", "text": "Same problem here. ", "username": "Mike_Tobias" }, { "code": "", "text": "Something that has worked for me was to clear the volumes of the mongo cluster and reconfigure the replica set.\nI had stale configuration there and that is why I had issue.", "username": "Eitan_Kats" }, { "code": "", "text": "Bump - having similar problems.", "username": "tyteen4a03" } ]
Local replica set on docker-compose
2021-12-02T08:44:04.982Z
Local replica set on docker-compose
18,968
null
[]
[ { "code": "Document not found!\n", "text": "When I go to the system.profile collection from the database, I can see an archive of records. but i cant open a single record. i see a this warning:I am using a mongo-express web admin panel.", "username": "mohammad_parishan" }, { "code": "db.system.profile", "text": "db.system.profile will be empty until you specifically set the profiling level. but be careful as it can fill your disk space very fast. you would not want to leave it set to level 2 for long.check the following page:", "username": "Yilmaz_Durmaz" }, { "code": "_idsystem.profile_idfinddb.setProfilingLevel(2)\n... wait for operations\ndb.setProfilingLevel(0)\n... now do your analysis\ndb.system.profile.drop()\n... drop so so free disk space\n", "text": "I noticed you have said you see records but cannot open a single record in it from admin panel. Sorry I missed that.Have you checked about profiling levels? This is a side effect on how profiling records data and how mongo-express shows documents.profiling documents are not traditional documents that are not recorded with an _id field. system.profile is written by the server itself and is read-only by users.on the other hand, mongo-express lists documents but adds a click event to table rows each bearing the _id field of that document. when you click on them, it makes find query with that id.since system.profile entries does not have an id, that find query just fails hence that error you see.profiling is for debugging and performance checks, so is better to be used by such tools, not by a general purpose tool. if you have that purpose, do not forget to set the level to 0 after you job ends and then just drop the collection.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "tnx for reply.can you explain how we can analysis the profiler results?", "username": "mohammad_parishan" }, { "code": "", "text": "I am no expert on that, at least for now I hope these may help.In addition to above link, you can use this one to know bit more about the fields recorded into the profile documents: Database Profiler Output — MongoDB ManualThere also seems a GUI tool, Query Profiler, for Atlas but “Only available on M10+ clusters and serverless instances”. you can read about it in Monitor Query Performance — MongoDB Atlas", "username": "Yilmaz_Durmaz" }, { "code": "db.system.profile", "text": "Back for the sake of the completeness of the Monitoring tools. I cannot say if they are directly related to a db.system.profile, yet would still be helpful ", "username": "Yilmaz_Durmaz" } ]
I can't open any system.profile collections record
2022-07-12T11:01:30.943Z
I can&rsquo;t open any system.profile collections record
2,490
null
[ "php" ]
[ { "code": "_id_idSlim Select<select>var jsPlayers2 = [\n\t{\"placeholder\": true, \"text\": \"Type Name\"},\n\t{\"text\": \"Leo Messi\", \"value\": \"sdfhkj29dfaj\"},\n\t{\"text\": \"Joe Bloggs\", \"value\": \"ajdsfh438yca22\"},\n\t{\"text\": \"Jane Doe\", \"value\": \"abc3\"}\n];\njson_encode($players){_id: {$oid: \"609d0993906429612483cfb1\"}, name: \"Jane Doe\"}", "text": "I have pulled the MongoDB documents into a PHP array variable. I need to covert this to a JS array variable instead, with some changes to the array structure to:This is because my use case is to pass the JS array into the Slim Select JS library, which allows me to pass in a data array in this format to add items to the <select> element:I use the PHP json_encode($players) function for the conversion between languages. The current format of my JS array is:{_id: {$oid: \"609d0993906429612483cfb1\"}, name: \"Jane Doe\"}Can I do this in PHP (before) or JS (after) the conversion?", "username": "Dan_Burt" }, { "code": "foreach($players as $player) {\n\t\t$player[\"value\"] = (string) $player['_id'];\n\t\t$player[\"text\"] = $player[\"name\"];\n\t\tunset($player[\"clubs\"]);\n\t\tunset($player[\"_id\"]);\n\t\tunset($player[\"name\"]);\n}\n", "text": "I resolved this in a PHP loop as follows to change the array structure as my JS library required:", "username": "Dan_Burt" }, { "code": "const aggregation = [\n {\n $addFields: { value: { $toString: \"$_id\" }, text: \"$name\" }\n },\n {\n $project: { _id:0, value:1, text:1 }\n }\n];\n\ndb.yourcollection.aggregate(aggregation);\n", "text": "you may want to do this on the server side:this one is in javascript/mongo shell format. edit/change it as required to PHP ", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change the `_id` field name and value
2022-07-05T15:09:55.122Z
Change the `_id` field name and value
8,907
null
[]
[ { "code": "", "text": "I’m new to MongoDB with data in Mongo Atlas. I’ve to write a query to update a string field in a collection with large number of documents (north of 500,000). What is the best/ most efficient way to update a string field (say update the case) in all of the collection? Also, is there a way to flag any records that fail the operation or I’ve to resort to JS functions?I’ll appreciate some pointers towards any material that discusses tips, pitfalls, best practices etc.Thank you.", "username": "ibacompre" }, { "code": "updateManybulk operationsvar bulk = db.items.initializeUnorderedBulkOp();\nbulk.find( { status: \"D\" } ).delete();\nbulk.find( { status: \"P\" } ).update( { $set: { points: 0 } } )\nbulk.execute();\nupdateManybulkWriteupdateOne", "text": "Hey @ibacompre,Welcome to the MongoDB Community forums What is the best/ most efficient way to update a string field (say update the case) in all of the collections?There are two ways to do that, one is db.collection.updateMany() and another one is Bulk.find.update().The updateMany operation is overall faster as it is parsed as an ordered bulk write operation that can re-use the resources when modifying the grouped documents matching the applied filter.Whereas bulk operations create multiple different operations as shown below - that are sent in a single request to the server but are performed as a single separate operation:The benefit of bulk operations over separate operations is that it generates a single request to the server for all included operations instead of a new request for each operation. It also allows us a higher level of control as to what documents are updated to minimize the risk of conflicting updates or undesired updates.I think you can use either one of them.If you are absolutely sure that the data to be updated is clean (i.e. won’t have failures), then updateMany & bulkWrite are valid options. For bulkWrite: Excluding Write Concern errors, ordered operations stop after an error, while unordered operations continue to process any remaining write operations in the queue, unless when run inside a transaction.However, if you cannot be sure of the cleanliness of the data, using updateOne in a loop with proper error handling may be more efficient in the long run. The problem documents can be recorded in a list to be looked at later, while the loop can continue processing the rest of the job. This may be less efficient for the server compared to the bulk method, but if you’re expecting an error to happen, this could be a more efficient process.In conclusion, What is the best/most efficient way depends on the use case and whether any error is expected in the data. Also would depend on the required “efficiency”, whether it’s an efficiency of the whole process in the face of errors, or efficiency for the server hardware.If you have any doubts, please feel free to reach out to us.Regards,\nKushagra Kesav", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best/Most efficient way to update a string field in all of the collections with large number of documents
2022-07-01T01:16:23.009Z
Best/Most efficient way to update a string field in all of the collections with large number of documents
3,775
null
[ "data-modeling", "swift", "flexible-sync" ]
[ { "code": "", "text": "This will be a rather long and broad question, if it’s not the right place to ask, please point me to a better place to do so. I’m not sure if flexible sync or sync in general is the right option for our application:Requirements (simplified):\nWe are building a social media application with complex requirements and data structure. The core functionality are posts, which are displayed to users dynamically. Users periodically post location data, based on which some posts from a Post table are displayed to them. In addition, some of the posts are displayed independently of location, but only to some of the users.\nEach user will have an “infinite” feed of these posts (which is already not quite ideal for a database approach but rather caching I suppose?)\nUsers can also add each other, having friends lists and so on. There are a lot more functionalities of the app, but already with the things listed, I have my doubts.Thoughts and doubts\nAfter working with partition based sync for a few weeks and coming up with our partition strategy, there are a few issues. First of all, it all feels far from ideal, since we have a ton of data duplicates which are being generated and deleted as the user interacts with the app (sends posts, adds users) or moves (gets some of the posts dependent on location). Posts get duplicated when a user is supposed to see them and users duplicated when another user adds them. It all works, but is rather inefficient, slow and tedious, since all the data has to be kept up to date and has to be “distributed” to the users. Also it is not quite clear to me how the infinite feed is properly implemented on the client side (swift SDK). With thousands of users and even more posts, duplication and keeping data persistent will be very painful, or at least inefficient.\nI have now started reading into flexible-sync, but since the examples are very simple I have no idea whether it would work well for our use-case.Question(s)If anything is unclear I can always elaborate or show our models or data structure this far.", "username": "David_Kessler" }, { "code": "", "text": "Hi David,To answer your questions:One word of caution (since I assume your model is very link-heavy) is that in Flexible sync you query on each collection, so if you query on collection A and that has links to collection B, then you will not get the objects/documents in collection B unless you are querying on those too. There are ways to design your schema to avoid this problem, but your explanation made me think that you may run into this issue. We are working on this issue and trying to automatically pull in all linking objects, but we still would suggest designing your schema such that you can do it naturally.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks for the reply, that’s great news. I saw that in the Atlas UI it no longer says “Preview” for flexible sync, but I can’t see any new article with an update on the status of flexible sync. When did it go out of preview?Sure that makes sense. You are right with the linking, for example we have a a field “creator” on each duplication of a post, linking to a user object… with partition sync we had to duplicate the user to have the same partition as the duplicate post, otherwise the user could obviously not be accessed. So with flexible sync we would have just one instance of the post and could just link to one user since it’s access is no longer managed by partition. 
But when retrieving the posts we would not get the user object because of flexible sync right?There are ways to design your schema to avoid this problemCould you elaborate on how that can be done or where I can read more about this? And what do you mean by:designing your schema such that you can do it naturallyMy first intuition would be to just open a new subscription for the user, so once we have a post, we open a subscription using the id of the user in the “creator” field. Which seems a quite inconvenient… But I guess that would also not be “naturally” and doesn’t involve adjusting the schema, so I believe it’s not what you meant.Let’s say in my UI I want to display a list of post and in each the name of the creator, so a property of a linked object of that post…Also, could you give a timeline when the “automatically pull in all linking objects” feature could be ready?cc @Ian_Ward @Tyler_Kaye", "username": "David_Kessler" }, { "code": "{\n _id: ObjectId, \n message: \"hi\", \n creator: {\n user_name: \"Tyler\", \n user_id: ObjectId(\"5dfa7b09d5ec134c607cc57e\"),\n },\n}\n", "text": "Hi,When did it go out of preview?We announced the general availability of Flexible Sync at MongoDB World. See here: https://webassets.mongodb.com/MongoDB-World-2022-Datasheet.pdf?_ga=2.91648025.241869671.1654536218-105395967.1654279900As for your second point, I am not totally sure what your partitioning schema was, but the general gist is that you can open a “subscription” on the Posts table and that will send all posts (and all embedded objects) that match your query, but it will not send the linking objects (it will send the links, they will just be implicitly null since the Client-side realm doesnt have the underlying “User” objects). Therefore, you would just want to also add a subscription on the “Users” collection.I think the ideal way to do this is to have some data duplication (this is a MongoDB concept in general). You do not actually want the entire User object for all of these posts (in fact, its probably a security risk to do so), so you can instead model your schema like this:This way the document has all of the information you want to show but you still have the linking information if you want to “navigate” to the user or if you want to actually download the whole user document. I think in this case an interesting question is “do you really want to download the user object (which might be big) for all posts that a user sees?”This is a pretty normal situation to find yourself in and MongoDB normally suggests denormalization of data for this: 6 Rules of Thumb for MongoDB Schema Design | MongoDB BlogThat way you keep all of the information relevant to the “post” in a single document while also retaining the ability to link to other documents (user) while still embeddeding some of the more relevant fields from the user document within the post document", "username": "Tyler_Kaye" }, { "code": "", "text": "Thank you so much for the detailed response, that’s very helpful.It makes sense to duplicate some of the data into the posts, though it’s not ideal having to keep that data up to date when it gets modified. But by using triggers I think that should be no problem (or at least it’s still better than with the duplication of partition based approach). For the subscription on the “User” I suppose one would have to use the links retrieved from the Posts as query? 
Or query the “User” table again with other parameters?One last question: in the linked article about flexible sync from the datasheet you sent, there is this paragraph:\n\"The new Flexible Sync feature in Atlas Device Sync elegantly solves geo-partitioning issues by allowing us to only synchronize nearby spatialized content relevant to each user… \" which sounds exactly like something we would need as well for some parts of our app.\nWe used the geoNear aggregation stage with the partition based solution, and I was planning on doing something similar (like using geoNear in a trigger based on movement to enter userIds into a post as an array “visibleTo”). But it sounds like there should be a much more elegant solution using flexible sync, do you know how that could be done?", "username": "David_Kessler" }, { "code": "", "text": "I would also need to add, that these documents that should be synchronised because they are nearby are displayed with their distance from the user, so a simple query would not suffice…", "username": "David_Kessler" } ]
Complex Schema design, (flexible-) sync compatibility
2022-07-14T11:45:58.189Z
Complex Schema design, (flexible-) sync compatibility
2,423
null
[]
[ { "code": "BEGIN\n", "text": "SET @sql = NULL; SELECT GROUP_CONCAT(DISTINCT CONCAT( ‘MAX(IF(c.role_id = ‘’’, c.role_id, ‘’’, true, false)) AS ‘’’, role_name, ‘’’’ ) ) INTO @sql FROM tbl_role c;\nSET @sql = CONCAT('Select staff.user_name, ‘, @sql, ’ From tbl_staff staff Left Join tbl_staffRole staffRole On staff.user_name = staffRole.user_name Left Join tbl_role c On c.role_id = staffRole.role_id Group by staff.user_name’);\nPREPARE stmt FROM @sql;\nEXECUTE stmt;\nDEALLOCATE PREPARE stmt;\nEND", "username": "Hiral_Makwana" }, { "code": "", "text": "Hi @Hiral_Makwana,I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "@Kushagra_Kesav\nThanks for reply…\nalready find solution for half queries as i know mongodb aggregation up to middle level .", "username": "Hiral_Makwana" } ]
Change MySQL code to MongoDB. Does anyone know how to do it?
2022-07-16T05:39:27.131Z
Change MySQL code to MongoDB. Does anyone know how to do it?
1,413
https://www.mongodb.com/…b_2_1024x431.png
[]
[ { "code": "", "text": "MongoDB Atlas advertises quite a bit about how it can be integrated with Hashicorp Vault:Automate secrets management for MongoDB Atlas database users and programmatic API keys with two new secrets engines, available in HashiCorp Vault 1.4.Simplify secrets management for your MongoDB cloud databases on MongoDB Atlas with HashiCorp Vault.These methods detail utilizing vault’s dynamic secrets engine. However, there appears to be a database user limit of 100. When using vault dynamic secrets, this quickly becomes problematic. I have a k8s application that is running about 50 pods. We have vault agent side cars interfacing with vault to checkout out dynamic user creds for mongo atlas. As such, each pod gets its own credential (as it should).Now, when it comes time to deploy a change, 50 new pods are spun up and the others torn down. That’s another 50 users created + the 50 from before which continue to exist until they expire. We’ve already hit the limit. What if I need to do a rollback or another deploy before the old creds expire? This does not seem workable.Is there any official approach to get around this? In a busy environment that is at scale I could easily see hundreds if not thousands of users being present.Thanks!", "username": "Andy_Nemzek" }, { "code": "", "text": "Hi Andy, 100 DB users per Atlas project is a soft limit: if you have a need to go higher, please file a support case. We will make the configurable via API call in future.-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi Andrew,Thanks for the reply. I’ve since been informed that we possibly have a dozen or so microservices utilizing the same project. At scale, I could see this possibly yielding thousands of dynamic secrets in play. I guess my company previously opened a support ticket to increase user count from 100 to 200. Is thousands feasible? Is there any real limit we’re dealing with or what is the rationale behind the cap in the first place?Thanks!", "username": "Andy_Nemzek" } ]
Vault Dynamic Secret Management and Atlas User Limits
2022-07-09T04:35:21.748Z
Vault Dynamic Secret Management and Atlas User Limits
1,664
null
[ "node-js", "connecting" ]
[ { "code": "", "text": "no primary server available\nI connect my mongoDB with nodejs api but get some error with this No primary server available in digitalocan server please help me for any suggestion for this\nI user the url for this\nmongodb+srv://user:[email protected]/admin?authSource=admin&replicaSet=db-mongodb-blr1-95134And I Use my monogdb compass database connection\nmongodb+srv://user:[email protected]/admin?tls=true&authSource=admin&replicaSet=db-mongodb-blr1-95134&tlsCAFile=C%3A%5CUsers%5CKishan%5CMusic%5CDownloads%5Cca-certificate+%284%29.crt\nand this working and i connect the server databasebut not working with nodejs", "username": "kishan_gopal" }, { "code": "", "text": "Are you sure the IP address of the source context is on the Atlas IP Access List?", "username": "Andrew_Davidson" }, { "code": "", "text": "You are using a certificate file in Compass but you don’t seem to use the same file in your node.js connection. Check your driver’s manuals on how to do that.", "username": "Yilmaz_Durmaz" } ]
No primary server available
2022-07-11T02:41:51.318Z
No primary server available
3,566
https://www.mongodb.com/…1_2_1024x272.png
[ "node-js" ]
[ { "code": "{\n \"_id\": 1,\n \"SDG\": {\"number\": 13, \"name\": \"Climate action\"},\n \"Source\": \"link\",\n \"Organization\": \"Kingston\"\n}\n", "text": "I have excel data that look like this:But there’s around 4 more columns and like 500 rows in total. I’m trying to get objects/items in BSON to look like this:I’m new to MongoDB and was wondering how I can turn this excel data into the correct format for me to simply add it to my database on MongoDB. Is there a place where I can just drag and drop some data and it’ll turn it into BSON or would I have to manually do it/write code to do it? How would I go about doing that?Second question, how can I make it so that all someone has to do to update the database is update the original excel sheet and simply just replace a line of code or drag and drop the new file into MongoDB? Is this possible?Thank you", "username": "Simerus_Mahesh" }, { "code": "", "text": "I recommend processing this via a python script (or similar) first to JSON or to a dictionary data type in Python: from there you can mongoimport the JSON or use the pymongo driver to load it directly into Atlas", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I add Excel sheet data to my database, which I can then access using MERN?
2022-07-14T16:57:08.384Z
How do I add Excel sheet data to my database, which I can then access using MERN?
1,880
null
[ "connecting", "security" ]
[ { "code": "", "text": "Currently, we are using the aws, gcp cloud. we have a problem about connecting from gcp cloud to mongodb atlas aws via internal. Because it’s a difference the provider cloud. So, we can not peering between 2 vpc.\nHow can we config on this case? please help me.", "username": "roger.le" }, { "code": "", "text": "MongoDB Atlas always requires private TLS network encryption: it’s just that you need to add the public IP of the source context to the Atlas access list", "username": "Andrew_Davidson" } ]
How to connect a GCP VPC to MongoDB Atlas on AWS privately?
2022-07-03T09:42:54.521Z
How to connect a GCP VPC to MongoDB Atlas on AWS privately?
2,569
null
[ "aggregation", "atlas-search" ]
[ { "code": "db.product.aggregate([\n {\n \"$search\": {\n \"autocomplete\": {\n \"path\": \"description\",\n \"query\": \"product\"\n }\n }\n },\n {\n \"$match\": {\n \"tenantId\": \"bbb60d4e-212f-445e-97a7-ddad13395931\",\n \"isArchive\": false,\n \"isActive\": true\n }\n },\n {\n \"$sort\": {\n \"description\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 10\n }\n])\ndb.product.aggregate([\n {\n \"$search\": {\n \"$match\": \"tenantId\": \"bbb60d4e-212f-445e-97a7-ddad13395931\",\n \"autocomplete\": {\n \"path\": \"description\",\n \"query\": \"product\"\n }\n }\n },\n {\n \"$match\": {\n \"isArchive\": false,\n \"isActive\": true\n }\n },\n {\n \"$sort\": {\n \"description\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 10\n }\n])\ndb.product.aggregate([\n {\n \"$search\": {\n \"index\": \"description-search-index\",\n \"compound\": {\n \"must\": [\n {\n \"equals\": {\n \"path\": \"description\",\n \"value\": \"bbb60d4e-212f-445e-97a7-ddad13395931\"\n }\n },\n {\n \"autocomplete\": {\n \"path\": \"description\",\n \"query\": \"produ\"\n }\n }\n ]\n }\n }\n },\n {\n \"$project\": {\n \"description\": 1\n }\n }\n])\n", "text": "Hello there,It’s me again. I am running the following query in Mongo:So, the query is working fine, but when I query in autocomplete a word with lots of matches, it takes more than a minute to be completed.So I was wondering if it is possible to use the $match of tenantId inside the search. To contextualize better: I am working on a system that you can have your online store, so when you search for a product, it must appear only products from that store (tenantId).Searching, I read something about using compound inside the $search, but I couldn’t discover how to make an exact match in tenantId. I saw the $equal option, but it is only possible to use with ObjectId or boolean.I know that in elasticsearch you have the term search, and it is possible to make an exact match.In a hypothetical and a contextualization way, what I need is:What I tried with compound that gives me an error:If I have to create a specification in the search index for the tenantId field, it would be very helpful to understand the best way for exact match performance.", "username": "Renan_Geraldo" }, { "code": "", "text": "You could try using filter within Compound - see example here.Curious what error you saw for the above. Will try to reproduce myself but let me know if this helps!", "username": "Elle_Shwer" } ]
Is it possible to use exact match in Atlas Search?
2022-07-11T22:58:17.878Z
Is it possible to use exact match in Atlas Search?
3,433
null
[]
[ { "code": "", "text": "Hi fellows,I build a replication set of three nodes om ports 27011 27012 and 27013. if I rundb.serverStatus()[‘repl’] I get no response neither on the master node1 nor the default mongo on port 27017 butdb.serverStatus() shows me\n{\n***\t“ok” : 0,***\n***\t“errmsg” : “command serverStatus requires authentication”,***\n***\t“code” : 13,***\n***\t“codeName” : “Unauthorized”,***\n***\t“$clusterTime” : {***\n***\t\t“clusterTime” : Timestamp(1657802463, 1),***\n***\t\t“signature” : {***\n***\t\t\t“hash” : BinData(0,“eNE3SoG72nibCQq+fImaox4jGhQ=”),***\n***\t\t\t“keyId” : NumberLong(“7119783323987083266”)***\n***\t\t}***\n***\t},***\n***\t“operationTime” : Timestamp(1657802463, 1)***on node1and{\n***\t“ok” : 0,***\n***\t“errmsg” : “command serverStatus requires authentication”,***\n***\t“code” : 13,***\n***\t“codeName” : “Unauthorized”***\n}on the default mongo on port 27017Why does thisdb.serverStatus()[‘repl’]not show any response?Thanks in advance,Uli", "username": "Ulrich_Kleemann1" }, { "code": "*** “errmsg” : “command serverStatus requires authentication”,***\n*** “code” : 13,***\n*** “codeName” : “Unauthorized”***\n", "text": "Hi\nthere is a responseyou need to authenticate yourself", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "Hi Arek,Thanks fpor your reply. How do I authenticate myself using thedb.serverStatus()[‘repl’]command?Regards,Uli", "username": "Ulrich_Kleemann1" }, { "code": "mongosh --port 27017 --authenticationDatabase \"admin\" -u \"user\" -p \"password\"\nmongosh --port 27017\nuse admin\ndb.auth(\"myUserAdmin\", passwordPrompt()) // or cleartext password\n", "text": "you need to authenticate yourself when you are connecting to mongo instance - for example via mongosh,oronce you authenticate yourself successfully you can use\ndb.serverStatus()", "username": "Arkadiusz_Borucki" }, { "code": "--host--port", "text": "27011 27012 and 27013in addition to @Arkadiusz_Borucki 's answer above on how you connect with credentials, you are connecting to the wrong server.the server on 27017 seems to be a single node default server running with the default config file. This server is not a part of your replica set.You need to use --host and --port flags for your replica set servers, where ports are the ones I quoted. if all 3 runs on your local machine, you don’t need to supply host IP, else you will need that too.", "username": "Yilmaz_Durmaz" } ]
db.serverStatus()['repl'] no response
2022-07-14T12:43:35.903Z
db.serverStatus()[&lsquo;repl&rsquo;] no response
2,201
null
[ "dot-net", "containers" ]
[ { "code": "", "text": "Hello,I have been trying for days to write a simple proof of concept web api that writes 5 entries into a mongodb. The application is written in .net 3.1, it is the default weather sample that gets generated when a new web api is created. I have altered the api to write the weather samples to a collection, it is built inside of a docker container that has been pushed and setup as a service in Google Cloud Run. On my local instance the docker container works perfectly, writes the 5 entries into my local mongodb. However, the cloud run service errors out stating it is getting a ```2022-07-07 15:24:49.688 MST —> System.IO.EndOfStreamException: Attempted to read past the end of the stream.\nDefaultEvery single time. I was using the free cluster, but was unable to see the mongodb logs so I purchased the paid version. According to the log files, right after authentication, it says “Interrupted operation as its client disconnected”. I am using the latest Mongo Db.driver (2.16.1). For a more detailed description and to see the log files entries please go to https://stackoverflow.com/questions/72905203/cannot-connect-to-mongodb-atlas-from-google-cloud-run-docker-container on Stack overflow. At this point I do not know where to concentrate my efforts, (is it on the Cloud Run side or Mongodb side?), any help would be greatly appreciated.R", "username": "Ray_Simpson" }, { "code": "", "text": "hey, have you configured network access on atlas side? Can you connect to your atlas cluster if you don’t use docker, like with a simple console app?", "username": "Dmitry_Lukyanov" }, { "code": "", "text": "Dmitry thank you for responding,I have been able to connect to the atlas connection from my development machine. Try to make sure it could connect, I added 0.0.0.0/16 to the white list to make sure it wasn’t a connection side issue. I am currently trying to get a VM of Mongo Express on the cloud side to work, to see if I can connect to atlas from there. If I am able to do so, then I would assume that its an issue with the driver for .Net when running in a container in GCP as it works just fine outside of the cloud.R", "username": "Ray_Simpson" }, { "code": "", "text": "I think I am going to give up on this and ask for a refund for my Mongo DB Atlas service. I have been trying non-stop to get this to work for a solid week and I just can’t try to make the round peg fit in the square hole anymore. Maybe my approach is off, but I thought making a docker container that was run on the google cloud and connected to Atlas would be a breeze, but it turns out to be a hurricane instead. To those that responded to my post, I appreciate it.", "username": "Ray_Simpson" }, { "code": "", "text": "Hi, Ray,We are sorry to hear about your difficulties in connecting from Docker on GCP to MongoDB Atlas. It is interesting that Docker locally is able to connect without problem. I’m not sure what peculiarities of Docker running on GCP would cause the problems that you’re encountering. It would be helpful to have the full stack trace from the logs that you shared initially. This will help us determine whether the connection failure is when attempting to reach DNS, establishing monitoring / RTT connections, or set up the connection pool connections. 
Thanks in advance for any additional information that you can provide.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hey James,\nHere it is:{“t”:{\"$date\":“2022-07-10T06:15:59.771+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:54648”,“uuid”:“cdeb7bd0-e4f9-4725-8794-3999d1342e12”,“connectionId”:783,“connectionCount”:36}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.771+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:54650”,“uuid”:“483d20d5-7ceb-49aa-860e-75e2631d0c43”,“connectionId”:784,“connectionCount”:37}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.782+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:54652”,“uuid”:“42b8bcb1-9888-4ae5-8ea5-fc5edf120e47”,“connectionId”:785,“connectionCount”:38}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.783+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn784”,“msg”:“Connection ended”,“attr”:{“remote”:“127.0.0.1:54650”,“uuid”:“483d20d5-7ceb-49aa-860e-75e2631d0c43”,“connectionId”:784,“connectionCount”:37}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.783+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn783”,“msg”:“Connection ended”,“attr”:{“remote”:“127.0.0.1:54648”,“uuid”:“cdeb7bd0-e4f9-4725-8794-3999d1342e12”,“connectionId”:783,“connectionCount”:36}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.783+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:54654”,“uuid”:“d48b39a5-531e-430b-9a63-445d49aa0521”,“connectionId”:786,“connectionCount”:37}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.805+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn785”,“msg”:“client metadata”,“attr”:{“remote”:“127.0.0.1:54652”,“client”:“conn785”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.806+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:54656”,“uuid”:“7c1061f4-0575-4b37-834d-5ea0c2640c9c”,“connectionId”:787,“connectionCount”:38}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.806+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn786”,“msg”:“client metadata”,“attr”:{“remote”:“127.0.0.1:54654”,“client”:“conn786”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.818+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn787”,“msg”:“client metadata”,“attr”:{“remote”:“127.0.0.1:54656”,“client”:“conn787”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.818+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20249, “ctx”:“conn787”,“msg”:“Authentication failed”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:“mms-automation”,“authenticationDatabase”:“admin”,“remote”:“127.0.0.1:54656”,“extraInfo”:{},“error”:“BadValue: 
SCRAM-SHA-256 authentication is disabled”}}\n{“t”:{\"$date\":“2022-07-10T06:15:59.824+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20250, “ctx”:“conn787”,“msg”:“Authentication succeeded”,“attr”:{“mechanism”:“SCRAM-SHA-1”,“speculative”:false,“principalName”:“mms-automation”,“authenticationDatabase”:“admin”,“remote”:“127.0.0.1:54656”,“extraInfo”:{}}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.382+00:00”},“s”:“I”, “c”:“SHARDING”, “id”:20997, “ctx”:“conn782”,“msg”:“Refreshed RWC defaults”,“attr”:{“newDefaults”:{}}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.386+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“192.168.248.2:44416”,“uuid”:“f97bd60d-3747-4864-8b45-58770eda6315”,“connectionId”:788,“connectionCount”:39}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.386+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“192.168.248.2:44418”,“uuid”:“439e6612-715e-4290-bfcb-afa40f51fcc2”,“connectionId”:789,“connectionCount”:40}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.408+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn789”,“msg”:“client metadata”,“attr”:{“remote”:“192.168.248.2:44418”,“client”:“conn789”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.408+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn788”,“msg”:“client metadata”,“attr”:{“remote”:“192.168.248.2:44416”,“client”:“conn788”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.409+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“192.168.248.2:44420”,“uuid”:“df1fd54b-e6fd-48b3-afc4-2afd2f5bc63a”,“connectionId”:790,“connectionCount”:41}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.419+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn790”,“msg”:“client metadata”,“attr”:{“remote”:“192.168.248.2:44420”,“client”:“conn790”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.430+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20250, “ctx”:“conn790”,“msg”:“Authentication succeeded”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:\"__system\",“authenticationDatabase”:“local”,“remote”:“192.168.248.2:44420”,“extraInfo”:{}}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.431+00:00”},“s”:“I”, “c”:\"-\", “id”:20883, “ctx”:“conn788”,“msg”:“Interrupted operation as its client disconnected”,“attr”:{“opId”:15903}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.432+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn789”,“msg”:“Connection ended”,“attr”:{“remote”:“192.168.248.2:44418”,“uuid”:“439e6612-715e-4290-bfcb-afa40f51fcc2”,“connectionId”:789,“connectionCount”:40}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.433+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn788”,“msg”:“Connection 
ended”,“attr”:{“remote”:“192.168.248.2:44416”,“uuid”:“f97bd60d-3747-4864-8b45-58770eda6315”,“connectionId”:788,“connectionCount”:39}}\n{“t”:{\"$date\":“2022-07-10T06:16:10.434+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn790”,“msg”:“Connection ended”,“attr”:{“remote”:“192.168.248.2:44420”,“uuid”:“df1fd54b-e6fd-48b3-afc4-2afd2f5bc63a”,“connectionId”:790,“connectionCount”:38}}\n{“t”:{\"$date\":“2022-07-10T06:16:15.912+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“Checkpointer”,“msg”:“WiredTiger message”,“attr”:{“message”:\"[1657433775:912061][4435:0x7f96a8fff700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 423, snapshot max: 423 snapshot count: 0, oldest timestamp: (1657433535, 1) , meta checkpoint timestamp: (1657433767, 1) base write gen: 1\"}}\n{“t”:{\"$date\":“2022-07-10T06:16:27.462+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“192.168.248.4:50328”,“uuid”:“c8c3b09e-ddee-4aa4-a63e-34f1051d2627”,“connectionId”:791,“connectionCount”:39}}\n{“t”:{\"$date\":“2022-07-10T06:16:27.473+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn791”,“msg”:“client metadata”,“attr”:{“remote”:“192.168.248.4:50328”,“client”:“conn791”,“doc”:{“driver”:{“name”:“NetworkInterfaceTL”,“version”:“5.0.9”},“os”:{“type”:“Linux”,“name”:“CentOS Linux release 7.9.2009 (Core)”,“architecture”:“x86_64”,“version”:“Kernel 3.10.0-1160.66.1.el7.x86_64”}}}}\n{“t”:{\"$date\":“2022-07-10T06:16:27.485+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20250, “ctx”:“conn791”,“msg”:“Authentication succeeded”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:\"__system\",“authenticationDatabase”:“local”,“remote”:“192.168.248.4:50328”,“extraInfo”:{}}}\n{“t”:{\"$date\":“2022-07-10T06:16:27.486+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“192.168.248.4:50334”,“uuid”:“db907fee-ff31-431d-9179-ef32db6cfb46”,“connectionId”:792,“connectionCount”:40}}\n{“t”:{\"$date\":“2022-07-10T06:16:27.594+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn792”,“msg”:“client metadata”,“attr”:{“remote”:“192.168.248.4:50334”,“client”:“conn792”,“doc”:{“driver”:{“name”:“NetworkInterfaceTL”,“version”:“5.0.9”},“os”:{“type”:“Linux”,“name”:“CentOS Linux release 7.9.2009 (Core)”,“architecture”:“x86_64”,“version”:“Kernel 3.10.0-1160.66.1.el7.x86_64”}}}}\n{“t”:{\"$date\":“2022-07-10T06:16:27.601+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20250, “ctx”:“conn792”,“msg”:“Authentication succeeded”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:\"__system\",“authenticationDatabase”:“local”,“remote”:“192.168.248.4:50334”,“extraInfo”:{}}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.487+00:00”},“s”:“I”, “c”:“SHARDING”, “id”:20997, “ctx”:“conn782”,“msg”:“Refreshed RWC defaults”,“attr”:{“newDefaults”:{}}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.490+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“192.168.248.2:44434”,“uuid”:“8ba9aa9f-9d58-42dd-9f8e-9e1de6fa241f”,“connectionId”:793,“connectionCount”:41}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.491+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“192.168.248.2:44436”,“uuid”:“6b1f202b-7db4-45ba-80ce-6302593880bc”,“connectionId”:794,“connectionCount”:42}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.511+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn794”,“msg”:“client 
metadata”,“attr”:{“remote”:“192.168.248.2:44436”,“client”:“conn794”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.511+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn793”,“msg”:“client metadata”,“attr”:{“remote”:“192.168.248.2:44434”,“client”:“conn793”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.512+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“192.168.248.2:44438”,“uuid”:“22f52e37-a1ee-4cf4-911b-1347710f8697”,“connectionId”:795,“connectionCount”:43}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.522+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn795”,“msg”:“client metadata”,“attr”:{“remote”:“192.168.248.2:44438”,“client”:“conn795”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“amd64”},“platform”:“go1.17.10”,“application”:{“name”:“MongoDB Automation Agent v12.0.6.7562 (git: cddb628636c9576fed56ce13bdc0a3d24e65ca1c)”}}}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.533+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20250, “ctx”:“conn795”,“msg”:“Authentication succeeded”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:\"__system\",“authenticationDatabase”:“local”,“remote”:“192.168.248.2:44438”,“extraInfo”:{}}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.535+00:00”},“s”:“I”, “c”:\"-\", “id”:20883, “ctx”:“conn793”,“msg”:“Interrupted operation as its client disconnected”,“attr”:{“opId”:16607}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.536+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn794”,“msg”:“Connection ended”,“attr”:{“remote”:“192.168.248.2:44436”,“uuid”:“6b1f202b-7db4-45ba-80ce-6302593880bc”,“connectionId”:794,“connectionCount”:42}}\n{“t”:{\"$date\":“2022-07-10T06:16:33.536+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn793”,“msg”:“Connection ended”,“attr”:{“remote”:“192.168.248.2:44434”,“uuid”:“8ba9aa9f-9d58-42dd-9f8e-9e1de6fa241f”,“connectionId”:793,“connectionCount”:41}}", "username": "Ray_Simpson" }, { "code": "mongodmongo-csharp-drivermongo-go-drivermongo-go-driver", "text": "Hi, Ray,Thank you for providing the mongod logs. From these I can see that the .NET/C# Driver isn’t even attempting to connect. These would show up in the client metadata with a driver name of mongo-csharp-driver whereas all we see are the mongo-go-driver and intracluster connections. (The mongo-go-driver is used by Atlas for various monitoring and orchestration functions.)It would be helpful to have the full logs including stack traces from your client application. 
That will hopefully allow us to see where the failure is happening that is preventing your connection to your MongoDB Atlas cluster.Thank you for providing the data to help us look into this issue with you.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "var connectionString = \"mongodb+srv://xxxmongodbuser:[email protected]/?retryWrites=true&w=majority\";\n var settings = MongoClientSettings.FromConnectionString(connectionString);\n settings.ConnectTimeout = TimeSpan.FromMinutes(1);\n settings.SocketTimeout = TimeSpan.FromMinutes(1);\n settings.ServerSelectionTimeout = TimeSpan.FromMinutes(1);\n settings.SslSettings = new SslSettings\n {\n EnabledSslProtocols = System.Security.Authentication.SslProtocols.Tls12\n };\n settings.UseTls = true;\n mongoClient = new MongoClient(settings);\n}\n\npublic static void SaveData(WeatherForecast[] weatherForecasts)\n {\n var database = mongoClient.GetDatabase(\"WeatherDB\");\n var collection = database.GetCollection<WeatherForecast>(\"Weather\");\n collection.InsertMany(weatherForecasts);\n }\n", "text": "Blockquote\nA timeout occurred after 120000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “ReplicaSet”, Type : “ReplicaSet”, State : “Disconnected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/ac-xxxxxxx-shard-00-00-pri.xxxxxx.mongodb.net:27017” }”, EndPoint: “Unspecified/ac-xxxxxxx-shard-00-00-pri.xxxxxx.mongodb.net:27017”, ReasonChanged: “Heartbeat”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: “MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n—> System.TimeoutException: Timed out connecting to 192.168.248.2:27017. Timeout was 00:02:00.\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.Connect(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n— End of inner exception stack trace —\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)”, LastHeartbeatTimestamp: “2022-07-14T20:42:18.6034577Z”, LastUpdateTimestamp: “2022-07-14T20:42:18.6034634Z” }, { ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/ac-xxxxxxx-shard-00-01-pri.xxxxxx.mongodb.net:27017” }”, EndPoint: “Unspecified/ac-xxxxxxx-shard-00-01-pri.xxxxxx.mongodb.net:27017”, ReasonChanged: “Heartbeat”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: “MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n—> System.TimeoutException: Timed out connecting to 192.168.248.3:27017. 
Timeout was 00:02:00.\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.Connect(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n— End of inner exception stack trace —\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)”, LastHeartbeatTimestamp: “2022-07-14T20:42:18.6013861Z”, LastUpdateTimestamp: “2022-07-14T20:42:18.6013965Z” }, { ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/ac-xxxxxxx-shard-00-02-pri.xxxxxx.mongodb.net:27017” }”, EndPoint: “Unspecified/ac-xxxxxxx-shard-00-02-pri.xxxxxx.mongodb.net:27017”, ReasonChanged: “Heartbeat”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: “MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n—> System.TimeoutException: Timed out connecting to 192.168.248.4:27017. Timeout was 00:02:00.\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.Connect(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n— End of inner exception stack trace —\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)”, LastHeartbeatTimestamp: “2022-07-14T20:42:18.8760889Z”, LastUpdateTimestamp: “2022-07-14T20:42:18.8760893Z” }] }.Here is the stack from my program. The program itself is very simple:", "username": "Ray_Simpson" }, { "code": "mongodjxxx-cluster-pri.xxxxxx.mongodb.netjxxx-cluster.xxxxxx.mongodb.net-pri", "text": "Hi, Ray,Thank you for providing the stack trace and program. Both are very helpful in understanding the root cause.I can see from your connection string and StackOverflow question that you are using VPC peering. The driver has created monitoring connections to the cluster nodes and attempted to heartbeat with each of them. These heartbeats are failing, which explains why the mongod logs show no connections from the driver. The monitoring connections and heartbeats are not even reaching your Atlas cluster.It would be helpful to determine if the problem is with the VPC peering setup or another problem. 
I would suggest removing VPC peering from the equation (at least temporarily) and seeing if you can connect from your Docker-hosted application to your MongoDB Atlas cluster.If a non-VPC connection also fails, please provide the application logs so we can determine if the monitoring threads are still unable to successfully connect to the cluster. If you are able to connect, then the VPC peering configuration is the cause of your connection failures and efforts should be focused there.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hey James,First thank you for trying to help me, I greatly appreciate it. I went back and recreated everything from epoch and it still failed. However, when I did what you suggested and tried to connect without the private connection - it worked. I have always been of the opinion that the issue is on the cloud side, as I am not a network engineer and have always struggled with that side of technology. I am going to keep working with it and see if I can get it to work over the peer connection. Thank you again.R", "username": "Ray_Simpson" }, { "code": "", "text": "Hi, Ray,Glad to hear that you were able to successfully connect without VPC peering. Yes, network engineering is its own microcosm with specialized skills and nomenclature. And the “magic” of containerized environments introduce their own challenges - especially when the physical NIC is shared between multiple virtual NICs.Still it is good that you were able to identify that the root cause of the connection failure was network configuration/challenges rather than an intrinsic driver issue. I would encourage you to ask for assistance with VPC peering setup in the MongoDB Atlas category as they will have more experience in this topic.Sincerely,\nJames", "username": "James_Kovacs" } ]
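One way to confirm the conclusion reached in this thread (network path versus application code) is to test the same connection string with a plain shell from inside the peered network, for example from a VM or container in the same VPC. This is only a hedged diagnostic sketch; the SRV URI and credentials are placeholders:

```shell
mongosh "mongodb+srv://user:[email protected]/?retryWrites=true&w=majority" \
  --eval "db.adminCommand({ ping: 1 })"
```

If this ping succeeds, DNS resolution and the peering route are fine and the problem lies in the application environment; if it times out the same way the driver heartbeats did, the VPC peering, routes or DNS configuration is the place to look.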
Client Disconnect on Cloud Run Docker .Net container
2022-07-11T16:40:12.258Z
Client Disconnect on Cloud Run Docker .Net container
4,553
null
[ "database-tools" ]
[ { "code": "", "text": "The console shows me an error and does not allow to execute a massive collection in a json.\n-There are no users in the database\n-This error from C9\nError:\nrs0:PRIMARY> mongoimport --db test --collection SucursalesDoc --file example.json\n2022-07-15T14:29:34.004+0000 E QUERY [js] SyntaxError: missing ; before statement @(shell):1:14", "username": "Lourdes_Nataly_Rojas_Hernandez" }, { "code": "", "text": "The query is:\nmongoimport -d test -c SucursalesDoc /home/ec2-user/environmentexample.json", "username": "Lourdes_Nataly_Rojas_Hernandez" }, { "code": "", "text": "You have to run mongoimport from os prompt but you are running it from mongo prompt\nPlease exit from mongo and run it from os prompt", "username": "Ramachandra_Tummala" }, { "code": "", "text": "thanks, but I already did that option and I get the same error.\nI thought it was for some connection with the ssl. but it still does not respond to any command", "username": "Lourdes_Nataly_Rojas_Hernandez" }, { "code": "", "text": "You mean same syntax error?\nShow us a screenshot", "username": "Ramachandra_Tummala" }, { "code": "", "text": "\nimage1380×617 93.3 KB\n\nThis is a other way that i found on the internet. (Image 1)", "username": "Lourdes_Nataly_Rojas_Hernandez" }, { "code": "", "text": "and the first way was this.:\n\nimage1152×153 19.2 KB\nbut i have an error.\nMy mail is: [email protected]", "username": "Lourdes_Nataly_Rojas_Hernandez" }, { "code": "", "text": "Is mongo tools bin added to your path?\necho $pathor run mongoimport giving full path of the binary", "username": "Ramachandra_Tummala" } ]
Help, I cannot import JSON from C9 into MongoDB, I get an error when executing the query
2022-07-15T14:40:08.182Z
Help, I cannot import JSON from C9 into MongoDB, I get an error when executing the query
2,595
null
[]
[ { "code": "", "text": "I would like to remove my account. How can I do this?", "username": "Dolores_Kefalos" }, { "code": "", "text": "Hi @Dolores_Kefalos,Welcome to the MongoDB Community forums Which specific account do you want to delete? If you are looking to delete your MongoDB Atlas account refer to this page here.If you have any doubts, please feel free to reach out to us.Regards,\nKushagra Kesav", "username": "Kushagra_Kesav" }, { "code": "", "text": "I set up an account under “Try for Free”, and I no longer need this account.", "username": "Dolores_Kefalos" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I delete my account and start over?
2022-07-15T15:49:30.239Z
How do I delete my account and start over?
2,568
https://www.mongodb.com/…e_2_1024x512.png
[ "aggregation", "python", "text-search" ]
[ { "code": "{\n \"_id\": \"937a04d3f516443e87abe8308a1fe83e\",\n \"username\": \"andy\",\n \"full_name\": \"andy white\",\n \"image\" : \"https://abc.com/xy.jpg\",\n... etc\n}\nmatch_stage = [\n {\"$match\": {\"$text\": {\"$search\": \"abc\"}}},\n {\"$sort\": {\"score\": {\"$meta\": \"textScore\"}}},\n {\"$project\": {\"username\": 1,\"full_name\": 1,\"image\":1}}\n]\n\nstages = [\n *match_stage\n]\nusers = users_db.aggregate(stages)\n", "text": "I have a collection named users, it has following attributesi want to make a text search on full_name and username using aggregation pipeline, so that if a user search for any 3 letters, then the most relevant full_name or username returned sorted by relevancy,\ni have already created text index on username and full_name and then i tried query from below link:but i am getting below error:pymongo.errors.OperationFailure: FieldPath field names may not start with ‘$’. Consider using $getField or $setField., full error: {‘ok’: 0.0, ‘errmsg’: “FieldPath field names may not start with ‘$’. Consider using $getField or $setField.”, ‘code’: 16410, ‘codeName’: ‘Location16410’, ‘$clusterTime’: {‘clusterTime’: Timestamp(1657811022, 14), ‘signature’: {‘hash’: b’a\\xb4rem\\x02\\xc3\\xa2P\\x93E\\nS\\x1e\\xa6\\xaa\\xb0\\xb1\\x85\\xb5’, ‘keyId’: 7062773414158663703}}, ‘operationTime’: Timestamp(1657811022, 14)}Note: i am using pymongo", "username": "Zeeshan_Anis" }, { "code": "mongoshmongodb.collection.getIndexes()", "text": "@Zeeshan_Anis, your PyMongo code works fine, without any errors. Just verify if the Text Index is created by running this command from the mongosh or mongo shell - db.collection.getIndexes(), or just lookup in the Compass under the Indexes tab.What are the versions of MongoDB database, Python and PyMongo you are working with?", "username": "Prasad_Saya" }, { "code": "", "text": "\nScreenshot 2022-07-15 at 10.14.43 AM1322×720 57.8 KB\n\nHi @Prasad_Saya i am using MongoDB 5.0.9 Enterprise, Python 3.9.7 and pymongo 4.1.1\nscreenshot attached after running getindexes on users collection, right now i have removed index from username and index added on just full_name attribute just to get it start working but this issue is still unresolved", "username": "Zeeshan_Anis" }, { "code": "testtestfull_namefull_name$matchimport pymongo\nclient = pymongo.MongoClient()\ncollection = client.test.test\n\ndoc = { \"username\" : \"andy\", \"full_name\" : \"andy white\", \"image\" : \"https://abc.com/xy.jpg\" }\nresult = collection.insert_one(doc)\nprint(result, '\\n')\n\nfor doc in collection.find():\n\tprint(doc)\n\nresult = collection.create_index([( \"full_name\", pymongo.TEXT )])\nprint('\\n', result, '\\n')\n\nresult = list(collection.list_indexes())\nprint(result, '\\n')\n\npipeline = [\n { \"$match\": { \"$text\": { \"$search\": \"white\" } } },\n { \"$sort\": { \"score\": { \"$meta\": \"textScore\" } } },\n { \"$project\": { \"username\": 1, \"full_name\": 1, \"image\": 1 } }\n]\n\nprint(list(collection.aggregate(pipeline)))\n", "text": "@Zeeshan_Anis, I tried the following code and works fine using MongoDB v4.2, Pyhton 3.8 and PyMongo 4.x. The code creates a new collection test in the database test, inserts a document in it, queries the document, creates a Text Index on the field full_name, and queries on the full_name using an aggregation $match stage.I don’t see any errors in the output. 
The aggregation prints the document.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Prasad_Saya for your help, i also able to run this but how can i search with half name, like i search for “and” and i get all the results closer to “and” and sorted by relevancy", "username": "Zeeshan_Anis" }, { "code": "$regex$regexMatch$regexFind", "text": "@Zeeshan_Anis, for searching with a substring of text field you can use the regular expressions. There is a $regex query operator and also $regexMatch and $regexFind aggregation operators you can try using.", "username": "Prasad_Saya" }, { "code": "match_stage = [\n {\"$match\": {\"$text\": {\"$search\": \"whi\"}, \"full_name\": {\"$regex\": \"whi\"}}},\n {\"$sort\": {\"score\": {\"$meta\": \"textScore\"}}}\n]\n raise OperationFailure(errmsg, code, response, max_wire_version)\npymongo.errors.OperationFailure: FieldPath field names may not start with '$'. Consider using $getField or $setField., full error: {'ok': 0.0, 'errmsg': \"FieldPath field names may not start with '$'. Consider using $getField or $setField.\", 'code': 16410, 'codeName': 'Location16410', '$clusterTime': {'clusterTime': Timestamp(1657869096, 4), 'signature': {'hash': b'\\xb6\\x03Nh\\x8cX\\xab\\xb9)\\xa2\\x8c_^\\xa8\\x0f\\xf25\\xbd\\x89_', 'keyId': 7062773414158663703}}, 'operationTime': Timestamp(1657869096, 4)}\n", "text": "@Prasad_Saya i am trying below pipeline:but i again started getting below error:one more thing when i was trying without regex, i was getting error due to $project, like the query you are running above successfully, i am able to run if i remove $project from there, but with project my query breaks with the error i posted in my first question, its strange", "username": "Zeeshan_Anis" }, { "code": "match_stage = [\n {\"$match\": {\"$text\": {\"$search\": \"whi\"}, \"full_name\": {\"$regex\": \"whi\"}}},\n {\"$sort\": {\"score\": {\"$meta\": \"textScore\"}}}\n]\n$regex", "text": "Are you sure its the correct syntax? I don’t know if you can use the $regex operator within a Text Search.", "username": "Prasad_Saya" }, { "code": "", "text": "its the syntax i used before but without text search, but i found one link to use regex with search just now and trying:Learn how to use a regular expression in your Atlas Search query.", "username": "Zeeshan_Anis" }, { "code": "", "text": "About the link in your previous comment:@Zeeshan_Anis I am not familiar with the Atlas Search. You can lookup in the MongoDB Server documentation.", "username": "Prasad_Saya" }, { "code": "match_stage = [{\n \"$search\": {\n \"index\": \"full_name_txt\",\n \"regex\": {\n \"query\": search_key,\n \"path\": \"full_name\"\n\n }\n }\n}\n]\nmatch_stage = [\n {\"$match\": {\"$text\": {\"$search\": \"whit\"}, \"full_name\": {\"regex\": \"whit\"}}},\n {\"$sort\": {\"score\": {\"$meta\": \"textScore\"}}},\n {\"$project\": {\"username\": 1, \"full_name\": 1}}\n]\n", "text": "@Prasad_Saya thanks, i tried below pipeline:but its not searching partial strings, i am trying more, but meanwhile can u give the idea, why query is failing in aggregate with $project which i posted at first i.e:", "username": "Zeeshan_Anis" }, { "code": "", "text": "Hi @Zeeshan_AnisI think you’re mixing Atlas Search & Legacy Text Search. Those are the two types of text search supported in MongoDB (see Text Search). 
However, to use Atlas Search your data would need to be in MongoDB Atlas.i am using MongoDB 5.0.9 EnterpriseThis is an on-prem installation, thus it only supports the Legacy Text Search.if a user search for any 3 letters, then the most relevant full_name or username returned sorted by relevancyThe legacy text search does not support partial matches. If this is your requirement, then you would have to use Atlas search. There is a tutorial to do exactly this: How to Run Partial Match Atlas Search Queries. There are also examples using Python in the page (note that you can select the language for the examples in that page).However should you decide that your data needs to stay on-prem, then @Prasad_Saya’s working example is a great starting point. You just don’t have the ability to do partial matches.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for your answer @kevinadi. its new for me , i will try this partial search in Atlas as i am using Mongo DB Atlas", "username": "Zeeshan_Anis" } ]
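For reference, the partial-match route on Atlas typically uses the autocomplete operator of the $search stage, which requires the field to be mapped with the autocomplete type in the Atlas Search index. A hedged mongosh-style sketch (the index name "default" is an assumption, and the same pipeline can be passed to PyMongo as a list of dicts):

```javascript
db.users.aggregate([
  {
    $search: {
      index: "default",                                   // assumed search index name
      autocomplete: { query: "and", path: "full_name" }   // matches "andy", "andrew", ...
    }
  },
  { $project: { username: 1, full_name: 1, score: { $meta: "searchScore" } } },
  { $limit: 10 }
])
```

Results from $search come back ordered by relevance score, so no separate $sort stage is needed.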
Text search in aggregation
2022-07-14T15:30:00.449Z
Text search in aggregation
5,257
null
[]
[ { "code": "", "text": "hello,we have problem with mongodb big cpu usage without slow queries,\ndo you know what is next step in goal for detect reason and fix problem?", "username": "onerror_onerror" }, { "code": "", "text": "Hi,\nhow did you check that there were no slow queries or COLLSCAN’s ?\nWhat about other metrics like the current number of available WiredTiger tickets?\nYou need to provide more information (including mongo version and deployment type - eg. Replica Set or Sharded cluster)", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "hi,in mongo is log, where appear all requests which is slower then 100msdb.serverStatus().wiredTiger.cache\n{\n“application threads page read from disk to cache count” : 2342,\n“application threads page read from disk to cache time (usecs)” : 131576,\n“application threads page write from cache to disk count” : 74736,\n“application threads page write from cache to disk time (usecs)” : 894555,\n“bytes allocated for updates” : 739705320,\n“bytes belonging to page images in the cache” : 128861140,\n“bytes belonging to the history store table in the cache” : 571,\n“bytes currently in the cache” : 878399222,\n“bytes dirty in the cache cumulative” : 12955102239,\n“bytes not belonging to page images in the cache” : 749538081,\n“bytes read into cache” : 123093866,\n“bytes written from cache” : 3180820163,\n“cache overflow score” : 0,\n“checkpoint blocked page eviction” : 77,\n“checkpoint of history store file blocked non-history store page eviction” : 0,\n“eviction calls to get a page” : 6654,\n“eviction calls to get a page found queue empty” : 4557,\n“eviction calls to get a page found queue empty after locking” : 5,\n“eviction currently operating in aggressive mode” : 0,\n“eviction empty score” : 0,\n“eviction gave up due to detecting an out of order on disk value behind the last update on the chain” : 0,\n“eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update” : 0,\n“eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain” : 0,\n“eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update” : 0,\n“eviction passes of a file” : 137,\n“eviction server candidate queue empty when topping up” : 8,\n“eviction server candidate queue not empty when topping up” : 0,\n“eviction server evicting pages” : 0,\n“eviction server slept, because we did not make progress with eviction” : 52590,\n“eviction server unable to reach eviction goal” : 0,\n“eviction server waiting for a leaf page” : 882148,\n“eviction state” : 64,\n“eviction walk most recent sleeps for checkpoint handle gathering” : 0,\n“eviction walk target pages histogram - 0-9” : 37,\n“eviction walk target pages histogram - 10-31” : 95,\n“eviction walk target pages histogram - 128 and higher” : 0,\n“eviction walk target pages histogram - 32-63” : 5,\n“eviction walk target pages histogram - 64-128” : 0,\n“eviction walk target pages reduced due to history store cache pressure” : 0,\n“eviction walk target strategy both clean and dirty pages” : 0,\n“eviction walk target strategy only clean pages” : 0,\n“eviction walk target strategy only dirty pages” : 137,\n“eviction walks abandoned” : 29,\n“eviction walks gave up because they restarted their walk twice” : 92,\n“eviction walks gave up because they saw too many pages and found no candidates” : 0,\n“eviction walks gave up because they saw too many pages and found too few candidates” : 
0,\n“eviction walks reached end of tree” : 227,\n“eviction walks restarted” : 0,\n“eviction walks started from root of tree” : 135,\n“eviction walks started from saved location in tree” : 2,\n“eviction worker thread active” : 4,\n“eviction worker thread created” : 0,\n“eviction worker thread evicting pages” : 1767,\n“eviction worker thread removed” : 0,\n“eviction worker thread stable number” : 0,\n“files with active eviction walks” : 0,\n“files with new eviction walks started” : 135,\n“force re-tuning of eviction workers once in a while” : 0,\n“forced eviction - history store pages failed to evict while session has history store cursor open” : 0,\n“forced eviction - history store pages selected while session has history store cursor open” : 0,“forced eviction - history store pages successfully evicted while session has history store cursor open” : 0,\n“forced eviction - pages evicted that were clean count” : 0,\n“forced eviction - pages evicted that were clean time (usecs)” : 0,\n“forced eviction - pages evicted that were dirty count” : 348,\n“forced eviction - pages evicted that were dirty time (usecs)” : 527113,\n“forced eviction - pages selected because of a large number of updates to a single item” : 0,\n“forced eviction - pages selected because of too many deleted items count” : 639,\n“forced eviction - pages selected count” : 939,\n“forced eviction - pages selected unable to be evicted count” : 104,\n“forced eviction - pages selected unable to be evicted time” : 142,\n“hazard pointer blocked page eviction” : 105,\n“hazard pointer check calls” : 2706,\n“hazard pointer check entries walked” : 39667,\n“hazard pointer maximum array length” : 2,\n“history store score” : 0,\n“history store table insert calls” : 0,\n“history store table insert calls that returned restart” : 0,\n“history store table max on-disk size” : 0,\n“history store table on-disk size” : 4096,\n“history store table out-of-order resolved updates that lose their durable timestamp” : 0,\n“history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp” : 0,\n“history store table reads” : 0,\n“history store table reads missed” : 0,\n“history store table reads requiring squashed modifies” : 0,\n“history store table truncation by rollback to stable to remove an unstable update” : 0,\n“history store table truncation by rollback to stable to remove an update” : 0,\n“history store table truncation to remove an update” : 0,\n“history store table truncation to remove range of updates due to key being removed from the data page during reconciliation” : 0,\n“history store table truncation to remove range of updates due to out-of-order timestamp update on data page” : 0,\n“history store table writes requiring squashed modifies” : 0,\n“in-memory page passed criteria to be split” : 974,\n“in-memory page splits” : 487,\n“internal pages evicted” : 2,\n“internal pages queued for eviction” : 1,\n“internal pages seen by eviction walk” : 38,\n“internal pages seen by eviction walk that are already queued” : 0,\n“internal pages split during eviction” : 0,\n“leaf pages split during eviction” : 110,\n“maximum bytes configured” : 33094107136,\n“maximum page size at eviction” : 4911259,\n“modified pages evicted” : 2037,\n“modified pages evicted by application threads” : 0,\n“operations timed out waiting for space in cache” : 0,\n“overflow pages read into cache” : 0,\n“page split during eviction deepened the tree” : 0,\n“page written requiring history store records” : 0,\n“pages currently held in the cache” : 
2227,\n“pages evicted by application threads” : 0,\n“pages evicted in parallel with checkpoint” : 988,\n“pages queued for eviction” : 800,\n“pages queued for eviction post lru sorting” : 510,\n“pages queued for urgent eviction” : 1572,\n“pages queued for urgent eviction during walk” : 222,\n“pages queued for urgent eviction from history store due to high dirty content” : 0,\n“pages read into cache” : 2454,\n“pages read into cache after truncate” : 38,\n“pages read into cache after truncate in prepare state” : 0,\n“pages requested from the cache” : 150589702,\n“pages seen by eviction walk” : 2345,\n“pages seen by eviction walk that are already queued” : 463,\n“pages selected for eviction unable to be evicted” : 182,\n“pages selected for eviction unable to be evicted as the parent page has overflow items” : 0,“pages selected for eviction unable to be evicted because of active children on an internal page” : 0,\n“pages selected for eviction unable to be evicted because of failure in reconciliation” : 0,\n“pages selected for eviction unable to be evicted because of race between checkpoint and out of order timestamps handling” : 0,\n“pages walked for eviction” : 30554,\n“pages written from cache” : 75277,\n“pages written requiring in-memory restoration” : 1662,\n“percentage overhead” : 8,\n“the number of times full update inserted to history store” : 0,\n“the number of times reverse modify inserted to history store” : 0,\n“tracked bytes belonging to internal pages in the cache” : 4629506,\n“tracked bytes belonging to leaf pages in the cache” : 873769716,\n“tracked dirty bytes in the cache” : 631664742,\n“tracked dirty pages in the cache” : 1089,\n“unmodified pages evicted” : 0\n}this all statswe have standalone server, not replicaset", "username": "onerror_onerror" }, { "code": "Keyhole-loginfotop\niostat\nps -eo pcpu,pid,user,args | sort -k 1 -r | head -10\ndb.currentOp({\"secs_running\": {$gte: 3}})\ndb.serverStatus().wiredTiger.concurrentTransactions\ndb.serverStatus().globalLock\n", "text": "What is your mongo version ? There is a very good tool keyhole, it will help you collects information about your instance, also you can use keyhole to analyze your mongod log file - The logs keep a history of the server operations over time. Keyhole, with the -loginfo option, reads mongo logs and prints a summary of slow operations grouped by query patterns.Is MongoDB only one instance that is running on your machine or there is also some other server? Is the CPU consumption high all the time or only from time to time ?\nin the beginning, during high CPU usage, can you provide the output of:", "username": "Arkadiusz_Borucki" } ]
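One extra check to add to the list above: the slow-query log only captures operations above the slowms threshold (100 ms by default), so a high volume of fast but inefficient queries can keep the CPU busy without ever appearing there. Temporarily lowering the threshold makes them visible. A sketch in mongosh, where the 20 ms value is an arbitrary assumption:

```javascript
// keep the profiler off (level 0) but log any operation slower than 20 ms
db.setProfilingLevel(0, { slowms: 20 })

// ...reproduce the load, inspect the mongod log, then restore the default
db.setProfilingLevel(0, { slowms: 100 })
```

The options-document form of setProfilingLevel needs a reasonably recent server; on older versions the numeric form db.setProfilingLevel(0, 20) does the same thing.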
Mongodb big cpu without slow queries
2022-07-14T16:45:33.174Z
Mongodb big cpu without slow queries
2,219
null
[ "queries", "node-js", "crud", "mongoose-odm" ]
[ { "code": "mongoose findOneAndUpdatefor(object in array)\n mongoose.findOneAndUpdate({a: object.a}, {b:object.b}) // this runs 11lakh times which is taking a lot of time.\n", "text": "I have 1.1 million records in an array. Each record represents a document in mongoDb. I need to update each document in DB through a script.I created a script and iterated over an array of 1.1 million records and called mongoose findOneAndUpdate method. This approach works but it takes a lot of time.Pseudocode:Is there a way I can update the records in an time efficient manner?", "username": "amit_dhawan" }, { "code": "", "text": "You need https://www.mongodb.com/docs/manual/reference/method/db.collection.bulkWrite/I do not know if you can use it despite the fact you are using mongoose. Hopefully, you still have access to the native driver API.", "username": "steevej" }, { "code": "", "text": "so I need to something like pass an array of 1.1 million operations to bulkwrite function. Corect?", "username": "amit_dhawan" }, { "code": "", "text": "something like pass an array of … operations to bulkwrite function. Corect?The API documentation I shared clearly indicates that the first parameter has to be an array of operations. I really do not know what other confirmation you need. I really cannot explain better than what is there.", "username": "steevej" }, { "code": "", "text": "Thanks for replying. I was just confirming on the large number of operations can be supported or not.", "username": "amit_dhawan" }, { "code": "100,000hello.maxWriteBatchSizemaxWriteBatchSize100,000200,000100,000", "text": "There’s a limit with bulkwrite, however you can still proceed to add 1.1 million arrays (memory intensive).For your general knowledgeThe number of operations in each group cannot exceed the value of the maxWriteBatchSize of the database. As of MongoDB 3.6, this value is 100,000 . This value is shown in the hello.maxWriteBatchSize field.This limit prevents issues with oversized error messages. If a group exceeds this limit, the client driver divides the group into smaller groups with counts less than or equal to the value of the limit. For example, with the maxWriteBatchSize value of 100,000 , if the queue consists of 200,000 operations, the driver creates 2 groups, each with 100,000 operations.If you use runCommand, however, it will throw error if it exceeds the limit.", "username": "Dave_Teu" } ]
MongoDB update takes a lot of time for a large number of inputs
2022-07-13T15:41:35.344Z
MongoDB update takes a lot of time for a large number of inputs
3,863
null
[ "dot-net" ]
[ { "code": "public class Game : RealmObject\n{\n [PrimaryKey] long Id { get; set; }\n public ISet<User> Players { get; }\n /* Other Properties */\n}\n\npublic class User: RealmObject\n{\n [PrimaryKey] long Id { get; set; }\n public string Name { get; set; }\n\n [Backlink(nameof(Game.Players))]\n public IQueriable<Game> Games { get; }\n}\nrealm.write( () =>\n{\n/* This code does not work */\n // myPlayer comes from a different method and is not retrieved from the database\n if ( !myGame.Players.Contains( myPlayer ) )\n myGame.Players.Add(myPlayer); // <-- this is the problem code\n\n/* This code works */\n if ( !myGame.Players.Contains( myPlayer ) )\n {\n var player = realm.Find<User>( myPlayer.Id ) ?? myPlayer;\n myGame.Players.Add( player );\n }\n\n/* This code should work as well */\n var game = myGame.GetDeepCopy();\n game.Players.Add( myPlayer ); \n realm.Add( game, update:true );\n});\n", "text": "Hi,I have a question regarding updating an entry in a write operation with data already existing in the database but provided from outside. Consider this:Now, I want to update the Game Object with new Data, including new Users:The problem arises when the user is not yet a player for the game and gets added, but already exists in the database. I get the player from a different source, not from the database. I know the name and id correctly, that is not a problem. All I want is to add it to the game. But that will through an exception for an already existing primary key.If I search for the player by Id and add the realm object instead it works. But since I know that the player is not modified in any way and it only gets added to the Player list of the game, retrieving this object is just needless overhead.In my actual code, this would have to be done for multiple types with multiple objects each, so for one “Game” it would have to be done a dozen times, and for hundreds or even thousands of “Games”. So it would be quite a lot of overhead.Right now I see only two ways of adding the player to the game in my example, either retrieve the objects and add those to the list or create a copy, add the non-realm object to the copy and upsert them to the realm database (I assume, didn’t test this version).Does someone have a way to do this without any needless searches or copies?", "username": "Thorsten_Schmitz" }, { "code": "usergamemyGame.Players.Adduseruserpkuseruser", "text": "Hi again @Thorsten_Schmitz,When adding a user to a game the backlink that you’ve defined must add the user to the database too, if not already existing. This is the only way for the backlink to function at all.\nNow, you can’t add multiple objects with the same primary key. They have to be unique.\nSo, in your case, when myGame.Players.Add realm searches for the user to link to and if not there it’ll add it. And if another user was already there with the same pk you get the error you mentioned.You seem concerned with performance. Usually realm is quite fast on retrievals/reads. Have you profiled your application and found the hit to be considerable? If so, we’d be interested in seeing your results.Overall, if your profiling shows considerable hits you could consider caching the users and only use those. This should allow your “different source” to return the same user instance which will prevent realm from adding it again.", "username": "Andrea_Catalini" }, { "code": "", "text": "Hi,I haven’t profiled anything yet. 
I will see if I can add this.I’m worried because it might end up being several hundred thousand or (worst case) even millions of lookups, all performed locally on phone or tablet. And the objects would be disgarded afterwards.It will also make the code much more cluttered and I was hoping to avoid that somehow.Caching is not an option because the data in my app will come from an external source and is parsed by my app. Returning a RealmObject for this would just move all those lookups to another function.Also, this is part of a bigger update function. It’s basically abatch/bulk operation, so all those lookups are performed in that one function, and I would have to cache everything at once.I need to see how I can improve performance by putting multiple update operations in one write, but I’m worried about syncronization between the update threads as I need to handle things like multiple thread trying to add the same new user.The user is just for better storage and the backlink querrying. I could make it an EmbeddedObject, but I’m worried about memory usage. Using an EmbeddedObject would result in duplicate strings. And as they are much bigger that the long keys the database size might increase a lot, and I have to consider a database size of several Gb on fat32 Android systems. So I can’t just neglect memory usage either.", "username": "Thorsten_Schmitz" }, { "code": "BacklinkEmbeddedObject", "text": "Backlinks in your case are the right choice. EmbeddedObjects are just duplication and it’d take time to update the “same user” in all games.\nAnd I agree that caching those instances doesn’t sound a good fit for you.About the code clutter, that should be easily fixed by a helper or extension method. So that from the surface your call would be a 1 line call.Adding an object to a collection by PK is currently a functionality that we don’t support. You could open a feature request on the github repo of the .NET SDK.Let’s talk about performance:\nYes, if you can do fewer writes with larger updates in each you’ll have some gains. In fact, this would create fewer Transactions hence less jumps back and forth between managed and unmanaged.\nLastly, I understand your concerns about possible hit in performance. But, in general, any performance conversation should be backed by profiled code. This is simply because, while theoretically an operation is slow in practice, it may almost be imperceptible to the users.I hope this helps.", "username": "Andrea_Catalini" } ]
Update a list with existing elements throws an exception
2022-07-11T22:03:45.643Z
Update a list with existing elements throws an exception
1,536
https://www.mongodb.com/…d_2_1024x535.png
[ "node-js", "production" ]
[ { "code": "client.connect()client.connect()client.startSession()collection.initializeUnorderedBulkOp()collection.initializeOrderedBulkOp()collection.bulkWrite()client.connect()", "text": "The MongoDB Node.js team is pleased to announce version 4.8.0 of the mongodb package!In this release you will now get auto-complete and type safety for nested keys in an update filter. See the example below:\n\nimage11118×585 55.5 KB\nIn our last release we made explicitly calling client.connect() before performing operations optional with some caveats. In this release client.startSession() can now be called before connecting to MongoDB.NOTES:We invite you to try the mongodb library immediately, and report any issues to the NODE project.", "username": "neal" }, { "code": "", "text": "A post was split to a new topic: Error: Can’t resolve ‘mongodb-client-encryption’", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Node.js Driver 4.8.0 Released
2022-07-13T15:50:40.884Z
MongoDB Node.js Driver 4.8.0 Released
2,536
null
[ "replication" ]
[ { "code": "", "text": "Hi All,\nI have a problem with only one collection on my mongodb replicaset. The Mongo DB Server version is 4.4.6. The collection is working since more than an year. The size of it is around 30 GB and the amount of documents is around 4.300.000. The issue is that all over sudden, I am not able to write and insert new data on this collection. I am able to delete and modify old data, but when trying to write - I receive timeout. There are no problems to write on the other collections in the same DB or to create new collections.\nThe replication status is OK and there is no lag between the nodes.Could you please give me some advice what can I look for?Thank you", "username": "Aleksandar_Aleksandrov" }, { "code": "", "text": "check if your user privilege is changed to read only. or if you are logging in with a non-privileged user. or you are logging into a secondary node instead of a primary.then check the log file at the time you are trying to write. check config file if there is a size setting from when the server/database was created. check the database itself from the shell if there is a size restriction on it.a disk failure holding the file for this database might give an error on the primary, so check OS for such errors.", "username": "Yilmaz_Durmaz" } ]
Unable to write in one collection from database
2022-07-12T16:05:23.515Z
Unable to write in one collection from database
2,104
null
[]
[ { "code": "", "text": "I have a total of 8GB ram, and 8GB swap space. While building mongo from source, the usage goes out of bounds, and it crashes. As far as i have notices, the swap space goes to a maximum of 1.3G before crashing, I was wondering if this is what is intended by the OS?PS. It successfully got built in my 8GB RAM intel mac (2017 model)Hence, I Wanted to confirm if increasing my RAM to 16 or 24GB would actually solve the issue? Like what could the max RAM usage jump to?Thanks", "username": "Sahil_Chawla1" }, { "code": "python3 buildscripts/scons.py install-mongod -j1\n", "text": "As per https://jira.mongodb.org/browse/SERVER-68043Try running using -j1 or -jX (where X is the number of threads). Example:", "username": "psyntium" } ]
Unable to build from source (Ubuntu)
2022-06-08T05:59:07.854Z
Unable to build from source (Ubuntu)
2,604
null
[ "php" ]
[ { "code": "", "text": "We have a huge DB (<500GB) hosted on AWS. When we clone this and try to access it, we are getting this message - “Detected corrupt BSON data for field path ‘domain’ at offset 62 {“exception”:”[object] (MongoDB\\Driver\\Exception\\UnexpectedValueException(code: 0): Detected corrupt BSON data for field path ‘domain’ at offset 62 at vendor/jenssegers/mongodb/src/Jenssegers/Mongodb/Query/Builder.php:410)\"The index rebuilding is failing during Mongo DB repair. This process takes days to complete. We are using Laravel and GitHub - mongodb/mongo-php-driver: The Official MongoDB PHP driver. The package suggests repairing the DB but since thats failing, we have been stuck at this for weeks. Any help would be much appreciated,", "username": "indrajith_N_A" }, { "code": "mongoexport", "text": "Hello @indrajith_N_A ,Welcome to the community!! Could you please provide more information to help us understand this issue in detail?When we clone this and try to access it, we are getting this messageThe index rebuilding is failing during Mongo DB repair.*Are you using mongod --repair for this and what does it says when it fails?The package suggests repairing the DB but since thats failing, we have been stuckRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi Tarun,How was the clone clone created?\nWe took an AMI of the AWS instance.MongoDB shell version is 3.6.4 and driver is mongo-php-driver\nGitHub - mongodb/mongo-php-driver: The Official MongoDB PHP driver-Does exporting the collection using mongoexport show the same error?\nWe tried the export feature using MongoDB Compass.On exporting the data, we are getting an error “Invalid UTF-8 string in BSON document”. We could try out MongoExport if you think it could make any difference.What’s the error you see during index building?After a couple of days, the server crashes and the DB becomes unusable. We doubled the RAM and tried again, same happened. Will try to find out the error.Are you using mongod --repair for this and what does it says when it fails?\nWe did try this option. I will find out if any error messages can be dug out.Package name is mentioned at the top. Error message is - Detected corrupt BSON data for field path ‘domain’. It happens when filtering some data.", "username": "indrajith_N_A" }, { "code": "", "text": "Hi @indrajith_N_AWe took an AMI of the AWS instance.If the idea is to copy a deployment from one server to another, using the tested and supported MongoDB backup methods may be better. In fact, I would recommend you try using the supported backup & restore methods to see if it results in the same error you’re seeing with the AMI process.We could try out MongoExport if you think it could make any difference.It would be interesting to see if mongoexport encounters the same issue. Please attach all error messages from mongoexport if this is possible.Tarun", "username": "Tarun_Gaur" }, { "code": "2022-07-04T01:59:01.203+0000 I CONTROL [initandlisten]\n2022-07-04T01:59:01.229+0000 I STORAGE [initandlisten] Expected index data is missing, rebuilding. NS: breachaware.breached_accounts Index: _id_ Ident: index-3--8744299071372410985\n2022-07-04T01:59:01.229+0000 I STORAGE [initandlisten] Expected index data is missing, rebuilding. NS: breachaware.breached_accounts Index: domain_alias_compound_index Ident: index-4--8744299071372410985\n2022-07-04T01:59:01.229+0000 I STORAGE [initandlisten] Expected index data is missing, rebuilding. 
NS: breachaware.breached_accounts Index: breach_id_index Ident: index-5--8744299071372410985\n2022-07-04T01:59:01.229+0000 I INDEX [initandlisten] found 2 index(es) that wasn't finished before shutdown\n2022-07-04T01:59:01.229+0000 F - [initandlisten] Fatal assertion 40592 InternalError: IndexCatalog has left over indexes that must be cleared ns: breachaware.breached_accounts at src/mongo/db/db.cpp 465\n2022-07-04T01:59:01.229+0000 F - [initandlisten]\n\n***aborting after fassert() failure\n", "text": "This is the error that we’re getting while indexing", "username": "indrajith_N_A" }, { "code": "breachaware.breached_accounts_id_domain_alias_compound_indexbreach_id_index", "text": "It seems like a data corruption issue, could you help me with below?Are you seeing any issues with the original Database(not the clone)? That is, in the collection breachaware.breached_accounts , are you seeing the aforementioned indexes intact and functional ( _id_ , domain_alias_compound_index , and breach_id_index ).The message is typically displayed when there is disk-level data corruption. If there is no issue on the original database, and this is only present on the clone, then the clone has corrupted data.Please use the supported backup & restore method for moving data between MongoDB instances , or using an initial sync on a replica set. Other methods are not supported, may cause issue with the clone/backup, and can possibly also affect the integrity of the original database if the backup method is especially invasive (e.g. not shutting down MongoDB before copying data, inadvertent modification of the dbpath while mongod is running, etc., any of them can have catastrophic consequences).Tarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Detected corrupt BSON data error when cloning a huge DB
2022-07-01T12:08:34.548Z
Detected corrupt BSON data error when cloning a huge DB
4,137
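The supported backup & restore approach recommended above typically means taking a dump with mongodump and loading it with mongorestore rather than copying the dbPath via an AMI. A minimal sketch, assuming placeholder hosts, credentials and paths (the database name comes from the log excerpt above):

mongodump --uri="mongodb://user:pass@source-host:27017" --db=breachaware --out=/backups/breachaware
mongorestore --uri="mongodb://user:pass@target-host:27017" --dir=/backups/breachaware

If the source data is corrupt, mongodump will usually surface the same BSON errors, which helps confirm whether the corruption exists on the original deployment or only on the clone.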
https://www.mongodb.com/…e_2_1024x512.png
[ "security" ]
[ { "code": "", "text": "Hi All,\nWe are looking at MongoDB for our NoSQL db and need an answer on what the default permission are for a user e.g., if a user is created and no roles are provided.Hopefully the answer is nothing (including query and writing any data), but this article implies a user (without any roles) will be able to read all of the data in the db that the user was created", "username": "Michael_Byrd1" }, { "code": "roles:[]> show collections\nMongoServerError: not authorized on test to execute command \n{ listCollections: 1, filter: {}, cursor: {}, nameOnly: true, authorizedCollections: false, \nlsid: { id: UUID(\"...\") }, $db: \"test\", $readPreference: { mode: \"primaryPreferred\" } }\n", "text": "Hi @Michael_Byrd1,Welcome to the MongoDB Community Forums what the default permission is for a user e.g., if a user is created and no roles are provided.There is no default permission if you do not enter any value within the roles:[] and if you try to execute the command after being authorized as that particular user, it will throw an error:this article implies a user (without any roles) will be able to read all of the data in the DB that the user was createdSorry, but I couldn’t find this statement in the official docs. Please share the screenshot if you read it somewhere in our documentation.Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
If a user is created with an empty array of roles, what can the user do?
2022-07-14T03:40:00.341Z
If a user is created with an empty array of roles, what can the user do?
2,603
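For reference, the behaviour described above can be reproduced in mongosh by creating a user with an empty roles array (user name and password below are placeholders):

use test
db.createUser({ user: "norole_user", pwd: "changeme", roles: [] })
// After authenticating as norole_user, commands such as `show collections`
// fail with "not authorized on test to execute command ...", as shown above.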
null
[ "kubernetes-operator" ]
[ { "code": "", "text": "Hi all,\nI come across this link https://docs.mongodb.com/manual/tutorial/configure-encryption/ and it is exactly I want to do. But I can’t find any doc saying how to do when deploying the db CRD in k8s. So I wonder if data encryption is support in k8s operator?", "username": "stanley_tam1" }, { "code": " additionalMongodConfig:\n security:\n enableEncryption: true\n kmip:\n clientCertificateFile: \"/kmip/cert/cert.pem\"\n serverCAFile: \"/kmip/ca/ca.pem\"\n serverName: xx.xx.xx.xx\n port: xxxx\n\n", "text": "for Enterprise operator you can do it. not sure with community version.", "username": "sergey_kosourikhin" } ]
Can k8s enterprise operator support deploy mongo to its encryption at rest?
2022-01-30T08:10:08.123Z
Can k8s enterprise operator support deploy mongo to its encryption at rest?
3,271
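For comparison, the linked tutorial's local key file variant maps to the following mongod options (MongoDB Enterprise only). The key file path is a placeholder, and how that file would be mounted into the pod is deployment-specific, so treat this as a sketch rather than an operator-verified configuration:

security:
  enableEncryption: true
  encryptionKeyFile: /etc/mongodb/encryption-keyfile

With the Enterprise Kubernetes Operator, such settings would go under additionalMongodConfig, as in the KMIP example shown above.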
null
[ "dot-net", "crud" ]
[ { "code": "", "text": "I’ve seen a lot of solutions relating to updating a document array but nothing seems to work. I need to change the first item of an array (dosage) to a new value (newdosage) in C#I specify the document I want to update:\nfilter = Builders.Filter.Eq(\"_id\", new ObjectId(Key));I try to create an update query:\nfirst try: var update = Builders.Update.Set( f => f.dosage[-1], newdosage);\nThis doesn’t work because the dosage property is not known in a BsonDocumentsecond try: var update = Builders.Update.Set(x => x.Dosage[-1], newdosage);\nThis is recognized to be a valid assignment so I try to execute: collection.UpdateOne(filter, update);collection is assigned and the database would be accessed fine except UpdateOne requires\nthe second parameter to be of BsonDocument type.I’ve tried numerous other solutions but nothing works. Can anyone help:\nThanks.", "username": "Dennis_Kuhl" }, { "code": "Builders<BsonDocument>.Filter...Builders<BsonDocument>.Update...", "text": "Try Builders<BsonDocument>.Filter... and Builders<BsonDocument>.Update...Quick Start: C# and MongoDB - Update Operations | MongoDBalso check Update Arrays in a Document — Node.js (mongodb.com)", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "The C# starters guide always gives examples of filtering on array elements before an update but in this case the array elements are unknown data or complex objects. Their positioning examles using $ don’t work in C#.\nThe problem is clear in this example of array insertion:set up ‘collection’ to access data base, then:\nvar filter = Builders.Filter.Eq(Keys.MongoId, ObjectId.Parse(chatRoomId));\nvar update = Builders.Update.PushEach(Keys.Comments, new List() { comment }, position: 0);\ncollection.UpdateOne(filter, update);This gives a C# syntax error because filter and update are not created as ‘Builders…’\nThere are many “solutions” that seem to get around C# typing in the mongodb update calls.", "username": "Dennis_Kuhl" }, { "code": "UpdateDefinitionBuilderTests.cs", "text": "if you haven’t found it yet, will you also check this: Indexed_Positional_Typedthe file belongs to a test file of UpdateDefinitionBuilderTests.cs of the driver and seems to do the thing you are trying.every “test” file in the repo is also an example of that feature so much so that even documentation points to these files for examples. again, if you haven’t been there before, you will find that useful.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
C# updating document array item
2022-07-14T18:55:24.505Z
C# updating document array item
5,248
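A minimal typed sketch of the approach the linked test file demonstrates; the POCO, collection and variable names are assumptions based on the thread, so adjust them to the real model:

using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

public class Prescription
{
    public ObjectId Id { get; set; }
    public List<double> Dosage { get; set; }
}

public static class DosageUpdater
{
    // Sets the first dosage entry of one prescription document.
    public static void SetFirstDosage(IMongoDatabase database, string key, double newDosage)
    {
        var collection = database.GetCollection<Prescription>("prescriptions");
        var filter = Builders<Prescription>.Filter.Eq(p => p.Id, new ObjectId(key));
        // Dosage[0] renders as "Dosage.0" (the first array element);
        // an index of -1 would render as the positional $ operator instead.
        var update = Builders<Prescription>.Update.Set(p => p.Dosage[0], newDosage);
        collection.UpdateOne(filter, update);
    }
}

Using a typed IMongoCollection<Prescription> (instead of IMongoCollection<BsonDocument>) is what lets UpdateOne accept the expression-based update definition.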
null
[ "swift", "react-native" ]
[ { "code": "Delete Accountid", "text": "Well, it’s July and Apple is requiring the deletion of accounts from in the app.I have been monitoring this part of the documentation to make sure that I can do this correctly, but it seems like it’s still in manual mode for the time being.I created a function where I remove their login information from my CustomJWT database (props to Cosync for making JWT information easy to handle.Next, I need to delete all of the data in that user’s partition, and ultimately delete the user.I can easily wipe all of the data, but there doesn’t seem to be a way to programmatically delete the user. There does seem to be a way in the Swift SDK, but these apps are React Native.I thought I could just leave that user there, but it seems like creating a new account with the same email will ultimately grab that old user account, and I don’t want that to happen. For instance, I could delay deleting their data for 72 hours and offer a restore option through support, but if they Delete Account and sign up again with the same email, that data will still be linked to the old App Services User since I’m partitioning on that id.Is there a way to delete an App Services User from a function yet? If not, is this something that is coming? This seems to be pretty important for anyone building on iOS now since our updates won’t go through App Review without it.", "username": "Kurt_Libby1" }, { "code": "", "text": "Hey @Kurt_Libby1 - the delete user API does exist in React Native, but it looks like we missed adding it to that page. I’ll get that updated.For the testing apps we use for the docs, we have written a custom deleteAllUsers func that authenticates via the Admin API and deletes users. You could do something similar to delete specific users. The users endpoints include separate endpoints to delete confirmed & pending users. I’ll add something to the docs about using a custom func w/the admin API to delete users, too.Thanks for pointing this out!", "username": "Dachary_Carey" }, { "code": "", "text": "Following up; I’ve updated the Delete Users page to have links to those SDK APIs in RN and Node.js, and have added an example of a custom function you might use to delete users via the App Services Admin API. Hope this helps!", "username": "Dachary_Carey" }, { "code": "", "text": "Amazing! Thanks Dachary.", "username": "Kurt_Libby1" } ]
Delete App Services User from backend
2022-07-13T11:35:56.167Z
Delete App Services User from backend
2,314
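In the React Native / Node.js SDK, the server-side deletion mentioned above looks roughly like this (the app ID is a placeholder):

import Realm from "realm";

const app = new Realm.App({ id: "your-app-id" });

async function deleteCurrentUser() {
  const user = app.currentUser;
  if (user) {
    // Deletes the user from App Services and ends the local session,
    // so a later signup with the same email creates a brand-new user id.
    await app.deleteUser(user);
  }
}

Any partition data keyed on the old user id still has to be cleaned up separately, e.g. from a backend function as described above.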
null
[ "aggregation", "data-modeling" ]
[ { "code": "media.aggregate(\n [\n {\n \"$match\":\n {\n \"uuid\":\"9ca7b8b4f68348869f05dbcf4e62a560\",\n \"type\":\"playlist\"\n }\n },\n {\n \"$lookup\":\n {\n \"from\":\"media\",\n \"localField\":\"tracks._id\",\n \"foreignField\":\"_id\",\n \"as\":\"tracks\"\n }\n }\n ]\n)\n{\n\"_id\": \"62baa8249f19e6ca929191f5\",\n \"title\": \"This is a playlist\",\n \"tracks\": [\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": 
\"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n }\n }\n ],\n \"type\": \"playlist\"\n}\n \"_id\": \"62baa8249f19e6ca929191f5\",\n \"title\": \"This is a playlist\",\n \"tracks\": [\n {\n \"_id\": {\n \"$oid\": \"62bd5bb68fce34575360669c\"\n },\n \"album\": {\n \"_id\": {\n \"$oid\": \"62bd63c88fce34575360669d\"\n },\n \"thumbnail\": \" https://i.scdn.co/image/ab67616d00001e02a9956b47dc867ce769c7841f\"\n },\n \"artists\": [\n {\n \"_id\": {\n \"$oid\": \"62bd69c68fce3457536066a1\"\n }\n }\n ],\n \"disc_number\": 1,\n \"duration\": 166.64,\n \"explicit\": false,\n \"is_playable\": true,\n \"name\": \"About A Girl ( New Release )\",\n \"popularity\": 73,\n \"track\": true,\n \"track_number\": 2,\n \"type\": \"track\",\n \"uuid\": \"55yvzYuvJYG2RUEnMK78tr\",\n \"audio\": {\n \"_id\": {\n \"$oid\": \"62bd8b518fce3457536066b1\"\n },\n \"duration\": 483000\n },\n \"is_new_release\": true,\n \"is_promoted\": true,\n \"title\": \"hey\"\n }\n ],\n \"type\": \"playlist\"\n}\n", "text": "Hello,I am working on an audio streaming platform and I am facing a problem where if a user adds the same song twice in a playlist I will only be able to recieve it once,I am running the following query:This is the data that is present in my playlist before the aggregation:My goal is to get all the tracks even though they are the same, I would like them aggregated as multiple objects but I am getting the following response when i execute the query:", "username": "Emilio_El_Murr" }, { "code": "[{\n $match: {\n _id: '62baa8249f19e6ca929191f5',\n type: 'playlist'\n }\n}, {\n $unwind: {\n path: '$tracks'\n }\n}, {\n $lookup: {\n from: 'media',\n localField: 'tracks._id',\n foreignField: '_id',\n as: 'tracks'\n }\n}, {\n $facet: {\n rootObj: [\n {\n $limit: 1\n }\n ],\n tracks: [\n {\n $group: {\n _id: '$_id',\n tracks: {\n $push: {\n $first: '$tracks'\n }\n }\n }\n }\n ]\n }\n}, {\n $replaceRoot: {\n newRoot: {\n $mergeObjects: [\n {\n $first: '$rootObj'\n },\n 
{\n $first: '$tracks'\n }\n ]\n }\n }\n}]\n", "text": "Hi @Emilio_El_Murr ,Well to do what you want the aggregation might get more complex. In general the best guidance is in the playlist document store any information that is vital for the presented songs in the playlist and not only ideas to avoid complex self joins.However, if you still like to try my workaround pipeline you are welcome:", "username": "Pavel_Duchovny" }, { "code": "", "text": "This is some kind of optimization that I came to appreciate. By doing it like this, duplicated values are $lookup-ed up only once save processing time on the server. It also preserve bandwith since duplicate resulting documents are not sent over the wire.An alternative to the $unwind/$group would be $set stage where you use $map to complete the source array with the resulting array. That would negate the optimization supplied by default. But that’s fine because it would be your choice and others, like me would keep benefiting from the optimization. I prefer to do this kind of data structure cosmetic at the application level as it is easier to scale. Doing the final matching and sending duplicate data from the server affects all users. Doing the final matching in the client code, and even in the front end, only affect the ones with duplicate.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation lookup not getting the document more than once if I am looking up the same ID
2022-07-13T07:45:28.373Z
Aggregation lookup not getting the document more than once if I am looking up the same ID
2,575
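The application-level alternative described in the last reply can be as small as re-expanding the de-duplicated $lookup output against the original track list, for example in Node.js (variable names are illustrative):

// playlist.tracks  -> original array of { _id } refs, duplicates included
// lookedUpTracks   -> de-duplicated documents returned by the $lookup
const byId = new Map(lookedUpTracks.map(t => [String(t._id), t]));
const fullTracks = playlist.tracks.map(ref => byId.get(String(ref._id)));

This keeps the server-side pipeline cheap while restoring the repeated entries only for playlists that actually contain duplicates.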
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.10.0 of the MongoDB Go Driver.This release includes the addition of queryable encryption support, an automatic encryption shared library, key management API operations, improvements to full document requests, UUID generation refactoring, optimizing memory consumption when compressing wire messages, and a provisional API for timeout. For more information please see the 1.10.0 release notes.You can obtain the driver source from GitHub under the v1.10.0 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team", "username": "Preston_Vasquez" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.10.0 Released
2022-07-14T14:53:29.258Z
MongoDB Go Driver 1.10.0 Released
1,930
null
[ "aggregation", "data-modeling", "compass" ]
[ { "code": "$group: {\n {\n _id: '$region',\n \"customer_average_income\": {\n $avg: '$customer_income'\n },\n \"customer_average_expenses\": {\n $avg: '$customer_expenses'\n },\n \"customer_average_something\": {\n $avg: '$customer_something'\n }\n } \n}\n$group: {\n {\n _id: '$region',\n \"$map\": {\n input: '$$ROOT',\n in: {$avg: '$$ROOT'}\n }\n}\n", "text": "Hello, I want to do something that seemed trivial to me, but I can’t find a way to do it in mongodb aggregations. I’m using Compass to make my aggregations.Let’s say I have this aggregationIt works fine, but I have to hard-code the fields of which I want to calculate the average, manually. Let’s say I have many of those fields of which I want to calculate the average, how do I achieve an equivalent aggregation where I can just pass the field as a variable?Something like this, but $map cannot be used in $group aggregation:What am I missing? Any help will be very appreciated, thank you.", "username": "Davide_Di_Grande" }, { "code": "field_to_average = \"customer_something\"\n\n$group : {\n _id : \"$region\" ,\n [ field_to_average + \"_average\" ] : { $avg : \"$\" + field_to_average } \n}\n", "text": "I achieve an equivalent aggregation where I can just pass the field as a variableIn JS you could do:", "username": "steevej" }, { "code": "", "text": "Well, that involves using code to build the aggregation, I already thought about that. But…\nIs that a good practice with mongoDB? The view would not update automatically if I added new fields, for example, unless you run the code again, am I right?", "username": "Davide_Di_Grande" } ]
How do I iterate over all fields of a document (or a sub-document) when applying an aggregation?
2022-07-02T16:19:27.752Z
How do I iterate over all fields of a document (or a sub-document) when applying an aggregation?
2,839
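Building on the reply above, the same idea extends to any number of fields by constructing the $group stage in code before running the pipeline (collection and field names are illustrative):

const fieldsToAverage = ["customer_income", "customer_expenses", "customer_something"];

const groupStage = { _id: "$region" };
for (const f of fieldsToAverage) {
  groupStage[f + "_average"] = { $avg: "$" + f };
}

db.sales.aggregate([ { $group: groupStage } ]);

As noted in the follow-up, a view created from a pipeline built this way will not pick up new fields automatically; the stage has to be rebuilt and the view redefined when the schema changes.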
null
[ "java", "android" ]
[ { "code": "NETWORK_IO_EXCEPTION(realm::app::CustomError:1000): javax.net.ssl.SSLHandshakeException: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.\n\njavax.net.ssl.SSLHandshakeException: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.\n", "text": "I have been using Realm SDK for Android. Everything was working fine until 2 days ago when 6 of my devices stopped communicating with the Realm app. Only devices with Android 6 or below are affected. Newer devices are working fine.Upon investigating, this was the error thrown when trying to log in to realm:It looks like the error is related to the server (MongoDB Realm), but is there something I can do from the client side?", "username": "binokary" }, { "code": "", "text": "The issue here is that MongoDB Cloud transitioned to use a Let’s Encrypt as the Certificate Authority for all its services. These certificates are signed with the ISRG Root X1 certificate. To prevent service disruptions, ensure you have the ISRG Root X1 certificate in your trusted certificate store. Because these devices are running an operating system from 2015, they will not have this certificate, you will need to either manually add the certificate or update the operating system.", "username": "Ian_Ward" }, { "code": "", "text": "Thanks, it worked after adding the certificates!", "username": "binokary" }, { "code": "", "text": "can you explain or share a link that explains how to add the certificate?", "username": "Gali_Ravi_Praveen" } ]
Getting SSL error since last 2 days on Android 6 devices
2022-03-23T11:46:39.629Z
Getting SSL error since last 2 days on Android 6 devices
3,621
null
[]
[ { "code": "", "text": "I am facing problem in importing MongoDB client in my Svelte Application", "username": "CCSWMS_Dev" }, { "code": "", "text": "Hi @CCSWMS_Dev, welcome to the community.\nCan you please post the error that you are facing?In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer", "username": "SourabhBagrecha" }, { "code": "", "text": "2 posts were split to a new topic: Realm Web Error: Class extends value # is not a constructor or null in Svelte App", "username": "SourabhBagrecha" } ]
MongoDB Client in Svelte
2022-03-15T06:44:28.848Z
MongoDB Client in Svelte
3,490
https://www.mongodb.com/…87f29bb3b479.png
[ "aggregation", "queries", "node-js", "graphql" ]
[ { "code": "", "text": "Hi! I’m having the following error. Trying to filter the search using $near. The problem is that in graphQL it’s returning the following error while launching it in Heroku: “message”: “$geoNear, $near, and $nearSphere are not allowed in this context”The Thing i’m finding really weird it that whenever I run the same code in localhost it works just fine! I really don’t know how to fix this and if someone has an idea feel free to share it with me!\nif (search.within) {\nconst { coordinates, distance: $maxDistance } = {\ncoordinates: [context.longitude, context.latitude],\ndistance: search.within,\n};\nquery.geo = {\n$near: {\n$geometry: {\ntype: ‘Point’,\ncoordinates,\n},\n$maxDistance,\n},\n};\n}const results = await Venue.paginate(query, {\npage: 1,\nlimit: 1000,\nsort,\n});\nimage904×554 40.4 KB\n", "username": "Bruno_Quagliata" }, { "code": "", "text": "Hi @Bruno_Quagliata ,The main problem is that the graphql API goes via a user context that needs to go through app services rules defined.There is a limitation that $near or $geonear and similar operators cannot support that :Perhaps try to use a system function and have this query available via an http endpoint or check if a custom system resolver is a possibility to answer the geo query.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny it seems like whenever I connect it to localhost Mongo using version 4.4.13 it works just fine but if I connect it to another cluster of Mongo 5.0.9 it returns the (“message”: “$geoNear, $near, and $nearSphere are not allowed in this context”) error. The program was using an older database deployed in Compose and working just fine, I migrated it to Atlas and for some time it was working but it eventually started to fail. I’m guessing this might be a Mongo issue.\nimage910×830 68.8 KB\n", "username": "Bruno_Quagliata" }, { "code": "", "text": "@Bruno_Quagliata ,It might be related to driver / server version .I am not familiar with the specific but you can. Look hereHi,\nAfter mongoose, mongodb update i have an error message with $near, I don't …know if it mongodb or mongoose problem, before update it's work fine, thanks for help\n\napi : \n![near](https://user-images.githubusercontent.com/36846411/45107055-a8053a80-b138-11e8-8890-a5b7f61646a0.png)\n\nclient : \n![near2](https://user-images.githubusercontent.com/36846411/45107065-b2273900-b138-11e8-8abf-6483d0c2b315.png)\n\nclient code : \n![near3](https://user-images.githubusercontent.com/36846411/45107080-bb180a80-b138-11e8-8cd3-c97256425ffd.png)", "username": "Pavel_Duchovny" } ]
$geoNear, $near, and $nearSphere are not allowed in this context
2022-07-13T19:30:22.800Z
$geoNear, $near, and $nearSphere are not allowed in this context
4,603
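A sketch of the workaround suggested above: run the geo query inside an App Services function configured to run as System (bypassing the rules that block $geoNear) and expose it via an HTTPS endpoint or custom resolver. Database, collection and field names are assumptions, "mongodb-atlas" is the default linked data source name, and a 2dsphere index is assumed to exist on the location field:

// App Services function, configured to run as System
exports = async function ({ longitude, latitude, maxDistanceMeters }) {
  const venues = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("venues");

  return venues.aggregate([
    {
      $geoNear: {
        near: { type: "Point", coordinates: [longitude, latitude] },
        distanceField: "distance",
        maxDistance: maxDistanceMeters,
        spherical: true
      }
    },
    { $limit: 1000 }
  ]).toArray();
};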
null
[]
[ { "code": "", "text": "MongoDB is install on window server But now how to fix issue if my database server consume more space on drive", "username": "Ashish_Wanjare" }, { "code": "", "text": "The obvious solutions:1 - get bigger disk\n2 - have less data in your database (might need #4 below to fully benefit)Less obvious solutions:3 - https://www.mongodb.com/docs/manual/core/wiredtiger/#compression\n4 - https://www.mongodb.com/docs/manual/reference/command/compact/", "username": "steevej" } ]
MongoDB Disk Utilization threshold on windows drive
2022-07-14T10:43:50.969Z
MongoDB Disk Utilization threshold on windows drive
1,361
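For option #4 above, compact is issued per collection against the database that holds it (names below are placeholders); on WiredTiger it rewrites the collection and releases unneeded space back to the operating system:

use mydb
db.runCommand({ compact: "mycollection" })

On a replica set it has to be run on each member in turn.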
null
[ "aggregation", "time-series" ]
[ { "code": "db.createCollection(\n 'data',\n {\n timeseries: {\n timeField: 'timestamp',\n metaField: 'metadata',\n granularity: 'hours',\n },\n },\n);\ndb.getCollection('data').insert({\n timestamp: ISODate('2022-04-20T12:00:00Z'),\n metadata: '123456',\n value: 0.2,\n})\ndb.getCollection('data').insert({\n timestamp: ISODate('2022-04-20T12:00:00Z'),\n metadata: '123456',\n value: 0.3,\n})\ndb.getCollection('data').aggregate([\n {\n $match: {\n metadata: '123456',\n timestamp: ISODate('2022-04-20T12:00:00Z'),\n },\n },\n {\n $group: {\n _id: '$timestamp',\n docs: {$push: '$$ROOT'},\n },\n },\n {\n $replaceRoot: {\n newRoot: {\n $reduce: {\n input: '$docs',\n initialValue: [\n {\n _id: {'$toObjectId': '000000000000000000000000'},\n },\n ],\n in: {\n $cond: {\n if: {'$gt': ['$$this._id', '$$value._id']},\n then: '$$this',\n else: '$$value',\n },\n },\n },\n },\n },\n },\n])\n", "text": "I have a time series collection and sometimes I’m getting correcting values, e.g. ‘at this timestamp the value is actually 0.2 instead of 0.3’. Since updates can only be made on metafield, I’m stuck with a duplicate values and only one of them is correct. I’m getting around this by aggregation (I’m ‘throwing away’ the old values from result set base on its _id).Simple example:\nI have this collectionThen I insert a documentAnd some time in the future the correcting documentTo get that data I use following aggregationThere are a lot more documents than these two and the match stage usually uses a date range, but I hope the example is enough for an illustration.Now finally onto my question. Would it be possible / safe / advisable to update the value directly in the underlying bucket collection (‘system.buckets.data’ in this case)?\nMy thoughts on this are:", "username": "prunevac" }, { "code": "", "text": "Unfortunately I am in the same situation where I am on the brink of simply modifying the underlying bucket as well to get around mongodb’s limitations.\nMongoDB 6 seems to still not support these manipulations: https://www.mongodb.com/docs/v6.0/core/timeseries/timeseries-limitations/", "username": "Stefan_de_Jong" }, { "code": "", "text": "How did you proceed? We are in the same boat…", "username": "Benjamin_Behringer" }, { "code": "", "text": "I just run the aggregation I posted above for every data retrieval.", "username": "prunevac" } ]
Timeseries: Modifying underlying bucket collection for easier queries
2022-04-20T10:22:53.280Z
Timeseries: Modifying underlying bucket collection for easier queries
3,337
null
[]
[ { "code": "{ tags: 1, sort.profit: -1 }\n{\n\"filter\": {\n \"tags\": {\n \"$all\": [\n \"profitability|profitable\",\n \"currency-code|gbp\",\n \"bids|no-bid\",\n \"end-time|less-than-24-hours\"\n ]\n }\n },\n \"sort\": {\n \"sort.profit\": -1\n },\n \"limit\": 50,\n}\n{\n \"type\": \"command\",\n \"ns\": \"\",\n \"command\": {\n \"find\": \"table\",\n \"filter\": {\n \"tags\": {\n \"$all\": [\n \"profitability|profitable\",\n \"currency-code|gbp\",\n \"bids|no-bid\",\n \"end-time|less-than-24-hours\"\n ]\n }\n },\n \"sort\": {\n \"sort.profit\": -1\n },\n \"limit\": 50,\n \"lsid\": {\n \"id\": {\n \"$binary\": {\n \"base64\": \"\",\n \"subType\": \"\"\n }\n }\n },\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 0,\n \"i\": 1\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"\",\n \"subType\": \"\"\n }\n },\n \"keyId\": \n }\n },\n \"$db\": \"\"\n },\n \"planSummary\": \"IXSCAN { tags: 1, sort.profit: -1 }\",\n \"keysExamined\": 2310,\n \"docsExamined\": 2310,\n \"fromMultiPlanner\": true,\n \"cursorExhausted\": true,\n \"numYields\": 444,\n \"nreturned\": 50,\n \"queryHash\": \"\",\n \"planCacheKey\": \"\",\n \"reslen\": 342662,\n \"locks\": {\n \"Global\": {\n \"acquireCount\": {\n \"r\": 445\n }\n },\n \"Mutex\": {\n \"acquireCount\": {\n \"r\": 1\n }\n }\n },\n \"readConcern\": {\n \"level\": \"local\",\n \"provenance\": \"implicitDefault\"\n },\n \"storage\": {\n \"data\": {\n \"bytesRead\": 698110513,\n \"timeReadingMicros\": 7267966\n },\n \"timeWaitingMicros\": {\n \"cache\": 28\n }\n },\n \"remote\": \"\",\n \"protocol\": \"op_msg\",\n \"durationMillis\": 8175,\n \"v\": \"5.0.9\"\n}\n", "text": "Hi there, I am a bit new to using Mongo and I am set up using mongo atlas to store a load of documents (~500,000) that all have an array of tags on them. The tag array usually has around 10 to 20 tags.I am trying to query for documents based on several tags and sorted by a field.\nThe Index I have is set out like:and then the query is something like:I’ve attached a profiler log with some bits redactedI am seeing query times of more than 20 seconds in some cases. 
I assume I have set myself up incorrectly or I am trying to make mongo do something that it wasn’t designed for.", "username": "Doog" }, { "code": "{ sort.profit: -1, tags: 1 }", "text": "Hi @Doog and welcome in the MongoDB Community !Try to invert your index to { sort.profit: -1, tags: 1 } as the ESR rule is broken here because $all is a range query.This should avoid the in-memory sort which can be expensive, especially if you don’t have a lot of RAM.Can you please share the winning plan of the explain(true) of this query before and after?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'pokeprice.marketplace.listings',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { tags: { '$eq': 'profitability|profitable' } },\n { tags: { '$eq': 'currency-code|gbp' } },\n { tags: { '$eq': 'bids|no-bid' } },\n { tags: { '$eq': 'end-time|less-than-24-hours' } }\n ]\n },\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'LIMIT',\n limitAmount: 50,\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { tags: { '$eq': 'profitability|profitable' } },\n { tags: { '$eq': 'currency-code|gbp' } },\n { tags: { '$eq': 'bids|no-bid' } }\n ]\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { tags: 1, 'sort.profit': -1 },\n indexName: 'tags_1_sort.profit_-1',\n isMultiKey: true,\n multiKeyPaths: { tags: [ 'tags' ], 'sort.profit': [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n tags: [\n '[\"end-time|less-than-24-hours\", \"end-time|less-than-24-hours\"]'\n ],\n 'sort.profit': [ '[MaxKey, MinKey]' ]\n }\n }\n }\n },\n rejectedPlans: [\n\n ]\n },\n command: {\n find: 'marketplace.listings',\n filter: {\n tags: {\n '$all': [\n 'profitability|profitable',\n 'currency-code|gbp',\n 'bids|no-bid',\n 'end-time|less-than-24-hours'\n ]\n }\n },\n sort: { 'sort.profit': -1 },\n limit: 50,\n '$db': 'pokeprice'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1657142274, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"84e75b45192ea6df09a60ac4f3757211f2aab19a\", \"hex\"), 0),\n keyId: Long(\"7065994558026285062\")\n }\n },\n operationTime: Timestamp({ t: 1657142274, i: 1 })\n}\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'pokeprice.marketplace.listings',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { tags: { '$eq': 'profitability|profitable' } },\n { tags: { '$eq': 'currency-code|gbp' } },\n { tags: { '$eq': 'bids|no-bid' } },\n { tags: { '$eq': 'end-time|less-than-24-hours' } }\n ]\n },\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'LIMIT',\n limitAmount: 50,\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { tags: { '$eq': 'profitability|profitable' } },\n { tags: { '$eq': 'currency-code|gbp' } },\n { tags: { '$eq': 'bids|no-bid' } }\n ]\n },\n inputStage: {\n stage: 
'IXSCAN',\n keyPattern: { tags: 1, 'sort.profit': -1 },\n indexName: 'tags_1_sort.profit_-1',\n isMultiKey: true,\n multiKeyPaths: { tags: [ 'tags' ], 'sort.profit': [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n tags: [\n '[\"end-time|less-than-24-hours\", \"end-time|less-than-24-hours\"]'\n ],\n 'sort.profit': [ '[MaxKey, MinKey]' ]\n }\n }\n }\n },\n rejectedPlans: [\n\n ]\n },\n command: {\n find: 'marketplace.listings',\n filter: {\n tags: {\n '$all': [\n 'profitability|profitable',\n 'currency-code|gbp',\n 'bids|no-bid',\n 'end-time|less-than-24-hours'\n ]\n }\n },\n sort: { 'sort.profit': -1 },\n limit: 50,\n '$db': 'pokeprice'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1657143513, i: 11 }),\n signature: {\n hash: Binary(Buffer.from(\"35b990efa3495ecc98a61cdbce04286081f501c9\", \"hex\"), 0),\n keyId: Long(\"7065994558026285062\")\n }\n },\n operationTime: Timestamp({ t: 1657143513, i: 11 })\n}\n{\"sort.profit\": -1, \"tags\": 1}", "text": "Hi Maxime,Thank you for the reply, I have done as you asked and this is the result:\nPre index:Post indexIt seems it doesn’t want to choose the new indexjust to be clear I added {\"sort.profit\": -1, \"tags\": 1} as the new index.I removed the rejected plans from these because there were a large amount, would you like to see them too?", "username": "Doog" }, { "code": "", "text": "Looking through the plan it looks like it uses the index to find one of the tags then does an indexless search through the remainder, which in this case is going to be quite largeIs there a way to get it to search for every tag using the index?", "username": "Doog" }, { "code": "{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'pokeprice.marketplace.listings',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { tags: { '$eq': 'profitability|profitable' } },\n { tags: { '$eq': 'currency-code|gbp' } },\n { tags: { '$eq': 'bids|no-bid' } },\n { tags: { '$eq': 'end-time|less-than-24-hours' } }\n ]\n },\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'LIMIT',\n limitAmount: 50,\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { tags: { '$eq': 'profitability|profitable' } },\n { tags: { '$eq': 'currency-code|gbp' } },\n { tags: { '$eq': 'bids|no-bid' } },\n { tags: { '$eq': 'end-time|less-than-24-hours' } }\n ]\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'sort.profit': Long(\"-1\"), tags: Long(\"1\") },\n indexName: 'sort.profit_-1_tags_1',\n isMultiKey: true,\n multiKeyPaths: { 'sort.profit': [], tags: [ 'tags' ] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'sort.profit': [ '[MaxKey, MinKey]' ],\n tags: [ '[MinKey, MaxKey]' ]\n }\n }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 50,\n executionTimeMillis: 28677,\n totalKeysExamined: 2671925,\n totalDocsExamined: 
102010,\n executionStages: {\n stage: 'LIMIT',\n nReturned: 50,\n executionTimeMillisEstimate: 23897,\n works: 2671926,\n advanced: 50,\n needTime: 2671875,\n needYield: 0,\n saveState: 3855,\n restoreState: 3855,\n isEOF: 1,\n limitAmount: 50,\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { tags: { '$eq': 'profitability|profitable' } },\n { tags: { '$eq': 'currency-code|gbp' } },\n { tags: { '$eq': 'bids|no-bid' } },\n { tags: { '$eq': 'end-time|less-than-24-hours' } }\n ]\n },\n nReturned: 50,\n executionTimeMillisEstimate: 23891,\n works: 2671925,\n advanced: 50,\n needTime: 2671875,\n needYield: 0,\n saveState: 3855,\n restoreState: 3855,\n isEOF: 0,\n docsExamined: 102010,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 102010,\n executionTimeMillisEstimate: 1104,\n works: 2671925,\n advanced: 102010,\n needTime: 2569915,\n needYield: 0,\n saveState: 3855,\n restoreState: 3855,\n isEOF: 0,\n keyPattern: { 'sort.profit': Long(\"-1\"), tags: Long(\"1\") },\n indexName: 'sort.profit_-1_tags_1',\n isMultiKey: true,\n multiKeyPaths: { 'sort.profit': [], tags: [ 'tags' ] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'sort.profit': [ '[MaxKey, MinKey]' ],\n tags: [ '[MinKey, MaxKey]' ]\n },\n keysExamined: 2671925,\n seeks: 1,\n dupsTested: 2671925,\n dupsDropped: 2569915\n }\n }\n }\n },\n command: {\n find: 'marketplace.listings',\n filter: {\n tags: {\n '$all': [\n 'profitability|profitable',\n 'currency-code|gbp',\n 'bids|no-bid',\n 'end-time|less-than-24-hours'\n ]\n }\n },\n sort: { 'sort.profit': -1 },\n hint: 'sort.profit_-1_tags_1',\n limit: 50,\n '$db': 'pokeprice'\n },\n serverInfo: {\n\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1657186270, i: 1 }),\n signature: {\n\n }\n },\n operationTime: Timestamp({ t: 1657186270, i: 1 })\n}\n", "text": "If i force the query to use the index with sort first then tags it runs considerably slowerExamining around 250,000 keys using the sort first index, whereas it examines about 2000 keys with the tag first index", "username": "Doog" }, { "code": "", "text": "Following more investigation here, the slow query is not a find but a count.I am running a find and a count in parallel using the same filter but the count takes ~20sec and the find take s less than 2Is there a way to get explain from a count query?", "username": "Doog" }, { "code": "db.coll.explain(true).count({})\nmatch TAG1\nloookup ID,TAG2\nif matched LOOKUP ID,TAG3\nif matched LOOKUP ID,TAG4\n[\n {\n '$match': {\n 'tags': 'C#'\n }\n }, {\n '$lookup': {\n 'from': 'messages', \n 'let': {\n 'id': '$_id', \n 'tags': '$tags'\n }, \n 'pipeline': [\n {\n '$match': {\n '$expr': {\n '$and': [\n {\n '$eq': [\n '$$id', '$_id'\n ]\n }, {\n '$in': [\n 'Java', '$$tags'\n ]\n }\n ]\n }\n }\n }\n ], \n 'as': 'result'\n }\n }, {\n '$match': {\n '$expr': {\n '$eq': [\n {\n '$size': '$result'\n }, 1\n ]\n }\n }\n }, {\n '$replaceRoot': {\n 'newRoot': {\n '$arrayElemAt': [\n '$result', 0\n ]\n }\n }\n }, {\n 
'$lookup': {\n 'from': 'messages', \n 'let': {\n 'id': '$_id', \n 'tags': '$tags'\n }, \n 'pipeline': [\n {\n '$match': {\n '$expr': {\n '$and': [\n {\n '$eq': [\n '$$id', '$_id'\n ]\n }, {\n '$in': [\n 'JS', '$$tags'\n ]\n }\n ]\n }\n }\n }\n ], \n 'as': 'result'\n }\n }, {\n '$match': {\n '$expr': {\n '$eq': [\n {\n '$size': '$result'\n }, 1\n ]\n }\n }\n }, {\n '$replaceRoot': {\n 'newRoot': {\n '$arrayElemAt': [\n '$result', 0\n ]\n }\n }\n }, {\n '$lookup': {\n 'from': 'messages', \n 'let': {\n 'id': '$_id', \n 'tags': '$tags'\n }, \n 'pipeline': [\n {\n '$match': {\n '$expr': {\n '$and': [\n {\n '$eq': [\n '$$id', '$_id'\n ]\n }, {\n '$in': [\n 'Go', '$$tags'\n ]\n }\n ]\n }\n }\n }\n ], \n 'as': 'result'\n }\n }, {\n '$match': {\n '$expr': {\n '$eq': [\n {\n '$size': '$result'\n }, 1\n ]\n }\n }\n }, {\n '$replaceRoot': {\n 'newRoot': {\n '$arrayElemAt': [\n '$result', 0\n ]\n }\n }\n }\n]\nfrom random import sample\n\nfrom faker import Faker\nfrom pymongo import ASCENDING\nfrom pymongo import MongoClient\n\nfake = Faker()\n\n\ndef random_tags():\n return sample([\"Java\", \"JS\", \"Python\", \"C#\", \"Bash\", \"Closure\", \"Swift\", \"C++\", \"R\", \"Go\"], 4)\n\n\ndef random_messages():\n docs = []\n for _id in range(1, 10001):\n doc = {\n '_id': _id,\n 'user_id': fake.pyint(min_value=1, max_value=100),\n 'message': fake.sentence(nb_words=10),\n 'tags': random_tags()\n }\n docs.append(doc)\n return docs\n\n\nif __name__ == '__main__':\n client = MongoClient()\n db = client.get_database('test')\n messages = db.get_collection('messages')\n messages.drop()\n messages.insert_many(random_messages())\n print('Import done!')\n\n messages.create_index('tags')\n messages.create_index([('_id', ASCENDING), ('tags', ASCENDING)])\n {\n '$lookup': {\n from: 'messages',\n as: 'result',\n let: { id: '$_id', tags: '$tags' },\n pipeline: [\n {\n '$match': {\n '$expr': {\n '$and': [\n { '$eq': [ '$$id', '$_id' ] },\n { '$in': [ 'Java', '$$tags' ] }\n ]\n }\n }\n }\n ]\n },\n totalDocsExamined: Long(\"40320000\"),\n totalKeysExamined: Long(\"0\"),\n collectionScans: Long(\"8064\"),\n indexesUsed: [],\n nReturned: Long(\"4032\"),\n executionTimeMillisEstimate: Long(\"16709\")\n }\n", "text": "That’s surprising because they are doing the same thing. Also, why count + find? Just run find, collect the results in an array and then check the size of the array? Unless you are doing paginated queries, I don’t see the point of running both of them.You can explain a count like this:Finally, let’s get back to the initial issue. I asked for the “before” and “after” pipeline because I was almost certain there was already an issue with the initial index: MongoDB doesn’t support index intersection for $all queries and your explain plan confirms it: it’s just using the index to resolve the first equality filtering and then fetches the docs to resolves the 3 other checks. Sad but I guess there are valid reasons for not doing an intersection. If you had 80 values in the $all, the intersection would probably waste time. It’s also mentioned here: https://www.mongodb.com/docs/manual/core/index-intersection/. They are “disfavored in plan selection”.How you can solve this?Not ideal but it might just work.I wrote a small Python script to generate some fake docs to try this pipeline:I spent a few hours on this and I can’t get it to work. 
When I check the explain plan of the lookups, I don’t see an index being used:I also don’t understand why it says 40320000 docs examined when I have 10k in my collection… I must be doing something wrong here but I can’t put my finger on it.I’ll keep digging and ask a few colleagues around me. But for sure Atlas Search is the best / easier option.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "PIPELINE\n[\n {\n \"$search\": {\n \"index\": \"idx_name\",\n \"compound\": {\n \"must\": [\n { \"phrase\": { \"path\": \"tags\", \"query\": \"profitability|profitable\" }},\n { \"phrase\": { \"path\": \"tags\", \"query\": \"end-time|less-than-24-hours\"}},\n { \"phrase\": { \"path\": \"tags\", \"query\": \"bids|no-bid\" }},\n { \"phrase\": { \"path\": \"tags\", \"query\": \"currency-code|gbp\" }},\n { \"near\": {\n \"origin\": 1000000,\n \"pivot\": 1,\n \"score\": { \"boost\": { \"value\": 1000000000 } },\n \"path\": \"sort.profit\"\n }}\n ]\n }\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 40\n }\n ]\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"sort\": {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n \"tags\": {\n \"type\": \"string\"\n }\n }\n }\n}\n", "text": "Sorry yes, I should have explained the count situation in more detail.I do 2 queries, one to find the first 50 documents of a query and another to find to total count of documents in order to do pagination.For the time being i have been able to remove the count query and work around it.I took your advice to try and use a $search query for the same functionality and this is what I have found works:and using s search index like this:This allows me to add phrase queries that can target all the tags I need to and then pagination can be carried out using follow on pipeline stages.The sorting is done using the $near operator and boosting it’s score so that it is the most important factor in determining which documents are top of the list.This query is now performing the query i wanted to do in a range of about 200ms to 2000ms", "username": "Doog" }, { "code": "", "text": "20 sec => 200ms to 2s.I call that a win !\nI hope it’s good enough though. Thanks for sharing the query!\nping @John_Page", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query performance on array using $all is slow despite using an index
2022-07-05T09:07:30.584Z
Query performance on array using $all is slow despite using an index
2,980
null
[]
[ { "code": "", "text": "Thanks for the answers @MaBeuLux88 and @Stennie_X.I do not know why. but my previous post from the link below was banned and marked as some kind of propaganda. So, I will continue with this post.As I understand it, basically causal consistency reinforces the idea that mongodb offers strong cluster-wide consistency as a client will be able to read its own write. That’s it?Another question is, can the client only do this if it uses read concern combined with most write concern?The documentation page shows this.\nIf this combination of read concern major and write concern majority doesn’t happen, then does that mean mongodb doesn’t guarantee strong consistency?Thanks for the clarifications,\nCaio", "username": "morcelicaio" }, { "code": "write concernmajority", "text": "you need to take a breath before moving on depending on your write concern level, data can be lost before being distributed to all nodes, that is a fire-and-forget write. otherwise, you will always get an error or a confirmation. yet if you use a majority level write but something really bad happens that crashes those “majority” servers before data goes to “minority” ones, you may still lose data. but that is a danger over any kind of database. it is the worst-case scenario.otherwise, your data is guaranteed to be saved to all data-bearing nodes and be consistent across all reads.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thanks for the feedback, @Yilmaz_DurmazConsidering this worst case scenario you reported and considering mongodb at its strongest level of consistency, we can say then that even at its strongest level of consistency mongodb will still allow inconsistencies in the database, correct?That is, even at its maximum level of consistency, can mongodb still have inconsistencies in a distributed environment?Greetings,\nCaio", "username": "morcelicaio" }, { "code": "read preference", "text": "nope, if writes are lost by a worst-case scenario, you will still have a consistency of remaining already-written data as they were already been distributed to all other nodes. losing data is not equal to inconsistency.you will not find two nodes having different/inconsistent data unless one is new and data is still being distributed. and in that case, if you do not intentionally use read preference on the older one, you will be served from the node having the latest data, or you will be served “read-only” data if the election process does not find a suitable node to allow writes. this is also to prevent inconsistent writes.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi @morcelicaioI think @Yilmaz_Durmaz have provided a great explanation! So, I’d like to share a little of my take on this subject As I understand it, basically causal consistency reinforces the idea that mongodb offers strong cluster-wide consistency as a client will be able to read its own write. That’s it?It’s a bit more than that. Causal consistency provides: Read own writes, Monotonic reads, Monotonic writes, and Writes follow reads. According to Causal Consistency Guarantees:Another question is, can the client only do this if it uses read concern combined with most write concern?Yes, but also within a causally consistent client sessions. 
Check out the examples in the page on how to do this (note that you can select the language of choice for the examples there).If this combination of read concern major and write concern majority doesn’t happen, then does that mean mongodb doesn’t guarantee strong consistency?You can tune your consistency needs using read/write concerns as mentioned in Causal Consistency and Read and Write Concerns. Note that the stronger the guarantee, typically the more time it will take since MongoDB would need to ensure that all parts of the cluster are in sync with one another. This is the tradeoff, essentially.Using majority write + majority read is not enough to guarantee causality and reading your own writes, since it also depends on your read preference as well. In Read Your Own Writes: Prior to MongoDB 3.6, in order to read your own writes you must issue your write operation with { w: “majority” } write concern, and then issue your read operation with primary read preference, and either “majority” or “linearizable” read concern.That is, even at its maximum level of consistency, can mongodb still have inconsistencies in a distributed environment?I’m not sure I fully understand this question. Could you give an example of the inconsistency scenario you have in mind?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for the explanations, @kevinadi .I’m still new to the study of distributed databases, so I may not be able to express myself clearly sometimes.Thank you for your patience and explanations.In my case I will use a benchmark to check if mongodb guarantees acid properties when working in a distributed environment, because in the documentation mongodb says that it guarantees strong consistency of its data.\nI will use the YCSB+T benchmark to perform my tests.\nhttps://sci-hub.se/10.1109/ICDEW.2014.6818330What combinations of read concern, write concern, journal and read preference could I test with? There are many possibilities and I still have some doubts.", "username": "morcelicaio" }, { "code": "", "text": "I’m still new to the study of distributed databases, so I may not be able to express myself clearly sometimes.In that case, welcome and good to have you here @morcelicaio!What combinations of read concern, write concern, journal and read preference could I test with?Short answer is probably: depends on what you want to test I’m not an expert in testing, but I’m guessing it’s probably goes back to what you’re trying to see. MongoDB provides many, many different knobs you can change to basically customize to tailor the database’s performance vs. consistency model according to your exact needs. However there are some docs that may be useful as a starting point for your journey:You also might want to check out:Best of luck with your project!Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for the pointers, @kevinadi .I will continue my studies and come back here if I have any further questions.Greetings,\nCaio", "username": "morcelicaio" }, { "code": "", "text": "Ha I’m discovering this thread now. \nMy latest answer for reference:", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is causal consistency in mongodb? Continuation
2022-07-11T21:33:25.604Z
What is causal consistency in mongodb? Continuation
2,214
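For completeness, here is roughly what reading your own write inside a causally consistent session with majority read/write concerns looks like with the Node.js driver (connection string and namespace are placeholders):

const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb+srv://user:pass@cluster.example.net");
  await client.connect();
  const session = client.startSession({ causalConsistency: true });
  try {
    const items = client.db("test").collection("items");
    await items.insertOne(
      { _id: 1, status: "new" },
      { session, writeConcern: { w: "majority" } }
    );
    // Within the same causally consistent session, this read is guaranteed
    // to observe the insert above ("read your own writes").
    const doc = await items.findOne(
      { _id: 1 },
      { session, readConcern: { level: "majority" } }
    );
    console.log(doc);
  } finally {
    await session.endSession();
    await client.close();
  }
}

main().catch(console.error);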
https://www.mongodb.com/…e_2_1024x512.png
[]
[ { "code": "", "text": "What is causal consistency in mongodb?Hi, I saw in the mongodb documentation on the ‘Causal Consistency and Read and Write Concerns’ page that you talk a lot about ‘causal consistency’. I didn’t quite understand this concept as it is not explained in detail on the page. What would be the idea of this ‘causal consistency’?\nWhat makes it different from ‘eventual consistency?’ Could you give me some examples to better understand this concept?The link to the page I’m referring to in the documentation is this:I thank the attention.", "username": "Caio_Morceli" }, { "code": "", "text": "Hi @Caio_Morceli and welcome in the MongoDB Community !Let’s try to explain the 2 concepts with a few lines:Eventual Consistency means that the data you are reading might not be consistent right now but it will be eventually. You get this if you read from secondaries using any of the readPreference that can read from a secondary. This means you chose to race with the replication and you don’t have the guarantee to read your own writes as you chose to write on the Primary and read from a Secondary without a guarantee.Causal Consistency basically prevents that from happening. If within a causal consistent session you write something, then read it 2 lines later, you now have a guarantee that you will read this write operation no matter what, even if you are racing against the replication. Of course it’s a trade off, this means you will have to hang a little to get what you want.Few years ago when 3.6 was released with this new feature, I published this demo (which is a bit old but…)master/4-causal-consistencyContribute to MaBeuLux88/mongodb-3.6-demos development by creating an account on GitHub.The idea was to demonstrate the concept. To guarantee that my secondary was slower than my script, I sent an internal command to pause the replication process. This is just to get a “consistent” result every time I run this script and not only when I get lucky with the race.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Welcome to the MongoDB Community @Caio_Morceli !Eventually consistent reply as @max replied before I posted my draft Causal consistency refers to guarantees around the order of operations observed by clients in a distributed system. Client sessions only guarantee full causal consistency with “majority” read concern and “majority” write concern, but different combinations are possible depending on your use case.The page you referenced outlines causal guarantees (for example “Read Your Own Writes”) with different combinations of read and write concern, including example scenarios.For some background on the original implementation of this feature in MongoDB, please see https://engineering.mongodb.com/post/ryp0ohr2w9pvv0fks88kq6qkz9k9p3. I think the introduction has some helpful context:Traditional databases, because they service reads and writes from a single node, naturally provide sequential ordering guarantees for read and write operations known as “causal consistency”. A distributed system can provide these guarantees, but in order to do so, it must coordinate and order related events across all of its nodes, and limit how fast certain operations can complete. 
While causal consistency is easiest to understand when all data ordering guarantees are preserved – mimicking a vertically scaled database, even when the system encounters failures like node crashes or network partitions – there exist many legitimate consistency and durability tradeoffs that all systems need to make. FYI, causal consistency and associated guarantees are general data concepts for distributed systems (not specific to MongoDB); see the Wikipedia article on causal consistency: Causal consistency is one of the major memory consistency models. In concurrent programming, where concurrent processes are accessing a shared memory, a consistency model restricts which accesses are legal. This is useful for defining correct data structures in distributed shared memory or distributed transactions. Causal Consistency is "Available under Partition", meaning that a process can read and write the memory (memory is Available) even ... You also asked: "What makes it different from 'eventual consistency'?" Causal consistency provides guarantees around the ordering of data operations observed by clients in a distributed system, which mimics a single vertically scaled database deployment. Eventual consistency refers to the behaviour that writes in a distributed system will converge with a consistent history (for example, via application of an idempotent replication oplog), but what you read is not guaranteed to be consistent if you read from different members of a cluster without appropriate read concerns.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks, @MaBeuLux88 and @Stennie_X. As I understand it then, causal consistency basically reinforces the idea that MongoDB offers strong consistency across the cluster, as a client will be able to read its own write. Is that it?Another question: can the client only do this if it uses read concern majority combined with write concern majority? On the documentation page it shows this.\nIf this combination of read concern majority and write concern majority doesn't happen, does it mean that MongoDB doesn't guarantee strong consistency?Thanks for the clarifications,\nCaio", "username": "morcelicaio" }, { "code": "", "text": "A bit more than just read-your-own-writes, actually. The paragraph below also covers at least a part of your question, and this doc answers your question completely, I think, with the table of guarantees. But to sum up, it's a trade off. Test first with w=majority and readConcern=majority. If the performance is "good enough", then you don't have to make a trade off. You can then start to trade some of the consistency for speed, but my advice would be to do it step by step, and maybe prefer an upgrade to SSD or a better CPU or network before making a trade off. It's very use case dependent as well. For some use cases the trade off isn't possible, so the hardware path is the only solution.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
What is causal consistency in mongodb?
2022-06-28T11:54:35.371Z
What is causal consistency in mongodb?
6,954
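
The causally consistent session Maxime and Stennie describe in the thread above can be sketched with pymongo. The connection string and database/collection names below are placeholders, not taken from the thread.

```python
from pymongo import MongoClient

# Placeholder URI; any replica set will do.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
items = client.test.items  # hypothetical database/collection

# Operations issued through a causally consistent session are ordered:
# if the read is routed to a secondary, the server waits until that
# secondary has caught up to the session's cluster time before answering.
with client.start_session(causal_consistency=True) as session:
    items.insert_one({"_id": "a", "qty": 1}, session=session)
    print(items.find_one({"_id": "a"}, session=session))
```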
https://www.mongodb.com/…a_2_1024x512.png
[]
[ { "code": "404 Not Found\nCode: NoSuchKey\nMessage: The specified key does not exist.\nKey: 61f057f63bf82311ace9b3a7/index.html\nRequestId: 477SBZEXBNXR5V0Z\nHostId: lNaJZFETHfR9ctR41NOdUUtwI5/aHQR5DyScDFb6cuRGgeBVtHZcqjqVjOoR0qSKAjOVoVh2rh4=\n", "text": "Hello\nI am doing this tutorialhttps://www.mongodb.com/developer/quickstart/realm-web-sdk/Here is my repoContribute to coding-to-music/MongoDB-Realm-web-SDK development by creating an account on GitHub.When I push on my repo it is deployed, however it is not deploying the index.html and data.js in the root directory.I am unable to manually upload the two files because the Upload button is grayed out and some error messages are displayed, as seen in this image:\nimage662×551 21 KB\nI can view the files are working correctly (querying Atlas) using Live Server, however the files are not picked up by the Realm dashboard and are not deployed to the URLhttps://realmwebsdk-suybq.mongodbstitch.com/I am seeing this error at the hosting URL", "username": "Tom_Connors" }, { "code": "", "text": "I am able to deploy on GitHub pages\nhttps://coding-to-music.github.io/MongoDB-Realm-web-SDK/", "username": "Tom_Connors" }, { "code": "", "text": "I’m having this same issue. Did anybody solve it yet?", "username": "Beau_Carnes" }, { "code": "", "text": "I solved it! The app I was hosting was a React app. On the Realm Hosting page, I had to go into settings, select “Single-Page App”, and the select “index.html”. Then everything worked.", "username": "Beau_Carnes" }, { "code": "", "text": "Thank you so much @Beau_Carnes. This solved the problem. I followed your tutorial to learn the MERN stack, I built the app. Then when I tried deploying the react app, it was not working after trying and trying to resolve this issue for hours I started looking on internet still nothing and somehow got here.Thanks you for both the tutorial and the solution :>", "username": "Rao_Umer" } ]
Realm App Hosting: cannot upload files, incorrect directory, files not found
2022-01-25T22:32:20.024Z
Realm App Hosting: cannot upload files, incorrect directory, files not found
2,646
null
[ "queries" ]
[ { "code": "", "text": "Is there a query in MongoDb that will allow me to search a key (not the value-- e.g. subject: English)? I want to load a dropdown menu with “subjects”. For example, let’s say there are 100 documents and 20 are math, 20 English, 20 Spanish, 20 science, and 20 history. In this example, there are 5 subjects( math, English, Spanish, science, and history) that should populate in the drop down menu. Is this possible with mql?", "username": "david_h" }, { "code": "db.collection.distinct(\"subject\")subject[ \"English\", \"Math\", ... ]", "text": "Hello @david_h, You can use the db.collection.distinct(\"subject\") method to get unique subjectvalues. Note the method returns an array of the subjects, e.g., [ \"English\", \"Math\", ... ]. And, you can populate the drop down menu from the array data.", "username": "Prasad_Saya" } ]
MongoDB query that returns the key only
2022-07-13T21:36:14.898Z
MongoDB query that returns the key only
1,268
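
The same idea, sketched with pymongo. The database and collection names are hypothetical; only the use of distinct() follows the thread above.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
courses = client.school.courses  # hypothetical database/collection

# distinct() returns the unique values of the "subject" key, e.g.
# ["English", "History", "Math", "Science", "Spanish"]
subjects = sorted(courses.distinct("subject"))

for subject in subjects:  # feed these into the dropdown menu
    print(subject)
```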
https://www.mongodb.com/…1_2_1024x189.png
[ "atlas-functions", "serverless" ]
[ { "code": "", "text": "I am running distinct query on one collection, to get distinct id.\nafter getting distinct id’s I am looping over on that id’s, and updating some field with\nserverless function in Realm.But While Updating I am getting Following error.ran at 1657632180167\ntook\nerror:\nexecution time limit exceeded\nimage1308×242 40.1 KB\n\nimage1446×294 28.6 KB\nCan any Please help me whats goin here.", "username": "Suraj_Anand_Kupale" }, { "code": "", "text": "Hi Suraj,It’s probable that you’re exceeding the time constraint which is imposed on Functions.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Hi Mansoor Thanks for Reply.I know that, I am exceeding time constraint, but unable to find out how to increse the function time out. And also I tried “maxTimeMS()” function on find, update, aggregation queries.I really very confuse. I have to loop over on 1K-10K off data.", "username": "Suraj_Anand_Kupale" }, { "code": "", "text": "You cannot increase the function time constraint which is currently 120 seconds, this is globally set.If you’re able to raise a support ticket we can take a look at the queries and may be able to suggest indexes that could help. Otherwise look into the following:", "username": "Mansoor_Omar" } ]
Execution time limit exceeded
2022-07-12T13:45:46.077Z
Execution time limit exceeded
3,552
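
The advice in the thread above amounts to pushing the work to the server instead of looping over ids inside the Function. The thread's Function is JavaScript; the same pattern is sketched here in pymongo, with hypothetical collection and field names.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client.mydb.tasks  # hypothetical database/collection

# Instead of issuing one update per distinct id (one round trip each),
# ask the server to update every matching document in a single call.
ids = coll.distinct("creatorId")                 # hypothetical field
result = coll.update_many(
    {"creatorId": {"$in": ids}},
    {"$set": {"processed": True}},               # hypothetical update
)
print(result.modified_count)
```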
null
[ "connecting", "php" ]
[ { "code": "", "text": "I am using mongodb 5 with php.\nI didn’t found any option to close mongodb connection after successful execution in php mongodb library.\nPlease help me.", "username": "atanu_samanta" }, { "code": "", "text": "As far as I know, there is not one. I just unset the variable I was storing the connection in.", "username": "Jack_Woehr" } ]
PHP mongodb close connection
2022-07-13T14:00:55.743Z
PHP mongodb close connection
2,236
https://www.mongodb.com/…1_2_1024x512.png
[ "swift" ]
[ { "code": "**Logged in**\n\n**2022-07-12 13:55:29.722236-0400 O-FISH[2691:68876] Sync: Connection[5]: Session[5]: client_reset_config = false, Realm exists = true, client reset = false**\n\n**2022-07-12 13:55:29.770228-0400 O-FISH[2691:68876] Sync: Connection[5]: Connected to endpoint '34.227.4.145:443' (from '192.168.4.45:52739')**\n\n**2022-07-12 13:55:30.211263-0400 O-FISH[2691:68876] Sync: Connection[5]: Session[5]: Failed to transform received changeset: Schema mismatch: Link property 'user' in class 'DutyChange' points to class 'User' on one side and to 'DutyChange_user' on the other.**\n\n**2022-07-12 13:55:30.211458-0400 O-FISH[2691:68876] Sync: Connection[5]: Connection closed due to error**\n//\n// DutyChangeViewModel.swift\n//\n// Created on 25/03/2020.\n// Copyright © 2020 WildAid. All rights reserved.\n//\n\nimport SwiftUI\nimport RealmSwift\n\nclass DutyChangeViewModel: ObservableObject {\n\n @Published var id = ObjectId.generate().stringValue\n @Published var user = UserViewModel()\n @Published var status: Status = .notSelected\n @Published var date: Date = Date()\n\n enum Status: String {\n case notSelected = \"\"\n case onDuty = \"At Sea\"\n{\n \"database\": \"wildaid\",\n \"collection\": \"DutyChange\",\n \"schema\": {\n \"title\": \"DutyChange\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"agency\",\n \"date\",\n \"status\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"agency\": {\n \"bsonType\": \"string\"\n },\n \"user\": {\n", "text": "Let’s set the scene:We start with collections where the partition key was not in every collection. This was fine for 2 years or so. A few weeks ago, we paused sync, and couldn’t get it to start again, with an error that indicated that sync couldn’t start because not all the collections had the partition key. (I can’t find the details, we only have logs for the past 10 days unfortunately)We eventually got sync to start again, in developer mode. In the App Services UI, Device Sync is listed as “Enabled” with timestamps from today:Latest Sync Event 07/12/2022 17:40:44\nLast Cluster Event Processed 07/12/2022 17:40:43\nLag 1 secI can login to my app, probably because Realm authentication services are still working, but I don’t get any of my app’s data. Realm studio shows the schemas I expect, but there is no data in them.I’m getting this error message in Xcode when the sync is supposed to happen:This is open source code - the models can be seen at:\nandiOS app for the Officer's Fishery Information Sharing Hub (O-FISH). The mobile app allows fisheries officers to document and share critical information gathered during a routine vessel inspecti...In App Services UI, under the “Schema” tab, we had this:We tried deleting the DutyChange schema, and had the same issues with the app.Then we tried clicking the “Use Schema Dev Mode” on the Schema page for DutyChange, which sent us to the device sync page - which is already enabled, and in dev mode.Then we tried regenerating the DutyChange schema using the UI’s “Generate Schema” button, and had the same issues with the app.Any tips to get sync working again? Local objects are being made, and our logs show no errors - the relevant Sync->Other logs show things like:“Client bootstrap completed in 16.903915ms. Received 1 download(s) containing 2 changeset(s) (14.3 kB total).”The data does not get fully sync’d to the Atlas collections, though. 
And information that is supposed to sync to the device, based on the partition key, does not show up - we have some menus that are populated with data from Atlas, and those menus are empty.Any ideas?", "username": "Sheeri_Cabral1" }, { "code": "", "text": "Hi, can you send a link to your app in realm.mongodb.com and I can try to take a look? It seems to me like your json schema for the user-sub-object is missing the “title” field, since if we do not find that we give the sub-object a table name of “ParentTable_FieldName”.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": " \"user\": {\n \"title\": \"User\", <---\n \"bsonType\": \"object\",\n \"required\": [\n \"email\"\n ],\n \"properties\": {\n \"email\": {\n \"bsonType\": \"string\"\n },\n \"name\": {\n \"title\": \"Name\", <---\n \"bsonType\": \"object\",\n \"properties\": {\n \"first\": {\n \"bsonType\": \"string\"\n },\n \"last\": {\n \"bsonType\": \"string\"\n }\n }\n }\n }\n }\n", "text": "Hi @Tyler_Kaye thanks for the reply - indeed I was missing the “title” on both DutyChange.user and DutyChange.user.name.I changed it, but I’m still seeing the same problems with my app - I deleted it off my device and re-built it from Xcode, with the same issue. Normally I’d try to pause sync and restart it, but since that caused problems last time, I’m a bit shy about doing that.The URL for my realm schema - App ServicesHere’s a snippet of what I changed - I added the title in 2 places, as indicated by <— (the arrow is not actually in the code itself)", "username": "Sheeri_Cabral1" }, { "code": "class DutyChange: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var agency: String = \"\"\n @Persisted var date: Date = Date()\n @Persisted var status: String = \"\"\n @Persisted var user: DutyChange_user?\n}\nclass DutyChange: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var agency: String = \"\"\n @Persisted var date: Date = Date()\n @Persisted var status: String = \"\"\n @Persisted var user: User?\n}\n", "text": "Hi @Sheeri_Cabral1 ,It looks like you’ve connected two different Data Sources to your app, so your schema shows two different versions of the same objects: sometimes they match, sometimes they don’t.If you look at your Data Models, you’ll find that you have bothand a differentYou may want to double-check the setup, and make it consistent.", "username": "Paolo_Manna" }, { "code": "", "text": "Ooh! Thanks, I had no idea where to look for those! Just for anyone else reading, it was the same solution as Tyler - fix the JSON schema - except I had 2 places to change it.It’s still not working, I’m getting a more common error that the Name.first property is nullable on one side and not the other; I just have to figure out which collection’s schema that’s referring to, a lot have that property!Thanks.", "username": "Sheeri_Cabral1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Logical sync problems after sync broke and was restored
2022-07-12T18:06:21.587Z
Logical sync problems after sync broke and was restored
1,619
null
[ "queries", "indexes" ]
[ { "code": "$regexdb.collection_example.find({_id: {$regex: '^\\\\000versions\\\\000backup1,\\\\000vaa.*'}).explain(\"executionStats\")\"totalKeysExamined\" : 6431140db.collection_example.find({_id: {$regex: '^versions\\\\000backup1,\\\\000vaa.*'}).explain(\"executionStats\")\"totalKeysExamined\" : 0", "text": "Hi!\nI have a collection which holds _id (indexed field) that starts with null character and also has some of them in the middle.\nI noticed that when I try to $regex a prefix that starts with a null character, the query looks at all the keys in the collection.\nExample query (some names changed for simplicity, but the start and end of the regex is the same)\ndb.collection_example.find({_id: {$regex: '^\\\\000versions\\\\000backup1,\\\\000vaa.*'}).explain(\"executionStats\")I get: \"totalKeysExamined\" : 6431140 (this is the total number of documents in the collection)When I try the same query but omit the first null character I get the following:\ndb.collection_example.find({_id: {$regex: '^versions\\\\000backup1,\\\\000vaa.*'}).explain(\"executionStats\")I get: \"totalKeysExamined\" : 0 (of course there are no documents that start with this name, but the query still returns pretty fast and does not try the whole collection)Do you think my query is wrong for this case, or does mongo has some issue with indexing and searching values that start with a null character?Thanks for any reply!", "username": "Oded_Raiches" }, { "code": "", "text": "I noted that having other null characters in the indexed field makes the issue occur, when replacing with a different character that is not null the issued seem to be gone.\nWould still want to know if theirs a workaround or an explanation why this occurs.", "username": "Oded_Raiches" }, { "code": "_id : {\n \"version\" : 1\n \"backup\" : \"vaa\"\n}\n.*", "text": "I do not think it is a good idea to have null character.You could have an _id like:That would simplify your life as you could avoid using $regex and back-slashes and then having to scan your strings to find the parts that interest you.I do not think you need the .* at the end of your regex.", "username": "steevej" }, { "code": "", "text": "@steevej\nThanks for the reply!\nI have some limitations in my application that allow me to maybe change this to another character, but I still need the same format.\nDo you have some further explanation why the null character is not treated as any other?", "username": "Oded_Raiches" }, { "code": "", "text": "Do you have some further explanation why the null character is not treated as any other?I have no clue. Hopefully someone from MongoDB will pick up the thread.", "username": "steevej" } ]
Regex with null character prefix
2022-07-12T06:49:55.167Z
Regex with null character prefix
2,786
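
One possible workaround, not mentioned in the thread, is to express the prefix match as an explicit range on _id instead of a $regex. This is a hedged sketch in pymongo: it assumes the _id values are plain strings containing no characters above U+FFFF (otherwise the upper bound is not tight), and the connection string is a placeholder.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client.testdb.collection_example  # collection name follows the thread's example

prefix = "\x00versions\x00backup1,\x00vaa"

# Prefix match expressed as a range on _id. An anchored range like this can
# use the _id index directly, whatever bytes the prefix contains. The upper
# bound assumes the ids are plain strings with no characters above U+FFFF.
cursor = coll.find({"_id": {"$gte": prefix, "$lt": prefix + "\uffff"}})
for doc in cursor:
    print(doc["_id"])
```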
null
[ "java", "performance" ]
[ { "code": "{\"find\": \"concepts_6523793785566\", \"filter\": {\"_id\": {\"$in\": [\"_pt2_product-instance_34_06\", \"_pt2_product_globals-instance_34_06\"]}}, \"projection\": {\"_id\": 1, \"typeClass\": 1, \"concepts\": 1}MongoCursor<Document> cursor = blder.cursor();Stopwatchorg.bson.DocumentStarting application with args [[]]( 0)\n\n... standard start of a spring boot ....\n\n2021-09-07 12:15:10.802 INFO 884 --- [ main] c.n.t.t.CliRunnerTestMongoClient : Starting run [0]\n2021-09-07 12:15:10.906 INFO 884 --- [ main] c.n.t.t.CliRunnerTestMongoClient : Ended run [0]\n\n... Ignore run 0 as it includes opening connections ...\n\n... \n All tests run have the same layout:\n Start run [?]\n Logs from mongodb driver\n Resume of execution (via Spring stopwatch) of the various steps)\n Ended run [?]\n \n Below is an example of run[3].\n...\n\n2021-09-07 12:15:10.921 INFO 884 --- [ main] c.n.t.t.CliRunnerTestMongoClient : Starting run [3]\n2021-09-07 12:15:10.922 TRACE 884 --- [ main] org.mongodb.driver.connection : Checked out connection [connectionId{localValue:3, serverValue:6}] to server localhost:27017\n2021-09-07 12:15:10.923 DEBUG 884 --- [ main] org.mongodb.driver.protocol.command : Sending command '{\"find\": \"concepts_6523793785566\", \"filter\": {\"_id\": {\"$in\": [\"_pt2_product-instance_34_06\", \"_pt2_product_globals-instance_34_06\"]}}, \"projection\": {\"_id\": 1, \"typeClass\": 1, \"concepts\": 1}, \"comment\": \"testDirectMongoClientMultiWithProjectionsNoMapping\", \"$db\": \"testperf\", \"lsid\": {\"id\": {\"$binary\": {\"base64\": \"Rbl2Tk9vRxmnhVMX8X/wyw==\", \"subType\": \"04\"}}}}' with request id 10 to database testperf on connection [connectionId{localValue:3, serverValue:6}] to server localhost:27017\n2021-09-07 12:15:10.925 DEBUG 884 --- [ main] org.mongodb.driver.protocol.command : Execution of command with request id 10 completed successfully in 2.37 ms on connection [connectionId{localValue:3, serverValue:6}] to server localhost:27017\n2021-09-07 12:15:10.926 TRACE 884 --- [ main] org.mongodb.driver.connection : Checked in connection [connectionId{localValue:3, serverValue:6}] to server localhost:27017\n2021-09-07 12:15:10.927 WARN 884 --- [ main] c.n.t.t.CliRunnerTestMongoClient : Run3 Slow query time StopWatch 'Run3': running time = 5146400 ns\n---------------------------------------------\nns % Task name\n---------------------------------------------\n000345800 007% Run3 Get initial information\n000019500 000% Run3 Build query - Get Collection\n000005700 000% Run3 Build query - Find\n000002000 000% Run3 Build query - Projection with comment\n004767400 093% Run3 Open cursor\n000006000 000% Run3 Read data from cursor.\n\n2021-09-07 12:15:10.927 WARN 884 --- [ main] c.n.t.t.CliRunnerTestMongoClient : Run3 Slow query time . Max 3 ms got 5 ms\n2021-09-07 12:15:10.927 INFO 884 --- [ main] c.n.t.t.CliRunnerTestMongoClient : Ended run [3]\n\n... 
Closing down the application ...\n\n{\n \"_id\": \"fb25ecb9-7721-4864-b3c5-7439fad0180a\",\n \"typeClass\": \"CONCEPT_COLLECTION\",\n \"name\": {\n \"data\": \"Descrição de um acontecimento não obrigatório\",\n \"override\": false,\n \"_class\": \"com.nau21.metadata.domain.model.pojo.SimpleOverridableValue\"\n },\n \"label\": {\n \"data\": \"Descrição de um acontecimento não obrigatório\",\n \"override\": false,\n \"_class\": \"com.nau21.metadata.domain.model.pojo.SimpleOverridableValue\"\n },\n \"conceptType\": {\n \"data\": [\n \"CONCEPT\"\n ],\n \"override\": false\n },\n \"concepts\": {\n \"data\": [\n {\n \"referenceType\": \"ConceptRef\",\n \"uuid\": \"4207a37c-188c-4599-ad2c-0cd1697ff71e\",\n \"_class\": \"com.nau21.metadata.domain.model.mongodb.ConceptRefRecord\"\n },\n {\n \"referenceType\": \"ConceptRef\",\n \"uuid\": \"eb2cceb7-ca1d-42f7-9158-243e435fac4e\",\n \"_class\": \"com.nau21.metadata.domain.model.mongodb.ConceptRefRecord\"\n },\n {\n \"referenceType\": \"ConceptRef\",\n \"uuid\": \"0dfec7c0-0f73-4666-aad4-86674d3b151c\",\n \"_class\": \"com.nau21.metadata.domain.model.mongodb.ConceptRefRecord\"\n }\n ],\n \"override\": false,\n \"_class\": \"com.nau21.metadata.domain.model.mongodb.jackson.MixinConcept$SimpleOverridableValueListOrderedConceptRefData\"\n },\n \"basedOn\": {\n \"referenceType\": \"ConceptRef\",\n \"uuid\": \"98202924-5693-47a4-86be-e0a802679d22\"\n },\n \"definedIn\": {\n \"data\": [],\n \"override\": false,\n \"_class\": \"com.nau21.metadata.domain.model.pojo.SimpleOverridableValue\"\n },\n \"extraData\": {\n \"precalculated\": {\n \"areas\": {\n \"kindOfs\": {\n \"kindOfs\": [\n {\n \"referenceType\": \"ConceptRef\",\n \"uuid\": \"fb25ecb9-7721-4864-b3c5-7439fad0180a\"\n },\n {\n \"referenceType\": \"ConceptRef\",\n \"uuid\": \"98202924-5693-47a4-86be-e0a802679d22\"\n },\n {\n \"referenceType\": \"ConceptRef\",\n \"uuid\": \"5ab9800a-7411-408f-93f1-f8fb833b3419\"\n }\n ],\n \"_class\": \"com.nau21.metadata.domain.model.mongodb.PrecalculatedKindOfsRecord\"\n }\n },\n \"_class\": \"com.nau21.metadata.domain.model.mongodb.SimplePrecalculatedAreas\"\n }\n },\n \"_class\": \"com.nau21.metadata.domain.model.mongodb.ConceptRecord\"\n}\n", "text": "There is a significant overhead between the time a simple mongo query with projection that is reported to have been executed and it’s conversion and return to the application with mongodb java driver. After quite a lot of searching and debugging, I managed to isolate part of the performance problem.A query (extracted from log of the driver) {\"find\": \"concepts_6523793785566\", \"filter\": {\"_id\": {\"$in\": [\"_pt2_product-instance_34_06\", \"_pt2_product_globals-instance_34_06\"]}}, \"projection\": {\"_id\": 1, \"typeClass\": 1, \"concepts\": 1} executes (indication of mondgodb java driver) in 2.37 ms (which is slow, given that the collection has only 4 documents. See machine specs below, but similar results where obtain in other machines, inclusive Mac’s). But the query ( MongoCursor<Document> cursor = blder.cursor(); ) is encapsulated with Spring Stopwatch and it reports that it executed in 5.14 ms.Note with the database profiler on the mongodb server with level 2 active, it reports the query executed in 0 ms, so the problem doesn’t seem to be in the actual database. Anyway the main focus of this issue is the approximately 2 a 3 ms between when the database returns the result and it is returns to the caller. This query will be executed thousands of time as it is used to simulate the graphlookup. 
We cannot use $graphLookup from MongoDB due to its memory constraints and its inability to limit what is loaded. Note also that the query via the mongo client returns an org.bson.Document in order to avoid the noise of POJO conversion. In the real-world example, a POJO representing the return is used and access is done with Spring Data (Repository and MongoTemplate). I can provide a reduced version of our code that reproduces the problem. Can anyone suggest why there is this overhead and how to reduce it, or point me to where I may find this information? It seems that there is a problem with the deserialization of the result to a BSON document. The output of the execution test should be something like the log shown above (it is a bit verbose as it activates various logs from various components). Operating system - Windows 10 Pro 64-bit Version: 19043.1165\nMicroprocessor - Intel(R) Core™ i7-10750H CPU @ 2.60GHz 6 core\nSystem memory - 32 GB\nMongoDB Java driver - 4.3.1\nMongoDB server - MongoDB 4.4.4 Community. The MongoDB server is running in a Docker container (Linux), using WSL2.", "username": "Paulo_Nunes_de_Bastos" }, { "code": "", "text": "Hi Paulo_nunes_de_bastos, I'm also using Spring Data MongoDB and came across a similar issue of executing a query and converting the result to POJOs, and have been watching this issue to see if someone can answer this question. In my case my code looks like this: AggregationResult Result= mongotemplate.aggregate(aggregationQuery, collectionName, MyClass.class). When I run the aggregation query in Studio 3T it returns 500k results, with each document having fewer than 10 fields, in less than 2 seconds. When I run this from my application it takes 1 to 2 minutes. My query is very simple: a match stage followed by a project stage. The match stage uses an index provided on the collection. After a lot of research I'm also trying to see how to make this reading of data in a Java application using the Spring Data MongoDB driver faster. Hope to see someone respond here.", "username": "Rakesh_Kotha" }, { "code": "", "text": "Hi Paulo, there are a lot of moving parts here. If you want to measure how fast a query runs on a Java driver in non-precompiled mode (which is likely yours, if that's HotSpot JVM, the most commonly used), the fair measure would be to warm up the JVM by running many queries in a loop, e.g. 10000 times, before you start measuring how many milliseconds a single query takes.\nMeasure something like 1000 times, and take the average. The JVM is usually fast, but also usually slow the first time it runs, because by default it runs as a bytecode interpreter (which is slower), and only when it hits a certain threshold will the JVM halt execution and compile that part into native code, after which it will be much faster. You might as well be measuring how long the JVM actually spends on the (one-time) class loading, or how long it stops to compile from bytecode to native. Warming up the JVM ensures that the class loading, the compilation, and all other overheads are excluded from your measurement. The first loading of the class may cause the 2-3 ms slowness you experience, because the class loader might also load some other class dependencies. Also, if the code doesn't follow best practices and produces heap memory leakage (e.g.
keeps allocating objects), you may also experience stop-the-world GC pauses, which will saturate your measurement.Hi Rakesh, thank you for the comment. The issue you describe may stem from the same cause, except that with Spring the ORM framework will probably amplify the JVM overhead, and with Spring there are more moving parts and more overheads.\nFollow the same guideline above when measuring (warm up the JVM first). I would suggest, however, measuring the performance of the pure Java driver first, before measuring the performance of your whole stack that uses Spring. That way you will know exactly whether the ORM framework produces the high overhead. Another thing to watch: make sure that the code doesn't perform a login operation each time, and instead uses the Java driver connection pooling.", "username": "Daniel_Baktiar1" } ]
Slow simple limited query in MongoDb java driver with significant overhead in the returning result
2021-09-07T18:44:49.964Z
Slow simple limited query in MongoDb java driver with significant overhead in the returning result
6,833
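
Daniel's JVM warm-up advice in the thread above is Java-specific, but the measurement discipline (warm up first, then average many runs) is general. A rough sketch of that methodology, written in Python with pymongo for brevity; the query, projection, and collection name are taken from the thread, while the URI is a placeholder.

```python
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client.testperf["concepts_6523793785566"]   # names from the thread

query = {"_id": {"$in": ["_pt2_product-instance_34_06",
                         "_pt2_product_globals-instance_34_06"]}}
projection = {"_id": 1, "typeClass": 1, "concepts": 1}

# Warm-up: open connections and trigger any lazy initialisation before
# anything is timed.
for _ in range(1_000):
    list(coll.find(query, projection))

# Time many runs and report the average instead of a single sample.
runs = 1_000
start = time.perf_counter()
for _ in range(runs):
    list(coll.find(query, projection))
print(f"average round trip: {(time.perf_counter() - start) * 1000 / runs:.3f} ms")
```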