image_url: string (113-131 chars)
tags: list
discussion: list
title: string (8-254 chars)
created_at: string (24 chars)
fancy_title: string (8-396 chars)
views: int64 (73-422k)
null
[ "crud" ]
[ { "code": "", "text": "Hello,So I can’t find an answer for this anywhere, but I want to duplicate some records in a collection but with a different valueFor example, lets say I have the following recordsKEY: 1\nLOCATION: 500KEY: 2\nLOCATION: 500KEY: 3\nLOCATION: 500I’d like to copy these 3 records to look like this:KEY: 1\nLOCATION: 600KEY: 2\nLOCATION: 600KEY: 3\nLOCATION: 600Resulting in 6 recordsSurely this must be something easy, I just can’t figure it out.", "username": "Ahmed_Chaarani" }, { "code": "", "text": "I think I figured it out:db.tblTrainingCodeCategories.find({}).forEach(function(doc){ delete doc[’_id’]; doc.LOCATION = 2309; db.tblTrainingCodeCategories.insert(doc); } )This essentially does it…", "username": "Ahmed_Chaarani" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Copying records within a collection but with a different value
2022-11-04T18:44:30.565Z
Copying records within a collection but with a different value
1,328
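For the duplication question in the thread above, one alternative to the forEach/insert loop is to read the matching documents once, drop their _id so new ones are generated, and write the copies back with insertMany. A minimal mongosh sketch, reusing the collection and field names mentioned in the thread (the 500/600 values are just the example values from the question):

```javascript
// Copy every document with LOCATION 500 as a new document with LOCATION 600.
const copies = db.tblTrainingCodeCategories
  .find({ LOCATION: 500 })
  .toArray()
  .map(({ _id, ...rest }) => ({ ...rest, LOCATION: 600 })); // drop _id so inserts get fresh ids

if (copies.length > 0) {
  db.tblTrainingCodeCategories.insertMany(copies);
}
```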
null
[ "dot-net" ]
[ { "code": "", "text": "Is my understanding correct?\nIf I’m misunderstanding, please explain.\nthank you", "username": "lasidolasido" }, { "code": "", "text": "Welcome to the forums!There’s a template in the Getting Started Guide React To Changes that may be helpful but I am not sure the question is super-clear.What specifically are you trying to implement - how to ‘move’ objects or how to handle that event or…?", "username": "Jay" } ]
Inquiry about NotifyCollectionChangedAction of CollectionChanged event of IRealmCollection
2022-11-04T07:07:12.187Z
Inquiry about NotifyCollectionChangedAction of CollectionChanged event of IRealmCollection
1,067
null
[ "queries", "rust" ]
[ { "code": "#[serde(skip)]\npub out_relations: Vec<Rc<Relation>>,\nlet items = db.collection::<Item>(\"items\");\nlet mut cursor = items.find(Some(doc! { \"project_id\": pid }), None).await?;\nwhile let Some(item) = cursor.try_next().await? {\n item_map.insert(item.id, item);\n}\n", "text": "Hello everyone.Why RUST compiler could start complaining, that method try_next cannot be called on mongodb::Cursor<> due to unsatisfied trait bounds, when I added Vec<Rc> fields to my model?I added this field:And this code which used to be compiled without issues:Started producing errors like TryStreamExt trait is not implemented.\nPreviously I had same Vectors but with ObjectId and with reference to another struct - it was fine.\nOnce I tried Rc<> - it stopped compiling.Here is the same my question on Stack Overflow with more readable code and errors.", "username": "Yuri_Gor" }, { "code": "Cursor<T>StreamTDeserializeOwnedUnpinSendSyncRc!Send!SyncRcCursorStreamArcRcArcSendSync", "text": "Hi @Yuri_Gor , this is a similar issue to that described in this thread. In short, Cursor<T> type only implements Stream if the T implements DeserializeOwned , Unpin , Send , and Sync . Because Rc is !Send and !Sync, adding the Rc makes it so the Cursor no longer implements Stream.To work around this, we would suggest using an Arc rather than an Rc, since Arc implements Send and Sync.Relatedly, this has come up in the past and we may be able to relax these trait bounds in the future to avoid errors like this altogether; see RUST-1358 for details.", "username": "kmahar" }, { "code": "", "text": "Thank you! That explains everything.", "username": "Yuri_Gor" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Method try_next cannot be called on mongodb::Cursor<> due to unsatisfied trait bounds
2022-11-04T02:15:36.452Z
Method try_next cannot be called on mongodb::Cursor<> due to unsatisfied trait bounds
3,167
null
[ "aggregation" ]
[ { "code": "", "text": "Hi,I’m getting the below error when login to MongoDB Atlas from my custom iOS app.error handling “query” message: error updating hashes in client file store: (AtlasError) Pipeline length greater than 50 not supportedI haven’t run the aggregation query, just login and open a synced Realm and subscribe. The Sync Type is ‘Flexible’.\nAs far as I’ve tested, it seems to occur when there are a large number of schema.\nWhat does it mean Pipeline length and What factors affect Pipeline length? How can I solve the error?", "username": "Sumyong_Kim" }, { "code": "", "text": "Hi. This is happening because of an internal metadata operation where the number of stages is (as you stated) related to the number of tables. We can try to optimize this on our side (I have an idea), but one note is that the specific error you are getting is actually one from the Free-Tier Proxy in Atlas. So if you want to get around this issue immediately, there are 2 things you can do:", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi Tyler,\nThank you so much your help. Yes, this happened in M0 cluster. I understand and it’s solved.", "username": "Sumyong_Kim" }, { "code": "", "text": "As an update. I figured out a way to avoid generating a long pipeline update for this so the issue will go away in our next release (in 2 weeks).Thanks for bringing this to our attention ", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks. That’s good to know.", "username": "Sumyong_Kim" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What does Pipeline length mean and what factors affect it?
2022-11-04T08:29:24.615Z
What does Pipeline length mean and what factors affect it?
1,185
null
[ "queries", "python" ]
[ { "code": "", "text": "I have a mongo database in which i need to delete all the fields except the last one\nthe location of fields are :- Hello - collections - rules - flow - action, and inside the action there are many fields and i need to delete all of them except the last one\nCan anyone tell me the script to delete in python", "username": "Nilesh_kumar1" }, { "code": "", "text": "Have you got an example document?", "username": "chris" } ]
How to delete fields in MongoDB except the last ones
2022-11-02T12:01:47.902Z
How to delete fields in MongoDB except the last ones
1,287
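The question above never got a sample document, so any concrete answer has to guess at the shape. If the goal is to keep one known field inside the nested action subdocument and drop everything else, a pipeline-style update can rebuild the subdocument from just that field. A sketch with made-up collection, path, and field names (shown in mongosh syntax; the same pipeline can be passed to update_many in PyMongo, which the thread's Python tag suggests is the target):

```javascript
// Assumes documents shaped roughly like:
//   { rules: { flow: { action: { a: 1, b: 2, keepMe: 3 } } } }
// and that the field to keep ("keepMe") is known by name.
db.myCollection.updateMany(
  {},
  [
    // Replace the whole "action" subdocument with only the field we keep.
    { $set: { "rules.flow.action": { keepMe: "$rules.flow.action.keepMe" } } }
  ]
)
```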
https://www.mongodb.com/…6_2_1024x575.png
[ "node-js", "connecting" ]
[ { "code": "", "text": "I deployed my app to Heroku and set up my production database at Mongodb Atlas. However, whenever I try to open my app on Heroku, I get this error: Error connecting to db: connect ECONNREFUSED 127.0.0.1:27017. I have already set my MONGODB_URL and MONGODB_URI config values on Heroku but I am still getting this error. I have attached a screenshot of Heroku error logs below:\n\nh1366×768 107 KB\nPlease help me out I am going crazy here.", "username": "John_Cullen" }, { "code": "", "text": "Did you get any help or an update to your problem?", "username": "Javon_Ellis" } ]
Heroku Error connecting to db: connect ECONNREFUSED 127.0.0.1:27017
2022-05-19T13:20:49.544Z
Heroku Error connecting to db: connect ECONNREFUSED 127.0.0.1:27017
3,592
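The error in the thread above means the app is dialing 127.0.0.1 rather than the Atlas cluster, which usually happens when the config var is never actually read and the code falls back to a localhost default. A minimal sketch of guarding against that; the MONGODB_URI name is an assumption and should match whatever key was set in Heroku's config vars:

```javascript
const { MongoClient } = require("mongodb");

const uri = process.env.MONGODB_URI;
if (!uri) {
  // Without this guard, many apps silently fall back to mongodb://127.0.0.1:27017
  throw new Error("MONGODB_URI is not set - check the Heroku config vars");
}

const client = new MongoClient(uri);

async function main() {
  await client.connect();
  console.log("Connected to", client.db().databaseName);
  await client.close();
}

main().catch(console.error);
```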
null
[ "node-js", "server", "storage" ]
[ { "code": "MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\n at Timeout._onTimeout (/.../node_modules/mongodb/lib/sdam/topology.js:293:38)\n at listOnTimeout (node:internal/timers:557:17)\n at processTimers (node:internal/timers:500:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) { '127.0.0.1:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\nconst { MongoClient } = require(\"mongodb\");\n\nconst uri = \"mongodb://127.0.0.1:27017/\"\nconst client = new MongoClient(uri);\nasync function run() {\n try {\n const database = client.db('sample_mflix');\n const movies = database.collection('movies');\n // Query for a movie that has the title 'Back to the Future'\n const query = { title: 'Back to the Future' };\n const movie = await movies.findOne(query);\n console.log(movie);\n } finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n }\n}\nrun().catch(console.dir);\n\nstorage:\n dbPath: C:\\data\\db\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: C:\\Program Files\\MongoDB\\Server\\6.0\\log\\mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n ipv6: true\n", "text": "Hi, could you help me?\nI keep getting this error when I launch node app.js.\nMongo shell connects, service MongoDB running (win11), me desperate.This is the code, tested fine on other machines:My config file looks like this, tried all possible combinations of paths and optional", "username": "Andrea_Spiga" }, { "code": "", "text": "Is your service up and running,?\nShow snapshot of your Windows service and shell connection you connected successfully", "username": "Ramachandra_Tummala" }, { "code": "", "text": "\nScreenshot 2022-10-25 1307161109×673 155 KB\n", "username": "Andrea_Spiga" }, { "code": "", "text": "Issue could be bindIp parameter\nCheck this link for Ipv6", "username": "Ramachandra_Tummala" }, { "code": "net:\n port: 27017\n bindIp: 127.0.0.1\n ipv6: true\n bindIpAll: true\n", "text": "This are the settings, I turned back on the bindIpAll optionI checked firewall and created rules for mongod/mongos, set the network to private, flushed the DNS.\nI have no idea what to do", "username": "Andrea_Spiga" }, { "code": "", "text": "You cannot use both bindIp & bindIp_all at same time\nUse only one of them in your cfg file\nTry 0.0.0.0 for bind_ip (not adviced on prod) and see if it works", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Also check this for exact format for Ipv6 with localhost", "username": "Ramachandra_Tummala" }, { "code": "net:\n port: 27017\n ipv6: true\n bindIpAll: true\n", "text": "Thank you Ramachandra, but still nothing, I had the configuration suggested in the post you linked, now I have it like this, still nothing.I switched to node 19 (from 16), same with both versions. 
MongoDB v 6.0.2", "username": "Andrea_Spiga" }, { "code": "", "text": "Is your IP whitelisted?\nDid you try without Ipv6(use ipv4)\nor\nDon’t use bindipall and try other format given for ipv6 with localhost,::\nIs your localhost mapped to 127.0.0.1?\nIs the machine where your node.app runs same as db server?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "My IP is whitelisted\nMy localhost goes like this in the hosts fileI tried all the options and combinations, still nothing.\nMongod runs as window service, but I also tried stopping the service and launching a mongod instance on the ubuntu terminal (what I use normally) and I have the same results. Mongosh connects but node app doesn’t.", "username": "Andrea_Spiga" }, { "code": "", "text": "If mongosh connect then your application should connect if you run your application on the same machine as mongosh. The following might indicates that you are using 2 machines or a VM:Mongod runs as window service, but I also tried stopping the service and launching a mongod instance on the ubuntu terminalIs the ubuntu terminal running on the same Windows where mongod is running as a service? When you connect with mongosh, do you connect from the same Windows machine where mongod is running? Is the app running on the same machine as the mongosh that can connect?You shared a couple of version of your configuration file. Please share the latest version. Have you restarted mongod after changing the configuration file?", "username": "steevej" }, { "code": "", "text": "Yeah same machine. I run mongosh on WSL terminal and it connects (with windows service running AND if I stop that service and run an instance of mongod on WSL).The latest configuration file. Mind that I’ve tried many combinations, this is from a MacOS where I tested and it worked fine.", "username": "Andrea_Spiga" }, { "code": "", "text": "I am not clear\nHow can Macos have Windows directory paths?\nI have suggested in above posts to use 127.0.0.1,::1\nLet’s wait for Steeves reply", "username": "Ramachandra_Tummala" }, { "code": "", "text": "The directory path for mac is different of course, but that’s not the point. 
Can it be something about windows security or similar?", "username": "Andrea_Spiga" }, { "code": "", "text": "Share the content of the log.I would try without ipv6 true and without ::1 for bindIp, just to simplify things.", "username": "steevej" }, { "code": "{\"t\":{\"$date\":\"2022-10-27T15:44:46.274+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.112+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.113+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.115+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.122+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.122+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.122+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.122+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23316, \"ctx\":\"thread1\",\"msg\":\"Trying to start Windows service\",\"attr\":{\"serviceName\":\"MongoDB\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.123+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":15020,\"port\":27017,\"dbPath\":\"C:/data/db\",\"architecture\":\"64-bit\",\"host\":\"LAPTOP-RJ5HAO5L\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.123+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.123+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.2\",\"gitVersion\":\"94fb7dfc8b974f1f5343e7ea394d0d9deedba50e\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.123+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 22621)\"}}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.123+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, 
\"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\6.0\\\\bin\\\\mongod.cfg\",\"net\":{\"bindIp\":\"127.0.0.1\"},\"service\":true,\"storage\":{\"dbPath\":\"C:\\\\data\\\\db\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\6.0\\\\log\\\\mongod.log\"}}}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.124+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"C:/data/db\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.124+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7294M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.369+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":245}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.369+02:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.372+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. 
Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.374+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.375+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.375+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.375+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.639+02:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.642+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.642+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.644+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.644+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2022-10-27T15:44:47.644+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20555, \"ctx\":\"initandlisten\",\"msg\":\"Service running\"}\n\n", "text": "", "username": "Andrea_Spiga" }, { "code": "", "text": "Any success?\nIt says waiting for connections but SSL:off\nDid you try 0.0.0.0 inplace of localhost in your connect string.It is suggested as one of the fix in a forum thread", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Yes I tried it all. 
I think I’ll give up, thanks for your help anyways", "username": "Andrea_Spiga" }, { "code": "ss -tlnp\nps -aef | grep [m]ongo\n", "text": "Please share a screenshot that shows exactly how you start your application.If mongosh can connect, your application should connect.Are you using docker or some other container to start the application?I do not know exactly which commands are available inside WSL terminal, but the output of the following (if available)", "username": "steevej" }, { "code": "", "text": "I’m not using a container or docker I thinkthis is what I get running those commands\nimage1099×142 5.51 KB\nwhat is the second command trying to do?", "username": "Andrea_Spiga" } ]
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
2022-10-25T08:07:16.929Z
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
11,390
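The thread above was never fully resolved. One detail worth isolating: mongosh was run inside WSL while mongod runs as a Windows service, and the Node app also runs in WSL. A small probe script makes it easy to test candidate addresses independently of the application code. The idea that WSL2's 127.0.0.1 may not reach a Windows-hosted service, and that the Windows host IP should be tried instead, is a guess to verify rather than something confirmed in the thread:

```javascript
const { MongoClient } = require("mongodb");

// Fail fast so each candidate address answers within a few seconds.
async function probe(uri) {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 3000 });
  try {
    await client.db("admin").command({ ping: 1 });
    console.log("OK      ", uri);
  } catch (err) {
    console.log("FAILED  ", uri, "-", err.message);
  } finally {
    await client.close();
  }
}

(async () => {
  await probe("mongodb://127.0.0.1:27017");
  // Hypothetical address: replace with the Windows host IP as seen from WSL
  // (for example the nameserver entry in /etc/resolv.conf).
  await probe("mongodb://172.28.0.1:27017");
})();
```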
null
[ "java" ]
[ { "code": "return CodecRegistries.fromProviders(PojoCodecProvider\n\t\t\t\t.builder()\n\t\t\t\t.automatic(false)\n\t\t\t\t.register(Page.class, AdPage.class, CoverPage.class, ContentPage.class, ReviewPage.class)\n\t\t\t\t.build());\n\n@BsonDiscriminator(key = \"type\", value = \"ads\")\npublic class AdPage extends Page{\n}\n", "text": "Hi there, I’m using the PojoCodecRegistry and manually setting each class (for some reason the class path scanning does not work if not using class names as the values of the discriminator)So, when I run a query in the Pages collection I can see that I get the correct instance for each type of page, however the type property is set to null, and I need that property to be set to be used by the UI.Does mongo driver ignores the key property? Is there a way to force it to be set during the reads?", "username": "Vinicius_Carvalho" }, { "code": "", "text": "Hi @Vinicius_Carvalho,We’re not sure what’s going on here. Can you open an issue in our Jira with a minimal, reproducible example and we can investigate further?Thanks,\nJeff", "username": "Jeffrey_Yemin" } ]
BsonDiscriminator null value returned
2022-11-03T18:46:58.711Z
BsonDiscriminator null value returned
1,083
null
[ "connecting", "containers" ]
[ { "code": "", "text": "It looks like you are trying to access MongoDB over HTTP on the native driver port.", "username": "ebbe_AHMED" }, { "code": "", "text": "Hi @ebbe_AHMED and welcome to the MongoDB community forums. Can you share the following information:I remember seeing errors like that in the distant past, but I don’t think I’ve seen it for a number of years.", "username": "Doug_Duncan" }, { "code": "", "text": "thank you for your help my problem is solved", "username": "ebbe_AHMED" }, { "code": "", "text": "Hi, I have the same issue, how do u solve it?", "username": "Eason" }, { "code": "", "text": "Hello @Eason, and welcome to the MongoDB community forums! Without having any information on what you are doing when you get this message it’s hard to know how to help you, so I will ask you the same questions that I asked the original poster:Can you share the following information:The answers to these questions will help up us to help you.", "username": "Doug_Duncan" } ]
Hello guys, I have a problem accessing MongoDB in my Docker container; I get the following error:
2022-08-26T09:52:50.632Z
Hello guys, I have a problem accessing MongoDB in my Docker container; I get the following error:
2,727
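For context on the message quoted in the thread above: mongod prints it when something speaks HTTP to port 27017, for example a browser opened on http://localhost:27017 or an HTTP client pointed at the container's mapped port. That port only speaks the MongoDB wire protocol, so the fix is to connect with a driver or mongosh instead. A minimal sketch, assuming the container was started with a typical -p 27017:27017 mapping (not confirmed in the thread):

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  console.log(await client.db("admin").command({ ping: 1 })); // expect { ok: 1 }
  await client.close();
}

main().catch(console.error);
```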
https://www.mongodb.com/…956dda9b57e.jpeg
[ "aggregation" ]
[ { "code": "{\n$set:\n{\n childrenWithoutEndDate: {\n $reduce: {\n input:'$children',\n initialValue:[],\n in:{ \n $let:{\n vars:{\n hasEndDate: {$exists:['$$this.endDate',true]}\n },\n in:{\n $concatArrays:['$$value',{$cond:['$$hasEndDate',['$$this'],[]]}]\n }\n }\n }\n }\n }\n}\n}\n", "text": "Hi there,I try to use a $exists operator inside $reduce stage, but I get an error “Invalid $set :: caused by :: Unrecognized expression ‘$exists’”Here is a short example of a document from my collectionthanks", "username": "emmanuel_bernard" }, { "code": "**\n * newField: The new field name.\n * expression: The new field expression.\n */\n{\n lines: {$filter: {\n input: \"$lines\",\n cond: {$ne: [\"$$this\", \"\"]}\n }}\n}\n", "text": "Look at filter:First filter out the undesirable elements, then on the next stage of the aggregation use $reduce\nWhile it is one more step it is easier to follow. You can have a conditional in $reduce, but you need to have one output for each element in the array. In my example, create a string by concatenation using $reduce.\n$reduce", "username": "Ilan_Toren" }, { "code": "**\n * newField: The new field name.\n * expression: The new field expression.\n */\n{\n lines: {$filter: {\n input: \"$lines\",\n cond: {$ne: [\"$this\", \"\"]}\n }}\n}\n{\n \"_id\": \"fr_TYPOLOGIE_SPE\",\n \"children\": [\n {\n \"startDate\": ISODate(\"2014-02-10T10:50:42.389Z\") ,\n \"endDate\": ISODate(\"2014-03-08T08:51:42.389Z\") \n },\n {\n \"startDate\": ISODate(\"2014-02-10T10:50:42.389Z\")\n ]\n}\n", "text": "Hi @Ilan_Toren,The $filter aggregation is a good solution.I need to keep only the subset that contain a “endDate” field.\nWhen trying $exists operator I get an errorDo you know how to use $exists in $filter aggreagation ?For example in the document below, only keep the first element", "username": "emmanuel_bernard" } ]
How to use $exists operator inside $reduce stage?
2022-11-03T13:41:38.367Z
How to use $exists operator inside $reduce stage?
1,916
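To close the loop on the question above: $exists is a query operator, so it cannot appear in expression contexts such as $filter, $reduce, or $let. One expression-level way to ask "does this array element have an endDate?" is to check the field's type, since the aggregation $type operator returns the string "missing" for absent fields. A sketch using the document shape from the thread (the collection name is a placeholder):

```javascript
db.collection.aggregate([
  {
    $set: {
      childrenWithEndDate: {
        $filter: {
          input: "$children",
          // keep elements whose endDate field is present (whatever its BSON type)
          cond: { $ne: [{ $type: "$$this.endDate" }, "missing"] }
        }
      }
    }
  }
])
```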
null
[]
[ { "code": "", "text": "Hello Community,One of my client is running MongoDB v3.6.23 community edition on windows server 2016.\nI searched but I don’t find any Documentation for Best Practices on Windows environment for running MongoDB. Customer have OS dependency causing them to run this older version as well as run it on Windows.\nSo can you guys help me with any link to best practices to run Mongodb on Windows ?thank you in advance !!Regards,\nJay", "username": "jay_87395" }, { "code": "", "text": "Hi @jay_87395 and welcome to the MongoDb community forum!!The following documentation on How to install MongoDB community version on Windows would be the recommended documentation to follow for installation.According to whats stated in the documentation, the latest community version MongoDB 6.0 works with the above said Windows server 2016 OS version.Below are a few recommended links which would be helpful to run MongoDB Community in production environment.Also, MongoDB version 3.6 is an older version of MongoDB. My recommendation would be to upgrade to the latest version for new features and bug fixes.Let us know if you face any issues while following the above documentation.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hello @aasawari_sahasrabuddhe .I will check the links provided and I have got few parameters related to windows environment.Thanks !!Regards,\nJay", "username": "jay_87395" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb Best Practices On Windows
2022-11-02T12:46:01.494Z
Mongodb Best Practices On Windows
1,342
null
[]
[ { "code": "", "text": "Hola tengo un problema.\nNo se como solucionar el que un registro en mongo me salga con caracteres raro, cuando en su lugar he metido una palabra con acento:\nEjemplo:\nMéxico = M%xico.Me podrian ayudar.", "username": "Lourdes_Nataly_Rojas_Hernandez" }, { "code": "éMéxicoM%xico", "text": "Hello @Lourdes_Nataly_Rojas_Hernandez ,It would be great if you can post future questions in English as it will be easy for most of the community to help you. I tried translating your text and please confirm if my understanding of your use case is correct, from the percent sign you posted I’m guessing you’re getting a percent-encoded character instead of é, is this accurate?If yes, can you please share below detailsRegards,\nTarun", "username": "Tarun_Gaur" } ]
Correccion de acentos en los registros//Correction of accents in registers
2022-11-01T16:11:01.679Z
Correccion de acentos en los registros//Correction of accents in registers
1,178
null
[ "replication" ]
[ { "code": "", "text": "I have a mongodb replica set with 3 nodes and running by user: mongod.\nafter I stopped the replica set using systemd, some storage files’ owner were changed to root.\nmaybe the shutdown is not graceful, and I have resolved this issue by changing the owner back to ‘mongod’.I want to know the owner-change mechanism and what is the graceful way to shutdown a replica set. Could someone help me out?", "username": "YJ_Zuo" }, { "code": "mongodsudo mongodrootsystemdsystemctl", "text": "Welcome to the MongoDB community @YJ_Zuo !What O/S are you using and how are you stopping and starting the mongod proceses?Usually a change of ownership on some files is a result of someone starting the process using sudo mongod (which would create new files as root) rather using service wrappers which will set a specific user (for example: Using systemd (systemctl) on Ubuntu).Regards,\nStennie", "username": "Stennie_X" } ]
Some MongoDB storage files' owner changes from `mongod` to `root` after stopping the replica set
2022-11-03T03:59:28.630Z
Some MongoDB storage files' owner changes from `mongod` to `root` after stopping the replica set
1,126
https://www.mongodb.com/…e_2_1024x509.png
[ "vscode" ]
[ { "code": "", "text": "Hey I just started with MongoDB and started to use VSCode with the MongoDB extension.\nI dont have any problems connecting to my cluster but i for some reason it doesnt show my database under connections in VSCode. Also, if I try to run the playground script, then there is no .Resul file nor a update in the database. I dont get any error message.\nimage1589×790 77.7 KB\n", "username": "Daniel_Brunner" }, { "code": "Database AccessSecurity", "text": "Hello @Daniel_Brunner ,Welcome to The MongoDB Community Forums! Can you please confirm if the username you used to connect to this cluster is having required role to read the database? You can find the role in Database Access tab which is available on left side under Security.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "the security role was selected correctly; Today i tried it again and everything WORKS as intented. I dont know what the problem was", "username": "Daniel_Brunner" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas on VSCode doesn't show resultsdat
2022-11-03T19:03:51.935Z
MongoDB Atlas on VSCode doesn't show resultsdat
2,316
null
[ "queries", "performance", "graphql" ]
[ { "code": "", "text": "We have been looking in to the GraphQL API as an alternative to what we currently use. However, there are some strange performance issues which don’t seem to make sense.Our test setup is fairly simple:Custom JWT Authentication which populates token.sub in to user.data.tokenId\nOne collection: Users\nTwo Rules:There are only two records in the users collection at the moment and the query is only returning a single attribute which is about 32 chars.The results we are seeing are that the query consistently takes about 1.5-3 seconds round trip.To produce an auth error before the query runs, and get an idea of latency due to global location - we removed the jwtTokenString from the request header which drops the request time down to ~70msAny ideas?", "username": "AzC" }, { "code": "", "text": "We think we’ve found our issue, based off another few posts here, we took a look at the app service hosting location vs the cluster.As we set up our initial cluster as global in a shared GCP service, the chosen region was not compatible with the regions available in the available AppServices GCP regions. So a different global region was chosent and we thing that may have something to do with the latency.We’ll investigate further as time permits.", "username": "AzC" }, { "code": "", "text": "Hi @AzC, welcome to the community.we took a look at the app service hosting location vs the clusterThat’s correct. It is generally recommended that your App Service’s region is the same as your Database Deployment’s region to mitigate the impact of latency caused due to inter-region data transfer. This is considered as a Local Deployment.We’ll investigate further as time permits.Sure, please let us know if you find any other bottlenecks.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" } ]
Slow GraphQL Query with Custom JWT Auth and only two records without relationships
2022-11-03T22:10:04.347Z
Slow GraphQL Query with Custom JWT Auth and only two records without relationships
2,105
null
[ "serverless" ]
[ { "code": "", "text": "We are looking to deploy Cloud Run on GCP and use MongoDB Altas serverless instance.\nPer documentation, support for Private Endpoints on Google Cloud using Google Cloud Private Service Connect is coming soon. https://www.mongodb.com/docs/atlas/reference/serverless-instance-limitations/\nDo we know when will this be available? Do we need to connect using public endpoint now?\nThanks.", "username": "Josephine_Ng" }, { "code": "", "text": "Hi @Josephine_Ng,Do we know when will this be available? Do we need to connect using public endpoint now?Unfortunately, we would not be able to provide you with an estimate date on the support Google Cloud Private Service Connect feature for serverless instances. MongoDB plans to add support for more configurations and capabilities on serverless instances over time and as you have noted, the feature is being worked on and coming soon.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
When will serverless instances support Private Endpoints on Google Cloud?
2022-11-02T16:45:44.914Z
When will serverless instances support Private Endpoints on Google Cloud?
1,736
null
[ "production", "golang" ]
[ { "code": "Timeout", "text": "The MongoDB Go Driver Team is pleased to release version 1.11.0 of the MongoDB Go Driver.This release This release improves the Timeout API and behavior, reduces memory allocations when running most operations, and fixes several bugs. It also removes support for some legacy versions of MongoDB and Go. See below for more details. For more information please see the 1.11.0 release notes.You can obtain the driver source from GitHub under the v1.11.0 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.11.0 Released
2022-11-03T20:41:43.093Z
MongoDB Go Driver 1.11.0 Released
1,669
null
[ "aggregation", "queries", "data-modeling" ]
[ { "code": "{\n \"id\" : \"A\",\n \"province_V1\" : \"CA\",\n \"province_V2\" : \"CAL\",\n \"units\" : [\n {\n \"id\" : \"UA\",\n \"capabilities_V1\" : [\n \"REMOTE_START\",\n \"REMOTE_STOP\"\n ],\n \"capabilities_V2\" : [\n \"REMOTE_START\",\n \"REMOTE_STOP\",\n \"HEALTH_CHECK\"\n ],\n \"connectors\" : [\n {\n \"id\" : \"CA\",\n \"voltage_V1\" : 90.0,\n \"amperage_V1\" : 120.0,\n \"max_voltage_V2\" : 90.0,\n \"max_amperage_V2\" : 120.0\n }\n ]\n }\n ]\n}\ndb.getCollection(\"test\").find({}, {\n id: 1,\n province: '$province_V1',\n 'units.capabilities': '$units.capabilities_V1',\n 'units.connectors.voltage': '$units.connectors.voltage_V1',\n 'units.connectors.amperage': '$units.connectors.amperage_V1'\n})\n\nprovince'CA''CAL'$province_V1$province_V2\n{\n \"id\" : \"A\",\n \"units\" : [\n {\n \"connectors\" : [\n {\n \"voltage\" : [\n [\n 90.0\n ]\n ],\n \"amperage\" : [\n [\n 120.0\n ]\n ]\n }\n ],\n \"capabilities\" : [\n [\n \"REMOTE_START\",\n \"REMOTE_STOP\"\n ]\n ]\n }\n ],\n \"province\" : \"CA\"\n}\n'units.capabilities'", "text": "I have a case, where I have a collection of documents that have several fields that are either have duplicate field names with slightly different allowed values, or fields that are the same value, but with a different field name based on the version the user is requesting. I’d like to identify the fields by their version by appending _V1 or _V2 to the end of the field for ease of reading the data in the database, but I’d like to remove remove the _V1 or _V2 when the user queries the data.Here is an example document:With the document above I’d like to show of a specific version based on the user’s request and remove the _V1 appended to each fields name. This is the query I’m attempting to do this with:This works for top level fields such as province. When I query this data the province field is added and is either 'CA' or 'CAL', based on $province_V1 or $province_V2I have an issue with fields in the nested units and connectors array. The values are returning properly, but they are nested within another array.Here are the results that I’m getting:I’d like 'units.capabilities' to be an array, but my projection is creating a nested array. And for voltage and amperage, I’d like these fields to be a number, but my projection is also creating a nested array.Can a simple projection handle this? If not, which method do you think would be better to handle this use case? Would an aggregation be more suited?", "username": "Greg_Fitzpatrick-Bel" }, { "code": "$map$projectdb.getCollection(\"test\").find({}, {\n id: 1,\n province: '$province_V1',\n units: {\n $map: {\n input: '$units',\n as: 'unit',\n in: {\n capabilities: '$$unit.capabilities_V1,\n connectors: {\n $map: {\n input: '$$unit.connectors',\n as: 'connector',\n in: {\n voltage: '$$connector.voltage_V1',\n amperage: '$$connector.amperage_V1'\n }\n }\n }\n }\n }\n }\n})\n", "text": "I was able to solve this issue by using $map in my $project for each nested array, like so:", "username": "Greg_Fitzpatrick-Bel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Projecting Fields & Changing Names Based On User Context
2022-11-03T15:04:59.832Z
Projecting Fields & Changing Names Based On User Context
1,002
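For reference, here is the $map-based projection that resolved the thread above written out in full (the snippet in the thread is missing the closing quote on capabilities_V1 and drops the nested id fields). Field names follow the sample document, and aggregation expressions in a find() projection need MongoDB 4.4 or newer:

```javascript
db.getCollection("test").find({}, {
  id: 1,
  province: "$province_V1",
  units: {
    $map: {
      input: "$units",
      as: "unit",
      in: {
        id: "$$unit.id",
        capabilities: "$$unit.capabilities_V1",
        connectors: {
          $map: {
            input: "$$unit.connectors",
            as: "connector",
            in: {
              id: "$$connector.id",
              voltage: "$$connector.voltage_V1",
              amperage: "$$connector.amperage_V1"
            }
          }
        }
      }
    }
  }
})
```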
null
[ "swift" ]
[ { "code": "struct ItemsList: View {\n @ObservedResults(Item.self) var items\n\n var body: some View {\n ForEach(items) { item in\n NavigationLink(destination: ItemDetailView(item: item)) {\n Text(item.name)\n }\n }\n}\n\nstruct ItemDetailView: View {\n @ObservedRealmObject var item: Item\n @Environment(\\.realm) var realm\n @Environment(\\.presentationMode) var presentationMode: Binding<PresentationMode>\n\n var body: some View {\n VStack{\n Text(item.name)\n Image(systemName: \"trash\")\n .onTapGesture{\n // delete the item\n realm.asyncWrite{\n realm.delete(item)\n }\n // pop back so as not to show the detail view anymore since it doesn't exist\n self.presentationMode.wrappedVaule.dismiss()\n }\n }\n }\n}\n", "text": "I’m running into an issue where I want to allow for the deletion of an object on the screen that is showing the details.Example:This is an oversimplified version of what I’m doing, but I can’t find anything in the docs that shows how to do this the “right” way, and I’m posting it here because I do assume it is a pretty common use case.I keep trying different strategies (thawing, passing the object back up the stack and deleting on the top screen, asnycWrite/write, etc.) and I happen to be getting this error a lot, but I don’t understand what it means:‘This method may only be called on RLMArray instances retrieved from an RLMRealm’Is there a documented best/right way to do this?", "username": "Kurt_Libby1" }, { "code": "", "text": "I am having a similar issue", "username": "Sean_O_Donnell" }, { "code": "@ObservedRealmObjects", "text": "@ObservedRealmObjects are frozen - they need to be thawed of you want to modify them within a write.", "username": "Jay" }, { "code": "", "text": "Thanks @Jay. I know this, but I guess I wasn’t clear because it doesn’t address what I’m asking.When you want to delete the @ObservedRealmObject while on the detail view, what’s the best practice on how to do that?Documentation and examples are with lists, but not on the detail screen.", "username": "Kurt_Libby1" }, { "code": "", "text": "Same issue for me as well. Does Realm actually support SwiftUI, or should we start the transition to CoreData?", "username": "Matt_Krueger" }, { "code": "let thawedItem = item.thaw()\n\nif thawedItem.invalidated == false { //ensure it's a valid item\n\n let thawedRealm = thawedItem!.realm! //get the realm it belongs to\n\n try! thawedRealm.write {\n thawedRealm.delete(thawedItem)\n }\n}\n", "text": "@Matt_Krueger Realm supports SwiftUI! - in fact there’s an entire section in the documentation just for Getting Started with SwiftUIThe ‘detail’ view vs the ‘main’ view vs any other view doesn’t really factor in to how you work with Realm. What does factor in is what types of (and how) objects are being passed around to those views; are you passing an object? Is it an Environment value? 
Or perhaps it’s the realm itself to when you can then read the object in question.So there is no ‘right way’ because it depends on the status of the object.When you want to delete the @ObservedRealmObject while on the detail viewIt appears in this, based on your example code, the object is frozen, so thawing it would allow deletion.But - that leads to a question; are you wanting to delete the object from it’s parent objects List or delete the object entirely from Realm, never to be heard from again?Here’s some example code we use in other views to delete an ObservedRealmObjectI added the code in your question to a project and it seems to work - I am not getting the mentioned error so perhaps it’s thrown by another section of code?", "username": "Jay" }, { "code": ".invalidated has been changed to .isInvalidated\n", "text": "Thanks Jay.A couple of things:You’re going to need a bang on that first thawedItem as well as in the delete.Other than that, this seems to work. ", "username": "Kurt_Libby1" }, { "code": ".invalidated has been changed to .isInvalidated", "text": ".invalidated has been changed to .isInvalidatedWhoops. You are so correct. I grabbed that code snippet from a project we did last year. Thanks!", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Deleting a Realm Object on the Detail View in SwiftUI
2022-11-01T15:42:10.578Z
Deleting a Realm Object on the Detail View in SwiftUI
3,019
null
[ "aggregation" ]
[ { "code": "[\n {\n \t\"_id\": 1,\n \t\"name\": \"account_1\",\n \t\"account\": \"abc123\"\n\t},\n {\n \"_id\": 2,\n \"name\": \"account_2\",\n \"account\": \"def456\"\n }\n]\n[\n {\n \"_id\": 1,\n \"username\": \"jsmith\",\n \"accounts\": [\"abc123\", \"def456\"]\n }\n]\n[\n {\n \"_id\": 1,\n \"name\": \"widget_1\",\n \"user_id\": 1,\n \"account_id\": 1,\n \"expires\": \"2022-11-30T00:00:00.000-05:00\"\n },\n {\n \"_id\": 2,\n \"name\": \"widget_2\",\n \"user_id\": 1,\n \"account_id\": 2,\n \"expires\": \"2022-12-31T14:00:00.000-05:00\"\n }\n]\n[\n {\n \"_id\": 1,\n \"username\": \"jsmith\",\n \"accounts\": [\n {\n \"_id\": 1,\n \"name\": \"account_1\",\n \"account\": \"abc123\"\n }\n ],\n \"widgets\": [\n {\n \"_id\": 1,\n \"name\": \"widget_1\",\n \"expires\": \"2022-11-30T00:00:00.000-05:00\"\n }\n ]\n }\n]\n_idaccountpipeline$lookupdb.getCollection(\"widgets\").aggregate([\n {$match: {\n expires: {\n $lt: ISODate(\"2022-12-01T20:00:00.000-05:00\"),\n $gt: ISODate(\"2022-11-29T18:59:59.999-05:00\")\n }\n }},\n {$lookup: {\n from: \"users\",\n localField: \"user_id\",\n foreignField: \"_id\",\n as: \"user\"\n }},\n {$lookup: {\n from: \"accounts\",\n localField: \"account_id\",\n foreignField: \"_id\",\n as: \"account\"\n }},\n {$unwind: \"$user\"},\n {$unwind: \"$account\"}\n])\n", "text": "I have three collections: “widgets”, “users”, and “accounts”. The documents within each look like:accounts:users:widgets:Notice that a single “user” can be a member of multiple “accounts”, and that a single user can have multiple “widgets”. My goal is to write a query such that results look like:In other words the query:I have successfully written an aggregate query that finds all widgets expiring on a certain date and rolls the associated user and account documents into the token document. But it’s not clear to me if it will be possible to do as I have described above. I think this involves the pipeline option on the $lookup function, but I am not sure.For reference, the query I wrote (that I am trying to flip) which returns tokens with rolled-up documents is:Note: my target database version is 4.4.", "username": "James_Sumners" }, { "code": "{ \"$group\" : {\n \"_id\" : \"$user_id\" ,\n \"username\" : { \"$first\" : \"$user.username\" } ,\n \"accounts\" : { \"$addToSet\" : \"$account\" } ,\n \"widgets\" : { \"$addToSet\" : { \"_id\" : \"$_id\" , \"name\" : \"$name\" , \"expires\" : \"$expires\" } }\n} }\n", "text": "Thanks for publishing your sample documents, expected results and pipeline.Next time vary your _id from one collection to the others, it helps to see which _id are which in your expected results. Ex: accounts _id could be 101, 102, users _id could be 201 and widgets could be 301, 302.It looks like your pipeline supplies all the correct information and you just want it to be reorganized. Personally, I prefer to do this reorganization on the application back end or front end rather than on the data server. 
Specially with $unwind as it increases the amount of data.You may get an output close to your expected result with a $group stage that would look like the following untested code:", "username": "steevej" }, { "code": "db.getCollection(\"users\").aggregate([\n // Get all user documents.\n {$match: {\n _id: { $exists: true }\n }},\n \n // Join all widget documents on user._id => widget.user_id.\n {$lookup: {\n from: \"widgets\",\n localField: \"_id\",\n foreignField: \"user_id\",\n as: \"widgets\"\n }},\n \n // Return individual result documents for each joined widget.\n {$unwind: \"$widgets\"},\n \n // Replace the \"widgets\" array field in each document with a singlar widget field.\n {$project: {\n _id: true,\n email: true,\n username: true,\n widget: {\n _id: \"$widgets.id\",\n name: \"$widgets.name\",\n account_id: \"$widgets.account_id\",\n expires: \"$widgets.expires\"\n }\n }},\n \n // Filter results by widgets that expire on a given day.\n {$match: {\n \"widget.expires\": {\n $gt: ISODate('2022-11-14T23:59:59.999-05:00'),\n $lt: ISODate('2022-11-16T00:00:00.000-05:00')\n }\n }},\n \n // Get the account associated with each widget.\n {$lookup: {\n from: \"accounts\",\n localField: \"widget.account_id\",\n foreignField: \"_id\",\n as: \"widget.account\"\n }},\n \n // Each widget is for a singluar account, so use $unwind to\n // replace the array field with a singular field.\n {$unwind: \"$widget.account\"},\n \n {$match: {\n \"widget.account.account\": {$ne: null}\n }},\n \n \n // Remove unwanted fields from the result set.\n {$project: {\n widget: {\n account_id: false,\n account: {\n created: false,\n externalid: false,\n meta: false,\n plan: false\n }\n }\n }},\n \n // Undo the \"$widgets\" unwind by grouping all documents by the user doc fields.\n {$group: {\n _id: { _id: \"$_id\", email: \"$email\", username: \"$username\" },\n widgets: {$push: \"$widget\"}\n }},\n \n // Remap the group results into documents that make a bit more sense.\n {$project: {\n _id: \"$_id._id\",\n email: \"$_id.email\",\n username: \"$_id.username\",\n widgets: \"$widgets\"\n }}\n])\n", "text": "Thank you for the reply and suggestion. What I ended up doing was… a lot:", "username": "James_Sumners" }, { "code": "", "text": "One simple improvement to try in your solution.You may add a pipeline inside your $lookup:{from:widgets} to do your match on widget.expires before $unwind. See https://www.mongodb.com/docs/manual/reference/operator/aggregation/lookup/#correlated-subqueries-using-concise-syntax", "username": "steevej" }, { "code": "", "text": "I’ve read that document several times and I’m just not getting it. I think I tried to do that, and was getting “cannot use ‘$’ paths in field names” or something. Anyway, as I have it written, it’s easy to adjust the filters in my application by working with the individual clause objects.", "username": "James_Sumners" } ]
Rolling up documents by identifiers?
2022-10-31T17:23:18.260Z
Rolling up documents by identifiers?
1,566
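Since the final reply above leaves the $lookup-with-pipeline suggestion a bit abstract, here is one way it can look for the users-to-widgets join on MongoDB 4.4, which does not yet allow mixing localField/foreignField with pipeline, so the join condition goes through let and $expr. The date range is copied from the pipeline in the thread, and the trailing $match is an assumption about wanting to drop users with no matching widgets. The "cannot use '$' paths in field names" error is often the result of using a $-prefixed field path as a query field name rather than inside $expr:

```javascript
db.users.aggregate([
  {
    $lookup: {
      from: "widgets",
      let: { userId: "$_id" },
      pipeline: [
        {
          $match: {
            // join condition, expressed with $expr because this is a pipeline lookup
            $expr: { $eq: ["$user_id", "$$userId"] },
            // filter widgets before they are joined, so later stages see less data
            expires: {
              $gt: ISODate("2022-11-14T23:59:59.999-05:00"),
              $lt: ISODate("2022-11-16T00:00:00.000-05:00")
            }
          }
        }
      ],
      as: "widgets"
    }
  },
  // keep only users that ended up with at least one widget in the window
  { $match: { "widgets.0": { $exists: true } } }
])
```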
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "$ nodemon server.js\n[nodemon] 2.0.20\n[nodemon] to restart at any time, enter `rs`\n[nodemon] watching path(s): *.*\n[nodemon] watching extensions: js,mjs,json\n[nodemon] starting `node server.js`\nserver started on port 8080\n/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/lib/connection.js:807\n const serverSelectionError = new ServerSelectionError();\n ^\n\nMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\n at NativeConnection.Connection.openUri (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/lib/connection.js:807:32)\n at /home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/lib/index.js:340:10\n at /home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/lib/index.js:1140:10)\n at Mongoose.connect (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/lib/index.js:339:20)\n at file:///home/overlord/github_sleepywakes_thunderroost/server.js:28:10\n at ModuleJob.run (node:internal/modules/esm/module_job:175:25)\n at async Loader.import (node:internal/modules/esm/loader:178:24) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'cluster0-shard-00-00.geujl.mongodb.net:27017' => ServerDescription {\n _hostAddress: HostAddress {\n isIPv6: false,\n host: 'cluster0-shard-00-00.geujl.mongodb.net',\n port: 27017\n },\n address: 'cluster0-shard-00-00.geujl.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 634944,\n lastWriteDate: 0,\n error: MongoNetworkError: connection <monitor> to 35.184.107.131:27017 closed\n at Connection.handleIssue (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:122:23)\n at TLSSocket.<anonymous> (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:63:39)\n at TLSSocket.emit (node:events:394:28)\n at node:net:662:12\n at TCP.done (node:_tls_wrap:580:7)\n },\n 'cluster0-shard-00-01.geujl.mongodb.net:27017' => ServerDescription {\n _hostAddress: HostAddress {\n isIPv6: false,\n host: 'cluster0-shard-00-01.geujl.mongodb.net',\n port: 27017\n },\n address: 'cluster0-shard-00-01.geujl.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 636016,\n lastWriteDate: 0,\n error: MongoNetworkError: connection <monitor> to 35.232.161.204:27017 closed\n at Connection.handleIssue (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:122:23)\n at TLSSocket.<anonymous> (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:63:39)\n at TLSSocket.emit (node:events:394:28)\n at node:net:662:12\n at TCP.done 
(node:_tls_wrap:580:7)\n },\n 'cluster0-shard-00-02.geujl.mongodb.net:27017' => ServerDescription {\n _hostAddress: HostAddress {\n isIPv6: false,\n host: 'cluster0-shard-00-02.geujl.mongodb.net',\n port: 27017\n },\n address: 'cluster0-shard-00-02.geujl.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 634879,\n lastWriteDate: 0,\n error: MongoNetworkError: connection <monitor> to 35.192.112.82:27017 closed\n at Connection.handleIssue (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:122:23)\n at TLSSocket.<anonymous> (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:63:39)\n at TLSSocket.emit (node:events:394:28)\n at node:net:662:12\n at TCP.done (node:_tls_wrap:580:7)\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-67keal-shard-0',\n logicalSessionTimeoutMinutes: undefined\n }\n}\n[nodemon] app crashed - waiting for file changes before starting...\nmongoose.connect(\"mongodb+srv://overlord:mt5E%[email protected]/ThunderDB\", { useNewUrlParser: true});\n", "text": "Greetings. This is my first implementation of a web app, and I’m not able to successfully deploy from GCP using a Compute Engine instance.From mongodb, I have whitelisted the instance’s external IP address. Are there other reasons I could be getting this error?This is the line from my code, using the cluster copied from mongodb.Thank you in advance for any advice!", "username": "SleepyWakes" }, { "code": "", "text": "I have whitelisted the instance’s external IP address.Allow access from Anywhere first and let us know the results.Try connecting with mongosh using the same URI.", "username": "steevej" }, { "code": "", "text": "When I allow full access, I no longer crash the application. And I can connect to mongodb through Mongodb Compass. However, I still get “This site can’t be reached” in my browser when I go to the URL of the application.", "username": "SleepyWakes" }, { "code": "", "text": "This site can’t be reachedIs not MongoDB related. Stackoverflow might be a better place to ask. What is the URL of the application?", "username": "steevej" }, { "code": "", "text": "Thanks, I’m certain I’m missing something simple given this is my first go at this. GCP is able to send a packet to the site, per their troubleshooting, but the URL gives me this error. Anyway, I will try Stackoverflow, thanks very much for the help here, Steeve!35.222.10.241 took too long to respond.", "username": "SleepyWakes" }, { "code": "", "text": "@SleepyWakes, you might want to remove access from everywhere and change your password as you’ve put your actual credentials in the connection string you posted and anyone who reads this, and wants to, can now access your system.Granted it looks like you’re in early stages of work as there are only two collections with one document each, but still, you don’t want someone wreaking havoc on your system.", "username": "Doug_Duncan" }, { "code": "", "text": "I cannot ping or traceroute to this IP35.222.10.241May be you have some firewall rules than prevents traffic. You mentioned GCP in your title. A Google Cloud Platform specific forum might exists for that.", "username": "steevej" }, { "code": "", "text": "Thanks for your patience with me, I know this must be frustrating. 
It turns out I gave you the wrong IP (I gave the external troubleshooting IP from GCP, ugh). It’s actually:\n34.173.96.254and appears to be pingable. I cannot figure out how I can run my server with 0.0.0.0/0 open but when I only whitelist the GCP external IP (the one above) it crashes. It’s obviously a whitelist issue, but that process is pretty simple. Could it be that my code is not properly accessing my Mongodb database?", "username": "SleepyWakes" }, { "code": "", "text": "To know which IP to open use https://www.whatismyip.com/ on the machine where you application is running.You might be interested with Announcing Google Private Service Connect (PSC) Integration for MongoDB Atlas | MongoDB Blog.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to connect to mongodb from GCP
2022-10-26T22:41:44.659Z
Unable to connect to mongodb from GCP
3,960
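
A quick way to separate network/IP-access-list problems from application code in threads like the one above is a bare-bones connectivity check with the Node.js driver instead of Mongoose. This is only a sketch: the connection string comes from an environment variable, and the 5-second timeout is an arbitrary choice, not something taken from the thread.

```js
// connectivity-check.js — minimal sketch; assumes the `mongodb` driver is installed and
// MONGODB_URI holds your own mongodb+srv:// string (placeholder, not the thread's credentials).
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient(process.env.MONGODB_URI, {
    serverSelectionTimeoutMS: 5000, // fail fast instead of hanging for the default 30s
  });
  try {
    await client.connect();
    // ping is a cheap round trip; if it succeeds, DNS/SRV resolution and the IP access list are fine
    console.log(await client.db("admin").command({ ping: 1 }));
  } catch (err) {
    // A server selection error here usually points at the Atlas IP access list, VPC routing,
    // or an egress firewall on the VM rather than at the application code.
    console.error("connection failed:", err.message);
  } finally {
    await client.close();
  }
}

main();
```

If this script succeeds from the GCP instance while the application still fails, the problem is in the app's configuration; if it times out, it is the network path or the access list.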
null
[]
[ { "code": "sudo sytemctl start mongodmongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2022-10-19 20:11:41 WAT; 47min ago\n Docs: https://docs.mongodb.org/manual\n Process: 5770 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=48)\n Main PID: 5770 (code=exited, status=48)\n\nOct 19 20:11:40 systemd[1]: Started MongoDB Database Server.\nOct 19 20:11:41 systemd[1]: mongod.service: Main process exited, code=exited, status=48/n/a\nOct 19 20:11:41 systemd[1]: mongod.service: Failed with result 'exit-code'.\n", "text": "i have been using sudo sytemctl start mongod for while without having issues but lately whenever i run the same command to start mongodb and try to check the status i keep getting this failed message.how do i fix this?", "username": "Victor_Adedeji" }, { "code": "ps -aef | grep [m]ongo\nss -tlnp\n", "text": "Please share the content of the log file and configuration file.Also share the output of:If I remember correctly status=48 means you have another process listening at the same address/port.", "username": "steevej" }, { "code": "{\"userId\":\"626bc8fbd5796d38ef89118f\",\"telemetryAnonymousId\":\"626bc8fbd5796d38ef89118f\",\"enableTelemetry\":true,\"disableGreetingMessage\":true}\nps -aef | grep [m]ongo outputmongodb 2122 1 0 11:12 ? 00:00:07 /usr/bin/mongod --config /etc/mongodb.conf\nss -tlnpState Recv-Q Send-Q Local Address:Port Peer Address:Port Process \nLISTEN 0 70 127.0.0.1:33060 0.0.0.0:* \nLISTEN 0 4096 127.0.0.1:27017 0.0.0.0:* \nLISTEN 0 151 127.0.0.1:3306 0.0.0.0:* \nLISTEN 0 511 0.0.0.0:80 0.0.0.0:* \nLISTEN 0 32 192.168.122.1:53 0.0.0.0:* \nLISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* \nLISTEN 0 5 127.0.0.1:631 0.0.0.0:* \nLISTEN 0 244 127.0.0.1:5432 0.0.0.0:* \nLISTEN 0 244 127.0.0.1:5433 0.0.0.0:* \nLISTEN 0 244 127.0.0.1:5434 0.0.0.0:* \nLISTEN 0 244 127.0.0.1:5435 0.0.0.0:* \nLISTEN 0 50 [::ffff:127.0.0.1]:9614 *:* users:((\"java\",pid=2715,fd=17)) \nLISTEN 0 511 [::]:80 [::]:* \nLISTEN 0 5 [::1]:631 [::]:* \n", "text": "here’s my config file content Steeveps -aef | grep [m]ongo outputss -tlnpplease, how do i go about handling the status=48 ?\nthanks.", "username": "Victor_Adedeji" }, { "code": "sudo fuser -k 27017/tcp\n", "text": "I have been able to fixed the issue.\n@steevej\nThe mongod service process tries to run on the same port and this was as a result of previously running mongod service with root privileged on ubuntu.\ni had to kill the process on the port mongod service is listening to without worrying if the service has been started by another user.", "username": "Victor_Adedeji" }, { "code": "", "text": "I am sorry but I did missed the post you made 14 days ago.Thanks for sharing the solution.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongod.service: Failed with result 'exit-code'
2022-10-19T20:09:45.574Z
Mongod.service: Failed with result 'exit-code'
13,984
null
[ "node-js" ]
[ { "code": "", "text": "Hi,\nI read in the docs, “you can define multiple JavaScript functions in a single function file. The file must export a single JavaScript function from to serve as the entrypoint for incoming calls.”Is there a way to call the “other” functions directly via context.functions.execute(), or do I have to call the single exported function and supply the subfunction name as a parameter and then execute the other function?\nFor instance it would be nice if this worked but it doesn’t\ncontext.functions.execute( ‘entrypointFunction.otherFunction’, param1);So would I need context.functions.execute( ‘entrypointFunction’, ‘otherFunction’, param1); and then the entrypointFunction does an execute of otherFunction?I’d like to group similar functions into a single file rather than having one file each.\nThanks!\nRainer", "username": "Rainer_Richter" }, { "code": "context", "text": "Hello @Rainer_Richter ,Its been a while since we heard from you, glad to have you back It appears that the multiple functions in the same file will have a starting function called via context as explained in the “Write a Function” section. The other functions can be called within the starting function or sequential calls.However, allow me some time to confirm the details with the team and I will update you.Your patience is much appreciated. Please don’t hesitate to ask if you have any more questions.Cheers, \nHenna", "username": "henna.s" } ]
Export multiple functions from a single file
2022-09-30T00:21:53.578Z
Export multiple functions from a single file
1,961
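
To make the workaround in the question concrete, here is one possible shape for such a function file. This is a sketch, not an official App Services feature: the helper names and parameters are hypothetical, and only the function assigned to `exports` is callable from outside the file.

```js
// entrypointFunction — groups several helpers in one function file.
function otherFunction(param1) {
  return "otherFunction received: " + param1;
}

function anotherFunction(param1) {
  return "anotherFunction received: " + param1;
}

exports = function (subFunctionName, param1) {
  // Dispatch on the first argument, since dotted names such as
  // "entrypointFunction.otherFunction" are not resolvable via context.functions.execute().
  switch (subFunctionName) {
    case "otherFunction":
      return otherFunction(param1);
    case "anotherFunction":
      return anotherFunction(param1);
    default:
      throw new Error("Unknown sub-function: " + subFunctionName);
  }
};
```

A caller would then use `context.functions.execute("entrypointFunction", "otherFunction", param1)`, which is exactly the second form the question describes.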
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "", "text": "MongoError: unknown top level operator: $in. If you have a field name that starts with a ‘$’ symbol, consider using $getField or $setField.\nat MessageStream.messageHandler (/app/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:299:20)\nat MessageStream.emit (node:events:513:28)\nat processIncomingData (/app/node_modules/mongoose/node_modules/mongodb/lib/cmap/message_stream.js:144:12)\nat MessageStream._write (/app/node_modules/mongoose/node_modules/mongodb/lib/cmap/message_stream.js:42:5)", "username": "ramanjaneya_karnati" }, { "code": "MongoError: unknown top level operator: $in$infindaggregate$in", "text": "MongoError: unknown top level operator: $inYou’re likely trying to use the $in operator directly without wrapping the operation correctly to perform either a find or and aggregate.Can you share the code where you’ve specified an $in?", "username": "alexbevi" }, { "code": "$infindaggregateI have function like this that returns the query:\nfunction getMovieFilterQuery(filters = {}){\n const query = {}\n\n if (filters.moviesIds !== undefined) {\n query.movieId = { $in: filters.movieIds };\n }\n\n return query;\n}\n\n", "text": "You’re likely trying to use the $in operator directly without wrapping the operation correctly to perform either a find or and aggregate .", "username": "ramanjaneya_karnati" }, { "code": "getMovieFilterQuerydb.foo.find({ $in: [ 'movie1', 'movie2' ] })\ndb.foo.find({ movieId: { $in: [ 'movie1', 'movie2' ] }})\n", "text": "@ramanjaneya_karnati you’ll likely need to trace the caller of getMovieFilterQuery to see how the function result is being passed to a query, but given the error you shared you’re likely sending a command similar to the following to the server:The server is expecting you to send:Stepping through your logic should surface the issue pretty quickly.", "username": "alexbevi" } ]
MongoError: unknown top level operator: $in. If you have a field name that starts with a '$' symbol, consider using $getField or $setField
2022-11-02T16:43:55.877Z
MongoError: unknown top level operator: $in. If you have a field name that starts with a '$' symbol, consider using $getField or $setField
8,510
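
A small sketch of how the helper from the thread is intended to be wired into a query may help readers hitting the same error. The `Movie` model is hypothetical, and note that the snippet in the thread checks `filters.moviesIds` but reads `filters.movieIds` — the property names need to agree.

```js
// Assumes Mongoose; the Movie model and the movieIds values are illustrative only.
const mongoose = require("mongoose");
const Movie = mongoose.model("Movie", new mongoose.Schema({ movieId: String }));

function getMovieFilterQuery(filters = {}) {
  const query = {};
  if (filters.movieIds !== undefined) {
    query.movieId = { $in: filters.movieIds }; // $in stays nested under a field name
  }
  return query;
}

async function listMovies() {
  // Correct: the returned object is the whole filter document,
  // i.e. { movieId: { $in: ["movie1", "movie2"] } }
  const filter = getMovieFilterQuery({ movieIds: ["movie1", "movie2"] });
  return Movie.find(filter);
}

// The server error in this thread appears when $in ends up at the top level instead,
// e.g. Movie.find({ $in: ["movie1", "movie2"] }) — that is the shape to look for in the caller.
```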
null
[ "aggregation", "indexes" ]
[ { "code": "collection_threshold: [\n {\n _id: abcd,\n attributeId: id_for_threshold_1\n threshold: 10\n },\n {\n _id: abcde,\n attributeId: id_for_threshold_2\n threshold: 5\n }\n]\n\ncollection_events: [\n {\n uuid: id_for_threshold_1,\n timestamp: 2022-04-01T11:00:00:00.000\n ...\n },\n {\n uuid: id_for_threshold_1,\n timestamp: 2022-03-31T11:00:00:00.000\n ...\n },\n {\n uuid: id_for_threshold_2,\n timestamp: 2022-04-01T11:00:00:00.000\n ...\n },\n {\n uuid: id_for_threshold_2,\n timestamp: 2022-03-31T11:00:00:00.000\n ...\n }\n]\n", "text": "We have a requirement to remove older documents based on an attribute when the count of the documents containing the said attribute goes over a threshold (which can be individually defined).For eg:In the above example the collection_events may hold only 10 documents with the uuid: id_for_threshold_1, when the 11th event gets written the oldest one (based on the timestamp) should automatically get deleted, similarly for the documents with the uuid: id_for_threshold_2, this should happen already for the 6th insert.Is there a way to achieve this using mongo indexes?", "username": "Jayesh_Sarswat" }, { "code": "", "text": "Hi @Jayesh_Sarswat ,For this complex logic you will need to implement some scheduled code which will at least set an expired time on the documents to be used by expiration delay of 0:Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Auto deletion of older documents based on count
2022-11-03T10:10:44.520Z
Auto deletion of older documents based on count
1,249
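
Pavel's suggestion — scheduled code that stamps an expiry time, combined with a TTL index using an expiration delay of 0 — could look roughly like the mongosh sketch below. Collection and field names come from the question; the `expireAt` field and the idea of running this from a cron job or scheduled trigger are assumptions added here.

```js
// One-time setup: documents are removed once their expireAt time has passed
// (the TTL monitor runs roughly once a minute).
db.collection_events.createIndex({ expireAt: 1 }, { expireAfterSeconds: 0 });

// Body of the scheduled job: for each threshold, mark everything older than the
// newest `threshold` documents as expired.
db.collection_threshold.find().forEach(function (t) {
  const overflow = db.collection_events
    .find({ uuid: t.attributeId, expireAt: { $exists: false } })
    .sort({ timestamp: -1 })   // newest first
    .skip(t.threshold)         // keep the newest `threshold` documents
    .toArray();

  if (overflow.length > 0) {
    db.collection_events.updateMany(
      { _id: { $in: overflow.map(function (d) { return d._id; }) } },
      { $set: { expireAt: new Date() } }
    );
  }
});
```

An index on `{ uuid: 1, timestamp: -1 }` would keep the per-threshold scan cheap as the events collection grows.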
null
[ "data-modeling", "indexes", "bucket-pattern" ]
[ { "code": "", "text": "Hello My team has a collection of ~45 million documents, each containing a timestamp and a geojson field.\nThe main use case is querying across both the timestamp and the geojson.The team had a few solutions:Using a compound index for the two fields - performance wasn’t on par with product expectations - on some cases search time was over 10 seconds.Using the bucket pattern (bucket per time range) for reducing search time, but because the query is across two fields we couldn’t properly use the geojson index.Bucketing by collection, i.e. a collection for every time range. This enables the use of the geojson index while still using buckets. I feel like that’s not a standard solution and requires more implementation at the application layer. This solution however, has the best performance - sub 1 second search timeI would like to hear from you your opinions about the solutions and whether there are some modeling patterns that I am missing", "username": "Matan_Shaked" }, { "code": "", "text": "Hi @Matan_Shaked ,Can you share some document samples, indexes and queries so we can get a better feel of what you are describing.If you found a performant solution while still with manageable coding overhead than that is perfectly fine and shows how mongoDB allows you to do whats best for you.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \"_id\" : ObjectId(\"63639134eff35d0dcb9b0aae\"),\n \"geoJson\" : {\n \"type\" : \"Feature\",\n \"properties\" : {\n \"timestamp\" : 1667469522345.0,\n \"sensorId\" : \"12345\"\n },\n \"geometry\" : {\n \"type\" : \"Point\",\n \"coordinates\" : [ \n 9.1594001938029, \n 45.4863023702389\n ]\n }\n }\n}\n{\n \"geoJson.properties.sensorId\" : 1.0,\n \"geoJson.properties.timestamp\" : 1.0,\n \"geoJson.geometry\" : \"2dsphere\"\n}\ndb.getCollection('telemetries').aggregate([\n {\n $match: {\n $and: [\n {\n \"geoJson.geometry\": {\n $geoWithin: {\n $center: [\n [\n 9.159400193802895,\n 45.48630237023892\n ],\n 0.3\n ]\n }\n }\n }, {\n $and: [\n { \"geoJson.properties.timestamp\": { $gt: 1647469522345 } },\n { \"geoJson.properties.timestamp\": { $lt: 1687469522345 } }\n ]\n }\n ]\n }\n }, {\n $group: {\n _id: \"$geoJson.properties.sensorId\"\n }\n }\n]);\n", "text": "Here is a minimal representation of our documents:This is one of the indexes that we tried:And here is an example query:", "username": "Matan_Shaked" }, { "code": "{\n \"geoJson.geometry\" : \"2dsphere\",\n \"geoJson.properties.sensorId\" : 1.0,\n \"geoJson.properties.timestamp\" : 1.0\n}\n", "text": "Hi @Matan_Shaked ,It looks as an index order of fields might work better if you index:Consider that since it follows better the ESR rule:Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Searching over a huge geojson collection
2022-11-03T08:30:57.141Z
Searching over a huge geojson collection
1,959
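
For readers who want to try the reordering suggested in the last reply, here it is as a mongosh command, together with an `explain` run of the thread's query so the two index orders can be compared on keys and documents examined. The `$match` below collapses the original nested `$and` into the equivalent shorter form; coordinates and timestamps are the sample values from the thread.

```js
// Index order proposed in the reply: geo field first, then sensorId and timestamp.
db.telemetries.createIndex({
  "geoJson.geometry": "2dsphere",
  "geoJson.properties.sensorId": 1,
  "geoJson.properties.timestamp": 1
});

// Re-run the thread's aggregation with executionStats to compare against the original index.
db.telemetries.explain("executionStats").aggregate([
  { $match: {
      "geoJson.geometry": { $geoWithin: { $center: [ [ 9.159400193802895, 45.48630237023892 ], 0.3 ] } },
      "geoJson.properties.timestamp": { $gt: 1647469522345, $lt: 1687469522345 }
  } },
  { $group: { _id: "$geoJson.properties.sensorId" } }
]);
```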
null
[ "node-js" ]
[ { "code": "TypeError: value.value is not a function\n at serializeBinary (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:567:20)\n at serializeInto (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:979:17)\n at serializeObject (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:347:18)\n at serializeInto (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:947:17)\n at serializeObject (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:347:18)\n at serializeInto (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:729:17)\n at serializeObject (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:347:18)\n at serializeInto (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:947:17)\n at serializeObject (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:347:18)\n at serializeInto (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:947:17)\n at serializeObject (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:347:18)\n at serializeInto (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:947:17)\n at serializeObject (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:347:18)\n at serializeInto (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:947:17)\n at serializeObject (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:347:18)\n at serializeInto (PROJECT_PATH/node_modules/mongodb/node_modules/bson/lib/bson/parser/serializer.js:947:17)\n\n", "text": "NPM package from mongodb - npm (npmjs.com)There is an unhandled reject for typeErrorPlease kindly have a look", "username": "H_W" }, { "code": "", "text": "Hi @H_W welcome to the community!Could you post some details into what command you ran that results in this error?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
TypeError: value.value is not a function happens on Node.js MongoDB
2022-10-28T07:48:24.114Z
TypeError: value.value is not a function happens on Node.js MongoDB
1,742
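
The stack trace in the question shows the BSON serializer calling `value.value()` on something it has classified as a Binary, so one way to narrow the problem down is to scan the document you are about to write for values that are tagged as Binary but no longer carry a callable `.value()` method (for example after a lossy copy or after mixing bson package versions — an assumption, not confirmed in the thread). The helper below is a diagnostic sketch only.

```js
// Walks a document (and nested objects/arrays) and reports paths whose value claims to be
// a BSON Binary (_bsontype === "Binary") but lacks the .value() method the serializer calls.
function findBrokenBinaries(doc, path = "") {
  const hits = [];
  for (const [key, val] of Object.entries(doc || {})) {
    const here = path ? path + "." + key : key;
    if (val && typeof val === "object") {
      if (val._bsontype === "Binary") {
        if (typeof val.value !== "function") {
          hits.push(here); // plain object masquerading as a Binary
        }
      } else {
        hits.push(...findBrokenBinaries(val, here));
      }
    }
  }
  return hits;
}

// Usage: log the offending paths just before the insert/update that throws.
// console.log(findBrokenBinaries(documentAboutToBeWritten));
```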
null
[ "atlas-cluster" ]
[ { "code": "", "text": "So I’m trying to attack an issue regarding data transfer costs to a cluster running on AWS.Recently we are getting hit with high data transfer charges for data transferred across AZs over a peered VPC connection.Now this would be pretty simple to solve if there was some kind of API call/information log that tells us what AZ the cluster has been deployed to. But I’ve google for hours and I can’t find any answers.Is there", "username": "Emmanuel_Kyeyune" }, { "code": "\"AZ1\"\"AZ1\"", "text": "Hi @Emmanuel_Kyeyune - Welcome to the communityUnfortunately you cannot choose the specific AZ in which the cluster’s nodes are deployed in. There is a similar feedback post you could vote for in terms of choosing AZ’s.Currently there is currently no way to determine which AZ the cluster’s primary and replica nodes are deployed in. There is also currently a feedback post which is similar to the request in your question.You may want to follow up with the Atlas support team to see if the VPC Flow logs are enabled or can be enabled for troubleshooting purposes.Just to clarify, AWS maps the physical Availability Zones randomly to the Availability Zone names for each AWS account. So \"AZ1\" for one particular account may not be the same physical AZ as \"AZ1\" for another.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there any way to determine the availability zone (AZ) of a cluster/nodes?
2022-10-31T22:47:26.328Z
Is there any way to determine the availability zone (AZ) of a cluster/nodes?
2,100
null
[ "queries" ]
[ { "code": "", "text": "Hi,\nWe have more than 100 dbs and I would like to know if there is a way to find what is the size of the max doc present here (1 max size doc and details of the document like object id, db name etc present in the whole of all db’s, not one max size for each db)\nRegards,\nChitra", "username": "chitra_Lakshminarayanan" }, { "code": "$bsonSize_iddb.mycollection.aggregate([\n\t{ $addFields: {\n\t\tbsonsize: { $bsonSize: \"$$ROOT\" }\n\t}},\n\t{ $sort: { bsonsize: -1 }},\n\t{ $limit: 1 },\n\t{ $project: {\n\t\t_id: 1,\n\t\tbsonsize: 1\n\t}}\n])\n", "text": "Welcome to the MongoDB Community @chitra_Lakshminarayanan !Finding the maximum document size across your whole deployment will require iterating through every document. The logic to do so is straightforward, but the impact on your deployment’s working set could be significant as it would be a full document scan for every collection.The general approach would be:To get you started, here is an example aggregation to find the size and _id of the largest document in a collection:Regards,\nStennie", "username": "Stennie_X" } ]
How to find the size of the largest document in all db's
2022-11-02T06:19:09.475Z
How to find the size of the largest document in all db's
4,218
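
Since the question asks about the whole deployment rather than one collection, the per-collection aggregation from the reply can be wrapped in a mongosh loop over every database and collection. As the reply warns, this scans every document, so it is best run against a secondary or off-peak; the system databases are skipped in this sketch.

```js
// mongosh sketch: find the single largest document across all non-system databases.
let largest = { size: -1 };

db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
  if (["admin", "config", "local"].includes(d.name)) return; // skip system databases
  const database = db.getSiblingDB(d.name);
  database.getCollectionNames().forEach(function (collName) {
    const top = database.getCollection(collName).aggregate([
      { $project: { bsonsize: { $bsonSize: "$$ROOT" } } },
      { $sort: { bsonsize: -1 } },
      { $limit: 1 }
    ]).toArray()[0];
    if (top && top.bsonsize > largest.size) {
      largest = { db: d.name, collection: collName, _id: top._id, size: top.bsonsize };
    }
  });
});

print(JSON.stringify(largest)); // db, collection, _id and BSON size (bytes) of the largest document
```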
https://www.mongodb.com/…e_2_1023x497.png
[ "serverless" ]
[ { "code": "", "text": "I have AWS Lambdas connecting to MongoAtlas serverless instance to run a simple findById query in one collection which contains only one record. I’m already reusing the Mongo connection across Lambdas as recommended but once I started a small load tests performance, I’m seeing the number of connections increasing on MongoAtlas Metrics and then the query is taking too much time to execute which results in Lambda timeouts (6s).I want to better understand the metrics, mainly the Read Units capacity because I’m reaching around 100K of read units during load tests and want to know what is the limit for Mongo.Some results:\nExecution Time (Average command operational latency): 2.6s\nHigh peak of connections: around 450\nNetwork (num of requests): around 283/sNote 1) I have index configured on the collection.\nNote 2) I’m not connecting to MongoAtlas through PrivateLink, I’m using AWS Nat Gateway for that. Wondering if PrivateLink could improve my test performance.Could you please help me understanding if these metrics explain my issue?Metrics:\n\nimage1635×794 65.3 KB\n\n\nimage1602×790 51.3 KB\n", "username": "Thiago_Scodeler" }, { "code": "iConnectionsdb.collection.getIndexes().explain(\"executionStats\")", "text": "Hi @Thiago_Scodeler,I am curious about the following:I have AWS Lambdas connecting to MongoAtlas serverless instance to run a simple findById query in one collection which contains only one record.Just to clarify, the findById, I presume you mean the mongoose function Model.findById(). Please correct me if I am wrong here. It seems this returns only a singular document :Finds a single document by its _id field.Can you confirm:but once I started a small load tests performanceCan you provide more details regarding the load tests being performed?In terms of the connection peak, based off the information provided, the Serverless instances can support up to 500 simultaneous connections as of the time of this message. More information on the Serverless - Operational Limitations and Considerations page.I’m seeing the number of connections increasing on MongoAtlas Metrics and then the query is taking too much time to execute which results in Lambda timeouts (6s).Have you tried altering the timeout to see whether the number of connections recorded increases?I want to better understand the metrics, mainly the Read Units capacity because I’m reaching around 100K of read units during load tests and want to know what is the limit for Mongo.Regarding the Serverless Instance metrics, you may wish to review the Review Serverless Metrics page for more information on the available metrics. However, in saying so, you can also view detailed information on the specific values recorded on the metrics charts by going to the metrics page and selecting the information button i as shown in the below example for the Connections metric:\nimage944×1190 46.6 KB\n\nimage1554×872 63.8 KB\nI do not believe there is a hardcoded upper limit value regarding the Read Units. 
You can review the following documentation for more information regarding Read Units and it’s pricing:Note 1) I have index configured on the collection.Can you provide the following details:In the meantime, you may also wish to review the Manage Connections with AWS Lambda documentation as well.Regards,\nJason", "username": "Jason_Tran" }, { "code": "\n{\n_id: ObjectId(\"6359c4a3783fe200522d4c44\"),\nabcd: 'AAAAA',\ntype: 'game',\ndescription: 'description',\nimage: 'image',\nproductUrl: 'productURL',\ntags: [],\nactive: false,\ncreatedAt: ISODate(\"2022-10-26T23:37:07.300Z\"),\nupdatedAt: ISODate(\"2022-10-26T23:37:07.300Z\"),\n__v: 0\n}\n[ { v: 2, key: { _id: 1 }, name: '_id_' } ]\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'dev.products',\n indexFilterSet: false,\n parsedQuery: { _id: { '$eq': ObjectId(\"6359cfa63e858ad8d44b1780\") } },\n queryHash: '740C02B0',\n planCacheKey: 'E351FFEC',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: { stage: 'IDHACK' },\n rejectedPlans: []\n },\n\n executionStats: {\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 0,\n totalKeysExamined: 1,\n totalDocsExamined: 1,\n executionStages: {\n stage: 'IDHACK',\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 1,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keysExamined: 1,\n docsExamined: 1\n }\n },\n command: {\n find: 'products',\n filter: { _id: ObjectId(\"6359cfa63e858ad8d44b1780\") },\n '$db': 'dev'\n },\n serverInfo: {\n host: 'serverlessinstance.abcde.mongodb.net',\n port: 27017,\n version: '6.1.0',\n gitVersion: ''\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 16793600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 33554432,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1666884172, i: 14 }),\n signature: {\n hash: Binary(Buffer.from(\"\", \"hex\"), 0),\n keyId: Long(\"\")\n }\n },\n operationTime: Timestamp({ t: 1666884172, i: 13 })\n}\n", "text": "Hi @Jason_Tran thanks for replying.Yes, your assumption about mongoose findById is correct.The collection where this operation is run on only contains 1 document\nYesThe operation using findById returns the same document in 1. above\nYesThe size of the document in the collection\nVery simple json:The full findById operation being used\ndb.products.find({_id: ObjectId(‘6359cfa63e858ad8d44b1780’)});Output of db.collection.getIndexes()By increasing the lambda timeout I got the same behavior.Can you provide more details regarding the load tests being performed?\nI’m using [artillery.io] Artillery to run a couple of requests to a API which goes to a lambda connecting to MongoAtlas.\nGetting the issue once I reach around 20req/s for a period of 20s. 400 requests in total.", "username": "Thiago_Scodeler" }, { "code": "", "text": "Thanks for providing those details Thiago.Just to clarify - Is your issue specific to performance, pricing or both? Or is it specific to the lambda timeouts being experienced during the load testing.Could you share more details of the load testing and what the goals of it are? 
Is this to flood the database with as much work as possible or is it simulation of expected workload in future?Lastly, is the ultimate goal to select between serverless or a shared (M2/M5) or dedicated instances (M10+) to see which will suit your workload?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_TranMy issue is specific to performance since queries and connections to Mongo Atlas are taking too much time when there a bunch of connections ongoing.My load test goals is to have the amount of requests my API (lambda) is able to handle during high peaks of usage. Basically the load test runs around 400 to 700 req/s for a period of time (5 minutes for instance) . This is the expected workload for the future.I already tested with serverless and dedicated instance (M30) and in both instances I got the same behavior (slow queries and slow new connections) when reaching too many connections to the database.", "username": "Thiago_Scodeler" } ]
MongoAtlas Metrics understanding
2022-10-26T11:15:26.264Z
MongoAtlas Metrics understanding
2,497
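
Since the symptoms in this thread (connection counts climbing under load and operations slowing down once they do) often come down to how the Lambdas handle their clients, a connection-reuse sketch in the spirit of the "Manage Connections with AWS Lambda" page linked above may be useful. The URI, database, collection and event shape are placeholders, not details from the thread.

```js
const { MongoClient, ObjectId } = require("mongodb");

// Created once per container, outside the handler, so warm invocations reuse the same pool
// instead of opening new connections on every request.
const client = new MongoClient(process.env.MONGODB_URI, { maxPoolSize: 10 });
const clientPromise = client.connect();

exports.handler = async function (event, context) {
  // Let Lambda freeze the container without waiting for the open socket pool to drain.
  context.callbackWaitsForEmptyEventLoop = false;

  const conn = await clientPromise;
  const product = await conn
    .db("dev")
    .collection("products")
    .findOne({ _id: new ObjectId(event.productId) });

  return { statusCode: 200, body: JSON.stringify(product) };
};
```

With this pattern the connection count should track the number of warm Lambda containers rather than the number of requests, which makes the Connections metric easier to interpret during a load test.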
null
[ "queries" ]
[ { "code": "\"a\":[\n\t{\n \t\"a_id\":1,\n \t\"b\":[\n\t\t\t{\n\t\t\t\t\"b_id\":2,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":3,\n\t\t\t\t\t\tsomeOtherMetadata...\n\t\t\t\t\t},...\n\t\t\t\t],\n\t\t\t\tsomeOtherMetadata...\n\t\t\t},...\n\t\t],\n\t\tsomeOtherMetadata...\n },...\n]\n\n\"a\":[\n\t{\n \t\"a_id\":1,\n\t\t\"count\":1,\n \t\"b\":[\n\t\t\t{\n\t\t\t\t\"b_id\":2,\n\t\t\t\t\"count\":1,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":3,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t},...\n\t\t\t\t],\n\t\t\t},...\n\t\t],\n },...\n]\n\n{\"a_id\":1, \"b_id\":1, \"c_id\":1}\"a\":[\n\t{\n \t\"a_id\":1,\n\t\t\"count\":1,\n \t\"b\":[\n\t\t\t{\n\t\t\t\t\"b_id\":1,\n\t\t\t\t\"count\":1,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":1,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t]\n }\n]\n\n {\"a_id\":1, \"b_id\":2, \"c_id\":1}\"a\":[\n\t{\n \t\"a_id\":1,\n\t\t\"count\":2,\n \t\"b\":[\n\t\t\t{\n\t\t\t\t\"b_id\":1,\n\t\t\t\t\"count\":1,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":1,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"b_id\":2,\n\t\t\t\t\"count\":1,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":1,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t]\n }\n]\n\n\"a\":[\n\t{\n \t\"a_id\":1,\n\t\t\"count\":3,\n \t\"b\":[\n\t\t\t{\n\t\t\t\t\"b_id\":1,\n\t\t\t\t\"count\":2,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":1,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":2,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"b_id\":2,\n\t\t\t\t\"count\":1,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":1,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t]\n }\n]\n\"a\":[\n\t{\n \t\"a_id\":1,\n\t\t\"count\":4,\n \t\"b\":[\n\t\t\t{\n\t\t\t\t\"b_id\":1,\n\t\t\t\t\"count\":3,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":1,\n\t\t\t\t\t\t\"count\":2\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":2,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"b_id\":2,\n\t\t\t\t\"count\":1,\n\t\t\t\t\"c\":[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"c_id\":1,\n\t\t\t\t\t\t\"count\":1\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t]\n }\n]\n", "text": "Hello,\nI’m relatively new to MongoDB.\nI have to make a backend, where i can count a multi level object structure items. Every time a data arrive i have to insert into the db or if it’s exists in the db then increment the counter. But i have to do it in every level.Example:\nOn an another database i have a lot of data with this structure:With a lot of “A”-s and a lot of “B”-s inside every “A” and a lot of “C”-s inside every “B”.And i have to store and count it in MongoDB if somebody select one “C” (the deepest level) item. I want to store the same structure without metadata and with a counter in the db but only add item on the first call.For example i have an initially empty db and the first call arrive with: {\"a_id\":1, \"b_id\":1, \"c_id\":1} (The call is always contains the deepest level id and all id above)And it inserts to the db:And the next call with: {\"a_id\":1, \"b_id\":2, \"c_id\":1}Then the db become this:The “A” counter incremented and the new “B” inserted into the A’s array.The next call with: {“a_id”:1, “b_id”:1, “c_id”:2}Then the db become this:The next call with: {“a_id”:1, “b_id”:1, “c_id”:1}Then the db become this:The count of an item is always the sum of all nested items count inside it.If “A” exists then increment, otherwise insert. 
If “B” exists inside the “A” then increment, otherwise insert and so on with “C”…I don’t have any idea how i can achieve this.", "username": "Tamas_Szabo" }, { "code": "", "text": "Hi @Tamas_Szabo ,Welcome to The MongoDB Community Forums! The schema design that you described seems to be very hard to work with. As designed, I think this schema would be difficult to index, thus query performance would suffer. Typically schema design in MongoDB follows how the data would be used. Could you elaborate on the use case so maybe there are some suggestions that can be made? I believe these documentations and blogs below would be useful for your with regard to schema designRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi @Tarun_Gaur,Thanks for the answer.The reason why i choose this schema is the querying of the data.\nThe write speed of the data is not important. I’m not even check the result of the add operation on the frontend. If some of the requests fails, due to the network or other reasons, it’s not a problem for me. I don’t need the exact numbers (count), this is only statistical data. The important part is the scale and the ratio.The important and time critical part is the querying.\nWhen i need the data:And the list of “A” objects is not to big. (few thousand) Because this “A” objects only used for a short period of time and after that time i delete it from the db.That’s why i used this schema.I tought, this schema is perfect for this type of queries, but if you know a better solution, please tell me.p.s.: A few hours ago (after a lot of reading) finally i was able to write an update pipeline to solve this problem. I will post it here, maybe it will help somebody. But before that, i want to write a description to it.", "username": "Tamas_Szabo" }, { "code": "{ \"items\":[...], \"updated\":true }{ \"items\":[...], \"updated\":true }\"updated\":true\"updated\":falsedb.collection.update({\n \"a_id\": A_ID\n},\n[\n {\n $set: {\n count: {\n $add: [\n {\n $ifNull: [\n \"$count\",\n 0\n ]\n },\n 1\n ]\n },\n b: {\n $let: {\n vars: { \"data\":\n {\n $reduce: {\n input: {\n $ifNull: [\n \"$b\",\n []\n ]\n },\n initialValue: {\n \"items\": [],\n \"updated\": false\n },\n in: {\n $cond: [\n {\n $eq: [\n \"$$this.b_id\",\n B_ID\n ]\n },\n {\n \"items\": {\n $concatArrays: [\n \"$$value.items\",\n [\n {\n \"b_id\": \"$$this.b_id\",\n \"count\": {\n $add: [\n \"$$this.count\",\n 1\n ]\n },\n \"c\": {\n $let: {\n vars: { \"data\":\n {\n $reduce: {\n input: {\n $ifNull: [\n \"$$this.c\",\n []\n ]\n },\n initialValue: {\n \"items\": [],\n \"updated\": false\n },\n in: {\n $cond: [\n {\n $eq: [\n \"$$this.c_id\",\n C_ID\n ]\n },\n {\n \"items\": {\n $concatArrays: [\n \"$$value.items\",\n [\n {\n \"c_id\": \"$$this.c_id\",\n \"count\": {\n $add: [\n \"$$this.count\",\n 1\n ]\n }\n }\n ]\n ]\n },\n \"updated\": true\n },\n {\n \"items\": {\n $concatArrays: [\n \"$$value.items\",\n [\n \"$$this\"\n ]\n ]\n },\n \"updated\": \"$$value.updated\"\n }\n ]\n }\n }\n }\n },\n in: {\n $cond: [\n {\n $eq: [\n \"$$data.updated\",\n false\n ]\n },\n {\n $concatArrays: [\n \"$$data.items\",\n [\n {\n \"c_id\": C_ID,\n \"count\": 1\n }\n ]\n ]\n },\n {\n $concatArrays: [\n \"$$data.items\",\n []\n ]\n }\n ]\n }\n }\n }\n }\n ]\n ]\n },\n \"updated\": true\n },\n {\n \"items\": {\n $concatArrays: [\n \"$$value.items\",\n [\n \"$$this\"\n ]\n ]\n },\n \"updated\": \"$$value.updated\"\n }\n ]\n }\n }\n }\n },\n in: {\n $cond: [\n {\n $eq: [\n \"$$data.updated\",\n false\n ]\n },\n {\n $concatArrays: 
[\n \"$$data.items\",\n [\n {\n \"b_id\": B_ID,\n \"count\": 1,\n \"c\": [\n {\n \"c_id\": C_ID,\n \"count\": 1,\n \n }\n ]\n }\n ]\n ]\n },\n {\n $concatArrays: [\n \"$$data.items\",\n []\n ]\n }\n ]\n }\n },\n \n }\n }\n }\n],\n{\n \"upsert\": true\n})\n", "text": "As i writed in the previous comment, i found a solution.\nAnd now i post it with some description, maybe it will help someone.Big thanks to @Prasad_Saya, who made a solution to a similar problem in here: Prasad_Saya’s post\nThe only difference is that, this solution works only with two level. And i needed a three level solution.I used a same $reduce method to produce this object: { \"items\":[...], \"updated\":true } but after i transformed the object to the original array with a $let and not with a new aggregation step.The process is:And the full update pipeline is:", "username": "Tamas_Szabo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upsert multilevel nested object in every level
2022-10-27T20:28:35.391Z
Upsert multilevel nested object in every level
1,633
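
The full three-level pipeline above is easier to follow once its single-level building block is isolated: one `$reduce` that produces `{ items, updated }`, and one `$let` that either keeps the updated array or appends a new child. The mongosh sketch below shows only that one level (the `b` array) with placeholder ids and collection name, so it is an illustration of the pattern rather than a drop-in replacement for the full solution.

```js
// Placeholder ids standing in for the incoming call, e.g. { a_id: 1, b_id: 2 }.
const A_ID = 1, B_ID = 2;

db.collection.updateOne(
  { a_id: A_ID },
  [
    { $set: {
        count: { $add: [ { $ifNull: ["$count", 0] }, 1 ] },   // parent counter
        b: {
          $let: {
            vars: {
              data: {
                $reduce: {
                  input: { $ifNull: ["$b", []] },
                  initialValue: { items: [], updated: false },
                  in: {
                    $cond: [
                      { $eq: ["$$this.b_id", B_ID] },
                      { items: { $concatArrays: [ "$$value.items",
                          [ { b_id: "$$this.b_id", count: { $add: ["$$this.count", 1] } } ] ] },
                        updated: true },
                      { items: { $concatArrays: ["$$value.items", ["$$this"]] },
                        updated: "$$value.updated" }
                    ]
                  }
                }
              }
            },
            in: {
              $cond: [
                { $eq: ["$$data.updated", false] },
                { $concatArrays: ["$$data.items", [ { b_id: B_ID, count: 1 } ] ] }, // append new child
                "$$data.items"                                                       // keep updated array
              ]
            }
          }
        }
    } }
  ],
  { upsert: true }
);
```

The full answer nests this same pattern once more for the `c` array inside the matched `b` element.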
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.8.6 of the MongoDB Go Driver.This release fixes a severe bug in SRV polling which may prevent changes in SRV records from updating the servers that the Go Driver attempts to connect to when the MongoDB connection string includes a username and password. For more information please see the 1.8.6 release notes.You can obtain the driver source from GitHub under the v1.8.6 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team", "username": "Qingyang_Hu1" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.8.6 Released
2022-11-02T20:17:30.887Z
MongoDB Go Driver 1.8.6 Released
1,664
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.9.3 of the MongoDB Go Driver.This release fixes a severe bug in SRV polling which may prevent changes in SRV records from updating the servers that the Go Driver attempts to connect to when the MongoDB connection string includes a username and password. For more information please see the 1.9.3 release notes.You can obtain the driver source from GitHub under the v1.9.3 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team", "username": "Qingyang_Hu1" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.9.3 Released
2022-11-02T19:45:10.320Z
MongoDB Go Driver 1.9.3 Released
1,561
null
[ "queries", "atlas-search" ]
[ { "code": "textautocomplete{\n\t\"index\": \"test_index\",\n\t\"compound\": {\n\t\t\"filter\": [\n\t\t\t{\n\t\t\t\t\"text\": {\n\t\t\t\t\t\"query\": [\n\t\t\t\t\t\t\"111111111111\"\n\t\t\t\t\t],\n\t\t\t\t\t\"path\": \"ProductId\"\n\t\t\t\t}\n\t\t\t},\n\t\t],\n\t\t\"must\": [\n\t\t\t{\n\t\t\t\t\"autocomplete\": {\n\t\t\t\t\t\"query\": [\n\t\t\t\t\t\t\"word1\"\n\t\t\t\t\t],\n\t\t\t\t\t\"path\": \"fieldA\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"autocomplete\": {\n\t\t\t\t\t\"query\": [\n\t\t\t\t\t\t\"stopWord\",\n\t\t\t\t\t],\n\t\t\t\t\t\"path\": \"fieldA\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"autocomplete\": {\n\t\t\t\t\t\"query\": [\n\t\t\t\t\t\t\"word2\"\n\t\t\t\t\t],\n\t\t\t\t\t\"path\": \"fieldA\"\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t},\n\t\"count\": {\n\t\t\"type\": \"lowerBound\",\n\t\t\"threshold\": 500\n\t}\n}\n \"Content\": [\n {\n \"analyzer\": \"lucene.swedish\",\n \"minGrams\": 4,\n \"tokenization\": \"nGram\",\n \"type\": \"autocomplete\"\n },\n {\n \"analyzer\": \"lucene.swedish\",\n \"type\": \"string\"\n }\n ],\n", "text": "Hello!Our team is trying to implement search using MongoDB Atlas and got a problem with getting expected results when our query contains a stop word. The problem occurs when we want to use AND condition via compound operator.\nBy default it seems that when you use text or autocomplete operator then Atlas looks for a match for each term in the string separately. So it means if you type more words you will get more results, so this is kinda OR condition.\nTo handle that we are splitting each term into separate autocomplete operator, this gives us an AND condition behavior, but this is not working when stop-word occurs as one of the terms.\nFor example, we want all words (with/without stop word) to be included in found document:\nword1 stopWord word2Our query looks like:Expected result: documents containing “word1”, “word2” (with/without stopWord)\nActual result: no any document foundOur index uses “lucene.swedish” analyzer:The question is how to get all documents containing all words with/without a stop word?", "username": "Jelena_Arsinova" }, { "code": "", "text": "Hi there, could you provide a concrete example or sample document we can use to try to replicate the issue ourselves?", "username": "Elle_Shwer" }, { "code": "{\n \"_id\": \"111111111122\",\n \"ProductId\": \"111111111111\",\n \"Name\": \"Testdokument Jelena\",\n \"Url\": \"/test-portal/test-page-jelena\",\n \"Content\": \"Testdokument Jelena Vidare vill regeringen införa ändringar som medför skyldighet för Försäkringskassan och kommunerna att informera Inspektionen för vård och omsorg när en enskild kan antas bedriva verksamhet för personlig assistans utan tillstånd.\",\n \"Description\": \"Testdokument Jelena Vidare vill regeringen införa ändringar som medför skyldighet för Försäkringskassan och kommunerna att informera Inspektionen för vård och omsorg när en enskild...\",\n \"AccessItems\": [\n \"Admin\",\n ],\n \"FilterRoute\": \"test-page-jelena\",\n \"TypeOfContent\": \"page\"\n}\n", "text": "Hello!Yes, here is the example of document:I can find this document if I search for: Testdokument kommuner\nBut cannot find it if I search for:\nTestdokument kommuner att\nTestdokument kommuner och\nTestdokument kommuner förWe search in Content field using index that is mentioned in the first post.", "username": "Jelena_Arsinova" }, { "code": "{\n\t\"mappings\": {\n\t\t\"dynamic\": false,\n\t\t\"fields\": {\n\t\t\t\"ProductId\": {\n\t\t\t\t\"type\": \"string\"\n\t\t\t},\n\t\t\t\"Content\": 
[\n\t\t\t\t{\n\t\t\t\t\t\"analyzer\": \"lucene.swedish\",\n\t\t\t\t\t\"minGrams\": 4,\n\t\t\t\t\t\"tokenization\": \"nGram\",\n\t\t\t\t\t\"type\": \"autocomplete\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t}\n}\n compound: {\n\t\t\tfilter: [\n\t\t\t\t{\n\t\t\t\t\ttext: {\n\t\t\t\t\t\tquery: [\"111111111111\"],\n\t\t\t\t\t\tpath: \"ProductId\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t],\n\t\t\tmust: [\n\t\t\t\t{\n\t\t\t\t\tautocomplete: {\n\t\t\t\t\t\tquery: \"Testdokument kommuner <anything here>\",\n\t\t\t\t\t\tpath: \"Content\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t],\n\t\t},\n", "text": "If the requirement is as written in the post, a simple index like:Allows a search like:To match the given document.\nNote that the query doesn’t need to be split beforehand in separate terms.If there are more requirements like:Please let us know, but in testing this should work for you!Also thanks to @Alan_Reyes for helping with this one.", "username": "Elle_Shwer" }, { "code": "", "text": "Thank you for your fast reply, but we are splitting in separate terms to have AND condition behavior, it means we want all words to be in the document (with/without stop word). With your query we find the documents with OR condition, that means a found document contains at least one from the words.", "username": "Jelena_Arsinova" }, { "code": "must", "text": "must works like an AND statement – see docs here, does this work when we don’t take stop words into consideration? I wonder if that is where the issue is specifically. In other words, that you specifically WANT to index stop words?", "username": "Elle_Shwer" }, { "code": "", "text": "As is mentioned in the example above, with must we can find documents if not to use stop word in the search, but if we use it then document is not found even though it has all words including a stop word.\nLooks like we need to index stop words. Is it possible and how?", "username": "Jelena_Arsinova" }, { "code": "", "text": "If you want to match also stopwords, you can use the simple , whitespace or even keyword analyzers (If you are already storing the individual words as a string)", "username": "Elle_Shwer" } ]
Search including stop words in MongoDB Atlas
2022-10-28T10:21:18.440Z
Search including stop words in MongoDB Atlas
2,821
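
Following the last reply, keeping stop words searchable comes down to indexing the field with an analyzer that does not strip them. A possible index definition is sketched below — it swaps `lucene.swedish` for `lucene.whitespace` on the autocomplete mapping from the thread, which keeps words like "och", "att" and "för" in the index at the cost of losing Swedish stemming. The gram sizes are illustrative only; minGrams must be no larger than the shortest word you want to match.

```js
// Atlas Search index definition (shown as a JS object literal) for the Content field.
const indexDefinition = {
  mappings: {
    dynamic: false,
    fields: {
      ProductId: { type: "string" },
      Content: [
        {
          type: "autocomplete",
          analyzer: "lucene.whitespace", // no stop-word removal, no stemming
          tokenization: "nGram",
          minGrams: 3,                   // small enough to cover 3-letter stop words
          maxGrams: 7
        }
      ]
    }
  }
};
```

With a mapping along these lines, the compound query from the question ("Testdokument", "kommuner" plus a stop word, each in its own `must` clause) can match the sample document, since the stop word is no longer dropped at index time.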
null
[ "queries", "replication", "serverless", "storage" ]
[ { "code": ":~$ mongod --version\ndb version v6.0.2\nBuild Info: {\n \"version\": \"6.0.2\",\n \"gitVersion\": \"94fb7dfc8b974f1f5343e7ea394d0d9deedba50e\",\n \"openSSLVersion\": \"OpenSSL 1.1.1f 31 Mar 2020\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu2004\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n\n:~$ sudo service mongod start\n{\"t\":{\"$date\":\"2022-11-01T19:54:10.439+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-11-01T19:54:10.447+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-11-01T19:54:10.453+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-11-01T19:54:10.464+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-11-01T19:54:10.465+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-11-01T19:54:10.465+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\nInvalid command: start\nOptions:\n --networkMessageCompressors arg (=snappy,zstd,zlib)\n Comma-separated list of compressors to\n use for network messages\n\nGeneral options:\n -h [ --help ] Show this usage information\n --version Show version information\n -f [ --config ] arg Configuration file specifying\n additional options\n --configExpand arg Process expansion directives in config\n file (none, exec, rest)\n --port arg Specify port number - 27017 by default\n --ipv6 Enable IPv6 support (disabled by\n default)\n --listenBacklog arg (=4096) Set socket listen backlog size\n --maxConns arg (=1000000) Max number of simultaneous connections\n --pidfilepath arg Full path to pidfile (if not set, no\n pidfile is created)\n --timeZoneInfo arg Full path to time zone info directory,\n e.g. /usr/share/zoneinfo\n --nounixsocket Disable listening on unix sockets\n --unixSocketPrefix arg Alternative directory for UNIX domain\n sockets (defaults to /tmp)\n --filePermissions arg Permissions to set on UNIX domain\n socket file - 0700 by default\n --fork Fork server process\n -v [ --verbose ] [=arg(=v)] Be more verbose (include multiple times\n for more verbosity e.g. 
-vvvvv)\n --quiet Quieter output\n --logpath arg Log file to send write to instead of\n stdout - has to be a file, not\n directory\n --syslog Log to system's syslog facility instead\n of file or stdout\n --syslogFacility arg syslog facility used for mongodb syslog\n message\n --logappend Append to logpath instead of\n over-writing\n --logRotate arg Set the log rotation behavior\n (rename|reopen)\n --timeStampFormat arg Desired format for timestamps in log\n messages. One of iso8601-utc or\n iso8601-local\n --setParameter arg Set a configurable parameter\n --bind_ip arg Comma separated list of ip addresses to\n listen on - localhost by default\n --bind_ip_all Bind to all ip addresses\n --noauth Run without security\n --transitionToAuth For rolling access control upgrade.\n Attempt to authenticate over outgoing\n connections and proceed regardless of\n success. Accept incoming connections\n with or without authentication.\n --slowms arg (=100) Value of slow for profile and console\n log\n --slowOpSampleRate arg (=1) Fraction of slow ops to include in the\n profile and console log\n --profileFilter arg Query predicate to control which\n operations are logged and profiled\n --auth Run with security\n --clusterIpSourceAllowlist arg Network CIDR specification of permitted\n origin for `__system` access\n --profile arg 0=off 1=slow, 2=all\n --cpu Periodically show cpu and iowait\n utilization\n --sysinfo Print some diagnostic system\n information\n --noscripting Disable scripting engine\n --notablescan Do not allow table scans\n --shutdown Kill a running server (for init\n scripts)\n --keyFile arg Private key for cluster authentication\n --clusterAuthMode arg Authentication mode used for cluster\n authentication. Alternatives are\n (keyFile|sendKeyFile|sendX509|x509)\n\nReplication options:\n --oplogSize arg Size to use (in MB) for replication op\n log. default is 5% of disk space (i.e.\n large is good)\n\nReplica set options:\n --replSet arg arg is <setname>[/<optionalseedhostlist\n >]\n --enableMajorityReadConcern [=arg(=1)] (=1)\n Enables majority readConcern.\n enableMajorityReadConcern=false is no\n longer supported\n\nServerless mode:\n --serverless arg Serverless mode implies replication is\n enabled, cannot be used with replSet or\n replSetName.\n\nSharding options:\n --configsvr Declare this is a config db of a\n cluster; default port 27019; default\n dir /data/configdb\n --shardsvr Declare this is a shard db of a\n cluster; default port 27018\n\nStorage options:\n --storageEngine arg What storage engine to use - defaults\n to wiredTiger if no data files present\n --dbpath arg Directory for datafiles - defaults to\n /data/db\n --directoryperdb Each database will be stored in a\n separate directory\n --syncdelay arg (=60) Seconds between disk syncs\n --journalCommitInterval arg (=100) how often to group/batch commit (ms)\n --upgrade Upgrade db if needed\n --repair Run repair on all dbs\n --restore This should only be used when restoring\n from a backup. Mongod will behave\n differently by handling collections\n with missing data files, allowing\n database renames, skipping oplog\n entries for collections not restored\n and more.\n --journal Enable journaling\n --nojournal Disable journaling (journaling is on by\n default for 64 bit)\n --oplogMinRetentionHours arg (=0) Minimum number of hours to preserve in\n the oplog. Default is 0 (turned off).\n Fractions are allowed (e.g. 
1.5 hours)\n\nWiredTiger options:\n --wiredTigerCacheSizeGB arg Maximum amount of memory to allocate\n for cache; Defaults to 1/2 of physical\n RAM\n --zstdDefaultCompressionLevel arg (=6)\n Default compression level for zstandard\n compressor\n --wiredTigerJournalCompressor arg (=snappy)\n Use a compressor for log records\n [none|snappy|zlib|zstd]\n --wiredTigerDirectoryForIndexes Put indexes and data in different\n directories\n --wiredTigerCollectionBlockCompressor arg (=snappy)\n Block compression algorithm for\n collection data [none|snappy|zlib|zstd]\n --wiredTigerIndexPrefixCompression arg (=1)\n Use prefix compression on row-store\n leaf pages\n\nFree Monitoring Options:\n --enableFreeMonitoring arg Enable Cloud Free Monitoring\n (on|runtime|off)\n --freeMonitoringTag arg Cloud Free Monitoring Tags\n\nAWS IAM Options:\n --awsIamSessionToken arg AWS Session Token for temporary\n credentials\n\nTLS Options:\n --tlsOnNormalPorts Use TLS on configured ports\n --tlsMode arg Set the TLS operation mode\n (disabled|allowTLS|preferTLS|requireTLS\n )\n --tlsCertificateKeyFile arg Certificate and key file for TLS\n --tlsCertificateKeyFilePassword arg Password to unlock key in the TLS\n certificate key file\n --tlsClusterFile arg Key file for internal TLS\n authentication\n --tlsClusterPassword arg Internal authentication key file\n password\n --tlsCAFile arg Certificate Authority file for TLS\n --tlsClusterCAFile arg CA used for verifying remotes during\n inbound connections\n --tlsCRLFile arg Certificate Revocation List file for\n TLS\n --tlsDisabledProtocols arg Comma separated list of TLS protocols\n to disable [TLS1_0,TLS1_1,TLS1_2,TLS1_3\n ]\n --tlsAllowConnectionsWithoutCertificates\n Allow client to connect without\n presenting a certificate\n --tlsAllowInvalidHostnames Allow server certificates to provide\n non-matching hostnames\n --tlsAllowInvalidCertificates Allow connections to servers with\n invalid certificates\n --tlsLogVersions arg Comma separated list of TLS protocols\n to log on connect [TLS1_0,TLS1_1,TLS1_2\n ,TLS1_3]\n", "text": "Ubuntu 22.04 and MongoDB 6.0I followed guides and tried installing mongodb-org many times in my Ubuntu 22.04. I have the following error when I do “sudo service mongod start/stop/status”. It says “Invalid command: start/stop/status”. Does anyone know the reason or fix for this? Any help is appreciated. Thanks.", "username": "Herman_TAM" }, { "code": "", "text": "I used to have this error ‘mongod: unrecognized service’ when I do “sudo service mongod start/stop/status”. 
Somehow the next day, the error becomes ‘Invalid command: start/stop/status’.", "username": "Herman_TAM" }, { "code": ":~$ mongod -f /etc/mongod.conf\n{\"t\":{\"$date\":\"2022-11-02T11:34:24.761Z\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20574, \"ctx\":\"-\",\"msg\":\"Error during global initialization\",\"attr\":{\"error\":{\"code\":38,\"codeName\":\"FileNotOpen\",\"errmsg\":\"Can't initialize rotatable log file :: caused by :: Failed to open /var/log/mongodb/mongod.log\"}}}\n", "text": "", "username": "Herman_TAM" }, { "code": "", "text": "I do sudo chmod 777 -R /var/log/mongodb and it returns nothing now.", "username": "Herman_TAM" }, { "code": "{\"t\":{\"$date\":\"2022-11-02T13:58:09.898+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-11-02T13:58:09.907+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalC>{\"t\":{\"$date\":\"2022-11-02T13:58:09.943+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSiz>{\"t\":{\"$date\":\"2022-11-02T13:58:10.290+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDo>{\"t\":{\"$date\":\"2022-11-02T13:58:10.290+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrati>{\"t\":{\"$date\":\"2022-11-02T13:58:10.290+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:10.292+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-11-02T13:58:10.294+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":102,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-BED>{\"t\":{\"$date\":\"2022-11-02T13:58:10.294+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.2\",\"gitVersion\":\"94fb7dfc8b974f1f5343e7ea394d0d9deedba50e\",\"openSSLVersi>{\"t\":{\"$date\":\"2022-11-02T13:58:10.294+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:10.295+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"p>{\"t\":{\"$date\":\"2022-11-02T13:58:10.303+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio 
socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\">{\"t\":{\"$date\":\"2022-11-02T13:58:10.309+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7475M,session_max=33000,eviction=(threads_min=4,threads_max=4),c>{\"t\":{\"$date\":\"2022-11-02T13:58:10.473+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":22,\"message\":{\"ts_sec\":1667397490,\"ts_usec\":468902,\"thread\":\"102:0x7f22fff1124>{\"t\":{\"$date\":\"2022-11-02T13:58:10.473+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":-31804,\"message\":{\"ts_sec\":1667397490,\"ts_usec\":473892,\"thread\":\"102:0x7f22fff>{\"t\":{\"$date\":\"2022-11-02T13:58:10.475+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23089, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":50853,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp\",\"line\":652}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:10.475+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23090, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2022-11-02T13:58:10.475+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"initandlisten\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.179+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"initandlisten\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7F2306521564\",\"b\":\"7F230177E000\",\"o\":\"4DA3564\",\"s\":\"_ZN5mongo18stack_trace_d>{\"t\":{\"$date\":\"2022-11-02T13:58:11.179+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2306521564\",\"b\":\"7F230177E000\",\"o\":\"4DA3564\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL_>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2306523AA9\",\"b\":\"7F230177E000\",\"o\":\"4DA5AA9\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"C\":\"mong>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230651D786\",\"b\":\"7F230177E000\",\"o\":\"4D9F786\",\"s\":\"abruptQuit\",\"s+\":\"66\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2300F82520\",\"b\":\"7F2300F40000\",\"o\":\"42520\",\"s\":\"__sigaction\",\"s+\":\"50\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2300FD6A7C\",\"b\":\"7F2300F40000\",\"o\":\"96A7C\",\"s\":\"pthread_kill\",\"s+\":\"12C\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2300F82476\",\"b\":\"7F2300F40000\",\"o\":\"42476\",\"s\":\"raise\",\"s+\":\"16\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", 
\"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2300F687F3\",\"b\":\"7F2300F40000\",\"o\":\"287F3\",\"s\":\"abort\",\"s+\":\"D3\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230366C2F6\",\"b\":\"7F230177E000\",\"o\":\"1EEE2F6\",\"s\":\"_ZN5mongo25fassertFailedWithLocationEiP>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2303221A10\",\"b\":\"7F230177E000\",\"o\":\"1AA3A10\",\"s\":\"_ZN5mongo12_GLOBAL__N_141mdb_handle_err>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F23041C0471\",\"b\":\"7F230177E000\",\"o\":\"2A42471\",\"s\":\"__eventv\",\"s+\":\"E61\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2303235BF5\",\"b\":\"7F230177E000\",\"o\":\"1AB7BF5\",\"s\":\"__wt_panic_func\",\"s+\":\"13A\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230418D23F\",\"b\":\"7F230177E000\",\"o\":\"2A0F23F\",\"s\":\"__posix_sync\",\"s+\":\"4F\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230418D36F\",\"b\":\"7F230177E000\",\"o\":\"2A0F36F\",\"s\":\"__posix_directory_sync\",\"s+\":\"10F\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230418E54A\",\"b\":\"7F230177E000\",\"o\":\"2A1054A\",\"s\":\"__posix_open_file\",\"s+\":\"44A\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2304189E15\",\"b\":\"7F230177E000\",\"o\":\"2A0BE15\",\"s\":\"__wt_open\",\"s+\":\"2C5\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F23041F53EC\",\"b\":\"7F230177E000\",\"o\":\"2A773EC\",\"s\":\"__wt_block_manager_create\",\"s+\":\"6C\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F23041A5EF2\",\"b\":\"7F230177E000\",\"o\":\"2A27EF2\",\"s\":\"__schema_create\",\"s+\":\"16D2\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F23041A34A7\",\"b\":\"7F230177E000\",\"o\":\"2A254A7\",\"s\":\"__wt_schema_create\",\"s+\":\"67\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2304186DBE\",\"b\":\"7F230177E000\",\"o\":\"2A08DBE\",\"s\":\"__wt_turtle_init\",\"s+\":\"155E\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", 
\"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230413788D\",\"b\":\"7F230177E000\",\"o\":\"29B988D\",\"s\":\"wiredtiger_open\",\"s+\":\"1B1D\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F23040DE8B9\",\"b\":\"7F230177E000\",\"o\":\"29608B9\",\"s\":\"_ZN5mongo18WiredTigerKVEngine15_openWir>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F23040EC83D\",\"b\":\"7F230177E000\",\"o\":\"296E83D\",\"s\":\"_ZN5mongo18WiredTigerKVEngineC2ERKNSt7_>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2303909827\",\"b\":\"7F230177E000\",\"o\":\"218B827\",\"s\":\"_ZNK5mongo12_GLOBAL__N_117WiredTigerFac>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230478B7FE\",\"b\":\"7F230177E000\",\"o\":\"300D7FE\",\"s\":\"_ZN5mongo23initializeStorageEngineEPNS_>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F23038880AE\",\"b\":\"7F230177E000\",\"o\":\"210A0AE\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListen>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230388B8DD\",\"b\":\"7F230177E000\",\"o\":\"210D8DD\",\"s\":\"_ZN5mongo11mongod_mainEiPPc\",\"C\":\"mongo>{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230367DA5E\",\"b\":\"7F230177E000\",\"o\":\"1EFFA5E\",\"s\":\"main\",\"s+\":\"E\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2300F69D90\",\"b\":\"7F2300F40000\",\"o\":\"29D90\",\"s\":\"__libc_init_first\",\"s+\":\"90\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F2300F69E40\",\"b\":\"7F2300F40000\",\"o\":\"29E40\",\"s\":\"__libc_start_main\",\"s+\":\"80\"}}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:11.180+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F230388619E\",\"b\":\"7F230177E000\",\"o\":\"210819E\",\"s\":\"_start\",\"s+\":\"2E\"}}}\n\n{\"t\":{\"$date\":\"2022-11-02T13:58:14.375+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-11-02T13:58:14.376+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalC>{\"t\":{\"$date\":\"2022-11-02T13:58:14.402+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify 
--sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-11-02T13:58:14.404+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSiz>{\"t\":{\"$date\":\"2022-11-02T13:58:14.481+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDo>{\"t\":{\"$date\":\"2022-11-02T13:58:14.481+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrati>{\"t\":{\"$date\":\"2022-11-02T13:58:14.481+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-11-02T13:58:14.481+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-11-02T13:58:14.482+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":110,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-BED>\n", "text": "This is my mongod.log", "username": "Herman_TAM" }, { "code": ":/$ sudo mongod --config /etc/mongod.conf\nAborted (core dumped)\n", "text": "", "username": "Herman_TAM" }, { "code": "", "text": "Most of the issues you are facing are related to permissions\nAs per Mongo documentation to start mongod you should use\nsudo systemctl start mongod\nbut you are using sudo\nSometimes you are running as sudo and othertime as normal user\nYou should not use sudo.Once you run mongod with sudo all files will be owned by root and next time when you run mongod as normal user it will fail\nIf you want to test/bring up your own mongod use different port,dbpath,logpath and do not pass /etc/mongod.conf as all your various attempts try to use same dirschmod 777 is not correct.You should give just needed permissions for mongod user\nAfter you chmod logdir did mongod come up?\nDid you try to connect with mongo\nCore dump in your last attempt may be again due to another instance already up on same address/port/dirpath etc\nCheck doc for more details\nhttps://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/#:~:text=Create%20a%20list%20file%20for,host%20and%20execute%20lsb_release%20-dc%20.", "username": "Ramachandra_Tummala" } ]
Command "sudo service mongod start/stop/status" throws long error "Invalid command: start/stop/status" - Ubuntu 22.04 and MongoDB 6.0
2022-11-02T09:48:17.243Z
Command &ldquo;sudo service mongod start/stop/status&rdquo; throws long error &ldquo;Invalid command: start/stop/status&rdquo; - Ubuntu 22.04 and MongoDB 6.0
3,429
null
[ "indexes" ]
[ { "code": "DATABASE_NAMECOLLECTIONdynamicdynamic", "text": "Hey everyoneI had a use case for getting the list of indices and getting mappings on a per index basis. I have few questions regarding that:Is there a way to get all the indices in a cluster?I know about this doc but it points to requiring DATABASE_NAME and COLLECTION, but MongoDB cloud is able to query this without needing this. Is there a public / non-documented endpoint that can be used to do the same through REST API?How to get all the fields + types (i.e. mappings) in an Atlas Search index?I am aware of this endpoint but when the mappings is set to dynamic, it doesn’t return the fields in the response. I would like to have the fields returned as well. Is there a way to force returning fields even when dynamic is true?", "username": "Deepjyoti_Barman" }, { "code": "DATABASE_NAMECOLLECTIONmongoshdynamicdynamic", "text": "Hi @Deepjyoti_Barman - Welcome to the community.I had a use case for getting the list of indices and getting mappings on a per index basis.Could you provide further details regarding the use case?Is there a way to get all the indices in a cluster?I know about this doc but it points to requiring DATABASE_NAME and COLLECTION , but MongoDB cloud is able to query this without needing this. Is there a public / non-documented endpoint that can be used to do the same through REST API?My interpretation of this question is that you want to get all the search index details for a particular cluster via the Atlas Administration API without needing to specify database or collection. Is this correct? If so, this is currently not possible as far as I am aware. However, as previously mentioned, it would be good to understand the use case for this. You could possibly create a script to connect using a driver (or possibly mongosh) and obtain a list of the databases and collections before passing this through to the Get All Atlas Search Indexes for a Collection API.How to get all the fields + types (i.e. mappings) in an Atlas Search index?I am aware of this endpoint but when the mappings is set to dynamic , it doesn’t return the fields in the response. I would like to have the fields returned as well. 
Is there a way to force returning fields even when dynamic is true?Do you have an example output that you could provide of what you are currently receiving in the response and what you’re expecting it to include?Regards,\nJason", "username": "Jason_Tran" }, { "code": "https://cloud.mongodb.com/nds/clusters/60cb67c0848c036fd17a281f/Cluster1/fts/indexes[\n {\n \"analyzer\":null,\n \"analyzers\":null,\n \"collectionName\":\"shipwrecks\",\n \"database\":\"sample_geospatial\",\n \"deleteRequestedDate\":null,\n \"indexID\":\"621f431f31ed037fb9790aa4\",\n \"lastUpdateDate\":null,\n \"mappings\":{\n ...\n },\n \"name\":\"geo\",\n \"searchAnalyzer\":null,\n \"stats\":{\n ...\n },\n \"status\":\"STEADY\",\n \"storedSource\":null,\n \"synonyms\":null\n },\n ...\n]\n{\n \"collectionName\" : \"movies\",\n \"database\" : \"sample_mflix\",\n \"indexID\" : \"5d1268a980eef518dac0cf41\",\n \"mappings\" : {\n \"dynamic\" : true\n },\n \"name\" : \"SearchIndex1\",\n \"status\" : \"STEADY\"\n}\n{\n \"collectionName\" : \"movies\",\n \"database\" : \"sample_mflix\",\n \"indexID\" : \"5d1268a980eef518dac0cf41\",\n \"mappings\" : {\n \"dynamic\" : false,\n \"fields\" : {\n \"genres\" : {\n \"analyzer\" : \"lucene.standard\",\n \"type\" : \"string\"\n },\n \"plot\" : {\n \"analyzer\" : \"lucene.standard\",\n \"type\" : \"string\"\n }\n }\n },\n \"name\" : \"SearchIndex1\",\n \"status\" : \"STEADY\"\n}\n", "text": "Could you provide further details regarding the use case?We’re building an integration on top of Atlas Search, where we’re allowing an Atlas Search user to build a search UI against any index. It would be ideal to have one endpoint to get all the indices of the cluster v/s having to ask a user to specify their DB + Collection info additionally.I see this working as part of cloud.mongodb.com, but it’s an undocumented endpoint and I would like to know what Auth mechanism this supports to use it programatically.The call is a GET against: https://cloud.mongodb.com/nds/clusters/60cb67c0848c036fd17a281f/Cluster1/fts/indexes With a response that looks like:Do you have an example output that you could provide of what you are currently receiving in the response and what you’re expecting it to include?Following is what we are getting now:Following is what we expect:", "username": "Deepjyoti_Barman" }, { "code": "mongosh", "text": "Hi @Deepjyoti_Barman , we currently do not support a command which returns all search indexes within the same cluster. Can you help me understand if @Jason_Tran 's suggested workaround (below) will work for you? If not, why? We’re working on making improvements to this experience and your feedback is very valuableYou could possibly create a script to connect using a driver (or possibly mongosh ) and obtain a list of the databases and collections before passing this through to the Get All Atlas Search Indexes for a Collection API.Thanks!", "username": "amyjian" }, { "code": "", "text": "@amyjian @Jason_Tran’s suggested solution is possible: It’s complexity being O(mxn), in practice this might mean a minute before the entire list of search indexes for a cluster can be retrieved. 
Our use case is to display a UI selector for search indexes.Having a direct endpoint that returns the search indexes for a cluster would keep our implementation simple.And it does seem you have it resolving as one endpoint in the MongoDB Cloud UI where search indexes are displayed.\n[screenshot: Atlas Search indexes view in the MongoDB Cloud UI]\nThis UI is rendered based on this endpoint call: Cloud: MongoDB Cloud, which returns all the indexes for the cluster in ~1s.Is something similar possible as an API user as well? Currently, the best option I see is what Jason is suggesting; however, having one endpoint to get all the search indexes, if possible, would give a quick response time.", "username": "Siddharth_Kothari" } ]
API endpoints for index list and mappings
2022-10-21T10:53:11.145Z
API endpoints for index list and mappings
3,283
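A rough sketch of the per-collection workaround discussed in the thread above. Everything here is hypothetical glue code, not an official single endpoint: the cluster URI, project ID and the callAtlasApi helper (which would need an HTTP client that speaks the Atlas Administration API's digest auth) are placeholders.

```js
// Sketch only: enumerate databases/collections with the Node.js driver, then
// fetch search indexes per collection via the per-collection Admin API route.
const { MongoClient } = require("mongodb");

async function listAllSearchIndexes(clusterUri, groupId, clusterName, callAtlasApi) {
  const client = new MongoClient(clusterUri);
  const results = [];
  try {
    await client.connect();
    const { databases } = await client.db("admin").admin().listDatabases();
    for (const { name: dbName } of databases) {
      if (["admin", "local", "config"].includes(dbName)) continue; // skip system DBs
      const collections = await client
        .db(dbName)
        .listCollections({}, { nameOnly: true })
        .toArray();
      for (const { name: collName } of collections) {
        // Per-collection endpoint referenced in the replies above
        // ("Get All Atlas Search Indexes for a Collection").
        const url =
          `/api/atlas/v1.0/groups/${groupId}/clusters/${clusterName}` +
          `/fts/indexes/${dbName}/${collName}`;
        const indexes = await callAtlasApi(url); // placeholder digest-auth HTTP call
        if (indexes.length) results.push({ dbName, collName, indexes });
      }
    }
  } finally {
    await client.close();
  }
  return results;
}
```

As noted in the thread, this is O(databases × collections), so for large clusters it is worth caching the result rather than calling it per page load.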
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.10.4 of the MongoDB Go Driver.This release contains several bugfixes. One of the bugfixes removes a severe bug in SRV polling which may prevent changes in SRV records from updating the servers that the Go Driver attempts to connect to when the MongoDB connection string includes a username and password. For more information please see the 1.10.4 release notes.You can obtain the driver source from GitHub under the v1.10.4 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team", "username": "benjirewis" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.10.4 Released
2022-11-02T13:10:35.701Z
MongoDB Go Driver 1.10.4 Released
1,588
https://www.mongodb.com/…e_2_1024x512.png
[ "aggregation", "node-js", "atlas-search" ]
[ { "code": "full_name: {\n type: String,\n text: true,\n },\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"full_name\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\nconst users = await User.aggregate([\n\n {\n\n $search: {\n\n index: 'fullName',\n\n compound: {\n\n should: [\n\n {\n\n autocomplete: {\n\n path: 'full_name',\n\n query: term,\n\n score: { boost: { value: 3 } },\n\n },\n\n },\n\n {\n\n text: {\n\n path: 'last_name',\n\n query: term,\n\n fuzzy: { maxEdits: 1 },\n\n },\n\n },\n\n ],\n\n },\n\n },\n\n },\n\n {\n\n $project: {\n\n full_name: 1,\n\n score: { $meta: 'searchScore' },\n\n },\n\n },\n\n ]);\n", "text": "Hi,I’m trying to implement a search feature in my NodeJS project.\nThe field I want to search:I want the search to return results where the search term make up only a part of the value. Is autocomplete the only way to achieve this?\nFor example: “Hen” or “sing” should return “Henry Kissinger”.I also want to return results within 1-2 typos and I want the results to be scored.\nFor example: “Hen” should return “Henry Johnson” and “Herman Johnson” but with a better score for the former.Going with this tutorial/workaround:Use the autocomplete operator to predict words as you type.\nI have created a search index like this:And I query like this:It seems like I have to choose between scoring and maxEdits. Applying maxEdits to the text part has no effect. Applying it to autocomplete causes the scores to be the same across the board.Is there a solution to this?", "username": "Andy_O" }, { "code": "", "text": "Hi there, we have a tutorial for a handful of ways to do partial matching, check it out here. Will need to think more about your question regarding maxEdits vs. scoring…", "username": "Elle_Shwer" } ]
Autocomplete search
2022-11-02T05:06:00.510Z
Autocomplete search
2,080
null
[]
[ { "code": "", "text": "MongoDB Atlas VPC Peering with AWS VPC only allows CIDR starting with 10… (ex: CIDR 10.0.0.0/23 worked) while using 11.0.0.0/23 is giving error as “Route table CIDR “11.0.0.0/23” is not in private range”.Note: There is no CIDR conflict with Mongo VPC and AWS VPC.Can anyone help to resolve this?", "username": "Yash_Panchal" }, { "code": "", "text": "@Pablo_Iglesias Can you share your thoughts on this?", "username": "Yash_Panchal" }, { "code": "", "text": "Hi @Yash_Panchal,while using 11.0.0.0/23 is giving error as “Route table CIDR “11.0.0.0/23” is not in private range”.As per the step 3 of the Configure Network Peering for an AWS-backed Cluster procedure, the configured VPC CIDR block range must be within the ranges (as of the time of this message) :\nimage1614×630 69.9 KB\nRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran , I get that. Thank you for response.Is there any alternative to peer other CIDR? This one is required because my production VPC CIDR starts with 11 and that can’t be changed.", "username": "Yash_Panchal" }, { "code": "", "text": "Hi @Yash_Panchal,Is there any alternative to peer other CIDR? This one is required because my production VPC CIDR starts with 11 and that can’t be changed.Depending on your use case(s) or requirements, a possible alternative would be to use Private Endpoint connection rather than a VPC peering connection. For your reference in terms of deciding whether this may suit your requirements and as included in the documentation linked above:Connections to Atlas database deployments using private endpoints offer the following advantages over other network access management options:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you @Jason_Tran for response. DirectConnect is the option but Its very costly for our use-case.I have an observation, I have two mongo atlas cluster and both has same VPC CIDR. Now I want to achieve peering from AWS VPC with both atlas cluster.\nHowever, I’m not able to achieve it because the AWS side route table has already entry of first mongo cluster and its not allowing to do for second cluster peering because of the same CIDR.Can you share thoughts on this issue? Is there any other way to achieve this?", "username": "Yash_Panchal" }, { "code": "Project1Cluster1192.168.0.0/21Project2Cluster2192.168.0.0/21", "text": "Hi @Yash_Panchal,I have an observation, I have two mongo atlas cluster and both has same VPC CIDR.Just to clarify, are these two MongoDB Atlas clusters you mention each in a different project with the same Atlas VPC CIDR? E.g.:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Yes you’re correct. I saw both clusters are having same CIDR.", "username": "Yash_Panchal" }, { "code": "", "text": "Unfortunately at this point, it is not possible to modify the Atlas VPC CIDR block for an existing VPC if there are resources (E.g. M10+ cluster nodes, peering connections, etc.) deployed in that VPC. As per the steps mentioned in the Set Up a Network Peering Connection documentation Atlas locks the value of the Atlas VPC CIDR if an M10+ tier cluster or a Network Peering connection exists. It also mentions:To modify the CIDR block, the target project cannot have:I am assuming you do not want to get rid of the data on either of the projects just to change the CIDR. 
You can do the following to set up peering with a different CIDR block and migrate your clusters:Create a new project.Set up VPC peering before adding any clusters. This will allow you to set the CIDR for the clusters in this project in the peered region. Ensure the CIDR does not overlap with:Ensure the CIDR must be in one of the following IP ranges:\nimage1380×538 55.5 KB\nAdd a new cluster to the project.(Depending on your requirements and the environment being migrated), either:Test connection from your application to the newly created cluster on the new Atlas VPC CIDR block range.Hope this helps!Regards,\nJason", "username": "Jason_Tran" } ]
MongoDB Atlas with AWS VPC Peering issue
2022-10-28T16:24:51.183Z
MongoDB Atlas with AWS VPC Peering issue
5,118
null
[ "containers", "storage" ]
[ { "code": "2022-10-28T17:49:26.597+0000 E STORAGE [thread2] WiredTiger error (5) [1666979366:596967][1:0x7f187d54c700], log-server: /datastore/db/journal: directory-list: opendir: Input/output error\n2022-10-28T17:49:26.597+0000 E STORAGE [thread2] WiredTiger error (5) [1666979366:597060][1:0x7f187d54c700], log-server: log pre-alloc server error: Input/output error\n2022-10-28T17:49:26.597+0000 E STORAGE [thread2] WiredTiger error (5) [1666979366:597072][1:0x7f187d54c700], log-server: log server error: Input/output error\n2022-10-28T17:49:31.001+0000 W FTDC [ftdc] Uncaught exception in 'FileNotOpen: Failed to open interim file /datastore/db/diagnostic.data/metrics.interim.temp' in full-time diagnostic data capture subsystem. Shutting down the full-time diagnostic data capture subsystem.\n\n2022-10-28T17:51:53.882+0000 E STORAGE [WTJournalFlusher] WiredTiger error (5) [1666979513:882390][1:0x7f187ad48700], WT_SESSION.log_flush: /datastore/db/journal/WiredTigerLog.0000000671: handle-write: pwrite: failed to write 768 bytes at offset 11904: Input/output error\n2022-10-28T17:51:53.882+0000 I - [WTJournalFlusher] Invariant failure: s->log_flush(s, \"sync=on\") resulted in status UnknownError: 5: Input/output error at src/mongo/db/storage/wiredtiger/wiredtiger_session_cache.cpp 229\n2022-10-28T17:51:53.882+0000 I - [WTJournalFlusher] \n\n***aborting after invariant() failure\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] MongoDB starting : pid=41 port=27020 dbpath=/data/db 64-bit host=mongodb-0\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] db version v3.4.0\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] git version: f4240c60f005be757399042dc12f6addbc3170c1\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1t 3 May 2016\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] allocator: tcmalloc\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] modules: none\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] build environment:\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] distmod: debian81\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] distarch: x86_64\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] target_arch: x86_64\n2022-10-28T18:47:30.513+0000 I CONTROL [initandlisten] options: { net: { bindIp: \"0.0.0.0\", port: 27020 }, repair: true, storage: { dbPath: \"/data/db\" } }\n2022-10-28T18:47:30.517+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1425M,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),\n2022-10-28T18:47:30.630+0000 I CONTROL [initandlisten] \n2022-10-28T18:47:30.630+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2022-10-28T18:47:30.630+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2022-10-28T18:47:30.630+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.\n2022-10-28T18:47:30.630+0000 I CONTROL [initandlisten] \n2022-10-28T18:47:30.631+0000 I STORAGE [initandlisten] finished checking dbs\n2022-10-28T18:47:30.631+0000 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2022-10-28T18:47:30.631+0000 I NETWORK [initandlisten] removing socket file: 
/tmp/mongodb-27020.sock\n2022-10-28T18:47:30.631+0000 I NETWORK [initandlisten] shutdown: going to flush diaglog...\n2022-10-28T18:47:30.631+0000 I STORAGE [initandlisten] WiredTigerKVEngine shutting down\n2022-10-28T18:47:30.642+0000 I STORAGE [initandlisten] shutdown: removing fs lock...\n2022-10-28T18:47:30.642+0000 I CONTROL [initandlisten] now exiting\n2022-10-28T18:47:30.642+0000 I CONTROL [initandlisten] shutting down with code:0\n\n", "text": "We have a mongodb 3.4.0 pod running in a kubernetes cluster, we are trying to upgrade however this is blocking us. This pod runs for about 7 minutes before we see the following error:This is the file it is referencing: mongo/wiredtiger_session_cache.cpp at r3.4.0 · mongodb/mongo · GitHubWe have ran a repair against MongoDB using the following command:kubectl exec -it mongodb-0 – mongod --dbpath /data/db --port 27020 --bind_ip 0.0.0.0 --repairHere is the output of that command:I am unsure if this is a successful repair output or not, but would really appreciate some guidance in next steps or things to check if anyone has seen or dealt with a failure like this before. Any input would be greatly appreciated.Thank you.", "username": "Austin_Hauer" }, { "code": "2022-10-28T17:49:31.001+0000 W FTDC [ftdc] Uncaught exception in 'FileNotOpen: Failed to open interim file /datastore/db/diagnostic.data/metrics.interim.temp' in full-time diagnostic data capture subsystem. Shutting down the full-time diagnostic data capture subsystem.\n2022-10-28T17:51:53.882+0000 E STORAGE [WTJournalFlusher] WiredTiger error (5) [1666979513:882390][1:0x7f187ad48700], WT_SESSION.log_flush: /datastore/db/journal/WiredTigerLog.0000000671: handle-write: pwrite: failed to write 768 bytes at offset 11904: Input/output error\nInput/output error", "text": "Hi @Austin_Hauer and welcome to the MongoDB community forum!!In most cases, Input/output error in the logs was caused by issues in the hardware or OS layer instead of from MongoDB itself. From the log snippet you posted, to me it appears that WiredTiger tried to write a journal file, but failed to do so.It would be very helpful if you could share a few more details on the concerns that you have mentioned above:this pod runs for about 7 minutes before we see the following error:Could you help in understanding if the pod is in the running state for 7 mins and the error comes up or this comes up during the upgrade process.Are you following any documentation to upgrade the MongoDB on kubernetes setup ? and which version are you trying to upgrade to?Could you also share the pod logs from the begining upto 7 mins till the failure has been observedLet us know if you have any further questions.Best Regards\nAasawari", "username": "Aasawari" } ]
Error on Container running MongoDB
2022-10-28T18:55:21.508Z
Error on Container running MongoDB
2,323
null
[ "data-modeling", "indexes" ]
[ { "code": "Step# A lives under \"x\" scope\n{\n run_id: 123,\n step_name: \"A\",\n scopes: [x:{<x_details>}] \n}\n\n# A lives under \"x\" and specific \"y\" scope.\n{\n run_id: 123,\n step_name: \"A\",\n scopes: [{\"x\": <x_data>}, {\"y\": <y1_data>} ] \n}\n\n# A lives under \"x\" and a specific (defferent y) \"y\" scope.\n{\n run_id: 123,\n step_name: \"A\",\n scopes: [{\"x\": <x_data>}, {\"y\": <y2_data>} ] \n}\nWriteError (code=11000 \"dup key\" )unique=True", "text": "Hello all,Short intro for my question:\nIn our product we have a Step object that identify a specific step inside a bigger Run.\nStep is identified uniquely by 3 fields : run_id, step_name(string), scopes(array).So here is the thing - “scopes” is an array that describes the scope of the specific step.\nThese 2 objects are should be able to live together:My question: I need my index to be unique by the 3 keys - run_id, step_name, scopes ,\nBUT mongodb will raise a WriteError (code=11000 \"dup key\" ) when trying to insert any 2 of the 3 in the exmaple, when the collection is indexed with unique=True.How can I get the effect I want ?Thank you in advance. ", "username": "Uri_Grinberg" }, { "code": "run_id: 123,\n step_name: \"A\",\n scopes: [{\"x\": <x_data>}\nscopes : {\n x : ... ,\n y : ....\n}\n\n", "text": "Hi @Uri_Grinberg ,A multikey index that created on arrays create an index entry per array element combination, meaning document 2,3 will both have an entry for :Therefore this will break the uniqueness. Can you enforce uniqueness by doing check on application side? If not perhaps change the data model to be:This will not create multikey index…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "scopes : {\n x : ... ,\n y : ....\n}\nscopes : {\n 0: {x : ... },\n 1: {y : ....}\n}\nscopes_hash", "text": "First, thank you very much for the answer, @Pavel_DuchovnyI really prefer the uniqueness to be kept by the schema iteself.Your suggestion of:won’t work for us, as order of scopes will be lost this way. some like:could solve the order issue - but smart queries on scopes won’t be possible, I afraid.That led me to the inevitable solution - keeping a scopes_hash field next to the scopes field - to be used only for unique indexing purpose.I will love to have a review and other suggestions. ", "username": "Uri_Grinberg" } ]
Dup_key error when trying to index an array field in mongodb
2022-10-31T07:01:29.994Z
Dup_key error when trying to index an array field in mongodb
1,456
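For reference, a minimal sketch of the scopes_hash idea this thread ends on. Field names are taken from the question, and the serialization/hash choice is an assumption rather than the only option:

```js
// Sketch: derive a deterministic hash of the ordered scopes array and make the
// unique index cover (run_id, step_name, scopes_hash) instead of the array itself,
// which avoids the multikey-uniqueness problem described above.
const crypto = require("crypto");

function scopesHash(scopes) {
  // JSON.stringify preserves array order; it also depends on object key order,
  // so normalize keys first if scope objects may be built in different orders.
  return crypto.createHash("sha256").update(JSON.stringify(scopes)).digest("hex");
}

async function insertStep(steps, step) {
  // steps: a MongoDB collection handle; the index is created once at startup:
  // await steps.createIndex(
  //   { run_id: 1, step_name: 1, scopes_hash: 1 },
  //   { unique: true }
  // );
  return steps.insertOne({ ...step, scopes_hash: scopesHash(step.scopes) });
}
```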
null
[ "queries", "node-js" ]
[ { "code": "await client.connect();const { MongoClient, ServerApiVersion } = require(\"mongodb\");\n\nconst url =\n \"mongodb+srv://mrcool:[email protected]/?retryWrites=true&w=majority\";\n\nconst client = new MongoClient(url, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n serverApi: ServerApiVersion.v1,\n});\n\nconst doStuff = async () => {\n console.log(\"start\");\n await client.connect();\n console.log('connected');\n const db = client.db(\"test\");\n const collection = db.collection(\"cats\");\n const cats = await collection.find({}).toArray();\n console.log(cats);\n return cats;\n};\n\nconst handler = async () => {\n doStuff()\n .catch((err) => {\n console.error(err);\n })\n .finally(() => client.close());\n};\n\nexports.handler = handler;\n", "text": "I’m honestly not sure if this is a Mongo question or an AWS question, but I’m having difficulty connecting to an Atlas instance from a Lambda function. The code posted below works exactly as expected when I run it from a local instance, but when I try to run it from the Lambda the “start” log fires but it never reaches the “connected” line after await client.connect();. There is no error - it just silently fails and thinks it succeeded. I’m almost completely new to Mongo so I’m not sure where I should start trying to investigate this. Any help is greatly appreciate.", "username": "Kellen_Barber" }, { "code": "await client.connect();try {\n await client.connect();\n \n } \ncatch (e) {\n console.error(e);\n }\n", "text": "Hi @Kellen_Barber ,Welcome to The MongoDB Community Forums! I think there could be some issue with your setup or code but as you are not catching the error in logs hence you are not able to see it. Please consider using a try catch block with await client.connect(); as belowAdditionally, I would recommend you to go through this doc on Manage Connections with AWS Lambda.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Silent fail connecting to Atlas from AWS Lambda
2022-10-29T00:57:27.226Z
Silent fail connecting to Atlas from AWS Lambda
1,352
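A hedged sketch of the connection-reuse pattern from the "Manage Connections with AWS Lambda" page linked above, using the database and collection names from the question. The key points are creating the client once outside the handler and not closing it on every invocation:

```js
const { MongoClient, ServerApiVersion } = require("mongodb");

// Created once per Lambda execution environment and reused across invocations.
const client = new MongoClient(process.env.MONGODB_URI, {
  serverApi: ServerApiVersion.v1,
});
const clientPromise = client.connect(); // starts connecting at cold start

exports.handler = async () => {
  try {
    const conn = await clientPromise; // resolves immediately on warm invocations
    const cats = await conn.db("test").collection("cats").find({}).toArray();
    return cats;
  } catch (err) {
    // Surface the real failure (e.g. IP access list, timeout) instead of failing silently.
    console.error(err);
    throw err;
  }
  // Note: no client.close() here, so the connection pool survives for the next invocation.
};
```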
null
[ "crud" ]
[ { "code": "", "text": "Hi all, newish to mongo.Im fetching a bunch of strings (350K+) and initially populating the collection with a schema with just the name property:\nSchema({\nname: String,\ndata: [{ fetch1: someRandomObjectType, fetch2: someRandomObjectType }], // <— initially empty\n})Afterwards, for each record I am using the name to fetch the additional data in chunks of about 2.5k. This additional data can be fetched from different sources, hence fetch1/fetch2 etc.\nSo once I get the data I would like to update the records where name === result of fetch1(name).What would be the best way to do this? Do I have to just loop over my 2.5 results and updateOne({ name: name}) or is there some clever filter I can do with updateMany?The names are of course unique but I cant use them as ObjectId since the strings are too long.", "username": "Tomasz_Jakubek" }, { "code": "", "text": "Hi @Tomasz_Jakubek ,Welcome to The MongoDB Community Forums! To understand your use case better, could you please provide below details?Regards,\nTarun", "username": "Tarun_Gaur" } ]
How to update many entries efficiently
2022-10-28T16:48:04.524Z
How to update many entries efficiently
1,146
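If it helps, a hedged sketch of applying each fetched chunk in one round trip with bulkWrite instead of a loop of updateOne calls. The field names (name, data, fetch1) come from the schema in the question; the exact update operator depends on how the data array should be filled:

```js
// chunk: [{ name, fetch1 }, ...], roughly the 2.5k results from one external fetch.
async function applyChunk(collection, chunk) {
  const ops = chunk.map(({ name, fetch1 }) => ({
    updateOne: {
      filter: { name },                        // a (unique) index on name keeps this fast
      update: { $push: { data: { fetch1 } } }, // or $set, depending on the desired shape
    },
  }));
  if (ops.length === 0) return;
  // ordered:false lets the server keep going past individual failures
  const result = await collection.bulkWrite(ops, { ordered: false });
  console.log(`matched ${result.matchedCount}, modified ${result.modifiedCount}`);
}
```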
null
[ "transactions", "storage" ]
[ { "code": "{\"t\":{\"$date\":\"2022-11-01T17:34:36.384+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.419+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.423+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.430+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.430+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.430+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.430+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.430+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1587,\"port\":27017,\"dbPath\":\"/Users/labikemmy/Library/Application Support/MongoDB/Data\",\"architecture\":\"64-bit\",\"host\":\"LabikedeMacBook-Pro.local\"}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.431+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.0\",\"gitVersion\":\"e61bf27c2f6a83fed36e5a13c008a32d563babe2\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.431+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.2.0\"}}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.431+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"127.0.0.1\",\"unixDomainSocket\":{\"enabled\":false}},\"storage\":{\"dbPath\":\"/Users/labikemmy/Library/Application Support/MongoDB/Data\"},\"systemLog\":{\"destination\":\"file\",\"path\":\"/Users/labikemmy/Library/Application Support/MongoDB/Logs/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.433+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid 
argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.448+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/Users/labikemmy/Library/Application Support/MongoDB/Data\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.455+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3584M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.914+08:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.914+08:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"45: Operation not supported\"}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.914+08:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":702}}\n{\"t\":{\"$date\":\"2022-11-01T17:34:36.914+08:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "mongodbapprefusei upgrade [email protected], robo3T not connect. i’m not found problem;connect refuse logs:", "username": "ohayo_mmy" }, { "code": "mongodumpmongorestoredbPathdbPathmongod", "text": "Welcome to the MongoDB community @ohayo_mmy !Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.What version of MongoDB server were you running prior to upgrading to MongoDB 6.0?In-place data file upgrades are only supported for adjacent major release series (eg 5.0 => 6.0). The error you encountered indicates you have skipped one or more major release upgrades and the existing data files are not compatible with the current version of MongoDB server you using.Your options are:Reinstall the original version of MongoDB server and follow the documented upgrade procedures for your deployment type (standalone, replica set, or sharded cluster). For example, if you were starting from MongoDB 4.4 standalone: Upgrade a Standalone to 5.0 then Upgrade a Standalone to 6.0.iI your original server version requires several major version upgrades to get to your desired version (for example, if you are starting from 4.2 and need to go 4.2 => 4.4, 4.4 => 5.0, 5.0 =>6.0), you could use mongodump and mongorestore to recreate your deployment in a single step. 
You may encounter some (fixable) errors as the documented and tested upgrade path is via in-place upgrades through successive major releases.If your existing data isn’t important, you could move (or remove) the contents of your current dbPath. When a MongoDB server restarts with an empty dbPath, it will initialise a fresh set of data files based on the current mongod version.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "ok, i’m reinstall, thanks!", "username": "ohayo_mmy" } ]
macOS: when I upgrade to MongoDB 6.0.0 build, connection is refused
2022-11-02T01:51:55.232Z
macOS: when I upgrade to MongoDB 6.0.0 build, connection is refused
2,060
null
[ "aggregation" ]
[ { "code": "//\n// Paste one or more documents here\n[{\n \"creatorId\": null,\n \"creatorName\": null,\n \"updateTime\": null,\n \"updaterId\": null,\n \"updaterName\": null,\n \"deleteTime\": null,\n \"deleterId\": null,\n \"deleterName\": null,\n \"isDelete\": false,\n \"companyId\": {\n \"$oid\": \"635c70892e8cfaf4a7d49a3f\"\n },\n \"memberId\": {\n \"$oid\": \"635d30b60c381f79913848a9\"\n },\n \"isActive\": false,\n \"name\": \"JOHN DOE\",\n \"profileImage\": \"string\",\n \"email\": \"[email protected]\",\n \"phone\": \"0812345678\",\n \"mainClassId\": {\n \"$oid\": \"635d2e398e7b6138e9a65111\"\n },\n \"classId\": {\n \"$oid\": \"635d2f6e6804b95ce6a9e5d0\"\n },\n \"tags\": [],\n \"datas\": [\n {\n \"id\": {\n \"$oid\": \"635dd385f2435ea848fa33fe\"\n },\n \"classId\": {\n \"$oid\": \"6303479db2c68583135ec55f\"\n },\n \"lessonId\": {\n \"$oid\": \"635d2bf3a78013b9dcae9a6f\"\n },\n \"year\": \"2023\",\n \"report\": [\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 3,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"NILAI HARIAN\",\n \"value\": 70\n }\n ]\n },\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 4,\n \"isPresent\": false,\n \"scores\": [\n {\n \"key\": \"NILAI HARIAN\",\n \"value\": 70\n }\n ]\n },\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 6,\n \"isPresent\": false,\n \"scores\": [\n {\n \"key\": \"NILAI HARIAN\",\n \"value\": 80\n }\n ]\n },\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 8,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"PENILAIAAN TENGAH SEMESTER\",\n \"value\": 80\n }\n ]\n },\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 9,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"NILAI HARIAN\",\n \"value\": 50\n }\n ]\n },\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 12,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"NILAI HARIAN\",\n \"value\": 90\n }\n ]\n },\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 13,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"NILAI HARIAN\",\n \"value\": 100\n }\n ]\n },\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 14,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"NILAI HARIAN\",\n \"value\": 85\n }\n ]\n },\n {\n \"activityId\": {\n \"$oid\": \"6308350b119027c236c8ce92\"\n },\n \"meet\": 15,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"PENILAIAAN AKHIR SEMESTER\",\n \"value\": 73\n }\n ]\n }\n ]\n }\n ]\n}\n]\n", "text": "I have a document, which contains an array of obj. each first level array contains lessonId and year and report. in the report array there are activityid, meeting, and score. 
in the score there are keys or types of lessons (daily grades, mid-semester assessments, END-SEMESTER ASSESSMENTS) and grades.How to find sums for NILAI HARIAN, PENILAIAAN TENGAH SEMESTER and PENILAIAAN AKHIR SEMESTER in each lessonId.i’am tring to aggregate and sum value from “NILAI HARIAN” didn’t work.\nthis is my sample codeMongo playground: a simple sandbox to test and share MongoDB queries online", "username": "Nuur_zakki_Zamani" }, { "code": "db.collection.aggregate([\n {\n \"$match\": {\n \"companyId\": ObjectId(\"635c70892e8cfaf4a7d49a3f\")\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$datas\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$datas.report\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$datas.report.scores\"\n }\n },\n {\n \"$group\": {\n _id: \"$datas.report.scores.key\",\n sum: {\n $sum: \"$datas.report.scores.value\"\n }\n }\n }\n])\n", "text": "Hi @Nuur_zakki_ZamaniIts fairly an easier Pipeline :Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "alhamdulillah, tankyou sir @Pavel_Duchovny ", "username": "Nuur_zakki_Zamani" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to sum and count in 3rd level nested array
2022-11-01T03:29:47.335Z
How to sum and count in 3rd level nested array
3,288
null
[ "queries" ]
[ { "code": "", "text": "I am using M40 Cluster on ATLAS. I have a few collections with >65 Million records & it’s keep increasing.When I am updating the records (by setting batchsize) it is taking around 2-4 mins, I am ok with it.\nWhile updating the records CPU & Disk Util is <80%.My Node application have a queue of queries/commands to execute, & there are multiple instances running of Node app. When two or more queries/commands are getting executed simultaneously, the CPU & DiskUtil is exausting.So, I just want to make sure that CPU & Disk utilisation are under threshold (e.g. <40%) before hitting next Query/Command to MongoDB.Is there any way to get the CPU, Disk Util & RAM status by hitting query/command using MongoDB Driver through node app? or any other solution to this problem?", "username": "Ashish_Zanwar" }, { "code": "mongostatmongotop", "text": "Hi @Ashish_Zanwar - Welcome to the community!When two or more queries/commands are getting executed simultaneously, the CPU & DiskUtil is exausting.I believe we should investigate the issue CPU & DiskUtil exhaustion first rather than attempting to try query for particular resource metric values to determine whether an operation can or cannot be executed based off a threshold. (Note: it could be that you’ve optimised the workload / queries as much as possible but please provide those details if possible)Although there may be ways you could query for the metrics, there’s quite a few scenarios where this could cause further issues. For some examples, lets say you query the server and get the response at a particular point in time and find that the server is under your required threshold. What if by the time the operation is executed, the server is beyond the threshold or what if multiple processes get this response and then bombard the server all at once etc.To further assist with this, could you provide more context surrounding the workload or queries that are being executed that cause the resource exhaustion mentioned as well as the effect of the exhaustion (total stall, etc)?In saying the above, the following Atlas documentation may be of use when investigating the queries in question:When I am updating the records (by setting batchsize) it is taking around 2-4 mins, I am ok with it.\nWhile updating the records CPU & Disk Util is <80%.Regarding the above, is the update being performed across all documents in the collection? Additionally, what would be the average document size?Is there any way to get the CPU, Disk Util & RAM status by hitting query/command using MongoDB Driver through node app? or any other solution to this problem?As mentioned above, doing this may lead to a “race condition” in which maybe multiple processes receive a particular value which is under threshold and bombard the server all at once thus leading to the resource exhaustion again.However, in saying so, you could possibly integrate some of the following tools to assist:I am using M40 Cluster on ATLAS. I have a few collections with >65 Million records & it’s keep increasing.Have you considered perhaps upgrading to a higher tier cluster to see if the resource exhaustion is eliminated or at least reduced? If a cluster tier upgrade resolves the issue without needing to perform any other changes such as a locking mechanism or querying for hardware metrics then perhaps attempting to optimise the operations may help.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Get the status of CPU, RAM & Disk Util from MongoDB Atlas
2022-10-14T05:45:12.444Z
Get the status of CPU, RAM &amp; Disk Util from MongoDB Atlas
1,585
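One way to act on the advice above without polling hardware metrics (which invites the race condition described earlier) is to cap, per application instance, how many heavy commands run at once. This is a generic sketch, not an Atlas feature:

```js
// Tiny promise-based semaphore: each Node instance keeps at most `max`
// heavy operations in flight, so bursts queue locally instead of piling
// onto the cluster at the same time.
class Semaphore {
  constructor(max) {
    this.max = max;
    this.active = 0;
    this.waiters = [];
  }
  async run(task) {
    while (this.active >= this.max) {
      await new Promise((resolve) => this.waiters.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      const next = this.waiters.shift();
      if (next) next();
    }
  }
}

const heavyOps = new Semaphore(2); // tune per instance and cluster tier

async function runBatchUpdate(collection, filter, update) {
  return heavyOps.run(() => collection.updateMany(filter, update));
}
```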
null
[]
[ { "code": "", "text": "I tried above command to solve issue but not able to solve it.\nit showing me below error. Please help me to solve error. (ventura 13 is install in Intel Mac)==> Installing [email protected] from mongodb/brewError: Your Xcode (14.0.1) is too outdated.Please update to Xcode 14.1 (or delete it).Xcode can be updated from the App Store.", "username": "Code_Entic" }, { "code": "launchctlmongodbrew", "text": "Welcome to the MongoDB Community @Code_Entic !Error: Your Xcode (14.0.1) is too outdated.Please update to Xcode 14.1 (or delete it).This error message is coming from Homebrew: you need to update Xcode (or the Xcode Command Line Tools) after a major O/S upgrade. See: Installation — Homebrew Documentation.Xcode 14.1 is still at the release candidate stage so you will either have to wait for the GA release or download a release candidate from Apple’s developer download page if you want to use Homebrew. Related discussion: macOS 13 (Ventura) and gcc-12 · Issue #113968 · Homebrew/homebrew-core · GitHub.As a workaround (if you don’t want to install RCs), you can also get MongoDB binaries from the MongoDB Downloads page. This is slightly less convenient than a Homebrew install as you would have to manually set up a launchctl definition if you want to start/stop mongod as a service, however you can always move to a brew install later.Per discussion on the Homebrew GitHub issue above, it is unusual for a new macOS SDK not to be realised simultaneously with the O/S but it would be reasonable to expect that very soon.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Code_Entic,I noticed Command Line Tools for Xcode 14.1 are now available, so you should be able to install this Ventura Homebrew prerequisite via macOS Software Update.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error installing with Homebrew: Your Xcode (14.0.1) is too outdated
2022-10-29T10:47:32.518Z
Error installing with Homebrew: Your Xcode (14.0.1) is too outdated
8,706
null
[ "python", "production" ]
[ { "code": "", "text": "We are pleased to announce the 3.13.0 release of PyMongo - MongoDB’s Python Driver.Version 3.13 provides an upgrade path to PyMongo 4.x. Most of the API changes\nfrom PyMongo 4.0 have been backported in a backward compatible way, allowing\napplications to be written against PyMongo >= 3.13, rather then PyMongo 3.x or\nPyMongo 4.x. See the PyMongo 4 Migration Guide for detailed examples.See the changelog for a high level summary of what’s new and improved or see the 3.13 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongo 3.13 Documentation\nChangelog: Changelog\nSource: GitHubThank you to everyone who contributed to this release!", "username": "Steve_Silvester" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
PyMongo 3.13.0 Released
2022-11-01T19:21:09.818Z
PyMongo 3.13.0 Released
2,725
https://www.mongodb.com/…f79d732e5e13.png
[ "compass" ]
[ { "code": "", "text": "Hi folks, I’m not really sure why my simple query is not working. I just loaded a new collection into my atlas database. And a very simple filter inside compass is not working.my collection has a nested object platformwhen I use { “platform” : { “_id” : 1} as a filter, no results are found. It does not matter what property I use.Filters are also not matching any other array elements such as genres. Am I missing something very obvious and silly here?\nThanks\nScreen Shot 2022-11-01 at 10.55.44 AM748×231 17.9 KB\n", "username": "Vinicius_Carvalho" }, { "code": "", "text": "Just wanted to add another point, the data is being loaded via java with the mongo driver using pojo codec.If I insert one document via the atlas ui, using the exact same structure, then the query finds the data. Not sure if there’s a problem with the pojo codec, but the data entered manually does not appear to be any different from the one in the collection.", "username": "Vinicius_Carvalho" }, { "code": "{ \"platform\" : { \"_id\" : 1 } }\n{ \"platform._id\" : 1 }\n", "text": "You need to use dot notation.When you write:what you really mean is that you want platform to be equal to the object { “_id” : 1 }.An object will be equal to another if and only if, it has the same fields and values, in the same order.With the dot notation you are able to writewhich indicate that you want an object named platform which has a field named _id equals to 1.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Querying nested array "not working"
2022-11-01T14:57:23.818Z
Querying nested array &ldquo;not working&rdquo;
2,224
null
[]
[ { "code": "[\n {\n \"sampleData\": 123,\n \"sampleData2\": 1234,\n \n },\n {\n \"sampleData\": 123\n }\n]\ndb.collection.update({ \"sampleData2\": { $exists: true }},{ $set: { sampleData3: \"$sampleData2\" }})\n", "text": "I try to move my data e.G: from attribute “attr1” to “subdocument.attribute1” but can’t find a query to do so.\nE.G. I try to copy sampleData2 into a new attribute called sampleData3, if the attribute sampleData2 exists.I tried this query:but the resulting value is “sampleData3” : “$sampleData2” whereas “sampleData3” : 1234 was expected.Here the sample in the playgroundMongo playground: a simple sandbox to test and share MongoDB queries online", "username": "Dietmar_Enghauser" }, { "code": "MongoDB Enterprise M040:PRIMARY> db.products.insertOne({\"sampleData2\":123})\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"63613bf818cb6cb9c0c2fb4a\")\n}\n\nMongoDB Enterprise M040:PRIMARY> db.products.find()\n{ \"_id\" : ObjectId(\"63613bf818cb6cb9c0c2fb4a\"), \"sampleData2\" : 123 }\n\nMongoDB Enterprise M040:PRIMARY> db.products.update({ \"sampleData2\": { $exists: true }},[{$set: {\"sampleData3\": \"$sampleData2\"}}])\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\n\nMongoDB Enterprise M040:PRIMARY> db.products.find()\n{ \"_id\" : ObjectId(\"63613bf818cb6cb9c0c2fb4a\"), \"sampleData2\" : 123, \"sampleData3\" : 123 }\n", "text": "Hello,Please note that starting 4.2 you can use aggregation pipelines for updates and the use $set is supported.I checked the snippet you shared and I see $set is not enclosed in which makes it a pipeline.Please check the example below which I tested in my environment showing the correct value of $sampleData2 being assigned instead of the literal value:Please also check to the different aggregation pipeline stages you can use for updates in this documentation link.I hope you find this helpful.Regards,\nMohamed Elshafey", "username": "Mohamed_Elshafey" }, { "code": "[{$set: {\"sampleData3\": \"$sampleData2\"}}]", "text": "[{$set: {\"sampleData3\": \"$sampleData2\"}}]You are a live saver. Thanks so much.", "username": "Dietmar_Enghauser" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issue when using $set - does not use existing attribute data
2022-11-01T11:56:25.278Z
Issue when using $set - does not use existing attribute data
886
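The thread above copies one top-level field into another; the original goal of moving "attr1" into "subdocument.attribute1" can be sketched the same way with an update pipeline (MongoDB 4.2+). The items collection name is illustrative.

```js
db.items.updateMany(
  { attr1: { $exists: true } },
  [
    { $set: { "subdocument.attribute1": "$attr1" } }, // copy the value into the sub-document
    { $unset: "attr1" }                               // then drop the old top-level field
  ]
)
```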
null
[ "queries" ]
[ { "code": "import { getWeek } from 'date-fns'\nimport { addWeeks } from 'date-fns'\nimport { getYear } from 'date-fns'\n\n\n\nexports = function(barcode){\n \n const date = addWeeks(new Date(), -10)\n const week = getWeek(date).toString()\n const year = getYear(date).toString()\n const recallTarget = Number(year + week)\n const collection = context.services.get(\"mongodb-atlas\").db(\"xNews\").collection(\"products\");\n const data = collection.find({ barcode: barcode, recallWeek: { $gte:recallTarget }})\n \n return data\n};\n", "text": "Hi,I’m attempting to save the following function which runs successfully in the console but produces the message: Changes could not be saved. Please resolve the errors above → runtime error during function validation. I have insatlled the dependenceies. Can anyone exlain the error (and how to resolve it).Thanks.", "username": "David_Lacey" }, { "code": "", "text": "Hi,\nCheck this link. It might helps to resolve the issue.Thanks", "username": "Shubham_Gupta4" } ]
Runtime error when attempting to save Atlas function
2022-09-02T02:03:16.722Z
Runtime error when attempting to save Atlas function
2,079
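The thread above never pins down the cause, but external packages are a frequent source of save-time validation errors in Atlas Functions, so one hedged workaround is a dependency-free rewrite. In the sketch below the week-number arithmetic is only a rough approximation of date-fns' getWeek, while the database, collection, and field names are copied from the original post.

```js
exports = function (barcode) {
  // Ten weeks back from now, like addWeeks(new Date(), -10).
  const date = new Date();
  date.setUTCDate(date.getUTCDate() - 70);

  // Rough week-of-year: days elapsed since 1 January divided by 7, rounded up.
  const startOfYear = new Date(Date.UTC(date.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((date - startOfYear) / 86400000 + 1) / 7).toString();
  const year = date.getUTCFullYear().toString();
  const recallTarget = Number(year + week);

  const collection = context.services
    .get("mongodb-atlas")
    .db("xNews")
    .collection("products");

  return collection.find({ barcode: barcode, recallWeek: { $gte: recallTarget } });
};
```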
null
[ "node-js", "app-services-cli" ]
[ { "code": "Node.jsnpmrealm-cli login --api-key=\"MyPublicKey\" --private-api-key=\"MyPrivateKey\"\nrealm-cli apps create --name=restricted-feed --template=flex-sync-permissions.restricted-feed\napp create failed: template 'flex-sync-permissions.restricted-feed' not found\nrealm-cli", "text": "Hey there!I’m trying to check out and test the example for “Restricted News Feed”. This reflects nearly the same situation we are currently have in our brand new SwiftUI project.I have installed the latest Node.js and npm version already, running on macOS Ventura. I also successfully logged in via:Unfortunately I’m failing at the very beginning when it comes to:I always get the error:Did I miss something?\nWhere does this template comes from?\nIs it realm-cli which retrieves it from the MongoDB servers?Thanks for any help.\n-Frank", "username": "phranck" }, { "code": "restricted-feedrealm-cli apps create --name=restricted-feed --template=flex-sync-guides.restricted-feed\n", "text": "restricted-feedLooks like the documentation isn’t quite correct - try this command:", "username": "Sudarshan_Muralidhar" }, { "code": "", "text": "Sorry about that! It looks like we updated the template names at some point but missed updating the documentation. I’ve now updated the Flexible Sync Permissions Guide page, so all the names and commands should be correct.", "username": "Dachary_Carey" } ]
Restricted News Example
2022-10-27T10:23:36.477Z
Restricted News Example
2,466
null
[ "node-js", "crud", "mongoose-odm" ]
[ { "code": "const iLevelPlanning = new mongoose.Schema({\n istep: String,\n maturity,\n milestone: [milestone],\n buildingPhase: [buildingPhase]\n});\n\nconst planningOverview = new mongoose.Schema({\n brv: String,\n introduction: String,\n pl: String,\n iLevelPlanning: [iLevelPlanning],\n comment: [comment],\n modified: Date,\n});\n\n attempt 2)\n\nThe first one did not work. But the second one worked.\n\nNow I have to do this from my node application.\n\nI tried something like this here also\n\nattempt 1) \n\nattempt 2) \nOf course, the first one did not work, however, the second code also not updating in the database.\nIt's not finding matched object.\n\nAnd I have also imported mongoose as well \nIf anybody knows the answer, kindly reply", "text": "Hi all,\nI am facing some problems while removing a nexted document.schema looks like thisFor debugging purpose I tried 2 senarios in mongo shellattempt 1)```db.getCollection(‘planningOverview’).updateOne({ _id: “62ecfcd1caae1e0000beb8ee” }, { $pull: { iLevelPlanning: { _id: “63172667f1242f000012b93f” } } })db.getCollection(‘planningOverview’).updateOne({ _id: ObjectId(“62ecfcd1caae1e0000beb8ee”) }, { $pull: { iLevelPlanning: { _id: ObjectId(“63172667f1242f000012b93f”) } } })getPlanningOverviewModel().updateOne({ _id: id}, { $pull: { iLevelPlanning: { _id: iLPId} } }).exec();getPlanningOverviewModel().updateOne({ _id: mongoose.Types.ObjectId(id)}, { $pull: { iLevelPlanning: { _id: mongoose.Types.ObjectId(iLPId) } } }).exec();import mongoose from ‘mongoose’;", "username": "Gana_Kumar" }, { "code": "const filter = { \"_id\": ObjectID(\"636123c47df559ae2d137ff7\") };\nReferenceError: ObjectID is not defined\nimport { MongoClient } from \"mongodb\";\nimport { ObjectID } from \"mongodb\";\n\n// Replace the uri string with your MongoDB deployment's connection string.\nconst uri = \"mongodb://127.0.0.1:27017/\";\n\nconst client = new MongoClient(uri);\n\nasync function run() {\n try {\n const database = client.db(\"sample_mflix\");\n const movies = database.collection(\"movies\");\n // create a filter for a movie to update\n const filter = { \"_id\": ObjectID(\"636123c47df559ae2d137ff7\") };\n\n // this option instructs the method to create a document if no documents match the filter\n const options = { upsert: true };\n\n // create a document that sets the plot of the movie\n const updateDoc = {\n $set: {\n plot: `A harvest of random numbers, such as: ${Math.random()}`\n },\n };\n\n const result = await movies.updateOne(filter, updateDoc, options);\n console.log(\n `${result.matchedCount} document(s) matched the filter, updated ${result.modifiedCount} document(s)`,\n );\n } finally {\n await client.close();\n }\n}\nrun().catch(console.dir);\n", "text": "Hello,I tried to do sample update using ObjectID as filter, I used the sample provided int he documentation here and I just passed the ObjectID in the filter as below:Please note that you will need to import ObjectID for this to work, otherwise you will receive the error below:You can find the full snippet that worked in my environment here, you can adapt it as needed for your use case:I hope you find this helpful.Regards,\nMohamed Elshafey", "username": "Mohamed_Elshafey" } ]
Removing nested object is not working because ObjectId is not rendering correctly
2022-10-30T19:07:38.638Z
Removing nested object is not working because ObjectId is not rendering correctly
1,719
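One way to narrow down where the Node-side $pull in the thread above goes wrong is to inspect the write result: matchedCount shows whether the outer _id matched at all, while modifiedCount shows whether any array element was actually pulled. A hedged sketch reusing the model and field names from the post (the result field names can differ between Mongoose versions):

```js
const { Types } = require('mongoose');

async function pullILevelPlanning(id, iLPId) {
  const filter = { _id: new Types.ObjectId(id) };
  const update = { $pull: { iLevelPlanning: { _id: new Types.ObjectId(iLPId) } } };

  const res = await getPlanningOverviewModel().updateOne(filter, update);

  // matchedCount 0                  -> the outer _id never matched (wrong id string or collection)
  // matchedCount 1, modifiedCount 0 -> document found, but no array element matched the inner _id
  console.log(res.matchedCount, res.modifiedCount);
  return res;
}
```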
null
[ "node-js", "data-modeling", "mongoose-odm" ]
[ { "code": "Schema.UserSchema.Pattern// ./models/user.model.ts\n\nconst userSchema = new mongoose.Schema({\n email: {\n type: String,\n unique: true\n },\n ...\n})\n\nconst UserModel mongoose.model('User', userSchema)\nexport { UserModel }\n// ./models/pattern.model.ts\nconst patternSchema = new mongoose.Schema({\n user: { type: Schema.Types.ObjectId},\n ...\n})\n\nconst PatternModel mongoose.model('Pattern', patternSchema)\nexport { PatternModel }\nUser// controllers/pattern.controller.ts\nconst PatternController - {\n create: async (req, res, next) => {\n try {\n const userId = req.params.userId\n req.body.user = userId\n const newPattern = await PatternModel.create(req.body)\n res\n .status(StatusCodes.CREATED)\n .json({ data: newPattern, message: ReasonPhrases.CREATED })\n } catch (error) {\n next(error)\n }\n },\n}\n message: 'E11000 duplicate key error collection: mycraftypal.patterns index: email_1 dup key: { email: null }',\nemailpattern.model.ts", "text": "I have a Schema.User and a Schema.Pattern which is a one-to-many relationship. I decided to split instead of embed because of the way the data is going to be queried. Also, patterns are going to have handful of projects as a one-to-many as well.At this point I’m just playing around with the concept of one-to-many. I have an express app and a controller that CRUDs the User collect without problem. My problem is when I create the first pattern. I have the following route for creating patterns.I can create one pattern for a user. When I try to create the 2nd pattern for the same user I getThe weird thing about the error is that I don’t have an email in the pattern.model.ts at all.Someone can guide me in the right direction?", "username": "redeemefy" }, { "code": "db.pattern.dropIndex({email : 1})\n", "text": "Hi @redeemefy ,Well its obvious that the model doesn’t have email field but for some reason the collection does have a unique index created on that field name. If you have this index then the database will assume a null value form a non existing email field document and that cannot be duplicated…I suspect at at some moment you might accidentally created it , just runThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "email.unique: truepattern.email", "text": "The index, if any, should be in the user collection since that’s the collection that has the email.unique: true. I don’t have a pattern.email property at all and I never ran any mongo command to create indexes.", "username": "redeemefy" }, { "code": "db.patterns.getIndexes()\n", "text": "Hi @redeemefy ,If you run :It will tell otherwise…", "username": "Pavel_Duchovny" }, { "code": "Schema.UserSchema.Patternconst Person = {\n name: \"Person\",\n properties: {\n name: \"string\",\n birthdate: \"date\",\n dogs: \"Dog[]\"\n }\n};\nconst Dog = {\n name: \"Dog\",\n properties: {\n name: \"string\",\n age: \"int\",\n breed: \"string?\"\n }\n};\n", "text": "I have a Schema.User and a Schema.Pattern which is a one-to-many relationship.So… While that’s what the question states, the code doesn’t exactly match that statement so perhaps that can be clarified a bit?e.g. Here’s Person with a one-to-many relationship with Dog objectsDo you want your a User to have multiple Patterns or a Pattern to have multiple Users? Or maybe something else? Or and I overlooking something?", "username": "Jay" }, { "code": "", "text": "@Jay – I want a user to have many patterns.\n@Pavel_Duchovny – indeed the patterns collections has the index. 
I’m not sure how that got there since I never got the email property in the patterns collection.", "username": "redeemefy" }, { "code": "", "text": "I had to drop the database and run the REST endpoint through postman again and that solved the issue. Now the patterns collection doesn’t have that index. Somehow that index got there.", "username": "redeemefy" }, { "code": "", "text": "Hi @redeemefy ,Its possible that you just accedently named the user element with pattern and the index got created even with one run if code .Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "db.pattern.dropIndex({email : 1})", "text": "db.pattern.dropIndex({email : 1})Probably would be the correct solution. I just decided to drop the entire database just to make sure nothing else wrong is set.", "username": "redeemefy" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Learning about One-To-Many
2022-10-28T20:23:03.547Z
Learning about One-To-Many
4,414
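Since the duplicate-key error in the thread above names the offending index (email_1), a less drastic fix than dropping the whole database is to drop just that index from mongosh, roughly:

```js
// List the indexes on the patterns collection to confirm the stray unique index exists.
db.patterns.getIndexes()
// e.g. [ { key: { _id: 1 }, name: '_id_' },
//        { key: { email: 1 }, name: 'email_1', unique: true } ]   <-- the culprit

// Drop it by the name reported in the E11000 error message.
db.patterns.dropIndex("email_1")
```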
https://www.mongodb.com/…_2_1024x576.jpeg
[ "atlas", "serverless" ]
[ { "code": "", "text": "\nPATNA MUG1920×1080 88.5 KB\n\nPatna MongoDB User Group and HackSlash Club NIT are excited to host a workshop filled with an interactive session, quizzes, and fun at the NIT Patna Campus on 10th October 2022.The workshop will start with an introductory session and demo that will help provide a jumpstart to those new to MongoDB and will conclude with a session on deploying Serverless App on Google Cloud. The session will cover the basics of Serverless and Nextjs and then walk you through deploying your Nextjs-MongoDB App on Google Cloud.The day will also include Trivia Swags and Snack times to help you have fun while you learn!The sessions being planned are focused on intermediate database operations. If you are a beginner or have some experience with MongoDB already, there is something for all of you! Based on your quiz performance and participation in Hands On, you may be nominated to be our next Patna MUG Leader! Event Type: In-Person\nLocation: NIT Patna CampusTo RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.\nKushagra - HeadShot1920×2175 172 KB\nKushagra Kesav,\nSoftware Engineer, MongoDBJoin the Patna Group to stay updated with upcoming meetups and discussions", "username": "Kushagra_Kesav" }, { "code": "", "text": "Also, here’s a list of all the useful things that we will also be talking about:That’s it. See you all at the event! Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Q.1 A customer with email: “[email protected]” has forgot their username, help them find it using sample_analytics.customers\nAns → username: “millerrenee”,\nquery → use sample_analytics\ndb.customers.find({“email”: “[email protected]”})Q.2 In sample_mflix.movies, what is the runtime of the movie title “The Kid Brother”?\nAns → 82\nquery → use sample_mflix\ndb.movies.find({“title”: “The Kid Brother”})Q.3 In sample_airbnb.listingsAndReviews, how many listings let you accommodates 15 folks?\nAns → 3\nquery → use sample_airbnb\ndb.listingsAndReviews.countDocuments({“accommodates”: 15})", "username": "RITWIK_SINGH1" }, { "code": "", "text": "Question 1:)\nA customer with email : “[email protected]” has forgot their username, please find it using sample_analytics.customers.\nAnswer 1:)\nQuery: {email: “[email protected]”}\nusername:“millerrenee”Question 2:)\nIn sample_mflix.movies, what is the runtime of the movie title “The Kid Brother”?\nAnswer 2:)\nQuery: {title: “The Kid Brother”}\nruntime:82Question 3:)\nIn sample_airbnb.listingsAndReviews, how many listing let you accommodates 15 folks?\nAnswer 3:)\nQuery: { accommodates: 15}\n3(Three) Listings accommodates 15 folks.", "username": "MUSKAN_JHA" }, { "code": "", "text": "A customer with email: \"[email protected]\"has forgot their username, help them find it using sample_analytics.customers?\nAns → db.sample_analytics.customers.find({email:“[email protected]”},{username:1})\nthis will project the username → username: “millerrenee”In sample_mflix.movies, what is the runtime of the movie title “The Kid Brother”?\nAns → db.sample_mflix.movies.find({title:“The Kid Brother”},{runtime:1})\n82In sample_airbnb.listingsAndReviews, how many listings let you accommodates 15 folks?\nAns → db.sample_airbnb.listingsAndReviews({accommodates:15}).count()\n3", "username": "Rahul_Kumar8" }, { "code": "", "text": "Q1:) A customer with email: “[email protected]” has forgot their 
username ,using sample_analytics.customers?\nAns:) username: “millerrenee”\nQuery:) db.customers.find({email: “[email protected]”})Q2:) In sample_mflix.movies,what is runtime of the movie title “The Kid Brother”?\nA2:) 82\nQuery:) db.movies.find({titile: “The Kid Brother” })Q3:) In sample_airbnb.listingsAndReviews ,how many listing let you accomodates 15 folks?\nA3:) 3\nQuery:) db.listingsAndReviews.countDocuments({accommodates: 15})", "username": "Gopal_Ji" }, { "code": "", "text": "q1:) A customer with email: “[email protected]” has forgot their username ,using sample_analytics.customers?\nans:) username : “millerrenee”\nquery:) db.customers.find({email: “[email protected]”})q2:) In sample_mflix.movies,what is runtime of the movie title “The Kid Brother”?\nans:) runtime : 82\nquery: db.movies.find({titile: “The Kid Brother” })q3:) In sample_airbnb.listingsAndReviews ,how many listing let you accomodates 15 folks?\nans:) 3\nquery: db.listingsAndReviews.countDocuments({accommodates: 15})", "username": "Shubham_Maurya" }, { "code": "", "text": "Ques1:) A customer with email: “[email protected]” has forgot their username ,using sample_analytics.customers?\nAnswer:) username : “millerrenee”\nQuery:) db.customers.find({email: “[email protected]”})Ques2:) In sample_mflix.movies,what is runtime of the movie title “The Kid Brother”?\nAnswer:) runtime : 82\nQuery: db.movies.find({Title: “The Kid Brother” })Ques3:) In sample_airbnb.listingsAndReviews ,how many listing let you accomodates 15 folks?\nAnswer:) 3\nQuery: db.listingsAndReviews.countDocuments({accommodates: 15})", "username": "Om_pandit489" }, { "code": "", "text": "Ques1:) A customer with email: “[email protected]” has forgot their username ,using sample_analytics.customers?\nSolution:) username : “millerrenee”\nQuery:) db.customers.find({email: “[email protected]”})Ques2:) In sample_mflix.movies,what is runtime of the movie title “The Kid Brother”?\nSolution:) runtime : 82\nQuery: db.movies.find({Title: “The Kid Brother” })Ques3:) In sample_airbnb.listingsAndReviews ,how many listing let you accomodates 15 folks?\nSolution:) 3\nQuery: db.listingsAndReviews.countDocuments({accommodates: 15})", "username": "Vishesh_Verma1" }, { "code": "", "text": "Query 1:\nuse sample_analytics\ndb.customers.find({email:“[email protected]”},{username:1, _id:0})\nOutput: [{ username: “millerrenee”}]Query 2:\nuse sample_mflix\ndb.movies.find({title:“The Kid Brother”},{runtime:1, _id:0})\nOutput:\n[{runtime:82}]Query 3:\nuse sample_airbnb\ndb.listingsAndReviews.countDocuments({accommodates:15})\nOutput:\n3", "username": "Pulkit_Kumar_Agarwal" }, { "code": "", "text": "Ques 1:A customer with email: “[email protected]” has forgot their username, please help them find it using sample_analytics.customers?\nOutput: [{username: “millerrenee”}]\nQuery:\nuse sample_analytics\ndb.customers.find({email:“[email protected]”})Ques 2:In sample_mflix.movies,what is runtime of the movie title “The Kid Brother”?\nOutput:\n[{runtime:82}]\nQuery:\nuse sample_mflix\ndb.movies.find({title:“The Kid Brother”})Ques 3:In sample_airbnb.listingsAndReviews ,how many listing let you accomodates 15 folks?\nOutput:\n3\nQuery:\nuse sample_airbnb\ndb.listingsAndReviews.countDocuments({accommodates:15})", "username": "Ayush_Gautam" }, { "code": "", "text": "Question 1:\nA customer with email : “[email protected]” has forgot their username, please find it using sample_analytics.customers.\nAnswer:\nQuery: use sample_analytics\ndb.customers.find({“email”: “[email protected]”}) → 
username:“millerrenee”Question 2:\nIn sample_mflix.movies, what is the runtime of the movie title “The Kid Brother”?\nAnswer:\nQuery: use sample_mflix\ndb.movies.find({“title”: “The Kid Brother”})–>runtime:82Question 3:\nIn sample_airbnb.listingsAndReviews, how many listing let you accommodates 15 folks?\nAnswer:\nQuery: use sample_airbnb\ndb.listingsAndReviews.countDocuments({“accommodates”: 15})–>3(Three) Listings accommodate 15 folks.", "username": "Srijan_Shovit" }, { "code": "", "text": "Question 1:\nA customer with email : “[email protected]” has forgot their username, please find it using sample_analytics.customers.\nAnswer:\nQuery: use sample_analytics\ndb.customers.find({“email”: “[email protected]”})OUTPUT→ username:“millerrenee”Question 2:\nIn sample_mflix.movies, what is the runtime of the movie title “The Kid Brother”?\nAnswer:\nQuery: use sample_mflix\ndb.movies.find({“title”: “The Kid Brother”})OUTPUT–>runtime:82Question 3:\nIn sample_airbnb.listingsAndReviews, how many listing let you accommodates 15 folks?\nAnswer:\nQuery: use sample_airbnb\ndb.listingsAndReviews.countDocuments({“accommodates”: 15})OUTPUT–>3", "username": "VIKASH_KUMAR9" }, { "code": "", "text": "Question 1:\nuse sample_analytics\ndb.customers.find({email:“[email protected]”},{username:1, _id:0})\nOutput:\n[{ username: “millerrenee”}]Question 2:\nuse sample_mflix\ndb.movies.find({title:“The Kid Brother”},{runtime:1, _id:0})\nOutput:\n[{runtime:82}]Question 3:\nuse sample_airbnb\ndb.listingsAndReviews.countDocuments({accommodates:15})\nOutput:\n3", "username": "Paras_Punjabi" } ]
Patna MUG: MongoDB Workshop, Speaker Session and Fun!
2022-10-04T06:11:11.429Z
Patna MUG: MongoDB Workshop, Speaker Session and Fun!
4,263
null
[ "queries" ]
[ { "code": "{\n \"timestamp\": {\n \"$date\": {\n \"$numberLong\": \"284065200000\"\n }\n },\n \"temp_min\": 5.84,\n \"sunset\": 0,\n \"deg\": 39,\n \"description\": \"overcast clouds\",\n \"temp_max\": 6.58,\n \"icon\": \"04n\",\n \"temp\": 6.21,\n \"_id\": {\n \"$oid\": \"630f86dda4d0f75b111babee\"\n },\n \"visibility\": 0,\n \"gust\": 0,\n \"sea_level\": 0,\n\n \"humidity\": 50,\n \"weather_id\": 804,\n\n\n\n \"feels_like\": 2.94,\n \"sunrise\": 0,\n \"main\": \"Clouds\",\n \n \"grnd_level\": 0,\n \"timezone\": 3600,\n \"id\": 0,\n \"type\": 0,\n \"speed\": 4.81,\n\n \"pressure\": 995\n}\n", "text": "Hi All,Been looking for a solution for a while now but no luck, i have a database with 400k documents and i’m looking to make a query to retrieve result for a specific day of every year.\nSo for example every document with date 27 oct and it should return a document for 2022, 2021, 2020…Any idea on how i should approach this?here’s a sample of one of my documents", "username": "marco_ferrari" }, { "code": "", "text": "Hi @marco_ferrari ,You can use $dateToParts to extract the day and month as “27oct” string and search for it overall documents:Let me know if you run into difficultyThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{ \"$in\" : [ 2017-10-27, 2018-10-27, 2019-10-27, 2020-10-27, 2021-10-27, 2022-10-27 ] }\n", "text": "Let me know if I am wrong.I am worry that a $match with $dateToParts to get day and month from the stored document’s date field would not be able to leverage an index on the date field.It is not really an issue if this is a infrequent use-case. But what if it is.One solution could be to use $in with a list of calculated exact dates. The caveat is that you need to limit the years so that the array of date is limited. Using Oct-27, the query, assuming data starts in 2017, could beI think the chances for an IXSCAN are higher. But I could be wrong.An alternative I see is to update the collection with the $dateToParts result and have an index on month:1,day:1,year:1. I include year:1 in the index so we can sort by year using the index.", "username": "steevej" }, { "code": "", "text": "thanks everyone for your response. i’m still trying your suggestions and not yet sure what is the best solution tbh.i’ll try and share my final solution here for future references", "username": "marco_ferrari" }, { "code": "", "text": "Sorry if I confused you.The best solution at first is the simplest and it is the one provided by @Pavel_Duchovny.Any optimization, like the ones I mentioned, should not be implemented unless you have performance problems. Do not make your code or data more complex before you need to.", "username": "steevej" }, { "code": "", "text": "Thanks for your clarification.", "username": "marco_ferrari" } ]
Query specific date and time for every year
2022-10-27T13:23:59.937Z
Query specific date and time for every year
2,053
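For reference, one way to write the "same day every year" match discussed above is with date expression operators inside $expr. The weather collection name is an assumption, the timestamp field comes from the sample document, and — as noted in the thread — this form cannot use an index on the date field.

```js
// All documents whose timestamp falls on 27 October of any year, oldest first.
db.weather.aggregate([
  {
    $match: {
      $expr: {
        $and: [
          { $eq: [ { $month: "$timestamp" }, 10 ] },
          { $eq: [ { $dayOfMonth: "$timestamp" }, 27 ] }
        ]
      }
    }
  },
  { $sort: { timestamp: 1 } }
])
```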
null
[]
[ { "code": "", "text": "Any sample for a simple login?", "username": "AfterFood_Contact" }, { "code": "", "text": "@AfterFood_Contact : We have a couple of sample apps, feel free to check them out.Password Manager App based on KMM. Contribute to mongodb-developer/PassKeeper development by creating an account on GitHub.Demo Conference Manager App using Flexible Sync. Contribute to mongodb-developer/mongo-conference development by creating an account on GitHub.Conference Talk/Session Queries Management App. Contribute to mongodb-developer/Conference-Queries-App development by creating an account on GitHub.", "username": "Mohit_Sharma" }, { "code": "", "text": "Hello @Mohit_Sharma , did you used customer user data for your UserInfo?\nBecause my db.userInfo is always empty after making a registration:\n\nimage1070×347 90.2 KB\n", "username": "AfterFood_Contact" }, { "code": "exports = async function({user}){\n\n\n /*\n Accessing application's values:\n var x = context.values.get(\"value_name\");\n\n Accessing a mongodb service:\n var collection = context.services.get(\"mongodb-atlas\").db(\"dbname\").collection(\"coll_name\");\n collection.findOne({ owner_id: context.user.id }).then((doc) => {\n // do something with doc\n });\n\n To call other named functions:\n var result = context.functions.execute(\"function_name\", arg1, arg2);\n\n Try running in the console below.\n */\n\n console.log(\"user calling --- \" + JSON.stringify({user}));\n\n //Accessing a mongodb service:\n var collection = context.services.get(\"mongodb-atlas\").db(\"conference-db\").collection(\"UserInfo\");\n var count = await collection.count({\n email: user.data.email\n });\n\n\n console.log('count ---'+ count)\n\n if(count == 0){\n const userDoc = {\n _id: user.id,\n email: user.data.email,\n isAdmin: false,\n name: \"\",\n orgName: \"\",\n phoneNumber: null\n\n }\n\n collection.insertOne(userDoc)\n .then (result => {\n console.log(` document for _id: ${result}`);\n }, error => {\n console.log(`Failed to write ${error}`);\n });\n\n }else{\n console.log('This is a duplicate user');\n // Do nothing duplicate user\n }\n\n return {user}\n};\n", "text": "@AfterFood_Contact : I have added tigger on user registration something like", "username": "Mohit_Sharma" }, { "code": "appservice.emailPasswordAuth.registerUser(email = email, password = password)\n", "text": "Thank you for the reply!One more thing, can I add on the registration call another parameter? Like name, phone, address, etc, currently I can pass change the mail and password:", "username": "AfterFood_Contact" }, { "code": "", "text": "@AfterFood_Contact: With Kotlin SDK currently, the answer is no as support for Custom data is still in development. But there is a workaround like saving the info into a collection with a function and then merging it with the user collection after user confirmation.", "username": "Mohit_Sharma" }, { "code": "val userInfo: LiveData<UserInfo?> = liveData {\n emitSource(repo.getUserProfile().flowOn(Dispatchers.IO).asLiveData(Dispatchers.Main))\n }\n", "text": "Thank you!From the samples that you sent that with KMM, you use LiveData for the android version, it helps getting the object or implementing features, like already logged in feature that uses the LiveData Object.android liveData with flow:What I should use for the Swift?", "username": "AfterFood_Contact" } ]
Login sample with KMM and Realm
2022-10-28T14:20:55.392Z
Login sample with KMM and Realm
2,243
null
[]
[ { "code": "", "text": "Hi, My Collections are not appearing in the dashboard, they were working fine when i first connected them using local host. After a few days i get an email saying mongo db will suspend the connection if not connected to a cluster, The point is, i was conencted to a cluster from a local host. And now, looking at it, all the data is gone. is there a particular reason why this has happened?", "username": "YARABABUGARI_JAVEED" }, { "code": "", "text": "Where is your mongodb hosted?\nHow did you connect before like connect string etc\nWhen you say dashboard are you referring to Atlas?\nCan you show screenshots from when it was working/showing collections", "username": "Ramachandra_Tummala" }, { "code": "M0M0M0", "text": "Hi @YARABABUGARI_JAVEED - Welcome to the community After a few days i get an email saying mongo db will suspend the connection if not connected to a cluster, The point is, i was conencted to a cluster from a local host. And now, looking at it, all the data is goneIn addition to the information requested by Ramachandra, could you provide the content of the email received? Please redact any personal or sensitive information before sending it here.Since you advised it was an email that was sent from MongoDB, I assumed that this particular instance is hosted on Atlas but please correct me if i’m wrong here. It would also be useful to know the cluster tier if the deployment in question is hosted on Atlas. So please provide this information as well.In saying so, the automatic pause of clusters is generally related to M0 (free tier) clusters on Atlas. In that case, I would like to note the following:Atlas automatically stops collecting monitoring information for an M0 cluster after a few days of inactivity.\nIf there is no activity for 60 days, then Atlas automatically pauses the cluster completely, disallowing any connections to it until you resume the cluster. Atlas sends an email seven days before pausing the cluster. Atlas sends another email after pausing the cluster.\nYou can resume or terminate an automatically paused cluster at any time. You can’t initiate a pause for M0 clusters.If you have multiple cluster(s), project(s) or organisation(s) - I would also recommend checking each to confirm the data does not exist elsewhere.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "My collections in MongoDB Atlas also disappear in the same way. Could you help me to recover them? many thanks.\n\nimage1010×683 36.9 KB\n", "username": "A_MO" }, { "code": "", "text": "Here I found they are gone.\n\nimage1464×627 41.5 KB\n", "username": "A_MO" }, { "code": "", "text": "Hi @A_MO - Thank for providing those screenshots.Please contact the Atlas support team via the in-app chat to investigate any operational issues related to your Atlas account such as the one you’ve noted here. You can additionally raise a support case if you have a support subscription. The community forums are for public discussion and we cannot help with service or account enquiries.In saying so, it does appear the cluster is being resumed based off the screenshot although it may be another deployment change. If it is still being resumed, perhaps you can try checking once the resume has fully completed.Best Regards,\nJason Tran", "username": "Jason_Tran" }, { "code": "", "text": "Thank you for your reply. I failed to notice it is working on the resume. 
But usually, how long does the resume take?", "username": "A_MO" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Collections are gone
2022-03-14T08:36:36.921Z
MongoDB Collections are gone
3,654
null
[ "queries", "node-js" ]
[ { "code": "/**\n * Get all envelopes of a month of a year\n */\nyearRouter.get(\"/:year/:month/all\", async (req, res) => {\n try {\n const { year, month } = req.params;\n const projection = {\n months: { $elemMatch: { month: month }},\n };\n const data = await Year.findOne({ year: year }, projection);\n res.json(data);\n } catch (e) {\n res.status(500).json({ message: e.message });\n }\n});\n{\n \"_id\": \"635997a3a7099a37783cae90\",\n \"months\": [\n {\n \"month\": \"January\",\n \"total\": 15000,\n \"envelopes\": [\n \"635cae5428135652527ce99d\",\n \"635d1cab58ec2d6f8995f82e\"\n ],\n \"_id\": \"635997a3a7099a37783cae84\",\n \"remaining\": -2000,\n \"spent\": 2000\n }\n ]\n}\n", "text": "I am trying to return just the array from a deep nested object. I have a top schema year, with 12 embedded months, and each month has an array of reference id’s for envelopes. I can select the specific month using thisand that gets mehowever I can’t seem to just get the envelope array from it. I tried originally using projection but that returned me the envelopes from every month so I am currently stuck on what to do", "username": "Evan_Goldberg" }, { "code": "envelopestest> db.new.aggregate( [ { $unwind: \"$months\"}, { $match: { \"months.month\": \"March\", \"year\":2011}}, { $project: {\"months.envelopes\": 1}}])\n[\n {\n _id: ObjectId(\"635f5cc3b11ae4deae2f0884\"),\n months: {\n envelopes: [ '785cae5428135652527ce99d', '785d1cab58ec2d6f8995f82e' ]\n }\n }\n]\n$elemMatch$elemMatch$project,$unwind", "text": "Hi @Evan_Goldberg and welcome to the MongoDB community forum!!however I can’t seem to just get the envelope array from it.If I understand the problem correctly, you need to display only the envelopes field for the months array. To achieve the following query could be used:However, please note that the above query is based on the example posted above.If the above query does not fulfil what you are looking for, could you shareAlso, please note that the official MongoDB documentation for version 6.0 states:The $elemMatch operator matches documents that contain an array field with at least one element that matches all the specified query criteria.Thus the query using $elemMatch returns the document that satisfies that condition as a whole. It does not return partial document. If you want to only return the matching portion, you’ll need other operations such as $project, or $unwind as in the example above.Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" } ]
Returning deep nested array
2022-10-29T15:02:55.642Z
Returning deep nested array
2,288
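An alternative to the $unwind route in the answer above is to filter the months array in place and project only the envelopes. This sketch assumes the Mongoose models map to a years collection and reuses the field names from the schema in the thread:

```js
db.years.aggregate([
  { $match: { year: "2022" } },
  {
    $project: {
      _id: 0,
      envelopes: {
        // Keep only the requested month, then return just its envelopes array.
        $arrayElemAt: [
          {
            $map: {
              input: {
                $filter: {
                  input: "$months",
                  as: "m",
                  cond: { $eq: ["$$m.month", "January"] }
                }
              },
              as: "m",
              in: "$$m.envelopes"
            }
          },
          0
        ]
      }
    }
  }
])
```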
null
[ "aggregation", "queries" ]
[ { "code": "{ animals : { cats: Array[3], dogs: Array[3] }}\n{ animals : { cats: Array[2], fish: Array[1] }}\n{ animals : { dogs: Array[2] }}\n{ cats { min: XXX, max: XXX, avg: XXX }}\n{ dogs { min: XXX, max: XXX; avg: XXX }}\n{ fish { min: XXX, max: XXX; avg: XXX }} \n\n[{\n $project: {\n foo: {\n $objectToArray: '$animals'\n }\n }\n}, {\n $unwind: {\n path: '$foo',\n preserveNullAndEmptyArrays: true\n }\n}, {\n $group: {\n _id: '$foo.k',\n min: {\n $min: {\n $size: '$foo.v'\n }\n },\n max: {\n $max: {\n $size: '$foo.v'\n }\n },\n avg: {\n $avg: {\n $size: '$foo.v'\n }\n },\n sum: {\n $sum: {\n $size: '$foo.v'\n }\n }\n }\n}]\n{ \"_id\": \"cats\", \"min\": 2, \"max\": 3, \"avg\": 2.5, \"sum\": 5 }\n{ \"_id\": \"dogs\", \"min\": 2, \"max\": 3, \"avg\": 2.5, \"sum\": 5 }\n{ \"_id\": \"fish\", \"min\": 1, \"max\": 1, \"avg\": 1, \"sum\": 1 }\n{ \"_id\": \"cats\", \"min\": 0, \"max\": 3, \"avg\": 1.666, \"sum\": 5 }\n{ \"_id\": \"dogs\", \"min\": 0, \"max\": 3, \"avg\": 1.666, \"sum\": 5 }\n{ \"_id\": \"fish\", \"min\": 0, \"max\": 1, \"avg\": 0.3333, \"sum\": 1 }\n", "text": "I have multiple documents in this form, where the Array is an array of objects whose contents don’t really matterWhat I’d like to do is compute things like the total, min, max and average count of each category and end up with something like thisThis pipeline comes closewhich returnswhich is close but it doesn’t account for missing items. I need to treat those as 0 values and incorporate them into the various calculations. The correct result would be", "username": "John_Dunn" }, { "code": "Array[<int>]$unwind[\n {\n _id: ObjectId(\"6359f0e35e8d9dd1c663bebb\"),\n foo: { k: 'cats', v: [ 1, 2, 3 ] }\n },\n {\n _id: ObjectId(\"6359f0e35e8d9dd1c663bebb\"),\n foo: { k: 'dogs', v: [ 1, 2, 3 ] }\n },\n {\n _id: ObjectId(\"6359f0e35e8d9dd1c663bebc\"),\n foo: { k: 'cats', v: [ 1, 2 ] }\n },\n {\n _id: ObjectId(\"6359f0e35e8d9dd1c663bebc\"),\n foo: { k: 'fish', v: [ 1 ] }\n },\n {\n _id: ObjectId(\"6359f0e35e8d9dd1c663bebd\"),\n foo: { k: 'dogs', v: [ 1, 2 ] }\n }\n]\n", "text": "Hi @John_Dunn - Welcome to the community.Thanks for providing what you’ve attempted, current and expected output.which is close but it doesn’t account for missing items.I just need some clarification here - If you need the calculation to cater for missing items, then the data needs to reflect that fact.I do understand you’ve noted that the values within the Array are not important however providing these may help with reproduction steps as it makes it easier to import to any test environments and also may give insight to how to achieve the expected output.I need to treat those as 0 values and incorporate them into the various calculationsIn the meantime, i’ve replicated the first 2 stages of the array just using integers to fill up the Array[<int>] field of your sample data. Here is the current ouput after the $unwind stage of your pipeline:Could you clarify where the items which should be 0 would be located?Regards,\nJason", "username": "Jason_Tran" }, { "code": "{ animals : { cats: 3, dogs: 3 }}\n{ animals : { cats: 2, fish: 1 }}\n{ animals : { dogs: 2 }}\n{ animals : { cats: 3, dogs: 3, fish: 0 }}\n{ animals : { cats: 2, fish: 1, dogs: 0 }}\n{ animals : { dogs: 2, cats: 0, fish: 0 }}\n", "text": "Thanks for the reply JasonI think my use of arrays was confusing the question and isn’t actually required to answer my question. I can simplify my data and swap the arrays for integer values and my question still stands. 
Take this data for exampleI would like my query to calculate the average number of cats, dogs and fish across all documents. If a given document doesn’t have a value for a specific animal I would like 0 to be used in place. In the above document the average number of cats would be calculated as ( 3 + 2 + 0 ) / 3. Basically I would like the above documents to be expanded to something like this automatically in the query.I also need to be able to calculate the average number of all animal types without having to explicitly specify each type of animal. In my case I have 500+ different ‘types’ of objects so having to specify each one in the query isn’t really feasible.Does that make sense?Thanks-John", "username": "John_Dunn" }, { "code": "avgDB> db.animals.find({},{_id:0})\n[\n { animals: { cats: 3, dogs: 3 } },\n { animals: { cats: 2, fish: 1 } },\n { animals: { dogs: 2 } }\n]\nDB> db.animals.aggregate([\n {\n '$facet': {\n animalSums: [\n { '$addFields': { foo: { '$objectToArray': '$animals' } } },\n { '$unwind': { path: '$foo', preserveNullAndEmptyArrays: true } },\n { '$group': { _id: '$foo.k', sum: { '$sum': '$foo.v' } } }\n ],\n totalCount: [ { '$count': 'totalDocuments' } ]\n }\n },\n { $unwind: \"$totalCount\" },\n { $unwind: \"$animalSums\" },\n {\n '$project': {\n animal: '$animalSums._id',\n sum: '$animalSums.sum',\n average: { '$divide': [ '$animalSums.sum', '$totalCount.totalDocuments' ] }\n }\n }\n])\n[\n { animal: 'cats', sum: 5, average: 1.6666666666666667 },\n { animal: 'fish', sum: 1, average: 0.3333333333333333 },\n { animal: 'dogs', sum: 5, average: 1.6666666666666667 }\n]\n$facet$unwind$addFields$count$divide0", "text": "Thanks for clarifying John.I would like my query to calculate the average number of cats, dogs and fish across all documents. If a given document doesn’t have a value for a specific animal I would like 0 to be used in place.I have a example which outputs a calculated average based off the sum and total count of distinct animal types that exists in the collection as a whole. For brevity, I only used avg in this example.Sample documents:Aggregation:Output:Some aggregation stages / operators related documentation used in the above example for reference:Please note I’ve only performed this on the 3 sample documents provided. Please test thoroughly on a test environment to see if it suits your use case and requirements.In saying the above, although it may be possible to achieve the desired output, it is not performant by any measure. If this is to be part of the standard workload and is often run then it could cause resource issues. On the other hand, if the performance issues can be ignored due to it being used in a once-off scenario, then this would be fine depending on your use case(s).There are a few possible things you may wish to consider for future:Lastly, you should optimize the schema design according to the most common workload (considering details in 1. and 2. above). If document size is a concern, you could utilise zlib/zstd compression on the collection(s).Regards,\nJason", "username": "Jason_Tran" }, { "code": "addFields", "text": "Thanks Jason.Is the ‘expensive’ part of your example the addFields portion? 
It seems like if I populated my documents with all animal types the rest of your example would still apply.ThanksJohn", "username": "John_Dunn" }, { "code": "addFields", "text": "Is the ‘expensive’ part of your example the addFields portion?Essentially the pipeline will not be able to use any indexes which will result in all documents in the collection being processed. Although on the 3 test documents it may not seem relatively poor in terms of performance, it may eventually lead to issues with large / growing collections.I would recommend also viewing the Explain Results documentation to further understand index usage for different aggregation pipelines for your testing.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Compute min/max/avg from all array fields
2022-10-26T15:32:36.706Z
Compute min/max/avg from all array fields
2,208
null
[ "data-modeling", "atlas-data-lake" ]
[ { "code": "", "text": "Hi everyone.\nI need to understand if data lake, can be the right soluction for resolves my problems.\nI have many unknown files on my cloud: usually unstructured text files. I need to import only the fields that has a specific charateristic.For example:[email protected]:31423-54365-6775-road road-Name-Last name\nfuvhdsfuhnvuids fodivbiued83495y834ythwnv svnfvnfdjndbjgd\nand so on…[email protected] interest to me, and then other field in this line of a text file. You can observe that there isn’t a single delimitator.\nBut I don’t want import the entire file into my MongoDB.\nIs it possible, by data lake, to import only the “good line”? So I can have a DB more light and faster.\nThank you.", "username": "Nicola_Ricci" }, { "code": "", "text": "Hey @Nicola_Ricci , unfortunately data lake would not be a good fit for this challenge. We only support the formats specified in the documentation. I know there are many tools that could help, but I think most of them would require that you create a custom parser where you are effectively defining your own format.", "username": "Benjamin_Flast" }, { "code": "mongoimport", "text": "Welcome to the MongoDB Community @Nicola_Ricci !Since you are referring to text files, I assume you are asking about using Atlas Data Federation (which was known as Atlas Data Lake prior to June 2022). Data Federation allows you to query supported data formats (JSON, CSV, Parquet, Avro, …) in cloud object storage (eg AWS S3) using the MongoDB Query Language.Data Federation works with supported file formats directly and does not import that data into MongoDB. If you want to efficiently read a subset of data, it will be better to filter the source files before saving to cloud storage. You could also choose to partition your data to support your common query patterns. However, as @Benjamin_Flast mentioned Data Federation would not be a good fit for your data format which appears to require a custom parser.If your goal is to import your data into a MongoDB deployment, you probably want to be looking at using mongoimport or a custom import script. A custom script would be more appropriate if you are using unsupported file formats or want to filter data during ingestion.Regards,\nStennie", "username": "Stennie_X" } ]
Is it possible with Data Lake?
2022-10-26T07:36:43.712Z
Is it possible with Data Lake?
2,004
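As a rough illustration of the custom import script mentioned above, the Node.js sketch below keeps only the lines that contain an email-like token and inserts those, so the full files never land in the database. The connection string, database and collection names, and the regular expression are all assumptions.

```js
const fs = require("fs");
const readline = require("readline");
const { MongoClient } = require("mongodb");

async function importGoodLines(path) {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const coll = client.db("imports").collection("goodLines");

  const rl = readline.createInterface({ input: fs.createReadStream(path) });
  const emailRe = /[^\s@]+@[^\s@]+\.[^\s@]+/;

  for await (const line of rl) {
    const match = line.match(emailRe);
    if (match) {
      // Store only the interesting fields, not the whole file.
      await coll.insertOne({ email: match[0], line });
    }
  }

  await client.close();
}

importGoodLines("./unstructured.txt").catch(console.error);
```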
null
[ "node-js", "server" ]
[ { "code": "", "text": "I encounter this error message when I try to establish a connection, and my operating system is Windows. I have already tried reinstalling multiple times, but I can’t resolve it", "username": "Fabio_D_amato" }, { "code": "mongod.cfg", "text": "", "username": "Jack_Woehr" } ]
Connect ECONNREFUSED 127.0.0.1:27017
2022-10-29T22:02:20.953Z
Connect ECONNREFUSED 127.0.0.1:27017
10,519
https://www.mongodb.com/…d0a296316a30.png
[ "mongodb-shell", "containers", "kubernetes-operator" ]
[ { "code": "", "text": "Hi,I try to set up a mongodb database locally but would like to access it with host=ip address instead of host=localhost.After setting up my users (admin, root + other users) through mongosh on 127.0.0.1:27017 (localhost), everything works fine when testing. It recognized the user i just created.However, when trying to login to mongo via host=ip address, mongosh still logs in fine but cannot recognize the admin user.I’m very confused about this behaviour, and this is preventing me login with a client with ip address (which i need because i want to use a docker container to then access my local mongo database).I would appreciate some help here \nThanks!PS: See attached screenshots of the 2 different behaviours.\nhost_ip_authentication_failed747×274 13 KB\n\n\nlocalhost_success1084×576 20.3 KB\n", "username": "Loan_Dewaele" }, { "code": "ps -aef | grep [m]ongo\nss -tlnp \n", "text": "It look like you are connecting to 2 different instances of mongod.The one in 127.0.0.1 generates introduction messages that are different from the ones generated by the one at 192.168.1.119. In particular the one at 127.0.0.1 warns you that Access control is not enabled.We need the output of the following commands. We need the output from both terminals, the one you use to connect to 127… and the one you use to connect to 192…", "username": "steevej" }, { "code": "", "text": "Hi Steeve,Thanks for the reply!First i need to precise i run mongo on a windows machine. My ip addres 192.168.1.119 is the one i fixed on my network (i tried at least - wifi network).\nI ran the commands i posted earlier on the same terminal back-to-back, while mongod was running in an other terminal.The fact that the introductory message when login on localhost(127.0.0.1) mentionned that “Access control is not enabled” is indeed strange. In my mongod.conf, i have security authorization = enabled.The other weird thing is that on localhost, when i request the list of db users, i can see my admin user in the admin database. 
However, when login on 192…, when i request the list of users and roles, i see no users (see attachments).As i’m running my terminal on windows, the ss command is not working as it is a linux command, though i’m attaching the equivalent with netstat.I am attaching a couple of screenshots that i hope can help:Appreciate the help,\nThanksmongo_conf.png\nmongosh_admin_user_localhost.png\nmongosh_no_user_recognized_on_ip_192.png\n\nmongosh_no_user_recognized_on_ip_192730×297 10.1 KB\nps_grep_mongo.png\nrouting_table_netstat_r.png\n\nrounting_table_netstat_r559×604 9.98 KB\nnet_stat_o.txt\nnetstat_o.txt (11.0 KB)", "username": "Loan_Dewaele" }, { "code": "", "text": "Your netstat output seems to confirms thatyou are connecting to 2 different instances of mongod.The 127.0.0.1 seems to be on k8s while the 192… is not.", "username": "steevej" }, { "code": "", "text": "Just filtering by port 27017, this is what i see from my 2 ip addresses.\nAny idea how to solve this issue?Proto Local Address Foreign Address State PID\nTCP 127.0.0.1:27017 kubernetes:50497 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:50498 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:50499 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:50500 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:50501 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:50502 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:50503 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:55095 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:55096 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:55097 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:55128 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:55129 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:55130 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:63075 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:63076 ESTABLISHED 5868\nTCP 127.0.0.1:27017 kubernetes:63077 ESTABLISHED 5868\nTCP 192.168.1.119:62175 host:27017 ESTABLISHED 23136\nTCP 192.168.1.119:62176 host:27017 ESTABLISHED 23136\nTCP 192.168.1.119:62177 host:27017 ESTABLISHED 23136\nTCP 192.168.1.119:62178 host:27017 ESTABLISHED 23136", "username": "Loan_Dewaele" }, { "code": "", "text": "In our case we are interested with the LISTENING ports rather than ESTABLISHED.The first step would be to stop the windows instances that listen on 192…119:27017.Then it is possible that traffic is automatically re-directed to the k8s instances once the first step is done.If not, then you have to do port forwarding for 192…119:27017 the same port forwarding that you have done for 127…1:27017.But since your goal isI try to set up a mongodb database locallyyou should simply get rid of the k8s instance and just use the Windows version.", "username": "steevej" } ]
Authentication failed when using host=ip, but succeeds with localhost
2022-10-30T21:10:16.709Z
Authentication failed when using host=ip, but succeeds with localhost
3,279
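A few mongosh commands that help confirm the "two different instances" diagnosis above by showing exactly which server a session reached and how that server was started:

```js
// The address pair for the current connection (client -> server).
db.adminCommand({ whatsmyuri: 1 })

// Command-line / config-file options of the instance you are connected to,
// including whether authorization is enabled.
db.serverCmdLineOpts()

// Users defined on *this* instance.
db.getSiblingDB("admin").getUsers()
```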
null
[ "python" ]
[ { "code": "", "text": "am using Mongodb Database to store data, and while I query data from the DB, it gives the output in ‘pymongo cursor’ type, which I need to convert to Dataframe. However, I have tried 2 methods, but they are taking 2-4 seconds to do it(even for as less as 1 record). Is there a way of faster conversion to dataframe?", "username": "Likith_sai" }, { "code": "explain('executionStats')DataFrameTablendarray", "text": "Welcome to the MongoDB Community @Likith_sai !What versions of Python and PyMongo are you using?I have tried 2 methods, but they are taking 2-4 seconds to do it(even for as less as 1 record)If it takes this long to retrieve a single document, I would suspect one or more of the following:you are retrieving a document from a large collection using a query that isn’t properly supported by an indexyour document is large or highly nested (perhaps spending significant time to convert into a data frame)your measure of time includes application overhead that is unrelated to the database query processing timePlease share some more details about your environment:how are you measuring 2-4 seconds time and what does that include (query time, time to fetch results over the network, time for your function to execute, etc) ?explain('executionStats') output for the query you are running to fetch resultssnippet of Python code showing how you are fetching and processing the resultsaverage size (in bytes) and complexity (number of fields, levels of nesting) of the document you are fetchingtype of MongoDB deployment are you connecting to (local or remote relative to your Python code; standalone, replica set, or sharded cluster)?Is there a way of faster conversion to dataframe?The MongoDB Python driver team maintains a PyMongoArrow extension for PyMongo which returns query result sets as Pandas’ DataFrame, Apache Arrow Table, and NumPy ndarray types.This is definitely a recommended approach if your goal is to produce a result set in any of the supported data formats, but if you currently have performance issues fetching a single document there is probably tuning required elsewhere to improve your outcomes.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hey @Likith_sai, the direction that @Stennie_X provided about PyMongoArrow is probably the way to go. But would you be able to share more information about what you’re doing with the data and what you’re looking to accomplish?Feel free to throw some time on my calendar if it would be easier: Calendly - Benjamin Flast", "username": "Benjamin_Flast" } ]
Pymongo Cursor to Dataframe taking long time
2022-10-21T06:01:11.698Z
Pymongo Cursor to Dataframe taking long time
3,101
null
[ "aggregation", "queries" ]
[ { "code": "db.feedposts.aggregate([\n {\n $lookup: {\n from: 'friends',\n pipeline: [\n {\n $match: {\n $and: [\n {\n $or: [\n { from: { $eq: ObjectId('634d06ae9787d60afbddff96') } },\n { to: { $eq: ObjectId('634d06ae9787d60afbddff96') } },\n ],\n },\n { reaction: 3 },\n ],\n },\n },\n { $group: { _id: null, from: { $addToSet: '$from' }, to: { $addToSet: '$to' } } },\n {\n $project: {\n _id: 0,\n ids: { $setUnion: ['$from', '$to'] },\n },\n },\n ],\n as: 'friendsDetails',\n },\n },\n { $unwind: '$friendsDetails' },\n {\n $match: {\n userIds: {\n $in: '$$friendDetails'\n }\n }\n }\n ])\n", "text": "Here I am giving sample query of my question. let me know what is the way to achieve this type of thingCurrently this query gives error but here I am giving idea what I need to achieve", "username": "ronakd" }, { "code": "{\n $match: {\n $expr: {\n \n {\n $in: [\n '$userIds',\n {$ifNull : ['$friendDetails.ids',[]]}\n ]\n }\n }}\n}\n", "text": "Hi @ronakd ,.I think it should work with $expr in the match:I haven’t tested this yet but it should allow you to use a field as a value in an expression…Let me know if that works.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks @Pavel_Duchovny . it works for me", "username": "ronakd" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I am using aggregation with a lookup; after that I need to use the lookup value in a match query. Is that possible?
2022-10-27T08:46:39.041Z
I am using aggregation with a lookup; after that I need to use the lookup value in a match query. Is that possible?
1,228
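For readers following along: the final $match suggested above has a stray brace, and it refers to friendDetails although the $lookup stage names the field friendsDetails. A cleaned-up version of that stage (the rest of the original pipeline stays as-is) would look roughly like:

```js
{
  $match: {
    $expr: {
      $in: [
        "$userIds",
        { $ifNull: ["$friendsDetails.ids", []] }
      ]
    }
  }
}
```

If userIds is itself an array rather than a single id, an overlap test (for example $setIntersection combined with $size) would be needed instead of $in.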
null
[ "dot-net", "replication" ]
[ { "code": " db.adminCommand(\n {\n getCmdLineOpts: 1\n }\n)\n1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken) at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation1 operation, ReadPreference readPreference, CancellationToken cancellationToken) at MongoDB.Driver.MongoDatabaseImpl.UsingImplicitSessionAsync[TResult](Func", "text": "While trying to connect to mongodb secondary node non replicaset mode to rotate logs and trying to run this command from .net to get the configured log pathConnectionstring: \"mongodb://superuser:password@localhost:27017/?authSource=admin\".net mongdb driver: 2.14.1Getting connection error.Error occurred while running StartMongoLogCleanup A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode : Primary } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : “1”, Type : “ReplicaSet”, State : “Connected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/localhost:27017” }”, EndPoint: “Unspecified/localhost:27017”, ReasonChanged: “Heartbeat”, State: “Connected”, ServerVersion: 4.4.0, TopologyVersion: { “processId” : ObjectId(“635faf2c862466cca6786229”), “counter” : NumberLong(3) }, Type: “ReplicaSetSecondary”, WireVersionRange: “[0, 9]”, LastHeartbeatTimestamp: “2022-10-31T14:04:20.4815165Z”, LastUpdateTimestamp: “2022-10-31T14:04:20.4815167Z” }] }. at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description) at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask) at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedAsync(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken) at MongoDB.Driver.Core.Clusters.Cluster.SelectServerAsync(IServerSelector selector, CancellationToken cancellationToken) at MongoDB.Driver.Core.Clusters.IClusterExtensions.SelectServerAndPinIfNeededAsync(ICluster cluster, ICoreSessionHandle session, IServerSelector selector, CancellationToken cancellationToken) at MongoDB.Driver.Core.Bindings.ReadPreferenceBinding.GetReadChannelSourceAsync(CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.RetryableReadContext.InitializeAsync(CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.RetryableReadContext.CreateAsync(IReadBinding binding, Boolean retryRequested, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.ReadCommandOperation 1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken) at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation 1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoDatabaseImpl.ExecuteReadOperationAsync[T](IClientSessionHandle session, IReadOperation 1 operation, ReadPreference readPreference, CancellationToken cancellationToken) at MongoDB.Driver.MongoDatabaseImpl.UsingImplicitSessionAsync[TResult](Func 2 funcAsync, CancellationToken cancellationToken)Using the same connectionstring from mongo client works.And need to mention replicaset in connectionstring to be able to connect from .net driver. 
“mongodb://superuser:password@localhost:27017/?replicaSet=rsWordWatch&authSource=admin&readPreference=secondary” ( this format works but I get the results from primary instead of secondary)", "username": "Sameer_Kattel" }, { "code": "", "text": "use directConnection=true in connectionstring", "username": "Sameer_Kattel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB .NET driver connect to MongoDB secondary not working
2022-10-31T14:18:54.714Z
MongoDB .NET driver connect to MongoDB secondary not working
1,265
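Putting the author's own fix back into the original connection string, the working form for talking to a single secondary directly would look something like the line below; depending on the driver version, reads against a secondary may also need readPreference=secondaryPreferred.

```
mongodb://superuser:password@localhost:27017/?authSource=admin&directConnection=true
```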
null
[ "aggregation", "compass", "atlas-search" ]
[ { "code": "\"_id\": 635be28f9f4d67744732c92c\n\"name\": \"Kungfu Panda\"\n\"keywords:\" [\n \"martial\",\n \"art\"\n \"kid\",\n \"animal\"\n ]\n},\n\"_id\": 635be28f9f4d67744732c92c\n\"name\": \"Spiderman\"\n\"keywords:\" [\n \"insect\",\n \"super\"\n \"hero\",\n \"superhero\",\n \"kid\"\n ]\n},\n{\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"mappings\": {\n \"fields\": {\n \"keywords\": {\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"type\": \"string\"\n }\n }\n }\n}\n$search {\n index: 'filmKeywords',\n text: {\n query: 'kids',\n path: 'keywords.*'\n }\n}\n$search", "text": "Hello,As part of a proof-of-concept to demonstrate searches & indexes, I’ve created a collection of films and denoted keywords for each. I’ve then created a search index using entries in the keywords array using lucene.english to take advantage of stemming. The goal is to provide a service whereby a user can enter these keywords and be presented with matching films.Collection - named \"films\"Index Definition - named \"filmKeywords\"In Compass I’m trying to create an Aggregation Pipeline to demonstrate how the query terms would return different documents - based on reading the MongoDB manuals I have created a single $search stage so far like this…however I’m not having any documents returned.My questions are", "username": "Craig_Watson" }, { "code": "{\n text: {\n query: 'kid',\n path: 'keywords'\n }\n}\nkids{\n text: {\n query: 'kids',\n path: 'keywords',\n fuzzy: {}\n }\n}\n", "text": "Hi there,I was able to return results using this query:However, if you’d like to turn on fuzzy matching so results still return for kids, you can add the fuzzy parameter like so:", "username": "Elle_Shwer" }, { "code": "{\n index: 'filmKeywords',\n text: {\n query: 'kids',\n path: 'keywords'\n }\n}\n", "text": "Thank you @Elle_Shwer for the quick reply.I revised my query in the $search stage toand now it works, as the stem on “kids” matches “kid” - I was certain that didn’t work first time! I have another question on the scores and ordering, which I’ll start a new topic for.", "username": "Craig_Watson" } ]
Search index on array & Aggregation Pipeline
2022-10-31T13:10:58.778Z
Search index on array &amp; Aggregation Pipeline
1,750
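A consolidated sketch of the pipeline the replies converge on, assuming the collection and index names from the thread; the fuzzy options and the projection are illustrative additions:

    db.films.aggregate([
      {
        $search: {
          index: "filmKeywords",
          text: {
            query: "kids",
            path: "keywords",
            // fuzzy matching covers small spelling variations on top of lucene.english stemming
            fuzzy: { maxEdits: 1 }
          }
        }
      },
      // expose the relevance score so the ordering of matches can be inspected
      { $project: { name: 1, keywords: 1, score: { $meta: "searchScore" } } }
    ])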
null
[ "database-tools", "containers", "backup" ]
[ { "code": "COLLECTION_NAME", "text": "Hello!MongoDB 3.5 on docker container.I’ve been experiencing an issue related to mongodump. I tried to create a dump from docker container this way :docker exec $i mongodump --archive --gzip > $backupmongo/dump_mongo.gz but it constantly fails while dumping specific collection :Failed: archive writer: error writing data for collection COLLECTION_NAME to disk: error reading collection: Unknown element kind (0x65) / Mux ending but selectCases still open 4I suspected that it may be related to OOM but I couldn’t find anything which indicates a such reason.\nHas anyone encountered such a problem?", "username": "Azdev" }, { "code": "", "text": "Check this thread related to mongodump issues on Docker.It may help", "username": "Ramachandra_Tummala" } ]
Mongodump issue
2022-10-31T09:22:05.934Z
Mongodump issue
1,575
null
[ "queries", "indexes" ]
[ { "code": "", "text": "Hi all,\nI have created indexes in the format {a: 1, b: 1, c: 1, d:1, e:1} on collection which has 11.1 million documents.\nAnd the base query which I am applying is like\n{a: true, b: ‘text’, c: {\"$in\": [long array]}, d: {\"$in\": [‘A’, 'B]}}.sort(e: 1)So the query is using this index which is mentioned above, but the query is still taking 13 sec, and its returning 72 K records.\nIf I reduce the values for the ‘c’ attribute to a smaller array eg: [1,2] then the results are returned quicker.\nBut as soon as the array size is grown the time taken s increasing.\nSo what should be done in this case?\nWhat I observed is if there is no sorting then the query is running faster but if I apply sorting the time is increasing and ‘Sorted in Memory’ is marked as yes. What type of index I should create so that there is no in-memory sort?\nThanks for the help in Advance!", "username": "Viraj_Chheda" }, { "code": "{ a: 1, b: 1, d:1, e: 1, c: 1}", "text": "Hi @Viraj_Chheda ,You found exactly the reason why the query is slow and it is because of the fileds order which do not comply with the Equality Sort Range order.A $in with a large array of inputs is actually a range query as it build different bounds to the plan.The actual good order for the query is { a: 1, b: 1, d:1, e: 1, c: 1}Having very large $in is also unadvisable and perhaps you can break the query into smaller batches.Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "hi @Pavel_Duchovny , Thanks for the suggestion of ESR order for indexing, that has worked for me.\nFor all the columns on which range i.e $in was getting applied, I have moved it at the end in indexing order.", "username": "Viraj_Chheda" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query using index still taking time when using $in attribute and sorting
2022-10-28T10:39:02.707Z
Query using index still taking time when using $in attribute and sorting
1,740
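A sketch of the reordered index the ESR advice leads to, keeping the thread's a-e placeholder field names (longArray stands in for the large $in list):

    // Equality fields first (a, b, d), then the Sort field (e), then the Range field (c)
    db.collection.createIndex({ a: 1, b: 1, d: 1, e: 1, c: 1 })

    // the query itself is unchanged; only the index key order moves the large $in field to the end
    db.collection.find(
      { a: true, b: "text", d: { $in: ["A", "B"] }, c: { $in: longArray } }
    ).sort({ e: 1 })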
https://www.mongodb.com/…0f1f3ce9dbb6.png
[ "dot-net" ]
[ { "code": "", "text": "This is my Controller. It works fine and gives me a product with a list of images.\n\nscreen1938×654 54.4 KB\nBut when i call the api the ImageDescription and ImageUrl are lost.\n\nscreen2919×374 8.23 KB\nAny suggestions would be greatly appreciated.", "username": "Andre_Wiebusch" }, { "code": "", "text": "Spend 3 days on debugging because i forgot {get; set;} in my Model ", "username": "Andre_Wiebusch" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lost properties on api call
2022-10-28T17:25:49.295Z
Lost properties on api call
1,211
null
[ "aggregation", "replication", "java", "atlas-cluster" ]
[ { "code": "18:45:05.239 [main] INFO org.mongodb.driver.client - MongoClient with metadata {\"driver\": {\"name\": \"mongo-java-driver|sync\", \"version\": \"4.6.1\"}, \"os\": {\"type\": \"Windows\", \"name\": \"Windows 10\", \"architecture\": \"amd64\", \"version\": \"10.0\"}, \"platform\": \"Java/Oracle Corporation/1.8.0_271-b09\"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=majority, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=MONGODB-X509, userName='null', source='$external', password=<hidden>, mechanismProperties=<hidden>}, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@7791a895]}, clusterSettings={hosts=[127.0.0.1:27017], srvHost=SANDBOX_LOCATION, srvServiceName=mongodb, mode=MULTIPLE, requiredClusterType=REPLICA_SET, requiredReplicaSetName='atlas-14aia3-shard-0', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=true, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=ServerApi{version=V1, deprecationErrors=null, strict=null}, autoEncryptionSettings=null, contextProvider=null}\n18:45:05.250 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-srv-SANDBOX_LOCATION] INFO org.mongodb.driver.cluster - Adding discovered server sandbox-shard-00-01.zvy8h.mongodb.net:27017 to client view of cluster\n18:45:05.264 [main] INFO org.mongodb.driver.cluster - Cluster description not yet available. 
Waiting for 30000 ms before timing out\n18:45:05.283 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-srv-SANDBOX_LOCATION] INFO org.mongodb.driver.cluster - Adding discovered server sandbox-shard-00-02.zvy8h.mongodb.net:27017 to client view of cluster\n18:45:05.284 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-srv-SANDBOX_LOCATION] INFO org.mongodb.driver.cluster - Adding discovered server SANDBOX_LOCATION:27017 to client view of cluster\n18:45:05.287 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-srv-SANDBOX_LOCATION] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sandbox-shard-00-01.zvy8h.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, {address=SANDBOX_LOCATION:27017, type=UNKNOWN, state=CONNECTING}, {address=sandbox-shard-00-02.zvy8h.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}]\n18:45:05.302 [main] INFO org.mongodb.driver.cluster - No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@704a52ec from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=sandbox-shard-00-01.zvy8h.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=SANDBOX_LOCATION:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=sandbox-shard-00-02.zvy8h.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out\n18:45:07.391 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:109113}] to SANDBOX_LOCATION:27017\n18:45:07.391 [cluster-rtt-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:106109}] to SANDBOX_LOCATION:27017\n18:45:07.392 [cluster-rtt-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:4, serverValue:109113}] to SANDBOX_LOCATION:27017\n18:45:07.393 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:110945}] to SANDBOX_LOCATION:27017\n18:45:07.393 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:3, serverValue:106109}] to SANDBOX_LOCATION:27017\n18:45:07.392 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=SANDBOX_LOCATION:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1070126600, setName='atlas-14aia3-shard-0', canonicalAddress=SANDBOX_LOCATION:27017, hosts=[SANDBOX_LOCATION:27017, SANDBOX_LOCATION:27017, SANDBOX_LOCATION:27017], passives=[], arbiters=[], primary='SANDBOX_LOCATION:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, 
electionId=7fffffff000000000000020b, setVersion=10, topologyVersion=TopologyVersion{processId=6357f67f41bb24fc0e204756, counter=6}, lastWriteDate=Wed Oct 26 18:47:55 IST 2022, lastUpdateTimeNanos=31725277708200}\n18:45:07.393 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=SANDBOX_LOCATION:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1069773000, setName='atlas-14aia3-shard-0', canonicalAddress=SANDBOX_LOCATION:27017, hosts=[SANDBOX_LOCATION:27017, SANDBOX_LOCATION:27017, SANDBOX_LOCATION:27017], passives=[], arbiters=[], primary='SANDBOX_LOCATION:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=10, topologyVersion=TopologyVersion{processId=6357efd39128363c5c6845f4, counter=4}, lastWriteDate=Wed Oct 26 18:47:55 IST 2022, lastUpdateTimeNanos=31725277361900}\n18:45:07.393 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=SANDBOX_LOCATION:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1068864000, setName='atlas-14aia3-shard-0', canonicalAddress=SANDBOX_LOCATION:27017, hosts=[SANDBOX_LOCATION:27017, SANDBOX_LOCATION:27017, SANDBOX_LOCATION:27017], passives=[], arbiters=[], primary='SANDBOX_LOCATION:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=10, topologyVersion=TopologyVersion{processId=6357f880fe3872997c6064b0, counter=3}, lastWriteDate=Wed Oct 26 18:47:55 IST 2022, lastUpdateTimeNanos=31725276468500}\n18:45:07.397 [cluster-rtt-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:6, serverValue:110945}] to SANDBOX_LOCATION:27017\n18:45:07.399 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.connection - Marking the connection pool for ServerId{clusterId=ClusterId{value='635932d941f8df4d23f8ee64', description='null'}, address=SANDBOX_LOCATION:27017} as 'ready'\n18:45:07.401 [MaintenanceTimer-3-thread-1] DEBUG org.mongodb.driver.connection - Pruning pooled connections to SANDBOX_LOCATION:27017\n18:45:07.402 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.cluster - Setting max election id to 7fffffff000000000000020b from replica set primary SANDBOX_LOCATION:27017\n18:45:07.403 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO org.mongodb.driver.cluster - Setting max set version to 10 from replica set primary SANDBOX_LOCATION:27017\n18:45:07.403 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] INFO 
org.mongodb.driver.cluster - Discovered replica set primary SANDBOX_LOCATION:27017\n18:45:07.404 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=SANDBOX_LOCATION:27017, type=UNKNOWN, state=CONNECTING}, {address=SANDBOX_LOCATION:27017, type=UNKNOWN, state=CONNECTING}, {address=SANDBOX_LOCATION:27017, type=REPLICA_SET_PRIMARY, TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, roundTripTime=1070.1 ms, state=CONNECTED}]\n18:45:07.404 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.cluster - Checking status of SANDBOX_LOCATION:27017\n18:45:07.405 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.connection - Marking the connection pool for ServerId{clusterId=ClusterId{value='635932d941f8df4d23f8ee64', description='null'}, address=SANDBOX_LOCATION:27017} as 'ready'\n18:45:07.405 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=SANDBOX_LOCATION:27017, type=UNKNOWN, state=CONNECTING}, {address=SANDBOX_LOCATION:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, roundTripTime=1069.8 ms, state=CONNECTED}, {address=SANDBOX_LOCATION:27017, type=REPLICA_SET_PRIMARY, TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, roundTripTime=1070.1 ms, state=CONNECTED}]\n18:45:07.406 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.cluster - Checking status of SANDBOX_LOCATION:27017\n18:45:07.406 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.connection - Marking the connection pool for ServerId{clusterId=ClusterId{value='635932d941f8df4d23f8ee64', description='null'}, address=SANDBOX_LOCATION:27017} as 'ready'\n18:45:07.407 [cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=SANDBOX_LOCATION:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, roundTripTime=1068.9 ms, state=CONNECTED}, {address=SANDBOX_LOCATION:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, roundTripTime=1069.8 ms, state=CONNECTED}, {address=SANDBOX_LOCATION:27017, type=REPLICA_SET_PRIMARY, TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, roundTripTime=1070.1 ms, state=CONNECTED}]\n18:45:07.407 
[cluster-ClusterId{value='635932d941f8df4d23f8ee64', description='null'}-SANDBOX_LOCATION:27017] DEBUG org.mongodb.driver.cluster - Checking status of SANDBOX_LOCATION:27017\n18:45:07.409 [MaintenanceTimer-2-thread-1] DEBUG org.mongodb.driver.connection - Pruning pooled connections to SANDBOX_LOCATION:27017\n18:45:07.409 [MaintenanceTimer-4-thread-1] DEBUG org.mongodb.driver.connection - Pruning pooled connections to SANDBOX_LOCATION:27017\n18:45:08.902 [main] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:7, serverValue:109450}\n18:45:08.911 [main] DEBUG org.mongodb.driver.connection - Closed connection [connectionId{localValue:7, serverValue:109450}] to SANDBOX_LOCATION:27017 because there was a socket exception raised by this connection.\n18:45:08.918 [main] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=SANDBOX_LOCATION:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, roundTripTime=1068.9 ms, state=CONNECTED}, {address=SANDBOX_LOCATION:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, roundTripTime=1069.8 ms, state=CONNECTED}, {address=SANDBOX_LOCATION:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSecurityException: Exception authenticating}, caused by {com.mongodb.MongoCommandException: Command failed with error 8000 (AtlasError): 'certificate validation failed' on server SANDBOX_LOCATION:27017. The full response is {\"ok\": 0, \"errmsg\": \"certificate validation failed\", \"code\": 8000, \"codeName\": \"AtlasError\"}}}]\n18:45:08.924 [main] DEBUG org.mongodb.driver.connection - Invalidating the connection pool for ServerId{clusterId=ClusterId{value='635932d941f8df4d23f8ee64', description='null'}, address=SANDBOX_LOCATION:27017} and marking it as 'paused' due to com.mongodb.MongoSecurityException: Exception authenticating\n18:45:08.925 [main] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:2, serverValue:109113}\n18:45:08.925 [MaintenanceTimer-3-thread-1] DEBUG org.mongodb.driver.connection - Pruning pooled connections to SANDBOX_LOCATION:27017\n18:45:08.931 [main] DEBUG org.mongodb.driver.operation - Unable to retry an operation due to the error \"com.mongodb.MongoSecurityException: Exception authenticating\"\nException in thread \"main\" com.mongodb.MongoSecurityException: Exception authenticating\n\tat com.mongodb.internal.connection.X509Authenticator.authenticate(X509Authenticator.java:57)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.authenticate(InternalStreamConnectionInitializer.java:207)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.finishHandshake(InternalStreamConnectionInitializer.java:81)\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:185)\n\tat com.mongodb.internal.connection.UsageTrackingInternalConnection.open(UsageTrackingInternalConnection.java:54)\n\tat com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.open(DefaultConnectionPool.java:535)\n\tat com.mongodb.internal.connection.DefaultConnectionPool$OpenConcurrencyLimiter.openWithConcurrencyLimit(DefaultConnectionPool.java:911)\n\tat 
com.mongodb.internal.connection.DefaultConnectionPool$OpenConcurrencyLimiter.openOrGetAvailable(DefaultConnectionPool.java:852)\n\tat com.mongodb.internal.connection.DefaultConnectionPool.get(DefaultConnectionPool.java:178)\n\tat com.mongodb.internal.connection.DefaultConnectionPool.get(DefaultConnectionPool.java:167)\n\tat com.mongodb.internal.connection.DefaultServer.getConnection(DefaultServer.java:103)\n\tat com.mongodb.internal.binding.ClusterBinding$ClusterBindingConnectionSource.getConnection(ClusterBinding.java:175)\n\tat com.mongodb.client.internal.ClientSessionBinding$SessionBindingConnectionSource.getConnection(ClientSessionBinding.java:192)\n\tat com.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:592)\n\tat com.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$3(OperationHelper.java:574)\n\tat com.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:600)\n\tat com.mongodb.internal.operation.OperationHelper.withSourceAndConnection(OperationHelper.java:573)\n\tat com.mongodb.internal.operation.CommandOperationHelper.lambda$executeRetryableRead$5(CommandOperationHelper.java:211)\n\tat com.mongodb.internal.async.function.RetryingSyncSupplier.get(RetryingSyncSupplier.java:65)\n\tat com.mongodb.internal.operation.CommandOperationHelper.executeRetryableRead(CommandOperationHelper.java:217)\n\tat com.mongodb.internal.operation.CommandOperationHelper.executeRetryableRead(CommandOperationHelper.java:197)\n\tat com.mongodb.internal.operation.AggregateOperationImpl.execute(AggregateOperationImpl.java:195)\n\tat com.mongodb.internal.operation.AggregateOperation.execute(AggregateOperation.java:306)\n\tat com.mongodb.internal.operation.CountDocumentsOperation.execute(CountDocumentsOperation.java:131)\n\tat com.mongodb.internal.operation.CountDocumentsOperation.execute(CountDocumentsOperation.java:38)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:191)\n\tat com.mongodb.client.internal.MongoCollectionImpl.executeCount(MongoCollectionImpl.java:219)\n\tat com.mongodb.client.internal.MongoCollectionImpl.countDocuments(MongoCollectionImpl.java:189)\n\tat com.mongodb.client.internal.MongoCollectionImpl.countDocuments(MongoCollectionImpl.java:184)\n\tat PROJECTNAME.capability.XXXXX.XXXXXXX.main(XXXXXXXX.java:57)\nCaused by: com.mongodb.MongoCommandException: Command failed with error 8000 (AtlasError): 'certificate validation failed' on server SANDBOX_LOCATION:27017. The full response is {\"ok\": 0, \"errmsg\": \"certificate validation failed\", \"code\": 8000, \"codeName\": \"AtlasError\"}\n\tat com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:198)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:413)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:337)\n\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:101)\n\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:45)\n\tat com.mongodb.internal.connection.X509Authenticator.authenticate(X509Authenticator.java:55)\n\t... 29 more\n\nProcess finished with exit code 1\n\n", "text": "Hello,I have a simple springboot application which is connecting to mongodb atlas using x509. 
The error message is as below:Thanks", "username": "Sudeep_Banerjee" }, { "code": "", "text": "Hi @Sudeep_Banerjee!I presume you’re connecting to an Atlas instance based off the hostname described in the error but please correct me if I am wrong here. If so and just to confirm, are you utilsing a self-managed X509 certificates and were you able to connect previously?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hey Jason,Thank you for the reply. This issue is resolved now.\nThe hostname was manually updated in the log file to maintain confidentiality.\nThe certificate was provided by Mongo when a user is created in Atlas.I feel the issue was I was not able to properly register the certificate in JKS. Post doing the steps properly, the authentication issue was resolved.Regards,\nSudeep", "username": "Sudeep_Banerjee" }, { "code": "", "text": "Perfect - Glad to hear it is resolved and thanks for updating the post with the resolution.", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error authenticating using Java Springboot Application to Atlas Cluster using X509
2022-10-26T13:45:04.109Z
Error authenticating using Java Springboot Application to Atlas Cluster using X509
3,276
null
[ "node-js", "crud", "mongoose-odm" ]
[ { "code": "", "text": "Hello ,\nI am using updateMany to update a field on the matching records .\nI was surprised to learn that all(matching as well as not matching the condition ) the documents had their updatedAt time stamp updated. Questions :\na) is there a way not to update the timestamps where there is no match\nb) does it make it a slower operation as all the documents are updated as opposed to a doing single update operation in a loop (especially when i know only a few documents need to updated out of Thousands/Millions )Thanks in advance!", "username": "Anurag_Negi" }, { "code": "", "text": "Hi @Anurag_Negi ,Can you share the update statement and its filter ?Also a document sample?Ty\nPavel", "username": "Pavel_Duchovny" }, { "code": "const updateResult = await Notifications.updateMany(\n { _id: { $in: req.body } },\n { $set: { isRead: true } }\n );\n", "text": "Hi @Pavel_Duchovny ,Below are the details :backend is implement in/via Node/expressMongoose library for interfacing with Mongo dbupdate statement : (collection is named Notifications and an array of _ids are passed in req.body )sample dcoument:\n_id:ObjectId(‘635e41c65abbd1082e2b40d7’)\nuserId:ObjectId(‘62f26698d08902fa6ffab823’)\nfromUserId:ObjectId(‘6332fe3a9d679ece1d94c9f5’)\nisRead:true\nisClicked:false\ntype:“like”\nitemType:“comment”\ncreatedAt:2022-10-30T09:20:06.131+00:00\nupdatedAt:2022-10-30T09:37:34.015+00:00\n__v:0Prob description : I am setting the isRead field to true for the _ids that are passed . This works fine and only those documents which match the ids passed gets the isRead field updated to true. However, updatedAt time stamp gets updated for “all” the documents in the collection and not just the matching records/documents.Questions:\na) is it less efficient compared to multiple find and update when the updates are only few records out of thousands/millions ?\nb) is there a way to update only the timestamp of the affected records and not all the records?Many Thanks!", "username": "Anurag_Negi" }, { "code": "const updateResult = await Notifications.updateMany(\n { _id: { $in: req.body } },\n { $set: { isRead: true , updatedAt : new Date()}}\n );\n", "text": "Hi @Anurag_Negi ,Yes if you update all records with no filter the only way is to perform a full collection scan. This could be a resource intensive.If you need to update only those ids record just add additional set:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny Thanks for clarifying.", "username": "Anurag_Negi" } ]
updateMany updates the updatedAt timestamp (using Mongoose) of all the records. Does it make it slow as it needs to go through all records?
2022-10-28T18:01:29.239Z
updateMany updates the updatedAt timestamp (using Mongoose) of all the records. Does it make it slow as it needs to go through all records?
4,151
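One possible refinement of the update discussed above, assuming the same Notifications model: adding a guard on isRead means documents that are already read are never rewritten, so only documents that actually change receive a new updatedAt value:

    await Notifications.updateMany(
      { _id: { $in: req.body }, isRead: { $ne: true } },
      { $set: { isRead: true, updatedAt: new Date() } }
    );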
https://www.mongodb.com/…1c3a8a611529.png
[]
[ { "code": "console.log('accounts', orgUtilityAccountsByHouse.length)\n console.log(Object.keys(orgUtilityAccountsByHouse).length)\n for (let x = 0; x < Object.keys(orgUtilityAccountsByHouse).length; x++){\n console.log(x, Object.keys(orgUtilityAccountsByHouse)[x])\n }\n", "text": "I’m trying to debug a trigger function with console logs in the web UI, but it seems like the output is getting cut off.The following code should print out a few hundred log statements.But instead I seemingly get a truncated version with only a few lines.\nHow exactly am I supposed to debug without being able to see logs?", "username": "Bowden_Kelly" }, { "code": "", "text": "Hi @Bowden_Kelly - Welcome to the community!Thank for you providing those details But instead I seemingly get a truncated version with only a few lines.Would you be able to clarify if you mean seeing the ~3-4 lines as shown in your screenshot?I have done some testing on my end and it appears the console results output 100 lines of logs (based off my testing). However, in saying so, I did attempt to log 200 where only the most recent 100 had shown. I would just like to clarify if you meant that you could only see 3-4 log lines or if you had >100 log lines you wanted to see.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran - you are correct. I get ~100 as well which isn’t super useful.", "username": "Bowden_Kelly" }, { "code": "", "text": "Thanks for clarifying @Bowden_KellyI’m still looking into this one but in the meantime for debugging purposes, would you be able to perhaps be output with longer lines (maybe with delimiters if you wish to parse the data)?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hmm something like storing all my logs on an increasingly large string and then print it all once at the end? That might work unless there is an exception. So I guess I could wrap the whole thing in a try / catch / finally a print the single line log that way. Then take that output, put it into another script that formats it? I’ll give it a shot I guess.Though honestly, if this is the solution, I’ll probably just find another product. Is it unreasonable to want to see my function’s output? Feels like logging is table stakes.", "username": "Bowden_Kelly" }, { "code": "", "text": "Sorry your experience have not been as positive as you would like it to be. However, we’re keen to improve the usability of the product, and would be very interested if you’re able to provide a relevant feedback into the MongoDB feedback engine. All feedbacks are monitored by the product team, and would help with prioritizing future features of the product.", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Atlas functions seem to not display all of the logs when running in web UI
2022-10-18T00:19:13.878Z
Atlas functions seem to not display all of the logs when running in web UI
2,046
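A sketch of the batching workaround suggested in the thread, reusing the variable name from the original function: build a single delimited line so the output is not cut off by the console's line limit:

    const keys = Object.keys(orgUtilityAccountsByHouse);
    // one log line survives the UI's line cap; split on the delimiter to read it back
    console.log(`houses: ${keys.length}`, keys.join(" | "));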
null
[ "configuration" ]
[ { "code": "diagnosticDataCollectionEnabled: falsemongodb.confsecurity:\n authorization: enabled\nprocessManagement:\n fork: false\nstorage:\n dbPath: /data/dpdash/mongodb/dbs/\nnet:\n port: 27017\n bindIp: 127.0.0.1,/tmp/mongodb-27017.sock\n tls:\n mode: requireTLS\n certificateKeyFile: /data/ssl/mongo_server.pem\n CAFile: /data/ssl/ca/cacert.pem\nsystemLog:\n destination: file\n path: \"/data/dpdash/mongodb/logs/mongodb.log\"\n logAppend: true\nsetParameter:\n diagnosticDataCollectionEnabled: false\nmongod/data/dpdash/mongodb/dbs/diagnostic.data/diagnosticDataCollectionEnabled: false", "text": "Hi all,Despite using diagnosticDataCollectionEnabled: false setting in my mongodb.conf, the server starts with capturing diagnostic data.$ mongod --config /data/dpdash/configs/mongodb.conf --logpath /data/dpdash/mongodb/logs/mongod.logThe following log convinces me of mongod starting to capture diagnostic data:{“t”:{\"$date\":“2022-10-30T20:34:46.133-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“config”:\"/data/dpdash/configs/mongodb.conf\",“net”:{“bindIp”:“127.0.0.1,/tmp/mongodb-27017.sock”,“port”:27017,“tls”:{“CAFile”:\"/data/ssl/ca/cacert.pem\",“certificateKeyFile”:\"/data/ssl/mongo_server.pem\",“mode”:“requireTLS”}},“processManagement”:{“fork”:false},“security”:{“authorization”:“enabled”},“setParameter”:{“diagnosticDataCollectionEnabled”:“false”},“storage”:{“dbPath”:\"/data/dpdash/mongodb/dbs/\"},“systemLog”:{“destination”:“file”,“logAppend”:true,“path”:\"/data/dpdash/mongodb/logs/mongod.log\"}}}}\n…\n…\n{“t”:{\"$date\":“2022-10-30T20:36:01.868-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20625, “ctx”:“initandlisten”,“msg”:“Initializing full-time diagnostic data capture”,“attr”:{“dataDirectory”:\"/data/dpdash/mongodb/dbs/diagnostic.data\"}}However, I see my /data/dpdash/mongodb/dbs/diagnostic.data/ directory empty. So is the above log a bug or does diagnosticDataCollectionEnabled: false has no effect at all?", "username": "Tashrif" }, { "code": "\n \n // GetFileSystemTime for now which has ~10 ms granularity.\n _config = _configTemp;\n \n // if we hit a timeout on the condvar, we need to do another collection\n // if we were signalled, then we have a config update only or were asked to stop\n if (status == stdx::cv_status::no_timeout) {\n continue;\n }\n }\n \n // TODO: consider only running this thread if we are enabled\n // for now, we just keep an idle thread as it is simpler\n if (_config.enabled) {\n // Delay initialization of FTDCFileManager until we are sure the user has enabled\n // FTDC\n if (!_mgr) {\n auto swMgr = FTDCFileManager::create(&_config, _path, &_rotateCollectors, client);\n \n _mgr = uassertStatusOK(std::move(swMgr));\n }\n \n \n ", "text": "Welcome to the MongoDB Community @Tashrif !The startup message is a bit misleading, but technically correct. If collection of diagnostic data is disabled, the thread for diagnostic data capture still gets initialised but does not collect any data (as you have observed).Per comments in the current codebase (MongoDB 6.0), this was an intentional choice with a future TODO to consider not starting the thread unless enabled:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Disabling diagnostic data capture
2022-10-31T02:48:59.640Z
Disabling diagnostic data capture
2,608
https://www.mongodb.com/…0_2_1023x194.png
[]
[ { "code": "{\n \"op\" : \"query\",\n \"ns\" : \"SquidexContentV3.States_Contents_Published3\",\n \"command\" : {\n \"find\" : \"States_Contents_Published3\",\n \"filter\" : {\n \"mt\" : {\n \"$exists\" : true\n },\n \"id\" : {\n \"$exists\" : true\n },\n \"_ai\" : \"16a2ca1c-6871-4a8d-9c21-98d47652355c\",\n \"_si\" : {\n \"$in\" : [\n \"5a050a0a-ee27-45a5-958d-39a4adfc2802\"\n ]\n },\n \"dl\" : {\n \"$ne\" : true\n },\n \"do.Slug.iv\" : \"BandeauListeAnnonce\",\n \"do.Site.iv\" : \"Amivac\"\n },\n \"sort\" : {\n \"mt\" : -1,\n \"id\" : 1\n },\n \"limit\" : 200,\n \"$db\" : \"SquidexContentV3\",\n \"$readPreference\" : {\n \"mode\" : \"secondary\"\n },\n \"lsid\" : {\n \"id\" : UUID(\"2284bc5a-ca9d-4093-b1f2-ff56e12cb13a\")\n },\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1656406366, 10),\n \"signature\" : {\n \"hash\" : BinData(0, \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : 0\n }\n }\n },\n \"keysExamined\" : 6,\n \"docsExamined\" : 5,\n \"hasSortStage\" : true,\n \"cursorExhausted\" : true,\n \"numYield\" : 0,\n \"nreturned\" : 0,\n \"queryHash\" : \"A13373FD\",\n \"planCacheKey\" : \"41B8B1E5\",\n \"locks\" : {\n \"ReplicationStateTransition\" : {\n \"acquireCount\" : {\n \"w\" : 2\n }\n },\n \"Global\" : {\n \"acquireCount\" : {\n \"r\" : 2\n }\n },\n \"Database\" : {\n \"acquireCount\" : {\n \"r\" : 1\n }\n },\n \"Collection\" : {\n \"acquireCount\" : {\n \"r\" : 1\n }\n },\n \"Mutex\" : {\n \"acquireCount\" : {\n \"r\" : 1\n }\n }\n },\n \"flowControl\" : {\n\n },\n \"storage\" : {\n\n },\n \"responseLength\" : 257,\n \"protocol\" : \"op_msg\",\n \"millis\" : 556,\n \"planSummary\" : \"IXSCAN { _si: 1, dl: 1, mt: -1 }\",\n \"execStats\" : {\n \"stage\" : \"CACHED_PLAN\",\n \"nReturned\" : 0,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 1,\n \"advanced\" : 0,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"inputStage\" : {\n \"stage\" : \"SORT\",\n \"nReturned\" : 0,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 7,\n \"advanced\" : 0,\n \"needTime\" : 6,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"sortPattern\" : {\n \"mt\" : -1,\n \"id\" : 1\n },\n \"memLimit\" : 104857600,\n \"limitAmount\" : 200,\n \"type\" : \"simple\",\n \"totalDataSizeSorted\" : 0,\n \"usedDisk\" : false,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [\n {\n \"mt\" : {\n \"$exists\" : true\n }\n },\n {\n \"_ai\" : {\n \"$eq\" : \"16a2ca1c-6871-4a8d-9c21-98d47652355c\"\n }\n },\n {\n \"do.Site.iv\" : {\n \"$eq\" : \"Amivac\"\n }\n },\n {\n \"do.Slug.iv\" : {\n \"$eq\" : \"BandeauListeAnnonce\"\n }\n },\n {\n \"id\" : {\n \"$exists\" : true\n }\n }\n ]\n },\n \"nReturned\" : 0,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 6,\n \"advanced\" : 0,\n \"needTime\" : 5,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"docsExamined\" : 5,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 5,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 6,\n \"advanced\" : 5,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"_si\" : 1,\n \"dl\" : 1,\n \"mt\" : -1\n },\n \"indexName\" : \"_si_1_dl_1_mt_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"_si\" : [\n\n ],\n \"dl\" : [\n\n ],\n \"mt\" : [\n\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 
2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"_si\" : [\n \"[\\\"5a050a0a-ee27-45a5-958d-39a4adfc2802\\\", \\\"5a050a0a-ee27-45a5-958d-39a4adfc2802\\\"]\"\n ],\n \"dl\" : [\n \"[MinKey, true)\",\n \"(true, MaxKey]\"\n ],\n \"mt\" : [\n \"[MaxKey, MinKey]\"\n ]\n },\n \"keysExamined\" : 6,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n }\n },\n \"ts\" : ISODate(\"2022-06-28T08:52:46.742+0000\"),\n \"client\" : \"10.0.0.10\",\n \"allUsers\" : [\n\n ],\n \"user\" : \"\"\n}\n", "text": "Hi,I have a cluster with 3 members. The queries for the public API go to the secondaries and the queries for the developer dashboard / management UI go to the primary node to distribute the load.There is no other setting like “Nearest” or so to configure the read preference.I would expect that the two secondaries have similar queries characteristics regarding load and query targeting. But it is totally different:\nimage1759×334 36.4 KB\nYou can see it here: MongoDB Free MonitoringI thought that the index might be corrupt or so but my test queries produce the same results. I make a test query and run explain on both secondaries individually.Is there anything I can do to check what is going on here?EDIT: I just restarted mongo-1, you might see it in the charts.\nEDIT 2: The primary has not changed the last 7 days.I also see very weird entries in the profiler:no yields, no locks, zero results, index scan and still it takes a lot of time.", "username": "Sebastian_Stehle" }, { "code": "mongod", "text": "Hi @Sebastian_StehleAre you still seeing different metrics for query targeting lately? Since the metrics are just reporting the facts about what’s happening in the server, then I would assume that there are two different queries that happens to target the different secondaries. I believe if you have a look at the mongod logs on that timeframe, you’ll have more information about the different queries and can trace it back to the originating application.About the profiler entry, I see that the query takes about 500ms to finish. It may not need to yield, but it returns zero results because the query predicates doesn’t match anything. However it still needs to do work to determine this fact. I don’t have enough information to say why it takes 500ms, but it’s usually due to the circumstances surrounding the server’s situation during execution. Are you seeing this query regularly in the logs? If yes, how often?I would also note that although you can read from secondaries to distribute read load somewhat, please be careful about this since a replica set’s primary purpose is to give you high availability instead of providing scaling. If you’re distributing reads equally to the two secondaries, if one of them is offline, then the remaining secondary will be hit with double the workload instantly. If it cannot handle this surge, then the application will timeout. See Can I use more replica nodes to scale? for a more thorough discussion on this.Best regards\nKevin", "username": "kevinadi" } ]
How can the query targeting counters differ so much between two Mongo secondaries
2022-10-28T16:39:11.419Z
How can the query targeting counters differ so much between two Mongo secondaries
1,353
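One way to follow up on the suggestion to trace the queries hitting each node, assuming the profiler is enabled and a direct connection is made to each secondary in turn:

    // compare what each secondary is actually serving
    db.system.profile.find(
      { millis: { $gt: 100 } },
      { ns: 1, op: 1, millis: 1, planSummary: 1, ts: 1 }
    ).sort({ ts: -1 }).limit(20)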
null
[ "sharding", "golang" ]
[ { "code": "", "text": "Hi,I setup a sharded cluster and mainly interact via mongos with go-mongo-driver. There are 6 query routers:a:27016b:27016c:27016d:27016e:27016f:27016I format connect string according to the order above. It seems a:27016 consumes more CPU than other mongos instance (about 50%), which means a:27016 takes more requests and it’s imbalancing.I read the server selection algorithm and everything is great. This is my understanding:When server kind is MongosDo I misconfigure or is there any bug leads to imbalancing?", "username": "Jay_Chung" }, { "code": "mongosmongosmongoslocalThresholdMSmongosfa", "text": "Hi @Jay_Chung welcome to the community!Could you share your connection code, and the connection URI string? You can change the server names if you need to just like you did in the post, but seeing the URI string might help.However, from Read Preference for Sharded Clusters:If there is more than one mongos instance in the connection seed list, the driver determines which mongos is the “closest” (i.e. the member with the lowest average network round-trip-time) and calculates the latency window by adding the average round-trip-time of this “closest” mongos instance and the localThresholdMS . The driver will load balance randomly across the mongos instances that fall within the latency window.I think it should randomly choose a mongos that falls within a latency window. I have a test in mind: what if you reverse the order of the mongos in your URI string (e.g. f,e,d,c,b,a) and see if now server f is the busiest one, or is it still server a? This would make an interesting data point for the investigation.Best regards\nKevin", "username": "kevinadi" } ]
Connect to multiple mongos but the load is unbalanced
2022-10-28T21:18:13.237Z
Connect to multiple mongos but the load is unbalanced
1,848
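A sketch of the experiment implied by the reply: widening the latency window lets more mongos instances fall inside it and qualify for random selection. The host list matches the thread's placeholders and the 30 ms value is illustrative, not a recommendation:

    const uri = "mongodb://a:27016,b:27016,c:27016,d:27016,e:27016,f:27016/?localThresholdMS=30";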
null
[]
[ { "code": "", "text": "Hello all! No doubts that turning off THP is one of strongly recommended prerequisites while installing and configuring MongoDB, but I’d like to discuss the way to do it. According to official documentation the recommended way is to write simple one-shot “pseudo service” which will write required settings to appropriate /proc location. But why not using more convenient way through configuring it in /etc/default/grub settings? I mean transparent_hugepages=never parameter. This option is provided “out of the box” and it’s more proper way from my POV. What do you think folks?", "username": "Konstantin_Filippov" }, { "code": "", "text": "Still there is no anybody from support here? Can you shed some light on question above please?", "username": "Konstantin_Filippov" }, { "code": "", "text": "Hi @Konstantin_Filippov welcome to the community!Actually you can use any method you deem best to disable transparent huge pages. The method presented in the documentation was tested to work reliably in many cases, but alternative means are equally valid.The end goal is for the server to not display any startup warning connected to transparent huge pages. If you still see this warning printed in the mongod logs, then the method needs to be revisited.In fact I would say that there may be better methods to do this depending on your Linux distribution. There may be some ideas in server admin oriented sites such as ServerFault, for example.Best regards\nKevin", "username": "kevinadi" } ]
And again about THP off persistent setting
2022-10-11T09:48:11.658Z
And again about THP off persistent setting
1,431
null
[ "storage" ]
[ { "code": "", "text": "Hi GuysWe are planning to move from standalone mongodb instance to Atlas cloud. As part of Migrating existing local data on to cloud , Im kind of stuck on deciding how much storage should be subscribed for my existing data plus additional storage for future.Below is the db.stats() output from standalone node.db.runCommand({dbstats:1,scale:1})\n{\n“db” : “migrationdump”,\n“collections” : 38,\n“views” : 0,\n“objects” : 671469,\n“avgObjSize” : 8370.277510949873,\n“dataSize” : 5620381870,\n“storageSize” : 846135296,\n“numExtents” : 0,\n“indexes” : 60,\n“indexSize” : 29302784,\n“fsUsedSize” : 39560773632,\n“fsTotalSize” : 49359622144,\n“ok” : 1\n}From the above stats what value I should be referring to while finalising the required storage in Atlas. Is it dataSize or storageSize… or combination other stats also ??", "username": "Dilip_D" }, { "code": "", "text": "Hello @Dilip_D ,Choosing the Storage size and cluster tier depends on you application throughput, below are some docs that might help you in choosing the respective Atlas cluster tier and storage size.Ultimately the amount of storage you need depends on how much you’re expecting your data to grow in the future.Note: You can configure the cluster tier ranges that Atlas uses to automatically scale your cluster tier, storage capacity, or both in response to cluster usage.Cluster auto-scaling removes the need to write scripts or use consulting services to make scaling decisions. To help control costs, you can specify a range of maximum and minimum cluster sizes that your cluster can automatically scale to.Auto-scaling works on a rolling basis, meaning the process doesn’t incur any downtime.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Storage selection from Standalone to Atlas cloud
2022-10-26T11:15:37.202Z
Storage selection from Standalone to Atlas cloud
1,468
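A rough way to read the numbers in the question: dataSize (~5.6 GB) is the uncompressed logical size, while storageSize (~0.85 GB) plus indexSize (~29 MB) is what WiredTiger actually keeps on disk, so the provisioned Atlas storage needs to cover roughly the latter plus headroom for growth. A small sketch that reports the same figures in GiB:

    const stats = db.runCommand({ dbStats: 1, scale: 1024 * 1024 * 1024 }); // values reported in GiB
    const onDisk = stats.storageSize + stats.indexSize; // compressed on-disk footprint
    print(`logical data: ${stats.dataSize} GiB, on disk: ${onDisk} GiB`);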
null
[ "aggregation" ]
[ { "code": "$lookup: {\n from: \"cards\",\n localField: \"deck\",\n foreignField: \"_id\",\n as: \"card\"\n}\n {\n \"$lookup\": {\n \"from\": \"cards\",\n \"as\": \"card\",\n \"localField\": \"deck\",\n \"foreignField\": \"_id\",\n \"unwinding\": {\n \"preserveNullAndEmptyArrays\": false\n }\n },\n \"totalDocsExamined\": 30600,\n \"totalKeysExamined\": 30600,\n \"collectionScans\": 0,\n \"indexesUsed\": [\n \"_id_\"\n ],\n \"nReturned\": 30600,\n \"executionTimeMillisEstimate\": 2488\n },\n", "text": "I have an aggregate which uses $lookup to check a foreign field in another table which is the _id field. The query runs slowly and seems to be checking every document during lookup. The $lookup stage takes ~3 seconds. I figured this step would take milliseconds since it is checking an indexed field. I think I fundamentally misunderstand how $lookup works.I am using lookup to go from the deck collection to the card collection. The deck has a reference to the _id key in the card collection.I’m seeing these results for the lookup stage, however:How can I get this execution time down?", "username": "Eric_Hurt" }, { "code": "executionStats", "text": "Hello @Eric_Hurt ,Welcome to The MongoDB Community Forums! Can you please share below details to understand your use case better?Regards,\nTarun", "username": "Tarun_Gaur" } ]
Issue with understanding aggregate with $lookup and foreign field _id
2022-10-27T18:39:57.622Z
Issue with understanding aggregate with $lookup and foreign field _id
1,137
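A sketch of how to confirm where the time goes, assuming the driving collection is named decks (the thread does not name it): narrowing the input with a selective $match before the $lookup reduces how many index lookups run against cards, and explain shows the per-stage cost:

    db.decks.explain("executionStats").aggregate([
      { $match: { /* selective filter on the driving collection */ } },
      { $lookup: { from: "cards", localField: "deck", foreignField: "_id", as: "card" } }
    ])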
null
[ "aggregation" ]
[ { "code": "", "text": "Consider two models ‘cars’ and ‘bookings’. car modal has fields like carName, registrationNumber, totalCapacity. The project is that users can rent cars for particular days, and their bookings will be stored in the ‘bookings’ modal which includes the date, objectId of the car, and user details.Please help me to create one query which gives list of available cars on searched date by checking in ‘bookings’ model.", "username": "Phinahas_Philip" }, { "code": "", "text": "Please share sample documents from all you collections. Also share expected results for the documents you shared.You wrote model rather than collections. Are you using mongoose or something like it?", "username": "steevej" }, { "code": "{\n _id:1;\n registrationNumber:'KL-01-101';\n color:'red'\n},\n{\n _id:2;\n registrationNumber:'KL-02-2002';\n color:'blue';\n}\n{\n _id:b1;\n carId:1;\n tripDate:'01-11-2022';\n}\n", "text": "I am using mongoose.carsbookingsIf the user input is like, he wants to see cars that are available on 01-11-2022. The desired output is that car with _id:2. Because a car with id 1 is booked for the date 01-11-2022 which we can see in bookings .I want this filtering in one query. I have heard about the $lookup and pipeline but I don’t know how to apply it. Please help ", "username": "Phinahas_Philip" }, { "code": "", "text": "What you need is an outer pipeline $lookup with localField:_id and foreignField:carId that has a inner pipeline that $match your $tripDate. The the outer pipeline terminates with a $match stage that filters out non-empty array (result of the $lookup).See https://www.mongodb.com/docs/manual/reference/operator/aggregation/lookup/#correlated-subqueries-using-concise-syntax.", "username": "steevej" } ]
$lookup and filtering
2022-10-27T12:23:15.120Z
$lookup and filtering
1,395
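A sketch of the pipeline described in the reply, using the sample fields from the thread; the date is matched as a string because that is how the sample booking stores it, and combining localField/foreignField with a pipeline in $lookup assumes MongoDB 5.0 or newer:

    db.cars.aggregate([
      {
        $lookup: {
          from: "bookings",
          localField: "_id",
          foreignField: "carId",
          pipeline: [ { $match: { tripDate: "01-11-2022" } } ],
          as: "bookingsOnDate"
        }
      },
      // keep only cars with no booking on the requested date
      { $match: { bookingsOnDate: { $size: 0 } } }
    ])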
null
[ "aggregation" ]
[ { "code": " \"$lookup\":{\n \"from\": \"metrics\",\n \"as\": \"metrics\",\n \"let\": {\"userId\": \"$_id\"},\n \"pipeline\": {\n {\"$match\":{\"$expr\": {\"$in\": {\"$userId\",\"$ownerIds\"}}}},\n ----> {\"$project\": {\"fieldName\": 1, \"ownerIds\": 1, \"auth0Cache\": 1}},\n},\n},\n \"$lookup\":{\n \"from\": \"metrics\",\n \"as\": \"metrics\",\n \"let\": {\"userId\": \"$_id\"},\n \"pipeline\": {\n----> {\"$project\": {\"fieldName\": 1, \"ownerIds\": 1, \"auth0Cache\": 1}},\n {\"$match\":{\"$expr\": {\"$in\": {\"$userId\",\"$ownerIds\"}}}},\n},\n},\n", "text": "Hey all i was doing some performance tests on large aggregation mongo scripts and I noticed when I move the project line fromtoIt made performance jump. It ran 2-3times faster. It kinda makes sense that when you limit the data since this collection is a big one it should run faster, but I didn’t find any documentation about it. Does anyone know why this is an important change and can explain to me why exactly is happening and provide some more information about it? Thanks in advance.", "username": "Petar_Dragnev" }, { "code": "", "text": "Yes , limiting the quantity of data usually speed up things.But, in your case, the $project is the same, so the original amount of data (read from disk/cache) and the output amount of data is the same. It should not have a big influence.What could make a big difference are:How did you timed your operations? Are your numbers coming from the explain plan or wall clock including the total round trip of sending the query and processing the result? So it might be network delays if client and server are running on different machines. Resource contention if both client and server are running on the same machine.Did you run your tests multiple times? If not, it could be that the documents and indexes were not in memory for the first and slow test and were already in cache for the second and fast test.In principal, $match-ing unmodified (before $project) documents directly from a collection is faster because indexes can be used. You do have an index on ownerIds, don’t you? So, in principal, your 2nd faster test should be slower since you $project before you $match. 
I italicized should because I think the query optimizer detects that your $match uses fields from the original documents and performs the same.Since your $match is using $in for $userId and $userId is a single value you could use the following syntax:", "username": "steevej" }, { "code": "{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"accountId\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : ObjectId(\"634d6e4e8b661fe80d614513\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"isDeleted\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : true\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"isSystem\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : true\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"fields\" : {\n\t\t\t\t\t\"accountId\" : 1,\n\t\t\t\t\t\"auth0Cache.usermetadata.firstName\" : 1,\n\t\t\t\t\t\"auth0Cache.usermetadata.lastName\" : 1,\n\t\t\t\t\t\"dynamicMetricCount\" : 1,\n\t\t\t\t\t\"manualMetricCount\" : 1,\n\t\t\t\t\t\"metrics\" : 1,\n\t\t\t\t\t\"_id\" : 1\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"test_employees_v2.users\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"accountId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : ObjectId(\"634d6e4e8b661fe80d614513\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"isDeleted\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : true\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"isSystem\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : true\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"isDeleted\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : true\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"isSystem\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : true\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"accountId\" : 1,\n\t\t\t\t\t\t\t\t\"subscriptionType\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"accountId_1_subscriptionType_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"accountId\" : [ ],\n\t\t\t\t\t\t\t\t\"subscriptionType\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : 
\"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"accountId\" : [ \"[ObjectId('634d6e4e8b661fe80d614513'), ObjectId('634d6e4e8b661fe80d614513')]\" ],\n\t\t\t\t\t\t\t\t\"subscriptionType\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$lookup\" : {\n\t\t\t\t\"from\" : \"metrics\",\n\t\t\t\t\"as\" : \"metrics\",\n\t\t\t\t\"let\" : {\n\t\t\t\t\t\"userId\" : \"$_id\"\n\t\t\t\t},\n\t\t\t\t\"pipeline\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$project\" : {\n\t\t\t\t\t\t\t\"fieldName\" : 1,\n\t\t\t\t\t\t\t\"accountId\" : 1,\n\t\t\t\t\t\t\t\"isDeleted\" : 1,\n\t\t\t\t\t\t\t\"ownerIds\" : 1,\n\t\t\t\t\t\t\t\"auth0Cache\" : 1,\n\t\t\t\t\t\t\t\"goalId\" : 1\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$match\" : {\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"accountId\" : ObjectId(\"634d6e4e8b661fe80d614513\")\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"isSystem\" : {\n\t\t\t\t\t\t\t\t\t\t\"$ne\" : true\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$match\" : {\n\t\t\t\t\t\t\t\"$expr\" : {\n\t\t\t\t\t\t\t\t\"$in\" : [ \"$$userId\", \"$ownerIds\" ]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$addFields\" : {\n\t\t\t\t\"manualMetricCount\" : {\n\t\t\t\t\t\"$size\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$filter\" : {\n\t\t\t\t\t\t\t\t\"input\" : \"$metrics\",\n\t\t\t\t\t\t\t\t\"as\" : \"metrics\",\n\t\t\t\t\t\t\t\t\"cond\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : [\n\t\t\t\t\t\t\t\t\t\t\"$$metrics.fieldName\",\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"dynamicMetricCount\" : {\n\t\t\t\t\t\"$size\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$filter\" : {\n\t\t\t\t\t\t\t\t\"input\" : \"$metrics\",\n\t\t\t\t\t\t\t\t\"as\" : \"metrics\",\n\t\t\t\t\t\t\t\t\"cond\" : {\n\t\t\t\t\t\t\t\t\t\"$ne\" : [\n\t\t\t\t\t\t\t\t\t\t\"$$metrics.fieldName\",\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$match\" : {\n\t\t\t\t\"dynamicMetricCount\" : {\n\t\t\t\t\t\"$gt\" : 1\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$facet\" : {\n\t\t\t\t\"items\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$sort\" : {\n\t\t\t\t\t\t\t\"sortKey\" : {\n\t\t\t\t\t\t\t\t\"auth0Cache.usermetadata.firstName\" : 1,\n\t\t\t\t\t\t\t\t\"_id\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"limit\" : 100\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$project\" : {\n\t\t\t\t\t\t\t\"_id\" : true,\n\t\t\t\t\t\t\t\"dynamicMetricCount\" : true,\n\t\t\t\t\t\t\t\"accountId\" : true,\n\t\t\t\t\t\t\t\"manualMetricCount\" : true,\n\t\t\t\t\t\t\t\"auth0Cache\" : {\n\t\t\t\t\t\t\t\t\"usermetadata\" : {\n\t\t\t\t\t\t\t\t\t\"lastName\" : true\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"totalCount\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$group\" : {\n\t\t\t\t\t\t\t\"_id\" : {\n\t\t\t\t\t\t\t\t\"$const\" : null\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"totalCount\" : {\n\t\t\t\t\t\t\t\t\"$sum\" : {\n\t\t\t\t\t\t\t\t\t\"$const\" : 1\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$project\" : {\n\t\t\t\t\t\t\t\"_id\" : 
false,\n\t\t\t\t\t\t\t\"totalCount\" : true\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"_id\" : true,\n\t\t\t\t\"items\" : true,\n\t\t\t\t\"totalCount\" : {\n\t\t\t\t\t\"$arrayElemAt\" : [\n\t\t\t\t\t\t\"$totalCount\",\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$const\" : 0\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"_id\" : true,\n\t\t\t\t\"items\" : true,\n\t\t\t\t\"totalCount\" : \"$totalCount.totalCount\"\n\t\t\t}\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1666600486, 1),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1666600486, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"RTt0l2k9ECaNmPQz+Tt9eQHxIEA=\"),\n\t\t\t\"keyId\" : NumberLong(\"7132835261448192001\")\n\t\t}\n\t}\n}\n", "text": "Hi again,\nOn point 1 : I timed it both with Golang performance tests and also how much time took the mongodb to respond on the script. We are seeing this results both on local setups where client and server and running on the same machine and also on deployed ones.\nOn point 2: Ive run it a lot of times and timings are the same.\nHere is the explain:For the lookup it seems to be no data for the explain, maybe i should provide some parameter to the explain. Here the project is the first thing in the lookup pipeline. Before adding it there there are no project and this script ran 2-3 times slower than now.", "username": "Petar_Dragnev" }, { "code": "", "text": "I see you do some $project before $match. It is always better to $match before $project. The query optimizer might not be able to use the indexes.Can you test for isDeleted:false and isSystem:false rather than $nor:isDeleted:$eq:true?You did not share you whole pipeline so it is hard to make sense of the explain plan.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
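A minimal mongosh sketch of the change steevej suggests above, moving both $match stages ahead of the $project inside the $lookup sub-pipeline. It reuses the collection and field names from the explain output; the accountId value is just the placeholder taken from that output, so treat this as an illustration rather than the poster's actual query.

```js
db.users.aggregate([
  { $match: { accountId: ObjectId("634d6e4e8b661fe80d614513"), isDeleted: { $ne: true }, isSystem: { $ne: true } } },
  { $lookup: {
      from: "metrics",
      as: "metrics",
      let: { userId: "$_id" },
      pipeline: [
        // plain match first, so an index on the metrics side can be considered
        { $match: { accountId: ObjectId("634d6e4e8b661fe80d614513"), isSystem: { $ne: true } } },
        // then the correlated condition
        { $match: { $expr: { $in: ["$$userId", "$ownerIds"] } } },
        // same projection as before, now after the matches
        { $project: { fieldName: 1, accountId: 1, isDeleted: 1, ownerIds: 1, auth0Cache: 1, goalId: 1 } }
      ]
  } }
])
```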
$lookup (aggregation) and $project based on performance
2022-10-21T08:55:21.515Z
$lookup (aggregation) and $project based on performance
2,147
null
[ "queries" ]
[ { "code": "Example of a compound shard key:\n\n{ \"sku\": 1, \"type\": 1, \"name\": 1 }\n\nExamples of queries that would be **targeted** with the compound shard key:\n\ndb.products.find( { \"sku\": ... } ) \ndb.products.find( { \"sku\": ... , \"type\": ... } ) \ndb.products.find( { \"sku\": ... , \"type\": ... , \"name\": ... } )\n\nExamples of queries that would be **scatter-gather** with the compound shard key:\n\ndb.products.find( { \"type\": ... } ) \ndb.products.find( { \"name\": ... } )\ndb.products.find( { \"sku\": ... , \"type\": ... , \"name\": ... } )db.products.find( { \"name\": ... , \"sku\": ... , \"type\": ... } )db.products.find( { \"type\": ... , \"name\": ... , \"sku\": ... } )", "text": "Excerpt from courseHere what I need to understand is whether the following are valid “targetted” queriesOr does the order of shard keys matter?", "username": "Shrinidhi_Rao" }, { "code": "", "text": "Never mind, I figured that the order doesn’t matter…", "username": "Shrinidhi_Rao" }, { "code": "", "text": "Hi @Shrinidhi_Rao ,Your observation is correct: a query to a sharded collection just has to include all of the fields in a compound shard key index (in any order) to target the shards that may contain matching documents.In the more general case, the order of keys in a query does not have to match the order of keys in a compound index definition. However, the order and direction of keys in a compound index definition is definitely important and should follow guidelines like the ESR (Equality, Sort, Range) Rule.You can explain query results to confirm which indexes are used and which shards are accessed for a query.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can queries have the compounded Shard Keys in a different order?
2022-10-30T14:08:08.668Z
Can queries have the compounded Shard Keys in a different order?
1,459
null
[ "node-js", "mongoose-odm" ]
[ { "code": "if (cachedDB) {\n try {\n cachedDB = await mongoose.connect(uriString, {\n useUnifiedTopology: false,\n useNewUrlParser: true,\n useFindAndModify: false,\n poolSize: 3,\n });\n } catch (err) {\n logger.error(err);\n cachedDB = null;\n }\n}\n", "text": "I am using lambda (nodeJS) with mongoDB atlas. The lambda creates a new connection to mongoDB every time it is invoked. It works perfectly fine with fewer requests. However, if I make 100 concurrent requests, assuming 100 new instance of lambda is created, 100 concurrent connections to mongoDB atlas is initiated. The connection time is then keep on increasing starting from 100ms to as high as 30 seconds.I am using M10 cluster tier of mongoDB which supports 1500 concurrent connections and I am using following code block for connection:I am not sure what’s wrong here. Can’t we have concurrent mongoDB connections?", "username": "Pratik_Raj2" }, { "code": "", "text": "Hi @Pratik_Raj2 welcome to the community!AWS Lambda has always been tricky to manage in terms of database connection, due to the nature of the service itself. That is, as far as I understand, AWS Lambda guarantees the execution of your function, but not the environment or other operational concerns regarding that execution. It is geared toward stateless functions, and MongoDB drivers by default works in a stateful manner.Please have a look at Manage Connections with AWS Lambda for best practices regarding connecting from AWS Lambda.However in my opinion, the cleanest way to do this is to create an API layer in front of the MongoDB server. This can be deployed separately, or you can use a custom HTTPS endpoints in Atlas. This layer will be responsible for connection to MongoDB, and the Lambda function will be calling this endpoint for database access. Since it’s only the API layer that connects to the database, you can easily monitor its performance and you won’t have this concurrent connection issues that takes a lot of effort to solve. You can also more easily secure the database since you only need to give IP access to the API layer.Best regards\nKevin", "username": "kevinadi" } ]
Concurrent connections to mongodb is drastically slow
2022-10-29T03:55:29.577Z
Concurrent connections to mongodb is drastically slow
1,863
null
[]
[ { "code": "", "text": "Hello All,We are currently running mongodb 3.6.8 version on our server (ubuntu) and we are moving our server to a debain where I have two questions :How can I Install same 3.6.8 version I tried to do it as mentioned here https://www.mongodb.com/docs/v4.4/tutorial/install-mongodb-on-debian/ by replacing 4.4 with 3.6 however Im getting this error at 3rd step and Is there a way Can I install this old version?\nE: The repository ‘MongoDB Repositories buster/mongodb-org/3.6 Release’ does not have a Release file.\nN: Updating from such a repository can’t be done securely, and is therefore disabled by default.\nN: See apt-secure(8) manpage for repository creation and user configuration details.If I go with 4.4 version itself on new server how can I dump data from older 3.6.8 version to this new db?Please help me ,Thanks!", "username": "priyatham_ik" }, { "code": "", "text": "Hi @priyatham_ikMongoDB 3.6 series was out of support since April 2021 and thus it won’t receive any bugfixes or improvements anymore. Even if you can install a 3.6 series server, it’s best not to, and use a supported version instead. The oldest supported series is MongoDB 4.2 at this moment.If you would like to get a fresh 4.4 series installation and move the data from the old deployment to the new one, Stennie posted a great answer here: Replace mongodb binaries all at once? - #3 by Stennie for a similar situation. Please see if the post answers your question.Best regards\nKevin", "username": "kevinadi" }, { "code": "ssh -fN -L 27018:localhost:27017 <remote_host> \n\n mongodump --port 27018 --db <remote_db_name> --username <remote_db_username> --password <remote_db_password> --archive | mongorestore --username <destination_db_username> --password <destination_db_password> --archive\n\nAfter running these commands on my new server I see my db when I do show dbs on mongo shell however when I change the port to 27018 in mongodb config file in order to run this restored db its throwing error and exiting with 42 saying address already in use and Its happening because of the ssh tunnel and when I kill the port and re run it now I dont see the restored DB .\n\nCan anyone help me ?\n\n\n", "text": "Hello @kevinadi ,I have installed the 4.4 version on new server and did this mentioned below to dump and restore from old db to new db", "username": "priyatham_ik" }, { "code": "", "text": "Hi @priyatham_ikI think I would try to avoid using SSH tunneling for this operation, since it introduces another layer of complexity in the process.Instead, I would do the dump & restore process from a node that can directly connect to the servers. This would make the process easier, and you’re not mixing MongoDB restore issues with network issues and have to fix both to make this work.Best regards\nKevin", "username": "kevinadi" } ]
How can I dump and restore db data from 3.6.8 to current 4.4 version?
2022-10-27T17:28:56.166Z
How can I dump and restore db data from 3.6.8 to current 4.4 version?
2,001
null
[ "aggregation", "dot-net", "crud" ]
[ { "code": "$$NOW$$CLUSTER_TIME$currentDate$$CLUSTER_TIMEpublic class SomeDTO\n{\n [BsonId]\n public Guid Id { get; set; }\n\n [BsonElement(\"timestamp\")]\n public BsonTimestamp Timestamp { get; set; }\n}\n[Test]\npublic void Set_Unique_Timestamp_On_UpsertOne_With_ClusterTime_SysVar()\n{\n // creating test.test collection with unique timestamp index\n var mongoClient = LifetimeScope.Resolve<MongoClientProvider>().Get();\n var collection = mongoClient.GetDatabase(\"test\").GetCollection<SomeDTO>(\"test\");\n collection.Indexes.CreateOne(\n new CreateIndexModel<SomeDTO>(\n Builders<SomeDTO>.IndexKeys.Ascending(x => x.Timestamp),\n new CreateIndexOptions { Unique = true }));\n\n // upsert operation\n async Task UpsertOneAsync(Guid id)\n {\n var pipeline = new[] { new BsonDocument(\"$set\", new BsonDocument(\"timestamp\", \"$$CLUSTER_TIME\")) }; // <---- causing troubles\n\n await collection.UpdateOneAsync(\n Builders<SomeDTO>.Filter.Eq(x => x.Id, id),\n Builders<SomeDTO>.Update.Pipeline(pipeline),\n new UpdateOptions { IsUpsert = true })\n .ConfigureAwait(false);\n }\n\n // running tasks in parallel\n var tasks = Enumerable.Range(0, 150)\n .Select(_ => UpsertOneAsync(Guid.NewGuid()));\n\n Assert.DoesNotThrowAsync(() => Task.WhenAll(tasks));\n\n}\nA write operation resulted in an error. WriteError: { Category : \"DuplicateKey\", Code : 11000, Message : \"E11000 duplicate key error collection: test.test index: timestamp_1 dup key: { timestamp: Timestamp(1666297782, 58) }\"\n$currentDateasync Task UpsertOneAsync(Guid id)\n{\n await collection.UpdateOneAsync(\n Builders<SomeDTO>.Filter.Eq(x => x.Id, id),\n Builders<SomeDTO>.Update.CurrentDate(x => x.Timestamp, UpdateDefinitionCurrentDateType.Timestamp),\n new UpdateOptions { IsUpsert = true })\n .ConfigureAwait(false);\n}\n$$CLUSTER_TIME$currentDate", "text": "Since Mongodb 4.2 there is opportunity to use updates with aggregation pipeline. In particular, it became possible to use system variables: $$NOW and $$CLUSTER_TIME. So now you have 2 ways of setting current timestamp according to documentation. Using operator $currentDate or variable $$CLUSTER_TIMEBut for some reason these two methods works diffirently in some occasions, which is not clear from the docs at all.For example, I want to store some entities with timestamp field, which has to be unique across the entire collection.Now, let’s write a lot of objects concurrentlyTest failed with the error belowBut if I implement of UpsertOne method with $currentDate operatortest will pass.Why $$CLUSTER_TIME sys variable returns equal values in case of parallel upserts? And why $currentDate don’t?", "username": "hexlify" }, { "code": "$CLUSTER_TIME$currentDate$currentDate$$NOWNOWCLUSTER_TIMEtime_tordinalreplset [direct: primary] test> db.aggregate([ {$documents:[{now:'$$NOW', cluster_time:'$$CLUSTER_TIME'}]} ])\n[\n {\n now: ISODate(\"2022-10-26T03:07:05.889Z\"),\n cluster_time: Timestamp({ t: 1666753622, i: 1 })\n }\n]\n\nreplset [direct: primary] test> db.aggregate([ {$documents:[{now:'$$NOW', cluster_time:'$$CLUSTER_TIME'}]} ])\n[\n {\n now: ISODate(\"2022-10-26T03:07:06.194Z\"),\n cluster_time: Timestamp({ t: 1666753622, i: 1 })\n }\n]\nnowcluster_timeicluster_time$$CLUSTER_TIME$$NOW$currentDate$$NOW$currentDate", "text": "Hi @hexlify welcome to the community!Why $CLUSTER_TIME sys variable returns equal values in case of parallel upserts? And why $currentDate don’t?I am assuming you return $currentDate as date type instead of timestamp (which makes it equivalent to $$NOW). 
But in short, it’s because they return different things. From the system variable page:And this is what timestamp is: https://www.mongodb.com/docs/manual/reference/bson-types/#timestampsThis internal timestamp type is a 64 bit value where:So if I have an aggregation to see the values of those two variables and running that aggregation in quick-ish succession, it prints:Note that now looks different, but cluster_time looks identical between the two runs. This is because the i value in cluster_time is “operation counter in that second”, so it’s feasible that since you’re running the test in parallel, the cluster hasn’t done any operation and thus the script can create identical $$CLUSTER_TIME values. In contrast, you’ll very much less likely to create two identical $$NOW/$currentDate values since they have a much smaller resolution (milliseconds instead of seconds).Having said that, since timestamp is an internal MongoDB variable, it’s usually best to use $$NOW or $currentDate instead for practical purposes.Best regards\nKevin", "username": "kevinadi" }, { "code": "$currentDate$NOWCLUSTER_TIME$currentDateUpdateDefinitionCurrentDateType.Timestampmongod$currentDate { $type: \"timestamp\" }", "text": "Thanks for your response.I am assuming you return $currentDate as date type instead of timestamp (which makes it equivalent to $NOW )I guess it’s my fault that I didn’t make it clear that I compare CLUSTER_TIME sysvar and $currentDate operator with timestamp type (function defined in last snippet uses UpdateDefinitionCurrentDateType.Timestamp constant).This is because the i value in cluster_time is “operation counter in that second”, so it’s feasible that since you’re running the test in parallel, the cluster hasn’t done any operation and thus the script can create identical $$CLUSTER_TIME values.It’s interesting because I thought that timstamp values are always unique. But as the documentation says it’s only true for one mongod instance. And I have replica set of 2 nodes locally. Probably requests go to random node every time and therefore timestamps are identical.But the question remains: why the test with parallel upserts works perfectly with $currentDate { $type: \"timestamp\" }?", "username": "hexlify" }, { "code": "$currentDate { $type: \"timestamp\" }$$NOW", "text": "But the question remains: why the test with parallel upserts works perfectly with $currentDate { $type: \"timestamp\" } ?This is difficult to determine without running the tests themselves and work back on the exact reason. However I think there’s a high possiblity this can occur anyway, since a timestamp’s resolution is per-second. The incrementing ordinal totally depends on the number of operations performed by the server in that second, so a collision is quite likely to occur in a highly parallel workload.However I would recommend your app to not use timestamp datatype since:Hopefully this helps!Best regards\nKevin", "username": "kevinadi" } ]
$$CLUSTER_TIME vs $currentDate in case of parallel upserts
2022-10-20T21:01:58.160Z
$$CLUSTER_TIME vs $currentDate in case of parallel upserts
1,949
null
[ "php" ]
[ { "code": "", "text": "Has anyone successfully installed PHP8.1 with the mongodb.so driver in linux?\nI am using Ubuntu 22.04, which is running flawlessly.1… I have installed mongodb, which works as advertised using the mongosh shell.\n2… PHP is installed using the recommended installation process.\n3… The mongodb driver is installed through pear(PECL)\n4… I have verified that mongodb.so is actually in folder /usr/lib/php/20210902\n4… I have added extension=mongodb.so to the php.ini file which is located in /etc/php/8.1/apache2/\n5… I have added the extension dir /usr/lib/php/20210902, which isn’t supposed to be necessary, but I did it anyway.Both of the paths above are exactly the same as reported in phpinfo.phpWhen I verify my installation using php -m, I get the following errorPHP Startup: Unable to load dynamic library ‘mongodb.so’ (tried: /usr/lib/php7/modules/mongodb.so (/usr/lib/php7/modules/mongodb.so: undefined symbol: ns_parserr), /usr/lib/php7/modules/mongodb.so.so (/usr/lib/php7/modules/mongodb.so.so: cannot open shared object file: No such file or directory))This is the installation process I have used.\nsudo apt install apache2\nsudo apt install phpINSTALL MONGODB EXTENSION FOR PHPsudo apt install php-dev php-pear\nsudo apt -y install php-mongodb", "username": "Robert_Duncan" }, { "code": "sudo pecl install mongodb", "text": "You don’t show yourself installing the MongoDB PHP Extension via sudo pecl install mongodb", "username": "Jack_Woehr" }, { "code": "", "text": "I see where I installed it incorrecrly. re-installed with sudo pecl install mongodb. Still will not connect.\nI have tried new mongodbClient(‘localhost:27017’)\nI have tried new mongodb\\driver\\manager(‘localhost:27017’)\nboth throw this err recorder in apache2 error log\nUncaught Error: Class ‘MongoDB\\Driver\\Manager’ not found\nmongodb.so is in the folder /usr/lib/php/20210902\nI have 2 php.ini /etc/php/8.1/apache2/php.ini – /etc/php/8.1/cli/php.ini and the extension is loaded in both. the one in /cli is the loaded file.\nphp -m verifies that the extension is loaded and phpinfo verifies that mongodb is loadedI am at an impass. I can’t think of anything else.", "username": "Robert_Duncan" }, { "code": "require_once getenv('DOCUMENT_ROOT') . '/../vendor/autoload.php';", "text": "You have to have the vendor stuff at hand, e…g, require_once getenv('DOCUMENT_ROOT') . 
'/../vendor/autoload.php'; … you’re not loading the MongoDB library on top of the extension.", "username": "Jack_Woehr" }, { "code": "$ php -a\nInteractive shell\n\nphp > require_once('vendor/autoload.php');\nphp > $x = new \\MongoDB\\Client('mongodb://localhost');\nphp > var_dump($x);\nobject(MongoDB\\Client)#3 (4) {\n [\"manager\"]=>\n object(MongoDB\\Driver\\Manager)#2 (2) {\n [\"uri\"]=>\n string(19) \"mongodb://localhost\"\n [\"cluster\"]=>\n array(0) {\n }\n }\n [\"uri\"]=>\n string(19) \"mongodb://localhost\"\n [\"typeMap\"]=>\n array(3) {\n [\"array\"]=>\n string(23) \"MongoDB\\Model\\BSONArray\"\n [\"document\"]=>\n string(26) \"MongoDB\\Model\\BSONDocument\"\n [\"root\"]=>\n string(26) \"MongoDB\\Model\\BSONDocument\"\n }\n [\"writeConcern\"]=>\n object(MongoDB\\Driver\\WriteConcern)#6 (0) {\n }\n}\nphp >\n", "text": "", "username": "Jack_Woehr" }, { "code": "", "text": "When you have multiple php.ini files, you can create a PHP info file, and output the current PHP configuration to verify which version is active, and which php.ini file is being used.\nhttps://www.php.net/manual/en/function.phpinfo.phpIt would show up like this, for example:\nLoaded Configuration File\tC:\\SOME_PATH\\bin\\php\\php-7.4.30-Win32-vc15-x64\\php.iniWhen you search for “MongoDB” in the page, there’s a whole section that shows if MongoDB is active, and which version of the driver is loaded (or not).", "username": "Hubert_Nguyen1" }, { "code": "", "text": "if you have a file named mongodb.ini in the route /etc/php/8.1/mods-available whit the content “extension=mongodb.so”, you can tried comment this line, this works fine for me in php 8.1 and mongodb extensión 1.14.2.", "username": "CubitNet_api" } ]
Mongodb.so will not load with PHP 8.1 on Ubuntu 22.04
2022-10-13T23:55:05.308Z
Mongodb.so will not load with PHP 8.1 on Ubuntu 22.04
9,970
null
[ "aggregation" ]
[ { "code": "", "text": "So I am trying to sort a nested array which contains an array with many variables that I want to sort by.\nI recreated my issue in the following playground:\nMongo playground\nAs you can see I included $sort however the result I am getting is not being sorted. Any idea how can I make this work?", "username": "Aleks_tr" }, { "code": "", "text": "If you look at the $sort documentation as you use it, you will see that it is a pipeline stage that sorts the documents rather than the element of an array.To sort elements of an array you have $sortArray.If sorting the elements of an array is a major use-case, you might be better off sorting the elements while building the array using the $sort modifier.I usually prefer to do this type of data cosmetic (sorting an inner table) in the application presentation layer as to reduce the amount of work done on the data server as it is naturally distributed on the workstation of the users.As for the field “Supplier Manager”, why don’t you use a real array rather than a formatted string? You would not need $regexMatch.", "username": "steevej" } ]
Sorting nested tables
2022-10-30T04:11:33.871Z
Sorting nested tables
950
null
[ "aggregation", "node-js", "data-modeling", "mongoose-odm" ]
[ { "code": "const spaceSchema = Schema(\n\t{\n\t\tuser: {\n\t\t\ttype: Schema.Types.ObjectId,\n\t\t\trequired: true,\n\t\t\tref: \"User\",\n\t\t},\n\t\tname: {\n\t\t\ttype: String,\n\t\t\trequired: true,\n\t\t},\n\t\tdays: [{ type: Schema.Types.ObjectId, refPath: \"Day\" }],\n\t},\n\t{\n\t\tversionKey: false,\n\t timestamps: true \n },\n);\n\nconst Space = mongoose.model(\"Space\", spaceSchema);\n\nmodule.exports = Space;\n\nconst daySchema = Schema(\n\t{\n\t\tspace: { type: Schema.Types.ObjectId, refPath: \"Space\" },\n\t\tdate: {\n\t\t\ttype: String,\n\t\t\trequired: true,\n\t\t},\n\t\twater: {\n\t\t\ttype: Boolean,\n\t\t},\n\t\tfertilizer: {\n\t\t\ttype: Boolean,\n\t\t},\n\t\ttransplant: {\n\t\t\ttype: Boolean,\n\t\t},\n\t\tcomment: {\n\t\t\ttype: String,\n\t\t},\n\t},\n\t{\n\t\tversionKey: false,\n\t timestamps: true \n },\n);\n\nconst Day = mongoose.model(\"Day\", daySchema);\n\nmodule.exports = Day;\nconst getSpace = asyncHandler(async (req, res) => {\n\tconst space = await Space.findById(req.params.id).populate(\"days\");\n\tres.status(200).json(space);\n});\n\n{\n \"_id\": \"63580115978dbf8f2f5a7a50\",\n \"user\": \"63501ab84613855834daa4ef\",\n \"name\": \"spaceName\",\n \"days\": []\n}\n \"space\": {\n \"_id\": \"63580115978dbf8f2f5a7a50\",\n \"user\": \"63501ab84613855834daa4ef\",\n \"name\": \"spaceName\",\n \"days\": [\n {\n \"_id\": \"63581af565aed8cad3210046\",\n \"space\": \"63580115978dbf8f2f5a7a50\",\n \"date\": \"29/10/2022\",\n \"water\": true,\n \"fertilizer\": true,\n \"transplant\": false,\n \"comment\": \"This is a comment.\",\n \"createdAt\": ...,\n \"updatedAt\": ...\n },\n {\n \"_id\": \"63581af565aed8cad3210046\",\n \"space\": \"63580115978dbf8f2f5a7a50\",\n \"date\": \"29/10/2022\",\n \"water\": false,\n \"fertilizer\": false,\n \"transplant\": true,\n \"comment\": \"This is a comment.\",\n \"createdAt\": ...,\n \"updatedAt\": ...\n }\n ]\n }\n\n", "text": "First post here, so hello to everyone!I’m having trouble with mongoose populate because it’s apparently doing nothing. I’m going straight to the code.Here are my models:SpaceDayAnd the part of the controller where I have the getSpace route (/api/spaces/:id):And this is the result I get:I tried many things, but all the results were the same.Let me know if you need any other part of the code.Thank you all I expect the result to look something like this", "username": "Jordi_Olle_Balleste" }, { "code": "refPathrefrefPathrefdaysconst spaceSchema = Schema(\n\t{\n\t\tuser: {\n\t\t\ttype: Schema.Types.ObjectId,\n\t\t\trequired: true,\n\t\t\tref: \"User\",\n\t\t},\n\t\tname: {\n\t\t\ttype: String,\n\t\t\trequired: true,\n\t\t},\n\t},\n\t{\n\t\tversionKey: false,\n\t\ttimestamps: true,\n\t},\n);\n\nspaceSchema.virtual('days', {\n\tref: 'Day',\n\tlocalField: '_id',\n\tforeignField: 'space',\n});\n\nconst Space = mongoose.model(\"Space\", spaceSchema);\n\nmodule.exports = Space;\n", "text": "refPath doesn’t mean, what you might think it does. What you need here is a simple ref.\nSo, just change refPath to ref in your schemas and your example should be working as you expect.But, there is a second thing. You have redundant references: days refer to spaces and spaces to days. Generally you don’t need to refer to each other in both ways (unless in your specific use case you do). For many-to-one or many-to-many relations there are Populate Virtuals. 
You can read up on it in the Mongoose documentation.So I would get rid of the days field in the Space schema, and replace it with a populate virtual:", "username": "pepkin88" }, { "code": "{\n \"_id\": \"63580b9f6d2b99c33923c876\",\n \"user\": \"63501ab84613855834daa4ef\",\n \"name\": \"balco\",\n \"createdAt\": \"2022-10-25T16:15:27.304Z\",\n \"updatedAt\": \"2022-10-25T16:15:27.304Z\"\n}\nconst getSpace = asyncHandler(async (req, res) => {\n\tconst space = await Space.findById(req.params.spaceId);\n\tspace.days = await Day.find({ space: req.params.spaceId });\n\tres.status(200).json(space);\n});\n", "text": "Hi pepkin88,Thanks for your answer I tried this populate virtual, but it’s returning me this now:There are no days in my answer I also changed refPath to ref but no luck.I also tried another way (I’m not so wure if it’s better or worse than using populate) but I got the answer I wanted.\nI used this:And answer is what I wanted Thanks for your answer anyway ", "username": "Jordi_Olle_Balleste" } ]
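One possible reason the virtual did not show up in the JSON response (an assumption on my part, not confirmed in the thread): Mongoose does not serialize virtuals unless the schema opts in, and the virtual still has to be populated by name. A sketch:

```js
const spaceSchema = Schema(
  { /* fields as in the original schema */ },
  {
    versionKey: false,
    timestamps: true,
    toJSON: { virtuals: true },   // include virtuals when res.json() serializes the document
    toObject: { virtuals: true },
  }
);

// inside the async route handler
const space = await Space.findById(req.params.id).populate("days");
```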
Mongoose .populate() is not working with node.js app
2022-10-26T15:40:45.885Z
Mongoose .populate() is not working with node.js app
10,333
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello , I am new to mongodb. if you are embedding too much data in the document,it will increase document size and the overhead to send the document over network. But for retrieving the document, we can use projection and get only particular fields that we need. So we are decreasing the network overhead , right?. Please tell me if I am missing something here.", "username": "Sudarshan_Dhatrak" }, { "code": "", "text": "Hi @Sudarshan_Dhatrak ,You are correct. The main question weather embedding the data helps the application fetch any relationship in one query to one or few documents and not hitting any known antipattern:Get a summary of the six MongoDB Schema Design Anti-Patterns. Plus, learn how MongoDB Atlas can help you spot the anti-patterns in your databases.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Help regarding mongodb schema design
2022-10-29T08:52:00.432Z
Help regarding mongodb schema design
947
null
[]
[ { "code": "", "text": "For the past day, I keep randomly getting the error \" MongoError: connect ECONNREFUSED 127.0.0.1:27017\" Never had this issue as I have the service auto start on boot. When starting the mongod service I get the message below, not sure if this has something to do with it. Thanks/lib/systemd/system/mongod.service:11: PIDFile= references path below legacy directory /var/run/, updating /var/run/mongodb/mongod.pid → /run/mongodb/mongod.pid; please update the unit file accordingl", "username": "Joe" }, { "code": "", "text": "I think your mongod.config file and systemd service file should have same path for mongod.pid", "username": "Ramachandra_Tummala" } ]
MongoError: connect ECONNREFUSED 127.0.0.1:27017
2022-02-17T01:47:29.789Z
MongoError: connect ECONNREFUSED 127.0.0.1:27017
3,381
https://www.mongodb.com/…29e9abe38b1.jpeg
[ "dot-net" ]
[ { "code": "", "text": "Hi,I am new to Maui and Realm, so please bear with me.I have now been playing around with Realm for some days and I’m able to save local on the device.\nNext step is now to sync the data to the cloud, but I have some problems with that.I try to follow this Youtube video from Luce Carter, to see if that will work for me, but I am getting a No User Object error, can somebody help me?Kind regards,\nJonas", "username": "Jonas_Sonderby" }, { "code": "", "text": "Hey Jonas, can you post some code around where the error occurs as well the actual error you’re getting along with the stacktrace. That’ll help us greatly when identifying what the issue may be.", "username": "nirinchev" }, { "code": " public async Task InitialiseRealm()\n {\n config = new PartitionSyncConfiguration($\"{App.RealmApp.CurrentUser.Id}\", App.RealmApp.CurrentUser);\n realm = await Realm.GetInstanceAsync(config);\n Console.WriteLine(realm.All<User>());\n user = realm.Find<User>(App.RealmApp.CurrentUser.Id);\n\n if(user == null)\n {\n await Task.Delay(5000);\n user = realm.Find<User>(App.RealmApp.CurrentUser.Id);\n\n if(user == null)\n {\n Console.WriteLine(\"NO USER OBJECT: This error occurs if \" +\n \"you do not have the trigger configured on the backend \" +\n \"or when there is a network connectivity issue. See \" +\n \"https://docs.mongodb.com/realm/tutorial/realm-app/#triggers\");\n\n await App.Current.MainPage.DisplayAlert(\"No User object\",\n \"The User object for this user was not found on the server. \" +\n \"If this is a new user acocunt, the backend trigger may not have completed, \" +\n \"or the tirgger doesn't exist. Check your backend set up and logs.\", \"OK\");\n }\n\n }\nexports = async function(authEvent) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const users = mongodb.db(\"AtlasCluster\").collection(\"User\");\n const { user, time } = authEvent;\n\treturn users.insertOne({_id:user.id,_partition: user.id,name: user.data.email});\n};\n", "text": "Hi NikolaI can try, here is the code that’s failing, the user object is not being definedAnd here is my function:", "username": "Jonas_Sonderby" }, { "code": "", "text": "", "username": "Jonas_Sonderby" }, { "code": "", "text": "Hello,This app only shows an example of “front-end” code to show what kinds of things you can do with Realm and MAUI.The users are created from back-end code using Atlas App Services Triggers and Functions.If you wanted to actually replicate the whole thing yourself, you would also need to set that up too.\nThe tutorial I followed has been deprecated now in favour of using Templates in App Services but I will make two GitHub gists tomorrow and share the code I added", "username": "Luce_Carter" }, { "code": "", "text": "I’ve updated the repo now with some details in the Readme and files showing what you would need to add to your App Services app ", "username": "Luce_Carter" }, { "code": "", "text": "Thank you a lot Luce!!I made it work with some small changes:Make a new AppService with the Build your own app template and name it HouseMovingAssistantCreate functions\nReplace HouseMovingAssistantCluster with HouseMovingAssistantDBCreate triggerInsert the app ID in your appAuthentication\nGo to Authentication and Authentication Providers Select Email/Password and the the following:\nProvider Enabled: Enabled\nUser Confirmation Method: Automatically confirm users\nPassword Reset Method: Send a password reset email\nPassword Reset URL: https://yoururlEnable deviceSync\nGo to Device sync and Enable 
sync, press No thanks, continue to Sync and use the following settings\nSync Type: Partition based\nDevelopment Mode: Enabled\nSelect a Cluster to Sync: “Your Cluster”\nDefine a Database Name: HouseMovingAssistantDB\nChoose a Partition Key: _partitionDefine Permissions: Users can only read and write their own dataThe first time i started the app it was failing, but i think it is because the schema isn’t in place", "username": "Jonas_Sonderby" }, { "code": "", "text": "If you use https://www.buymeacoffee.com, then I would like to buy a coffee for you ", "username": "Jonas_Sonderby" }, { "code": "", "text": "I seem to have a problem with one line of code that is messing up the rest of my test of this app.\nuser = realm.Find(App.RealmApp.CurrentUser.Id);\nThe user property ends up null. I find this a pretty interesting example since using MVVM and the community tool kit. Has a nice simple, readable code, but that line?", "username": "Michael_Gamble" }, { "code": "", "text": "I started my learning of Xamarin with Realm/Atlas with the Template for xamarin. Got it working just fine. But what led me to your example was as soon as I tried to use a partition, which I have a feeling is the best way to sync, the template example went south! I played around trying to get it working and just was missing things.", "username": "Michael_Gamble" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Maui - HouseMovingAssistant
2022-10-26T23:53:26.159Z
Realm Maui - HouseMovingAssistant
2,145
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "const documents = [\n {\n id: '1',\n name: 'Ben'\n },\n {\n id: '2',\n name: 'John',\n age: 24\n },\n {\n id: '3',\n name: 'Jane',\n country: 'USA'\n }\n]\n\nconst result = documents.reduce((acc, doc) => {\n // loop through the given document\n // add 'key(field name)' to the accumulator object and\n // push the 'value' to the set\n Object.entries(doc).forEach(([key, value]) => {\n // ignore id\n if (key === 'id') {\n return\n }\n acc[key] = acc[key] || new Set()\n acc[key].add(value)\n })\n return acc\n}, {})\n\nconsole.log(result)\n\n// resulting in\n// {\n// name: Set(3) { 'Ben', 'John', 'Jane' },\n// age: Set(1) { 24 },\n// country: Set(1) { 'USA' }\n// }\n$mergeObjects$group$map$reduce$map$reduce", "text": "I thought it’s rather convenient to express what I want in code so I’ve written one.How would I achieve this aggregation in mongodb? I have some clues but not really sure how.\nI could make use of $mergeObjects, $group, $map, $reduce. The code is already there so I can simply run it on nodejs but I’m leavving it as a last ressort. And if I use $map or $reduce like in the code above, does it cause performance issue compare to other some smart looking aggregations?", "username": "Polar_swimming" }, { "code": "set_stage = { { \"$set\" : { \"_array\" : { \"$objectToArray\" : \"$$ROOT\" } } }\nunwind_stage = { \"$unwind\" : \"$_array\" }\ngroup_stage = { '$group': { _id: '$_array.k', v: { '$push': '$_array.v' } } }\npipeline = [ set_stage , unwind_stage , group_stage ]\ncollection.aggregate( pipeline )\n// ignore id", "text": "One way to do it would be:The exercise I left to the reading to filter out (with a $match between $unwind and $group) as per the requirement:// ignore id", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Merging documents into one document and accumulate fields and values
2022-10-28T13:24:35.613Z
Merging documents into one document and accumulate fields and values
1,079
null
[ "aggregation", "python" ]
[ { "code": " count().count_documents()Cursor' object has no attribute 'count_documentsCursor' object has no attribute 'counturi = \"XXXXXXXXXX\"\nclient = MongoClient(URI); # creating an instance of the client\n\nprint(client.sample_training.zips.find(\n {\n \"pop\" : {\"$lt\" : 1000}\n }\n\n ).count_documents())\n", "text": "Hey People,\nI know this is one of the most basic topics but I cant figure out where im going wrong, trying . count() and .count_documents() and getting the following error:Cursor' object has no attribute 'count_documents\nCursor' object has no attribute 'count.the code im using is:im feeling quite dumb here…", "username": "chris_wood" }, { "code": "uri = \"XXXXXXXXXX\"\nclient = MongoClient(URI);\n\nclient.sample_training.zips.count_documents(\n {\n \"pop\" : {\"$lt\" : 1000}\n }\n)\n", "text": "I figured out the answer, in case anyone else is interested:", "username": "chris_wood" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to count documents in pymongo
2022-10-29T11:34:06.005Z
How to count documents in pymongo
10,724
https://www.mongodb.com/…e_2_1024x598.png
[ "node-js", "data-modeling", "mongoose-odm" ]
[ { "code": "", "text": "Please I need help structuring my MongoDB database for a Multi Level Marketing Application my team and I are currently working on. We’re using Express for our server and MongoDB for the database. Here is my use case:I want when there is a user at the Left and Right node of each parent, a matching bonus should be paid to the parent.How to use mongoose to get each parent tree with all childrenHERE is a sample structure of a parent with children\nScreenshot 2022-07-26 1137021176×687 28.5 KBIt would be of great help if I could get a response from the community. Thank to anyone that could help, God, bless.", "username": "Omega_Uwedia" }, { "code": "", "text": "hi, Is your problem solved?", "username": "req_demo" } ]
Mongodb Database structure on an MLM application
2022-07-26T11:11:50.535Z
Mongodb Database structure on an MLM application
2,434
https://www.mongodb.com/…64786bb56b32.png
[ "installation" ]
[ { "code": "", "text": "Hi, I am trying to install a local instance of mongodb community server. During the installation i get an error message saying that it cannot add my user to the group, performance monitor users. What do i do?", "username": "Nils_Hedberg" }, { "code": "", "text": "It is solved with this command for cmd:\nnet localgroup “Performance Monitor Users” /add", "username": "Daniil_Sorokin" }, { "code": "", "text": "It worked for me. Thank you!", "username": "Mohammad_Kafaei" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB Community local installation error "Failed to add user to group"
2021-02-12T18:28:02.268Z
MongoDB Community local installation error &ldquo;Failed to add user to group&rdquo;
6,551
https://www.mongodb.com/…8b_2_1024x63.png
[ "installation" ]
[ { "code": "", "text": "\nScreen Shot 2021-12-15 at 10.55.491324×82 23.8 KB\nI tried to install the\n5.0 community version using homebrew. This is the error that I get.I tried to downgrade to version 4.4. I started mongo via terminal it works, but it does not recognize the ‘mongo’ command in the terminal. TI will be glad to get some help ", "username": "Eitan_Yona" }, { "code": "xcode-select --instal Because you need the Xcode tools.\nbrew tap mongodb/brew\nbrew install [email protected]\nbrew services start [email protected]\ndocker run -p 27018:27017ls -l /tmp/mongodb-27017.sock\nsudo rm -rf /tmp/mongodb-27017.sock\nbrew services start mongodb-community\nbrew services list\nbrew tap mongodb/brew\nbrew install [email protected]\nbrew services start mongodb-community \n", "text": "Hello Eitan,Welcome to the MongoDB community.This is somewhat amusing to a point, as I’ve literally spent the past few days working with MongoDB through the Home-brew, and Docker Images. lolSo yes, 5.0 may have issues installing on your Mac through home-brew, because you have an M1 Mac. A work around is definitely 4.4 due to SSE issues in 5 pertaining to AVX, but if you’re not using virtualization such as Docker this may not be the case.Now for your main issue of getting 4.4 to work, this is what I did/do (I have an M1 MacBook Pro that I just love to use for tests and things like this, too.)We have the documentation located here: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-os-x/But I do recommend that instead of doing it this way as it sounds like you’re setting up a test environment. I would pursue further evolving your knowledge and DevOps skills by implementing your MongoDB images on your Mac with Docker images. And learning how to use Docker alongside MongoDB, as well as other ways how to implement and orchestrate containerized virtualization.This method also makes it substantially faster and easier to setup and tear down applets and micro services that you construct.https://hub.docker.com/_/mongoNote:Note2: This is a method that has gotten 5 to work on an M1 Mac by doing the following:Note3: Depending on system configs and any scripts that you have going for graceful failovers, double check the MongoDB version installed on your system after you have it running.Note4: The main steps to install via Brew that I did on my M1 MacBook Pro are the following.If there is anything else for which I can be of assistance, please feel free to ask.Regards,Brock", "username": "Brock_GL" }, { "code": "", "text": "A post was split to a new topic: Error installing with Homebrew: Your Xcode (14.0.1) is too outdated", "username": "Stennie_X" } ]
Problem with mongodb installation mac using homebrew
2021-12-15T09:01:41.056Z
Problem with mongodb installation mac using homebrew
7,145
https://www.mongodb.com/…9decf51f2e57.png
[ "aggregation", "queries", "node-js" ]
[ { "code": "", "text": "\nerror1019×280 12.2 KB\n", "username": "Muhammad_Saif_ur_Rehman" }, { "code": "", "text": "Welcome to the MongoDB Community @Muhammad_Saif_ur_Rehman !Please share some more information to help reproduce this issue:Thanks,\nStennie", "username": "Stennie_X" } ]
I want to retrieve data from mongodb by using date month but error occur how to fix it...…
2022-10-28T08:57:15.744Z
I want to retrieve data from mongodb by using date month but error occur how to fix it&hellip;…
1,290
null
[]
[ { "code": "", "text": "Can we set w:0 & Journaling to false?How that will help to reduce speed", "username": "Santhosh_V" }, { "code": "", "text": "Hi @Santhosh_V,Depending on your deployment type and the importance of your data, both are available options that may trade some write latency for increased risks around data durability and crash recovery (which is the purpose of the journal). The impact on write throughput will depend on your use case and deployment resources, so you will have to test with a representative environment and workload.However, before taking either measure it would be best to discuss more details about your use case including deployment type (standalone, replica set, or sharded cluster), MongoDB server version, and the performance issue or concern you are trying to address.Journaling cannot be disabled for replica sets using the WiredTiger storage engine and modern versions of MongoDB. Turning off this safety feature on a standalone server will expose you to unclean shutdown scenarios which may require recovery from a backup. I would not recommend this approach unless your data is ephemeral or this is not a system of record and data can be easily reingested from another source.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we set w:0 & journaling to false?
2022-10-28T05:09:08.710Z
Can we set w:0 &amp; journaling to false?
911
null
[ "sharding" ]
[ { "code": "", "text": "Hello All,I tried to install 3.6.8 version on our new server to keep up with old server mongo version so that I can dump data and restore.Where I tried this at 4th step However its saying Unable to find packages for all these .Can someone help me. If not let me know how I can dump and restore from older version to newer version.sudo apt-get install -y mongodb-org=3.6.8 mongodb-org-server=3.6.8 mongodb-org-shell=3.6.8 mongodb-org-mongos=3.6.8 mongodb-org-tools=3.6.8", "username": "priyatham_ik" }, { "code": "mongodump", "text": "Hi @priyatham_ik,MongoDB 3.6 reached End Of Life (EOL) in April 2021 so it is likely you are using a newer version of Debian than may be supported: MongoDB 3.6 Supported Platforms.Some possible approaches in order of recommendation:Try using mongodump from the standalone MongoDB Database Tools. Compatibility is not tested as far back as 3.6 so this is not guaranteed to work, but would be worth trying.Download MongoDB 3.6 from the Archived Releases page. The “Linux (legacy) x64” tarball would be the most straightforward to work with since you are only looking to use database tools.If you are using Debian 8 or 9 (the only versions with MongoDB 3.6 packages) you could try following the instructions to Install MongoDB 3.6 on Debian. However, since MongoDB 3.6 is an End-Of-Live version you may encounter issues such as expired public keys for package signing.Regards,\nStennie", "username": "Stennie_X" }, { "code": "ssh -fN -L 27018:localhost:27017 <remote_host> \n\n mongodump --port 27018 --db <remote_db_name> --username <remote_db_username> --password <remote_db_password> --archive | mongorestore --username <destination_db_username> --password <destination_db_password> --archive\n\nAfter running these commands on my new server I see my db when I do show dbs on mongo shell however when I change the port to 27018 in mongodb config file in order to run this restored db its throwing error and exiting with 42 saying address already in use and Its happening because of the ssh tunnel and when I kill the port and re run it now I dont see the restored DB .\n\nCan anyone help me ?\n\n\n", "text": "Hello @Stennie_X ,I have installed the 4.4 version on new server and did this mentioned below to dump and restore from old db to new db", "username": "priyatham_ik" }, { "code": "", "text": "Why you want to start mongod again?\nJust connect to your restored db using the correct port 27017\nShow us how you are connecting with screenshots", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hello @Ramachandra_Tummala ,On my new server I have restored to the port 27018 as per my previous commands when I type mongo and run show dbs I can see the restored db however when I check sudo service mongod status its showing inactiveHere is what it shows when I do show dbs by entering the mongo shell using mongo --port 27018show dbs\nMyDB 0.018GB\nadmin 0.000GB\nconfig 0.000GB\nlocal 0.000GB\nplaylist 0.000GB\ntest_playlist 0.000GBHere is what I get when I do sudo service mongod status and I have updated port to 27018 in config file\nimage909×308 70.6 KB\n", "username": "priyatham_ik" }, { "code": "", "text": "Are you sure you restored to your mongod on port 28000\nYour mongodump is from port 28000 which you piped to restore on localhost on default port 27017\nWhat do you see if you issue just mongo", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_Tummala\nYes I’m certainly sure that I restored to port 27018 and I followed the second answer 
from this post How do I copy a database from one MongoDB server to another? - Stack Overflow.If I understood it correctly I have ran the ssh tunnel on new server which makes all the request forwarded through port 27018 to old server (the db from which Im going to dump) with default port.when I do mongo on new server this is what I get -mongo\nMongoDB shell version v4.4.17\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1However when I do mongo --port 27018 on the same new server this is what I get-\nmongo --port 27018\nMongoDB shell version v4.4.17\nconnecting to: mongodb://127.0.0.1:27018/?compressors=disabled&gssapiServiceName=mongodb", "username": "priyatham_ik" }, { "code": "", "text": "@Ramachandra_Tummala @Stennie_XThanks for your help and Info guys ,Finally I was able to dump and restore this way-From new server I dumped remotely using : mongodump --host =10.10.10.10(remotehost) --port=27017 and\nthen restored in newserver: mongorestore --port=27018", "username": "priyatham_ik" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How Can I Install Mongodb 3.6.8 version on debian?
2022-10-28T03:20:42.370Z
How Can I Install Mongodb 3.6.8 version on debian?
5,086
null
[ "flutter", "app-services-cli" ]
[ { "code": "const String _appId = 'xxxxx';\nfinal AppConfiguration _appConfig = AppConfiguration(_appId);\nfinal App app = App(_appConfig);\n\n var emailCred = Credentials.emailPassword(email, password);\n\n User currentUser = await app.logIn(emailCred);\n\n final realm = Realm(\n Configuration.flexibleSync(\n currentUser,\n [Dog.schema],\n syncErrorHandler: (SyncError error) {\n print(\"Error message ${error.message.toString()}\");\n },\n ),\n );\n print('User: ${currentUser.id}');\n\n realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add(\n realm.all<Dog>());\n });\n\n realm.write(() {\n realm.add(Dog(ObjectId(), 12, 'Clifford3'));\n });\nconst String _appId = 'xxxxxx';\nfinal AppConfiguration _appConfig = AppConfiguration(_appId);\nfinal App app = App(_appConfig);\n\nvoid main() {\n final ItemService service = ItemService();\n runApp(MyApp(\n service: service,\n ));\n}\n var emailCred = Credentials.emailPassword(email, password);\n User currentUser = await app.logIn(emailCred);\n openRealm() {\n\n final Configuration _config = Configuration.flexibleSync(\n app.currentUser,\n [Item.schema],\n syncErrorHandler: (SyncError error) {\n print(\"Error message ${error.message.toString()}\");\n },\n );\n _realm = Realm(_config);\n }\nLateInitializationError: Field '_realm@702196172' has not been initialized.\n", "text": "Hi guys! I’m new here and in mongoDB / Flutter / Realm world.I’m building an app with Flutter/Dart and Realm/MongoDB Atlas.My app has a login with email/password and it is working perfectly when I call all the code bellow in the same file (loginPage.dart), after the user insert email and password:But, it is not working when I try to separated the code in other files like:main.dart:loginPage.dart:item_service_dart:The error bellow happens in openRealm function in “app.currentUser” that is not setted. How can I get the user data?Thanks!", "username": "Leozitus" }, { "code": "openRealm", "text": "Are you displaying the login page at all? There seems to be code missing and my guess is that at the time openRealm is called, the user hasn’t been authenticated yet. Would it be possible to upload the complete project somewhere so we can look at the full picture?", "username": "nirinchev" }, { "code": "\"LateInitializationError: Field '_realm@702196172' has not been initialized.\"\n[ERROR] Realm: Connection[1]: Session[1]: Error integrating bootstrap changesets: Failed to transform received changeset: Schema mismatch: 'Item' has primary key '_id', which is nullable on one side, but not the other.\nflutter: Error message Bad changeset (DOWNLOAD)\n\n{\n \"title\": \"Item\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"done\",\n \"text\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"done\": {\n \"bsonType\": \"bool\"\n },\n \"text\": {\n \"bsonType\": \"string\"\n }\n }\n}\n", "text": "Hi @nirinchev . Thanks for your reply.My git repo is https://github.com/coffeelydev/shopping-list-v2 Maybe could help you to understand. There is not a login page. It will be in the future. 
Now I just call the an Auth function.Now there is 2 errors:I already created a schema but it is not working:", "username": "Leozitus" }, { "code": "_realmItemService.getItemslate\"LateInitializationError: Field '_realm@702196172' has not been initialized.\"\n", "text": "This happens because you haven’t set _realm yet when you call ItemService.getItems the first time.When you mark a member a late in dart without an initializer, then it is your responsibility that it is assigned before you first use the variable.Hence theyou see.This is not related to realm", "username": "Kasper_Nielsen1" }, { "code": "", "text": "@Leozitus I have made a PR with a few fixes.BTW: Be aware that you have committed your credentials in the repo!", "username": "Kasper_Nielsen1" }, { "code": "", "text": "Perfect, @Kasper_Nielsen1 ! Thanks a lot!\nYeah, I just keep my credentials to help you guys to find the error. Now I will remove it.", "username": "Leozitus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Retrieving app.currentUser to open Realm connection in another file / Flutter
2022-10-26T19:47:13.238Z
Retrieving app.currentUser to open Realm connection in another file / Flutter
2,906