image_url    stringlengths 113–131
tags         list
discussion   list
title        stringlengths 8–254
created_at   stringlengths 24–24
fancy_title  stringlengths 8–396
views        int64 73–422k
null
[ "change-streams" ]
[ { "code": "", "text": "I see there is an open ticket for the 16MB limitation on change stream https://jira.mongodb.org/browse/SERVER-55062Until it is fixed, given a document can be 16MB as well, change stream can crash when we request the pre-image. Looking at https://www.mongodb.com/docs/manual/changeStreams/#change-streams-with-document-pre--and-post-images, it suggests toLimit the document size to 8 megabytesI wonder how can we achieve that with transactions, where we don’t know the full document size until the commit time.", "username": "Yang_Wu1" }, { "code": "", "text": "Hi @Yang_Wu1,I wonder how can we achieve that with transactions, where we don’t know the full document size until the commit time.The document size limit applies to each document in a multi-document distributed transaction (MongoDB 4.2+), not the overall size of the transaction.If individual documents you want pre-images for are likely to approach 8MB or more, I would review the reason for document growth and reconsider your data modelling approach. Large documents are often due to anti-patterns like massive arrays or bloated documents.Alternatively you could avoid using both post-images and pre-images for a collection which has large documents and consider:Requesting only post-imagesRequesting only pre-images in the change stream output and fetching the current documentDisabling pre-images for collections with large documentsRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change stream size limit workaround
2022-09-22T07:23:53.977Z
Change stream size limit workaround
2,400
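A minimal sketch, in TypeScript with the Node.js driver, of the workaround suggested in the thread above: request only post-images so a near-16MB pre-image never has to be returned in the change event. The connection string, database and collection names are hypothetical.

```ts
import { MongoClient } from "mongodb";

// Hypothetical deployment and namespace.
const client = new MongoClient("mongodb://localhost:27017");
const orders = client.db("shop").collection("orders");

// Ask only for the post-image (looked up at read time) and leave
// pre-images off, so an oversized before-image is never materialised.
const stream = orders.watch([], {
  fullDocument: "updateLookup",
  fullDocumentBeforeChange: "off",
});

for await (const change of stream) {
  if (change.operationType === "insert" || change.operationType === "update") {
    // change.fullDocument holds the current state of the document.
    console.log(change.operationType, change.fullDocument?._id);
  }
}
```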
null
[ "replication", "security" ]
[ { "code": "rs.stepDown(60)MongoServerError: not authorized on admin to execute command\n", "text": "I’m trying to run rs.stepDown(60) but I’m getting this error:What permission do I need to add to my user to be able to run stepDown?", "username": "Mark_De_May" }, { "code": "clusterAdminclusterManager", "text": "Hi @Mark_De_MayYou need replSetStateChange privilege, which is allowed in clusterAdmin or clusterManager roles.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
rs.stepDown() permission
2022-09-26T21:17:17.803Z
rs.stepDown() permission
2,235
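A short TypeScript sketch (Node.js driver) of the fix described above: granting a role that carries the replSetStateChange privilege to the user who needs to run rs.stepDown(). The admin credentials and the user name "mark" are placeholders.

```ts
import { MongoClient } from "mongodb";

// Hypothetical admin connection; adjust host and credentials.
const client = new MongoClient("mongodb://admin:secret@localhost:27017/?authSource=admin");

// clusterManager (and clusterAdmin) include the replSetStateChange
// privilege required by replSetStepDown / rs.stepDown().
await client.db("admin").command({
  grantRolesToUser: "mark",
  roles: [{ role: "clusterManager", db: "admin" }],
});
await client.close();
```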
null
[ "node-js", "connecting" ]
[ { "code": "", "text": "I followed the article you wrote but I’m getting this error message .MongoServerSelectionError: getaddrinfo ENOTFOUND", "username": "Penuel_Nwaneri" }, { "code": "getaddrinfo ENOTFOUNDpingping", "text": "Welcome to the MongoDB community @Penuel_Nwaneri !MongoServerSelectionError: getaddrinfo ENOTFOUNDThe getaddrinfo ENOTFOUND error indicates that the hostname you have provided for your MongoDB deployment cannot be resolved by the client you are using.I would double-check the details in your connection string and try to ping that hostname from the command line to verify the name can be resolved to an IP address.If you are still having trouble connecting, please share some more details including:where your cluster is deployed (self-hosted, MongoDB Atlas, other …)O/S version you are connecting fromconfirmation that the ping command is able to resolve the hostname for your MongoDB deployment to an IP addressIf this is a newly created deployment and DNS hostname, there may be a delay before the DNS information propagates through to your local DNS resolver. You may want to try changing your DNS servers to one of the public DNS services like Google Public DNS or Cloudflare public DNS.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoServerSelectionError: getaddrinfo ENOTFOUND
2022-09-24T18:04:54.999Z
MongoServerSelectionError: getaddrinfo ENOTFOUND
29,191
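A small diagnostic sketch, assuming Node.js, for the DNS check suggested above. For a mongodb+srv:// URI the driver resolves an SRV record for the cluster hostname first; if that lookup fails here, the client will fail with getaddrinfo ENOTFOUND too. The hostname is a placeholder.

```ts
import { lookup, resolveSrv } from "node:dns/promises";

// Hypothetical Atlas hostname — use the host portion of your own URI.
const host = "cluster0.example.mongodb.net";

// SRV record used by mongodb+srv:// connection strings.
console.log(await resolveSrv(`_mongodb._tcp.${host}`));

// Plain A/AAAA resolution, equivalent to what ping needs.
console.log(await lookup(host));
```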
https://www.mongodb.com/…4_2_1024x448.png
[]
[ { "code": "", "text": "The overview of my cluster shows around 100 MB size:\n\nScreen Shot 2022-08-12 at 2.34.05 PM2466×1080 209 KB\n\nBut when I tap into the detailed graph the size is over 500 MB\n\nScreen Shot 2022-08-12 at 2.34.32 PM3178×1052 177 KB\n100 MB is closer to what I expect give the data i’m storing:\n\nScreen Shot 2022-08-12 at 2.36.03 PM2566×1170 190 KB\nWhat is the source of the extra storage?", "username": "Harry_Netzer1" }, { "code": "STORAGE SIZELOGICAL DATA SIZE", "text": "Hi @Harry_Netzer1,I suspect based off the timing of where the logical size starts to increase from 0.0B in your first screenshot, the screenshots were taken quite recently after some type of bulk insert.I believe the cause of this is due to the granularity of the metrics shown in your first screenshot (across the last 30 days with a granularity of 1 hour).100 MB is closer to what I expect give the data i’m storing:In your third screenshot I am seeing STORAGE SIZE (compressed) which i believe differs to LOGICAL DATA SIZE. Please see the below screenshot which highlights both from my test environment:\nimage983×247 25.1 KB\nYou can find some more details regarding this here.Is your UI still displaying the logical size difference between the first screenshot and the detailed view (second screenshot)? If you are still seeing this difference in logical size from the two metrics views, I would raise this with the Atlas chat support team as they would have more insight to your Atlas project / cluster in question.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you Jason. I am not seeing the difference anymore between the first and second screenshot. I believe you are correct that it was a difference in granularity.I’m still seeing the difference in the third screenshot. When I look at my individual collections, adding up the total size of each is about 4x less than the logical size listed on the overview. Is there another hidden data somewhere?For background, I’m copying data from JSON backups into realm. I’m using a small iOS app to read the JSON, decode it into realm objects and sync up to the server using flexible sync. Is this inefficient in any way? Should I instead use the Mongo swift driver to populate my database? Not sure if using realm in this way is creating a lot of extraneous metadata.", "username": "Harry_Netzer1" }, { "code": "LOGICAL DATA SIZESTORAGE SIZEINDEX SIZELOGICAL DATA SIZELOGICAL SIZELOGICAL DATA SIZE", "text": "Thank you Jason. I am not seeing the difference anymore between the first and second screenshot. I believe you are correct that it was a difference in granularity.That’s good to hear.I’m still seeing the difference in the third screenshot. When I look at my individual collections, adding up the total size of each is about 4x less than the logical size listed on the overview. Is there another hidden data somewhere?Regarding the above, can you provide a screenshot from one of the collections and highlight which size you are adding up? I’m curious to see if it’s the LOGICAL DATA SIZE, STORAGE SIZE or INDEX SIZE. The third screenshot you had provided initially does not include LOGICAL DATA SIZE (This value showing up in the UI in my screenshot could have been a more recent change).Can you also specify the total LOGICAL SIZE you are seeing in the UI as well as all the individual collections LOGICAL DATA SIZE?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks Jason. 
Here’s my total size showing as more than 500MB:\n\nScreen Shot 2022-09-19 at 4.57.57 PM1920×494 39 KB\n\n\nScreen Shot 2022-09-19 at 4.57.47 PM1920×473 44.2 KB\nAnd here’s all of my collections. If my math is correct, adding Logical Data and Indexes, these add up to 130MB.\n\nScreen Shot 2022-09-19 at 4.58.37 PM1920×1099 70.2 KB\n\n\nScreen Shot 2022-09-19 at 4.58.55 PM2012×330 86.2 KB\n\n\nScreen Shot 2022-09-19 at 4.59.07 PM2000×344 85.3 KB\n\n\nScreen Shot 2022-09-19 at 4.59.21 PM1264×162 30.5 KB\n\n\nScreen Shot 2022-09-19 at 4.59.27 PM1330×156 31.6 KB\n\n\nScreen Shot 2022-09-19 at 4.59.33 PM1196×126 27.2 KB\n\n\nScreen Shot 2022-09-19 at 4.59.40 PM1194×152 30.5 KB\n\n\nScreen Shot 2022-09-19 at 4.59.48 PM1182×144 27.7 KB\n\n\nScreen Shot 2022-09-19 at 4.59.52 PM1180×126 31 KB\n\n\nScreen Shot 2022-09-19 at 4.59.57 PM1300×158 34.3 KB\n\n\nScreen Shot 2022-09-19 at 5.00.03 PM1182×156 33 KB\n", "username": "Harry_Netzer1" }, { "code": "__realm_sync", "text": "Thanks @Harry_Netzer1,Can you try connecting via MongoDB compass and checking the available databases? I am wondering if you are able to see a __realm_sync database and if so, what the size of it would be. To my knowledge this database won’t be able to be seen via Data Explorer which is why I suggested MongoDB Compass.For your reference, my current theory for what may be consuming the storage and not being seen in the Atlas Data Explorer is related to the following post : __realm_sync history taking up all the storage on Atlas clusterRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks Jason. I am seeing this __realm_sync database with some largish tables:\nScreen Shot 2022-09-24 at 5.27.49 PM1512×1732 274 KB\nIs the next step emailing Ian Ward to enable compaction? Thanks for your help!", "username": "Harry_Netzer1" }, { "code": "", "text": "Hi @Harry_Netzer1,If you terminate / re-enable sync, it should rebuild the sync history.However, as the sync history will then begin to grow again, you may hit this limit once more. Should you find your application often hitting this limit, then it might be best to consider upgrading to a higher tier cluster.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data size is larger than expected
2022-08-12T18:36:37.940Z
Data size is larger than expected
5,179
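A quick TypeScript sketch (Node.js driver) for the check discussed above: listDatabases reports databases such as __realm_sync, which the Atlas Data Explorer does not display, together with their on-disk size. The connection string is a placeholder.

```ts
import { MongoClient } from "mongodb";

// Hypothetical Atlas connection string.
const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net");

const { databases } = await client.db().admin().listDatabases();
for (const d of databases) {
  // sizeOnDisk is reported in bytes.
  console.log(`${d.name}: ${((d.sizeOnDisk ?? 0) / 1024 / 1024).toFixed(1)} MB on disk`);
}
await client.close();
```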
null
[ "aggregation", "node-js", "mongoose-odm" ]
[ { "code": "\"_id\": \"63295ae7981b4314003ddbd1\",\n \"outlet_id\": \"63031a61ade51cae66eec7e1\",\n \"tags\": [\n {\n \"name\": \"cheeses\",\n \"_id\": \"63295ae7981b4314003ddbd2\"\n },\n {\n \"name\": \"beverage\",\n \"_id\": \"632995bdc6ab38a142d85650\"\n }\n ],\n \"createdAt\": \"2022-09-20T06:17:11.136Z\",\n \"updatedAt\": \"2022-09-20T11:19:34.770Z\",\n \"__v\": 2,\n \"id\": \"63295ae7981b4314003ddbd1\"\n_id\": \"632addc8d41e7f277469a619\",\n \"outlet_id\": \"63031a61ade51cae66eec7e1\",\n \"addons\": [\n {\n \"name\": \"Cheese Minor\",\n \"price\": 1239,\n \"description\": \"Cheese minor\",\n \"is_avaliable\": true,\n \"type\": \"veg\",\n \"tags\": [\n \"63295ae7981b4314003ddbd2\"\n ],\n \"_id\": \"632addc8d41e7f277469a61a\"\n },\n {\n \"name\": \"Cheese Minor\",\n \"price\": 1239,\n \"description\": \"Cheese minor\",\n \"is_avaliable\": true,\n \"type\": \"veg\",\n \"tags\": [\n \"63295ae7981b4314003ddbd2\"\n ],\n \"_id\": \"632addced41e7f277469a620\"\n },\n {\n \"name\": \"Cheese Minor\",\n \"price\": 1239,\n \"description\": \"Cheese minor\",\n \"is_avaliable\": true,\n \"type\": \"veg\",\n \"tags\": [\n \"63295ae7981b4314003ddbd2\"\n ],\n \"_id\": \"632addf954c84b3eb6dfbac8\"\n },\n {\n \"name\": \"Cheese Minor\",\n \"price\": 1239,\n \"description\": \"Cheese minor\",\n \"is_avaliable\": true,\n \"type\": \"veg\",\n \"tags\": [\n \"63295ae7981b4314003ddbd2\"\n ],\n \"_id\": \"632ade56232424f78f7b1647\"\n },\n {\n \"name\": \"Cheese Minor\",\n \"price\": 1239,\n \"description\": \"Cheese minor\",\n \"is_avaliable\": true,\n \"type\": \"veg\",\n \"tags\": [\n \"63295ae7981b4314003ddbd2\"\n ],\n \"_id\": \"632adef57a8def7b558a5701\"\n },\n {\n \"name\": \"Cheese Minor\",\n \"price\": 1239,\n \"description\": \"Cheese minor\",\n \"is_avaliable\": true,\n \"type\": \"veg\",\n \"tags\": [\n \"63295ae7981b4314003ddbd2\"\n ],\n \"_id\": \"632adf1c1574d90a5139a8fd\"\n }\n ],\n \"createdAt\": \"2022-09-21T09:47:52.097Z\",\n \"updatedAt\": \"2022-09-21T09:53:32.031Z\",\n \"__v\": 5\n let data = await FoodAddOnsModel.aggregate([\n {\n $match: {\n outlet_id: mongoose.Types.ObjectId(outlet_id),\n },\n },\n {\n $project: {\n _id: 1,\n outlet_id: 1,\n addons: 1,\n },\n },\n {\n $lookup: {\n from: \"food_tags\",\n localField: \"addons.tags\",\n foreignField: \"tags._id\",\n as: \"tagss\",\n },\n },\n ]);\n", "text": "I have two collections, namely food_addons and food_tags\nA sample food_tag looks like thisAnd the Food addon Looks like hisWhat I am trying to achieve is to populate the tags field on food_addons with reference to the food_tags collections for a particular outlet.I wrote this query but seemed not to work.", "username": "Samson_Kwaku_Nkrumah" }, { "code": "\"addons.tags\"food_addonsmongoshfoodtagsfoodaddonsfooddb>db.foodaddons.aggregate([\n{ '$project': { _id: 1, outlet_id: 1, addons: 1 } },\n{\n '$lookup': {\n from: 'foodtags',\n localField: 'addons.tags',\n foreignField: 'tags._id',\n as: 'tagss'\n }\n}\n])\n[\n {\n _id: '632addc8d41e7f277469a619',\n outlet_id: '63031a61ade51cae66eec7e1',\n addons: [\n {\n name: 'Cheese Minor',\n price: 1239,\n description: 'Cheese minor',\n is_avaliable: true,\n type: 'veg',\n tags: [ '63295ae7981b4314003ddbd2' ],\n _id: '632addc8d41e7f277469a61a'\n },\n {\n name: 'Cheese Minor',\n price: 1239,\n description: 'Cheese minor',\n is_avaliable: true,\n type: 'veg',\n tags: [ '63295ae7981b4314003ddbd2' ],\n _id: '632addced41e7f277469a620'\n },\n {\n name: 'Cheese Minor',\n price: 1239,\n description: 'Cheese minor',\n is_avaliable: true,\n type: 
'veg',\n tags: [ '63295ae7981b4314003ddbd2' ],\n _id: '632addf954c84b3eb6dfbac8'\n },\n {\n name: 'Cheese Minor',\n price: 1239,\n description: 'Cheese minor',\n is_avaliable: true,\n type: 'veg',\n tags: [ '63295ae7981b4314003ddbd2' ],\n _id: '632ade56232424f78f7b1647'\n },\n {\n name: 'Cheese Minor',\n price: 1239,\n description: 'Cheese minor',\n is_avaliable: true,\n type: 'veg',\n tags: [ '63295ae7981b4314003ddbd2' ],\n _id: '632adef57a8def7b558a5701'\n },\n {\n name: 'Cheese Minor',\n price: 1239,\n description: 'Cheese minor',\n is_avaliable: true,\n type: 'veg',\n tags: [ '63295ae7981b4314003ddbd2' ],\n _id: '632adf1c1574d90a5139a8fd'\n }\n ],\n tagss: [\n {\n _id: '63295ae7981b4314003ddbd1',\n outlet_id: '63031a61ade51cae66eec7e1',\n tags: [\n { name: 'cheeses', _id: '63295ae7981b4314003ddbd2' },\n { name: 'beverage', _id: '632995bdc6ab38a142d85650' }\n ],\n createdAt: '2022-09-20T06:17:11.136Z',\n updatedAt: '2022-09-20T11:19:34.770Z',\n __v: 2,\n id: '63295ae7981b4314003ddbd1'\n }\n ]\n }\n]\n", "text": "Hi @Samson_Kwaku_Nkrumah,Could you provide the following information:Additionally, I had done some testing via mongosh on my own test environment using the same 2 sample documents put in collections foodtags and foodaddons. Please see the output below, is this your expected output?Output:Regards,\nJason", "username": "Jason_Tran" } ]
MongoDB Aggregation
2022-09-23T09:11:17.631Z
MongoDB Aggregation
1,013
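A hedged TypeScript sketch (Node.js driver) of the $lookup that Jason verified above, assuming the ids are stored as plain strings as in the sample documents; the database name is a placeholder. The most common reason the joined array comes back empty is a BSON type mismatch between localField and foreignField (string in one collection, ObjectId in the other).

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const db = client.db("test");

// Join addon tag ids against the tags array embedded in food_tags.
const docs = await db.collection("food_addons").aggregate([
  { $match: { outlet_id: "63031a61ade51cae66eec7e1" } },
  { $project: { _id: 1, outlet_id: 1, addons: 1 } },
  {
    $lookup: {
      from: "food_tags",
      localField: "addons.tags",
      foreignField: "tags._id",
      as: "tagss",
    },
  },
]).toArray();

console.log(JSON.stringify(docs, null, 2));
await client.close();
```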
null
[ "crud" ]
[ { "code": "", "text": "Hi,I’m trying to do a save on Mongo while doing a create. I want to throw an error if Duplicate key is found (this is a composite key for me). When I try to save the data with same composite key, somehow it works and does not throw error which is weird.I did follow steps mentioned here: https://www.baeldung.com/spring-data-mongodb-composite-key#1-testing-our-modelCan someone help to understand what might be happening here?", "username": "Pranita_Hatte" }, { "code": "", "text": "Hi @Pranita_Hatte and welcome to the MongoDB Community forums. MongoDB should not be saving documents that violate a unique constraint. Can you share the index definition that has this constraint on it and then the documents that were inserted that have the same keys?", "username": "Doug_Duncan" }, { "code": "", "text": "I was able to understand this. Mongo save is actually just updating the data and not doing the unique id check. I am also using CoroutineCrudRepository.", "username": "Pranita_Hatte" } ]
Mongo save not returning Duplicate key error
2022-09-09T20:51:52.222Z
Mongo save not returning Duplicate key error
1,702
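A TypeScript sketch (Node.js driver) of the behaviour explained above, with hypothetical field names: a plain insert against a unique compound index is rejected with E11000, while a save-style upsert quietly replaces the existing document instead of erroring.

```ts
import { MongoClient, MongoServerError } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const orders = client.db("test").collection("orders");

// Enforce the composite key at the database level.
await orders.createIndex({ customerId: 1, orderNo: 1 }, { unique: true });

await orders.insertOne({ customerId: "c1", orderNo: 42, total: 10 });

try {
  // Inserting the same composite key again is rejected.
  await orders.insertOne({ customerId: "c1", orderNo: 42, total: 99 });
} catch (e) {
  if (e instanceof MongoServerError && e.code === 11000) {
    console.log("duplicate key rejected, as expected");
  }
}

// A save-style upsert does not error: it replaces the matching document.
await orders.replaceOne(
  { customerId: "c1", orderNo: 42 },
  { customerId: "c1", orderNo: 42, total: 99 },
  { upsert: true }
);
await client.close();
```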
null
[ "java" ]
[ { "code": "{\n \"rules\": {\n \"AnalysisModel\": [\n {\n \"name\": \"anyperson\",\n \"applyWhen\": {},\n \"read\": false,\n \"write\": true\n }\n ],\n \"CoordinatesModel\": [\n {\n \"name\": \"anyperson\",\n \"applyWhen\": {},\n \"read\": false,\n \"write\": true\n }\n ],\n \"UserModel\": [\n {\n \"name\": \"anyperson\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": true\n }\n ]\n },\n \"defaultRoles\": [\n {\n \"name\": \"read-write\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": true\n }\n ]\n}\nCredentials credentials = Credentials.anonymous();\n\n User userSync = app.login(credentials);\n\n SyncConfiguration config = new SyncConfiguration.Builder(userSync)\n .initialSubscriptions(new SyncConfiguration.InitialFlexibleSyncSubscriptions() {\n @Override\n public void configure(Realm realm, MutableSubscriptionSet subscriptions) {\n\n subscriptions.addOrUpdate(Subscription.create(\"anyperson\",realm.where(UserModel.class)));\n\n }\n })\n .allowQueriesOnUiThread(true)\n .allowWritesOnUiThread(true)\n .modules(new ModuleUserAndAnalysis())\n .build();\n Realm.getInstanceAsync(config, new Realm.Callback() {\n @Override\n public void onSuccess(Realm realm) {\n Log.v(\"EXAMPLE\", \"Successfully opened a realm.\");\n }\n });\n\n realmConfig = config;\n\n return Realm.getInstance(realmConfig);\nsignature\n", "text": "I would like to know how to limit a collection to read-only and another to write-only, I made a rule in flexible but I am not able to implement a signature that follows these rules can anyone help me?rule implemented in atlas", "username": "multiface_biometria" }, { "code": "{\n \"name\": \"readonly\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": false\n}", "text": "@multiface_biometria I’m not sure exactly what you are trying to do but perhaps Asymmetric Sync is what you are looking for?Another option would be to have a read-only role?\nsomething like:", "username": "Ian_Ward" }, { "code": "", "text": "Thanks for the return.What I really wanted is a rule and a signature to be read only in one collection and write only in another.", "username": "multiface_biometria" }, { "code": "", "text": "{\n“rules”: {\n“Collection1l”: [\n{\n“name”: “only-write”,\n“applyWhen”: {},\n“read”: false,\n“write”: true\n}\n],\n“Collection2”: [\n{\n“name”: “read-write”,\n“applyWhen”: {},\n“read”: true,\n“write”: true\n}\n]\n}I would like to know what signature I would use for collection 1 .\nwhere is only written.collection2 is using subscriptions.addOrUpdate(Subscription.create(“read-write”,realm.where(collection2.class)));", "username": "multiface_biometria" }, { "code": "", "text": "Write permissions require and imply read permissions, so unfortunately it’s not possible to make a rule with write-only (and not read) permissions.Take a look at the docs on permissions: https://www.mongodb.com/docs/atlas/app-services/sync/data-access-patterns/permissions/#write-permissions", "username": "Sudarshan_Muralidhar" }, { "code": "", "text": "Thanks for the return.Is there any other way to implement a write-only collection?The asymmetric mode for example?", "username": "multiface_biometria" }, { "code": "", "text": "Asymmetric sync is write-only in the sense that noone can actually sync it down. It is ideal for things like metrics, logging, IoT measurements, etc. 
I suspect it might be what you are looking for, but I am curious why exactly you wany write-only permissions since it does seem like a bit of an anti-pattern to let someone write something that they are not allowed to read.", "username": "Tyler_Kaye" }, { "code": "", "text": "Thank you very much for the feedback.You made everything more understandable.The ideal for me is to record a route taking the coordinate data and saving it directly in the mongo atlas, so if I cleaned the local data I would still have the atlas for consultation that would be used by an admin login.in my app we don’t need to have the data on the device only in mongo atlas so I didn’t want to read anything from the atlas.", "username": "multiface_biometria" }, { "code": "", "text": "If you never want your app to read any data locally / from atlas then Asymmetric sync is exactly what you want. It will essentially guarantee that everything you ever write will make it to Atlas (even if your device does not have service). The one caveat is that it is insert-only, meaning that you cant “update” objects but that makes sense considering that you cant “read” anything to update in the first place!Excited for you to try it out and let us know if you have any other questions.Thanks,\nTyler", "username": "Tyler_Kaye" } ]
How to create permission only read in one collection and only write in another collection on flexible sync? (JAVA SDK)
2022-09-26T17:23:58.673Z
How to create permission only read in one collection and only write in another collection on flexible sync? (JAVA SDK)
1,221
null
[ "node-js", "database-tools" ]
[ { "code": "mongoimportrunning: mongoimport -v -c coaches --uri=mongodb://mongo:27017/myproj-test --type=json --file=test_data/mongo_dump/coaches.json --stopOnError\nstderr: 2022-09-23T17:41:24.041+0000\tfilesize: 142688 bytes\nstderr: 2022-09-23T17:41:24.042+0000\tusing fields: \nstderr: 2022-09-23T17:41:27.048+0000\t[........................] myproj-test.coaches\t0B/139KB (0.0%)\nstderr: 2022-09-23T17:41:30.047+0000\t[........................] myproj-test.coaches\t0B/139KB (0.0%)\nstderr: 2022-09-23T17:41:33.047+0000\t[........................] myproj-test.coaches\t0B/139KB (0.0%)\nstderr: 2022-09-23T17:41:36.047+0000\t[........................] myproj-test.coaches\t0B/139KB (0.0%)\nmongo-tools", "text": "We are running into an issue with mongoimport during continuous integration, using a GitLab runner, whereby it loops with progress being “0B/139KB”:This is called as part of our test suite, to bootstrap the database prior to testing. It works locally when I try on my Mac, but not in the “nodejs:16” build image.We are using mongo-tools, which apt indicates as being: “mongo-tools/oldstable 3.4.14-4 amd64”.Can anyone indicate what I could be doing next to work out what is wrong or solve the issue?", "username": "Andre-John_Mas1" }, { "code": "", "text": "Hi @Andre-John_Mas1 ,“mongo-tools/oldstable 3.4.14-4 amd64”.What version of MongoDB server are you connecting to? 3.4.14 is a very old (and end-of-life) version of MongoDB tools released in March, 2018.Regards,\nStennie", "username": "Stennie_X" }, { "code": "async function getMongoVersion(config) {\n const mongoUrl = config.url;\n const mongoClient = await mongoConnect(mongoUrl, {\n useNewUrlParser: true,\n useUnifiedTopology: true\n });\n\n const database = mongoClient.db();\n if (database) {\n const adminDb = database.admin();\n const serverInfo = await adminDb.serverStatus();\n return serverInfo.version;\n }\n return undefined;\n}\n$ grep NAME /etc/*-release\nPRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"\nNAME=\"Debian GNU/Linux\"\nVERSION_CODENAME=buster\n$ uname -a\nLinux runner-jlguopmm-project-11773380-concurrent-0 5.4.109+ #1 SMP Wed Jun 16 20:00:10 PDT 2021 x86_64 GNU/Linux\n", "text": "Running the code below in NodeJS, I am getting 6.0.1:I’ll explore how to get the most recent client tools onto this runner. Right now their runner image for nodejs 16, gives the following as environment:", "username": "Andre-John_Mas1" }, { "code": "before_script - wget -qO - https://www.mongodb.org/static/pgp/server-6.0.asc | apt-key add -\n - echo \"deb http://repo.mongodb.org/apt/debian buster/mongodb-org/6.0 main\" | tee /etc/apt/sources.list.d/mongodb-org-6.0.list\n - apt-get update\n - apt-get install -y mongodb-org-tools\n", "text": "It looks like it was indeed the old version of the mongo-tools that was causing the issues.I have now added the following lines to our CI configuration, in the before_script section, based on the community edition instructions:", "username": "Andre-John_Mas1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoimport failing in GitLab continuous integration with 0B
2022-09-24T22:03:23.632Z
Mongoimport failing in GitLab continuous integration with 0B
1,843
https://www.mongodb.com/…c_2_1023x189.png
[ "aggregation", "crud", "serverless", "bucket-pattern" ]
[ { "code": "unitsupdateOnestartDateendDate{\n \"_id\": \"6323bcf78fd1d0c6dc571110\",\n \"endDate\": \"2022-09-16T23:59:59.000Z\",\n \"startDate\": \"2022-09-16T00:00:00.000Z\",\n \"batteryVoltageSum\": 2455.2400000000002,\n \"count\": 678,\n \"packetRSSISum\": -34791,\n \"measurements\": [\n {\n \"timestamp\" : ISODate(\"2022-09-16T01:00:01.000+01:00\"),\n \"temperature\" : 29.19889502762431,\n \"batteryVoltage\" : 3.64,\n }\n ... (up to 1439 more)\n ]\n}\nupdateOne$pushmeasurementupsert: true$setOnInsert _id: someId\n }, {\n $push: {\n measurements: newMeasurement\n },\n $inc: {\n batteryVoltageSum: measurement.batteryVoltage,\n packetRSSISum: measurement.packetRSSI,\n count: 1,\n },\n $setOnInsert: {\n startDate,\n endDate,\n },\n }, {\n upsert: true\n })\nupdateOneupdateOneupdateOneupdateOne", "text": "My question concerns the amount of read vs write units that register on MongoDB Atlas after updateOne operations on documents in a bucket pattern.I have a stream of steady IoT data for measurements at minute intervals. I decided to bucket these measurements daily and so they are stored in documents that have a startDate and endDate field compartmentalising each day. Here’s a representative schema:Every time I updateOne, I use the $push aggregation to add a measurement to the relevant array above (note that I am using upsert: true). If the bucket does not exist, I insert it, and use the $setOnInsert aggregation to set some field.\nHere’s the full query:Since updateOne with upsert does not retrieve the document that it updates or inserts, there should only be write units registered on MongoDB Atlas.Every time updateOne is triggered, the expected write units are registered, but , I also see a much bigger amount of read units that seem to grow proportionally with the size of the measurements array. This is proof that the updateOne operation is in fact also triggering reads for some reason.Here’s a screenshot of this from my logs. As you can see, at the beginning of each day when the bucket has no measurements, the read units are much less and they can be seen to grow throughout the day as measurements are recorded at minute intervals.Why are reads metered that are proportional to the size of a nested array when I am only performing an updateOne (with upsert) operation on such a document?", "username": "Iuliu_Teodor_Radu" }, { "code": "updateOneupdateOneIf a document exceeds 4KB or an index exceeds 256B, Atlas covers each excess chunk of 4KB or 256B with an additional RPU.", "text": "Hi @Iuliu_Teodor_Radu welcome to the community!Since updateOne with upsert does not retrieve the document that it updates or inserts, there should only be write units registered on MongoDB Atlas.The updateOne operation still needs to read the document. This is because it needs to apply the update operations on an existing document, which it cannot do if it doesn’t have the document to start with.From the pricing page, an RPU is multiplies of 4KB in terms of document size:If a document exceeds 4KB or an index exceeds 256B, Atlas covers each excess chunk of 4KB or 256B with an additional RPU.Thus if you can isolate the RPU for this update operation, I think you should find that the number of RPU consumed corresponds to the size of the document divided by 4KB.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi , thank you for your answer and welcoming me to the community!\nYour answer clears out my issue that I wanted to confirm. 
I was unaware that RPUs get clocked at that low level as this isn’t fully clear from the documentation. It seems that probably Serverless is not the way to go for my use case. I was only using it for staging some devices, but even that won’t be scalable.\nThanks again.All the best,\nTeodor", "username": "Iuliu_Teodor_Radu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
`updateOne` consumes a lot more RPU than WPU proportional to document size in a bucket pattern
2022-09-22T17:37:08.762Z
`updateOne` consumes a lot more RPU than WPU proportional to document size in a bucket pattern
2,445
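A rough TypeScript sketch (Node.js driver) of how to estimate the read cost described above: measure the bucket document's BSON size server-side with $bsonSize and divide by the 4KB RPU chunk from the pricing rule quoted in the thread. The connection string and collection name are placeholders.

```ts
import { MongoClient, ObjectId } from "mongodb";

const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net");
const buckets = client.db("iot").collection("dailyBuckets");

// $bsonSize reports the current size of the whole bucket document.
const [result] = await buckets.aggregate([
  { $match: { _id: new ObjectId("6323bcf78fd1d0c6dc571110") } },
  { $project: { bytes: { $bsonSize: "$$ROOT" } } },
]).toArray();

if (result) {
  const rpus = Math.ceil(result.bytes / 4096);
  console.log(`bucket is ${result.bytes} bytes, roughly ${rpus} RPUs per upsert`);
}
await client.close();
```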
null
[ "aggregation", "queries", "java", "compass" ]
[ { "code": "", "text": "I created an aggregation in Compass version 1.33.1 and it works fine. Then I used “export to language” to generate Java code. But Java code failed with error message:\ncom.mongodb.MongoCommandException: Command failed with error 40323 (Location40323): ‘A pipeline stage specification object must contain exactly one field.’ on server localhost:27017. The full response is {“ok”: 0.0, “errmsg”: “A pipeline stage specification object must contain exactly one field.”, “code”: 40323, “codeName”: “Location40323”}pipeline (worked perfect)\n[{\n$match: {\nME_PART_NUMBER: {\n$in: [\n‘72-0500-8-5003’,\n‘72-0500-8-5005’\n]\n}\n}\n}, {\n$project: {\nME_PART_NUMBER: 1,\nTASK_TYPE: 1,\nLABOR_HOURS: 1,\nNUM_MECHANICS_REQUIRED: 1\n}\n}, {\n$lookup: {\nfrom: ‘SGE001’,\n‘let’: {\npart_number: ‘$ME_PART_NUMBER’\n},\npipeline: [\n{\n$match: {\n$expr: {\n$eq: [\n‘$ME_PART_NUMBER’,\n‘$$part_number’\n]\n}\n}\n},\n{\n$project: {\n_id: 1,\nME_PART_NUMBER: 1,\nMFG_PART_NUMBER: 1,\nKEYWORD_DESCRIPTION: 1\n}\n}\n],\nas: ‘SGE001_JOINED’\n}\n}, {}]Generated Java code (failed):\n/*MongoClient mongoClient = new MongoClient(\nnew MongoClientURI(\n“mongodb://localhost:27017/”\n)\n);\nMongoDatabase database = mongoClient.getDatabase(“DTFPIMSIB”);\nMongoCollection collection = database.getCollection(“SST001”);FindIterable result = collection.aggregate(Arrays.asList(new Document(“$match”,\nnew Document(“ME_PART_NUMBER”,\nnew Document(“$in”, Arrays.asList(“72-0500-8-5003”, “72-0500-8-5005”)))),\nnew Document(“$project”,\nnew Document(“ME_PART_NUMBER”, 1L)\n.append(“TASK_TYPE”, 1L)\n.append(“LABOR_HOURS”, 1L)\n.append(“NUM_MECHANICS_REQUIRED”, 1L)),\nnew Document(“$lookup”,\nnew Document(“from”, “SGE001”)\n.append(“let”,\nnew Document(“part_number”, “$ME_PART_NUMBER”))\n.append(“pipeline”, Arrays.asList(new Document(“$match”,\nnew Document(“$expr”,\nnew Document(“$eq”, Arrays.asList(“$ME_PART_NUMBER”, “$$part_number”)))),\nnew Document(“$project”,\nnew Document(“_id”, 1L)\n.append(“ME_PART_NUMBER”, 1L)\n.append(“MFG_PART_NUMBER”, 1L)\n.append(“KEYWORD_DESCRIPTION”, 1L))))\n.append(“as”, “SGE001_JOINED”)),\nnew Document()));", "username": "peijun_cao" }, { "code": "new Document()new Document()", "text": "The provided code doesn’t compile, as it’s using FindIterable instead of AggregateIterable, but when I fix that, I can reproduce the error. The problem is that final, empty new Document() at the end of your pipeline: if you remove that then the error will go away.Was Compass actually generating that final new Document(), or was that inadvertently added when you copied the code.? If the former, it’s potentially a Compass bug.Let us know.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "thanks Jeffrey for quick response. I made two minor change following your suggestion. It works perfect after that. thanks\nimage752×355 13.4 KB\n", "username": "peijun_cao" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Java code generated by Compass does not work
2022-09-26T15:55:58.008Z
Java code generated by Compass does not work
1,524
null
[ "queries", "dot-net" ]
[ { "code": "using System;\nusing System.Linq;\nusing System.Linq.Expressions;\nusing System.Security.Authentication;\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization;\nusing MongoDB.Bson.Serialization.Serializers;\nusing MongoDB.Driver;\nusing MongoDB.Driver.Linq;\n\nBsonSerializer.RegisterSerializer(typeof(Guid), new GuidSerializer(BsonType.String));\nBsonSerializer.RegisterSerializer(typeof(InvoiceId), new MyGuidSerializer());\nBsonTypeMapper.RegisterCustomTypeMapper(typeof(InvoiceId), new MyGuidBsonTypeMapper());\n\n\nvar settings = MongoClientSettings.FromUrl(new MongoUrl(\"mongodb://localhost:27017/test\"));\nsettings.SslSettings = new SslSettings {EnabledSslProtocols = SslProtocols.Tls12};\nsettings.LinqProvider = LinqProvider.V3;\nvar mongoClient = new MongoClient(settings);\nvar mongoDatabase = mongoClient.GetDatabase(\"test\");\nmongoDatabase.DropCollection(\"test\");\nvar collection = mongoDatabase.GetCollection<Document>(\"test\");\n\n\nvar guid = Guid.NewGuid();\nvar invoiceId = new InvoiceId(guid);\nvar guidNullable = (Guid?) guid;\nvar invoiceIdNullable = (InvoiceId?) invoiceId;\nvar document = new Document\n{\n InvoiceId = invoiceId,\n InvoiceIdNullable = invoiceId,\n Guid = guid,\n GuidNullable = guid\n};\ncollection.InsertOne(document);\n\nExpression<Func<Document, bool>>[] f =\n{\n c => c.Guid == guid,\n c => c.GuidNullable == guid,\n c => c.Guid == invoiceId,\n c => c.GuidNullable == invoiceId,\n c => c.InvoiceId == invoiceId,\n c => c.InvoiceIdNullable == invoiceId,\n \n c => c.Guid == guidNullable,\n c => c.GuidNullable == guidNullable,\n c => c.Guid == invoiceIdNullable,\n c => c.GuidNullable == invoiceIdNullable,\n c => c.InvoiceId == invoiceIdNullable,\n c => c.InvoiceIdNullable == invoiceIdNullable,\n \n c => c.InvoiceId == guidNullable, // explodes in V3\n c => c.InvoiceIdNullable == guidNullable, // explodes in V3\n c => c.InvoiceId == guid, // explodes in V3\n c => c.InvoiceIdNullable == guid, // explodes in V3\n};\n\nforeach (var expression in f)\n{\n Console.Out.WriteLine(expression.ToString());\n \n var results = collection.AsQueryable().Where(expression).ToCursor().ToList();\n var result = results.FirstOrDefault() ?? throw new Exception(\"Not found!\");\n if (result.InvoiceId != invoiceId)\n {\n throw new Exception(\"Mismatch!\");\n }\n\n Console.Out.WriteLine(\"All good\");\n}\n\npublic class Document\n{\n public ObjectId Id { get; set; }\n public InvoiceId InvoiceId { get; set; }\n public InvoiceId? InvoiceIdNullable { get; set; }\n public Guid Guid { get; set; }\n public Guid? 
GuidNullable { get; set; }\n}\n\npublic readonly record struct InvoiceId(Guid Value)\n{\n public static implicit operator Guid(InvoiceId s) => s.Value;\n}\n\npublic class MyGuidSerializer : SerializerBase<InvoiceId>\n{\n public override InvoiceId Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n {\n if (context.Reader.CurrentBsonType == BsonType.Null)\n {\n context.Reader.ReadNull();\n return default;\n }\n\n if (Guid.TryParse(context.Reader.ReadString(), out var guid))\n {\n return new InvoiceId(guid);\n }\n\n return new InvoiceId(default);\n }\n\n public override void Serialize(BsonSerializationContext context, BsonSerializationArgs args, InvoiceId value)\n {\n context.Writer.WriteString(value.Value.ToString());\n }\n}\n \npublic class MyGuidBsonTypeMapper : ICustomBsonTypeMapper\n{\n public bool TryMapToBsonValue(object value, out BsonValue bsonValue)\n {\n bsonValue = (BsonString)((InvoiceId)value).Value.ToString();\n return true;\n }\n}\n", "text": "In my codebase I have a custom type defined together with the custom serializer for that type. After switching to linq v3 several types of queries are exploding with the following error:Unhandled exception. System.ArgumentException: Invalid toType: System.Guid. (Parameter ‘toType’)\nat MongoDB.Driver.Linq.Linq3Implementation.Ast.Expressions.AstExpression.Convert(AstExpression input, Type toType, AstExpression onError, AstExpression onNull)\n…Is there a way to configure the driver so the query rewrite won’t be necessary? Below full repro (driver 2.17.1):", "username": "Marek_Olszewski" }, { "code": "", "text": "Hi, @Marek_Olszewski,Thank you for reporting this issue. We have confirmed that LINQ2 passes your tests, but LINQ3 fails for InvoiceId. We really appreciate the time you invested to create a self-contained repro. I have created CSHARP-4332 to track this issue. Please follow that ticket for updates. You can also comment on CSHARP-4332 if you have further questions.Sincerely,\nJames", "username": "James_Kovacs" } ]
Casting/serialization issue after switching to Linq v3
2022-09-24T20:59:11.946Z
Casting/serialization issue after switching to Linq v3
2,603
null
[ "server" ]
[ { "code": "brew servicesName Status User File\nmongodb-community error 3584 code ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.412+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.417+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.417+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":71667,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"Pauls-MacBook-Pro.local\"}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\"},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"F\", 
\"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1120}}\n{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "Hi,Something has gone wrong with my mongodb installation after upgrading OSX from 12.5 to 12.6 Monterey and I can’t now connect to the server. I haven’t manually changed anything myself.I have tried uninstalling and re-installing with Homebrew, without success.When I run brew services I get the following:Here’s the logs:Can anyone help with this, or point me in the right direction.Many thanks", "username": "Paul_Hollyer" }, { "code": "{\"t\":{\"$date\":\"2022-09-26T12:29:55.419+01:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}\n", "text": "I have seen this line from the logs:So I have changed the permissions on the socket file, but nothing changes wrt to starting mongo.", "username": "Paul_Hollyer" }, { "code": "mongod", "text": "Hi @Paul_Hollyer, and welcome to the MongoDB Community forums! After you changed the permissions on the file and tried to restart the mongod process, what do you see in the log files? MongoDB is pretty good about logging issues during startup.", "username": "Doug_Duncan" }, { "code": "mongod{\"t\":{\"$date\":\"2022-09-26T16:06:39.250+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.263+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.264+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.267+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.275+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.275+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.275+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.275+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading 
initialized\"}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.275+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":84096,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"Pauls-MacBook-Pro.local\"}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.276+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.276+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.276+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\"},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.278+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.279+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/usr/local/var/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.279+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7680M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.715+01:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. 
This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.716+01:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"45: Operation not supported\"}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.716+01:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":702}}\n{\"t\":{\"$date\":\"2022-09-26T16:06:39.716+01:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "After you changed the permissions on the file and tried to restart the mongod process, what do you see in the log files?", "username": "Paul_Hollyer" }, { "code": "Failed to start up WiredTiger under any compatibility version", "text": "Failed to start up WiredTiger under any compatibility versionI’ve removed the old data directory, and created a new empty one, and the server starts up now.", "username": "Paul_Hollyer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't start mongodb-community
2022-09-26T11:47:16.458Z
Can't start mongodb-community
2,655
null
[ "data-modeling", "indexes" ]
[ { "code": "", "text": "I have a huge heterogenous collection( = different subclasses of data in the same collection, think ecommerce catalog type of data, where all products are in the same collection but with a hugely disjoint attribute set but with some common attributes inherited from a parent type).There can be a few thousand such types (subclasses) each with about 50-60 attributes and they are pretty dynamic - new subtypes get added all the time. The documents can also be nested. The single collection contains a few hundred million documents.Now, given this, the number “distinct” attributes that I need to index on the collection is pretty high a few hundred attributes perhaps and therefore that many number of indexes on the same collection approximately. With MongoDB’s 64 indexes limit, I cannot keep adding indexes ( even otherwise it’s not an approach that warms the cockles of my heart anyway). Any of these attributes are searchable and needs to be performant.I have the following optionsThoughts ? How have you solved this problem?", "username": "kembhootha_k" }, { "code": "", "text": "Can’t seem to edit my post. The last option again is a workaround and a horribly broken design, so not my preference TBH.", "username": "kembhootha_k" }, { "code": "$search", "text": "Welcome back @kembhootha_k !I would go with an option not listed yet – apply the Attribute Pattern to your data modelling use case since:There is a subset of fields that share common characteristics and you may want to sort or query on that subset of fieldsYou need to add a dynamic range of attributes that may only be found in a small subset of documentsThe Attribute Pattern allows you to efficiently index using key/value attribute pairs for dynamic attributes.For more reference patterns, please see Building with Patterns: A Summary.If MongoDB Atlas is an option for your use case, Atlas Search is an integrated search solution based on Apache Lucene (similar to ES). Configuration of indexing is done via the Atlas UI/API, index sync is automatic, and queries are performed via the standard MongoDB API ($search aggregation stage).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you. I will have a look at the Attribute Pattern.Further, the Atlas Search approach, is that recommended for non-analytics usecases where a small % of documents not being found on the search indexes could lead to a catastrophy? I have a 0 tolerance to out of sync data between primary store and seconday indexes.Much appreciated!", "username": "kembhootha_k" }, { "code": "{\n \"id\": \"A12355\",\n \"commonAtt1\": \"value1\",\n \"commonAtt2\": \"value2\",\n \"a\":[{\"k\": \"CustomAtt1\", \"v\": \"Value3\"},{\"k\": \"CustomAtt2\", \"v\": \"Value4\"},{\"k\": \"CustomAtt3\", \"v\": \"Value5\"}]\n\n \"childDocuments\": [{\n //looks very simlar to the parent document wrt it's attributes for example\n },\n {\n //looks very simlar to the parent document wrt it's attributes for example\n }]\n}\n", "text": "@Stennie_XThank you for the inputs. I spent some time looking at the Attribute Pattern and it certainly would work. However, I have the following concerns with the Attribute Pattern.The Attribute Pattern seems to dilute a document store’s capabilities. Most of us use document stores like MongoDB for the fundamental tenet - of being able to have the flexibility of schemaless/extensible schema documents. Given that, the attribute pattern seems a workaround (IMHO). 
Put another way, it could be done with a regular(non document) database with a large vertical table that stores the attributes as keyvalue pairs (parent object id, attribute id, value as string ) without a document store?Size of indexes - If I don’t need all my attributes indexed potentially, but only some of them only (say 20-30% attributes ), the attribute pattern wouldn’t let me do that, leading to large indexes ?Doesn’t handle nested documents well ( If I have a document with flexible attributes and it has a nested document with flexible attributes, I cannot use this patterns for any arbitrary level of or type of nested document )Sample Document For this discussion with attribute pattern", "username": "kembhootha_k" } ]
Heterogeneous collection and indexing difficulties
2022-09-22T01:46:32.571Z
Heterogeneous collection and indexing difficulties
2,544
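A brief TypeScript sketch (Node.js driver) of the Attribute Pattern indexing Stennie recommends above, using the field names from the sample document; the database and collection names are hypothetical. One compound index on the key/value pairs serves every dynamic attribute, instead of one index per attribute running into the 64-index limit.

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const products = client.db("catalog").collection("products");

// A single compound index covers all {k, v} attribute pairs.
await products.createIndex({ "a.k": 1, "a.v": 1 });

// Any attribute/value lookup goes through that one index.
const hits = await products
  .find({ a: { $elemMatch: { k: "CustomAtt1", v: "Value3" } } })
  .toArray();

console.log(hits.length);
await client.close();
```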
null
[]
[ { "code": "", "text": "I m getting this in the collections “Data Explorer operation for request ID’s [] timed out after 45 secs Check your query and try again.” It is slowing down my CRUD operation", "username": "Sagar_sethi" }, { "code": "", "text": "Hi @Sagar_sethi - Welcome to the community.Can you advise the following information:I would recommend testing also using Compass to see if the same error is present.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "2 posts were split to a new topic: Data Explorer operation for request ID error", "username": "Jason_Tran" }, { "code": "", "text": "I am getting same message as Sagar, In compass it is taking little time but it is showing data perfectly. I am using Shared cluster.", "username": "Parvin_Desai" }, { "code": "", "text": "Hi @Parvin_Desai,Can you clarify what you mean by “It is take little time”? I’m curious to know if Compass is taking (30,35,40 seconds, etc).Also, please advise if:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "A post was split to a new topic: Compass delayed results", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Error in collections "Data Explorer operation for request ID's [] timed out after 45 secs Check your query and try again."
2022-07-04T11:47:38.659Z
Error in collections "Data Explorer operation for request ID's [] timed out after 45 secs Check your query and try again."
3,708
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.23-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.22. The next stable release 4.2.23 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2.23-rc1 is released
2022-09-26T13:58:37.584Z
MongoDB 4.2.23-rc1 is released
2,012
https://www.mongodb.com/…ec2a7ebb7292.png
[ "compass", "time-series" ]
[ { "code": "", "text": "When trying to import csv file into a time-series collection, I got an error: ‘startAt’ must be present and contain a valid BSON UTC datetime value. I tried several formats, “2022-01-01T00:05:00.000+00:00”, “2022-01-01T00:05:00.000Z”, “2022-01-01T00:05:00Z”, etc. but none of them worked. Here is the data I used:loadId, startAt, value\n“632f85238b50ca44b3720982”, “2022-01-01T00:05:00.000+00:00”, 5213.2\n“632f85238b50ca44b3720982”, “2022-01-01T00:10:00.000+00:00”, 4987.4here is the error I got:\nimage728×581 30.8 KB\nAny suggestions on how to import time-series data from csv file would be appreciated. Thanks.Buck", "username": "Buck_Feng" }, { "code": "", "text": "Thanks for reporting this. I reproduced this locally and get the same error. It seems to be the fact that the CSV file has spaces after the commas. That’s confusing the date parsing. I verified this by importing the file into a non-timeseries collection. Compass turns all the startAt fields into the 1970 unix epoch.Simply removing the spaces fixes it. By the way: The quotes are also not needed in this case.I’ll file an issue on our end as well. I think we should either be resilient to the comma followed by space format variant or we should make it much clearer to the user that that’s wrong.", "username": "Le_Roux_Bodenstein" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to import csv file to time-series collection using Compass
2022-09-25T00:55:02.811Z
Unable to import csv file to time-series collection using Compass
2,487
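A TypeScript sketch (Node.js driver) of the programmatic route around the CSV issue above: create the time-series collection and insert the rows with real Date values for startAt, which avoids the "valid BSON UTC datetime" error entirely. The database and collection names are hypothetical; field names follow the CSV.

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const db = client.db("metering");

await db.createCollection("loadReadings", {
  timeseries: { timeField: "startAt", metaField: "loadId", granularity: "minutes" },
});

// startAt must be a BSON date, not a string.
await db.collection("loadReadings").insertMany([
  { loadId: "632f85238b50ca44b3720982", startAt: new Date("2022-01-01T00:05:00.000Z"), value: 5213.2 },
  { loadId: "632f85238b50ca44b3720982", startAt: new Date("2022-01-01T00:10:00.000Z"), value: 4987.4 },
]);
await client.close();
```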
null
[]
[ { "code": "", "text": "Hello,We created a MongoDB document from the MongoDB Atlas interface with the correct partitionValue (userId in our case) but in our mobile app, we can’t retrieve theses documents through our synced Realm.Then, for testing, we created a document directly from the mobile app with the synced realm, and it worked, the document created was exactly the same as the one created manually (but for the _id key).\nWhen we try to get all the objects from the Realm collection, we only get the one created through realm and not the one created manually.So our guess is: If we created a document manually, it will not be linked the Realm, event if this database is the same that realm use.In fact, our need is to populate a lot of datas from our backend so all the users can get them through Realm using a public partition key.Do you have any ideas to how we can do this ?Thanks.", "username": "Pierre_More" }, { "code": ".trace__realm_sync.unsynced_documents__realm_sync", "text": "Hi Pierre,We created a MongoDB document from the MongoDB Atlas interface with the correct partitionValue (userId in our case) but in our mobile app, we can’t retrieve theses documents through our synced Realm.I understand this to mean you inserted a document using the Atlas Data Explorer but you were not able to retrieve this document in your Sync Client (mobile device).Then, for testing, we created a document directly from the mobile app with the synced realm, and it worked, the document created was exactly the same as the one created manually (but for the _id key).Here you created a document in the sync client and the document appeared in MongoDB when checking through the Atlas Data Explorer.When we try to get all the objects from the Realm collection, we only get the one created through realm and not the one created manually.I’m not sure what you mean by this.\nWhether you perform writes on MongoDB or in a Sync Client, the data should get translated in either direction. i.e. creating an object in your mobile should appear in Atlas and create a document in your Atlas collection should appear in your mobile device - if configurations allow for that.What I’m understanding from your description is that data is not always being synced between MongoDB and your Sync devices. The cause behind this could be numerous and we would need to take a look into your environment to pinpoint what exactly is failing, as such I would recommend that you raise this in a Support Case with us for investigation.Having said that you could check a few things on your side:Hope that helps.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Hi Mansoor,I’m actually running into exactly the same problem, similar use case (need to sync data from Python web app to apps that are using Realm):I’ve attached some screenshots showing the successful sync of the manually created document (_id: “uniqueid”). Any help is much appreciated!\nScreenshot 2022-09-15 at 18.51.241306×1010 105 KB\n\n\nScreenshot 2022-09-15 at 18.52.441574×1374 207 KB\n", "username": "Johannes_Deva-Koch" }, { "code": "", "text": "Hi @Johannes_Deva-Koch, can you send the url for your application in realm.mongodb.com (it will have /groups/${group_id}/apps/${app_id}). 
I can poke around to see what might be going on.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi Tyler, I’ve sent the URL via DM ", "username": "Johannes_Deva-Koch" }, { "code": "", "text": "Hi again Tyler, have you managed to find out anything?", "username": "Johannes_Deva-Koch" }, { "code": "", "text": "Hi again. After some additional trial and error I managed to isolate the problem, and it was code on our end that was ignoring the manually added documents due to unexpected values in one of the fields. Everything is now working as it should. My sincere apologies for the wild goose chase!", "username": "Johannes_Deva-Koch" }, { "code": "", "text": "Hi! My apologies, I ended up having to tend to some other matters last week and was planning to get to this this morning but glad it sounds like it has been resolved Let me know if you have any more questions,\nTyler", "username": "Tyler_Kaye" } ]
Sync an object manually created
2022-03-15T11:28:17.641Z
Sync an object manually created
3,240
null
[ "node-js", "mongodb-shell", "atlas" ]
[ { "code": "", "text": "I have created and account on mongodb and able to port my data to atlas.\nI am able to view my DB and its collections on my system using mongoshell\nthe metrics shows that the DB is active and operations are happening\nhowever on the web portal when i see the Browse collections, i dont see the DB and its collections\nmy collections are in admin DB\nwhat do i need to do to be able to view the admin db on atlas web portal", "username": "Madhura_Bindu" }, { "code": "", "text": "Show us screenshots of your shell and Atlas\nHow did you load data to your Cluster\nDon’t use admin for your collections\nLoad it to your own db", "username": "Ramachandra_Tummala" }, { "code": "adminlocal", "text": "Hi @Madhura_Bindu and welcome to the MongoDB Community forums! The admin database is a system database and I’m pretty sure those are not shown in the web interface. Notice that you don’t see the local database in the web view either. These databases are meant to be used by the system only and should not have user collections put in them.", "username": "Doug_Duncan" } ]
Able to see admin db of Atlas on my mongo shell but not on web
2022-09-26T04:58:04.324Z
Able to see admin db of Atlas on my mongo shell but not on web
1,862
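A minimal way to confirm that data written to a user database does show up in the Atlas collections view, with placeholder database and collection names:

    // run in mongosh against the Atlas cluster
    const appDb = db.getSiblingDB("appdata")   // any name other than admin, local or config
    appDb.people.insertOne({ name: "test" })
    appDb.people.findOne()

After this, appdata.people should be visible under Browse Collections, while anything kept in admin stays hidden there.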
null
[ "containers", "c-driver" ]
[ { "code": "$ cmake --build . \n. . .\n#13 59.98 [ 62%] Downloading crypt_shared\n#13 60.61 Downloading [https://downloads.mongodb.org/full.json] ...\n#13 62.19 Refreshing downloads manifest ...\n#13 63.29 Download crypt_shared v6.0.0-rc8-enterprise for ubuntu2204-x86_64\n#13 63.29 Traceback (most recent call last):\n#13 63.29 File \"/usr/src/app/mongo-c-driver-1.23.0/build/mongodl.py\", line 700, in <module>\n#13 63.29 sys.exit(main())\n#13 63.29 File \"/usr/src/app/mongo-c-driver-1.23.0/build/mongodl.py\", line 684, in main\n#13 63.29 result = _dl_component(db,\n#13 63.29 File \"/usr/src/app/mongo-c-driver-1.23.0/build/mongodl.py\", line 416, in _dl_component\n#13 63.29 raise ValueError(\n#13 63.29 ValueError: No download for \"crypt_shared\" was found for the requested version+target+architecture+edition\n#13 63.30 gmake[2]: *** [src/libmongoc/CMakeFiles/get-crypt_shared.dir/build.make:73: src/libmongoc/mongo_crypt_v1.so] Error 1\n#13 63.30 gmake[1]: *** [CMakeFiles/Makefile2:1796: src/libmongoc/CMakeFiles/get-crypt_shared.dir/all] Error 2\n#13 63.30 gmake: *** [Makefile:166: all] Error 2\n", "text": "Hello, I’m trying to build the C driver on ubuntu (more specifically the latest Docker image of ubuntu), from mongo-c-driver-1.23.0.tar.gz. I run CMake like this:Can you tell me what I need to install? Thanks!", "username": "Gustav_H" }, { "code": "crypt_sharedcmake -DMONGOC_TEST_USE_CRYPT_SHARED=OFF .", "text": "Hello @Gustav_H,The crypt_shared library is only required running C driver tests. It is not required to build the C driver. The download can be skipped by configuring with:cmake -DMONGOC_TEST_USE_CRYPT_SHARED=OFF ..", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problem downloading crypt_shared when installing the mongodb c driver
2022-09-24T14:54:49.907Z
Problem downloading crypt_shared when installing the mongodb c driver
3,001
null
[ "aggregation", "queries" ]
[ { "code": "{\n _id : xxx,\n status : 'BUY', /* This is indexed field */\n active : 't', /* This is indexed field */\n created_date2 : \"2022-09-23T09:00:00.000Z\", /* This is indexed field */\n\n audio_details : [\n {id : 123 /* This is indexed field */ , created_date : \"2022-XXX\", /* Other fields goes here */},\n {id : 124 /* This is indexed field */ , created_date : \"2022-XXX\", /* Other fields goes here */},\n ...\n\n ],\n /* Other 60 fields goes here */\n}\n \n\nQuery 1: This is very slow (300 s)\n\n\ndb.audio_details.aggregate([{$match : { status : 'BUY',active: 't','audio_history.id' : {$in: [123]}}}, {$sort : {created_date2 : -1}}]);\n\n\nQuery 2: This is very fast (0.5 s)\n\ndb.audio_details.find({ status : 'BUY',active: 't','audio_history.id' : {$in: [123]}\n}).sort({created_date2 : -1})\n\n\nPls share why the query 1\n\nRegards\nKris", "text": "HiPlease lemme know why the performance is so poor in aggregation query in my examples Query 1, count of records in my audio_details collection is around 3M+ and sample records are like :", "username": "Senthil_kumar3" }, { "code": "db.collection.explain(\"executionStats\")db.collection.getIndexes()", "text": "Hi @Senthil_kumar3 - Welcome to the community Thanks for providing the snippet of your document fields and which field are indexed.To further assist with this, could you provide the following details:Regards,\nJason", "username": "Jason_Tran" }, { "code": "4.4find()sort()$sort.find(){created_date2: 1}created_date2", "text": "Hi @Senthil_kumar3, thank you for posting. When you’re cross posting from Stack Overflow it would be helpful to share the link to ensure context isn’t lost - especially if a solution is presented.For completeness the response at aggregation framework - Mongodb 4 poor performance in indexed fields - Stack Overflow is below.This appears to be a duplicate of why would identical mongo query take much longer via aggregation than via find? We can therefore make the following observations:Keep in mind that databases, MongoDB included, are usually most effective at using a single index per data source (collection in this situation) per operation. The only compelling reasons to have a single field index on {created_date2: 1} would be if it is a TTL index or if you are issuing queries where created_date2 is the only or most selective predicate. You should consider dropping such an index (and incorporating that field in a compound index per the third point above) if none of these conditions apply in your situation.", "username": "alexbevi" } ]
Mongodb 4 poor performance in indexed fields
2022-09-25T03:07:12.439Z
Mongodb 4 poor performance in indexed fields
1,106
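A sketch of the compound index described in the answer above, with the equality fields first and the sort field last. The field names are taken from the queries in the question (note the question writes the array as audio_history in the queries but audio_details in the sample document, so the index must match whatever is actually stored):

    db.audio_details.createIndex(
      { status: 1, active: 1, "audio_history.id": 1, created_date2: -1 },
      { name: "status_active_audioHistoryId_createdDate2" }
    )

With an index shaped like this the planner has a chance to satisfy both the filter and the descending sort from the index instead of sorting 3M+ documents in memory.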
null
[ "crud" ]
[ { "code": "ownersdevicesrelatedJsondeviceseqiupmentIDequipmentIddb.serviceAgreement.updateMany({\"basicData.serviceRecipients.relatedJson.basicData.devices.equipmentID\": {$exists: true}},\n [\n {\n $set: {\n \"basicData.serviceRecipients\": {\n $map: {\n input: \"$basicData.serviceRecipients\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson\",\n undefined\n ]\n },\n {\n relatedJson : {\n\n $mergeObjects: [\n \"$$this.relatedJson.basicData\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson.basicData\",\n undefined\n ]\n },\n {\n basicData : {\n\n $mergeObjects: [\n \"$$this.relatedJson.basicData\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson.basicData.devices,\n undefined\n ]\n },\n {\n \"$$this.devices\": {\n $map: {\n \n }\n } \n },\n {}\n ]\n }\n ]\n }\n },\n {}\n ],\n }\n ]\n }\n },\n {}\n ]\n }\n ]\n }\n }\n }\n }\n },\n {\n $unset: \"basicData.serviceRecipients.relatedJson.basicData.devices.equipmentID\"\n }\n ])\n\"$$this.devices\":$mapequipmentId$unsetdevice", "text": "Hi there,\nThis is my first post here, I’m fairly new to mongoDB.\nI have the following structure in a collection called agreements:\nbasicData.owners.relatedJson.basicData.devices.equipmentIDWhere owners and devices are lists of objects.\nThis structure is quite complex, and it’s worth noting that there is no guarantee the relatedJson object or the devices list exist.\nI would like to rename the field eqiupmentID to equipmentId.I have tried modifying a similar query that renames a field in a nested object in a nested list to suit this purpose, so far I’ve got the following:I’m now looking at \"$$this.devices\": and calling $map in there, and mapping each device with the new name. Is this the correct approach? will this work?\nMy original plan was to add the new field equipmentId and remove the old one (see the $unset call at the end) but I cannot get that to work when working with lists of lists.\nAny advice either way would be greatly appreciated.\nI would prefer to add the new field the remove the old one WITHOUT having to specify every other field in each device object as some of those may contain more nested objects.", "username": "Paul_Mallon" }, { "code": "db.collection.updateMany({\n \"basicData.owners.relatedJson.basicData.devices.equipmentID\": {\n $exists: true\n }\n },\n [\n {\n $set: {\n \"basicData.owners\": {\n\n $map: {\n input: \"$basicData.owners\",\n in: {\n\n $mergeObjects: [\n \"$$this\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson\",\n undefined\n ]\n },\n {\n \"relatedJson\": {\n $mergeObjects: [\n \"$$this.relatedJson\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson.basicData\",\n undefined\n ]\n },\n {\n \"basicData\": {\n $mergeObjects: [\n \"$$this.relatedJson.basicData\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson.basicData.devices\",\n undefined\n ]\n },\n {\n \"devices\": {\n $map: {\n input: \"$$this.relatedJson.basicData.devices\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n equipmentId: \"$$this.equipmentID\",\n\n }\n ]\n }\n }\n }\n },\n {},\n ]\n }\n ]\n }\n },\n {},\n ]\n }\n ]\n }\n },\n {},\n ]\n }\n ]\n }\n }\n }\n }\n },\n {\n $unset: \"basicData.owners.relatedJson.basicData.devices.equipmentID\"\n }\n ])\n", "text": "I managed to figure it out! 
The code below does exactly what I’m looking for.\nI’d be curious to know if there’s a more efficient way to do this :slightly_smiling_face :", "username": "Paul_Mallon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I rename a field nested in a list of objects stored in a list?
2022-09-26T07:37:49.510Z
How can I rename a field nested in a list of objects stored in a list?
1,191
null
[ "python" ]
[ { "code": "pymongo.errors.OperationFailure: bad auth : Authentication failed., full error: {'ok': 0, 'errmsg': 'bad auth : Authentication failed.', 'code': 8000, 'codeName': 'AtlasError'}\n", "text": "I’m struggling with this since yesterday, when I get my credentials from an external file, it becomes unable to authenticate, it shows this error:I tried to get my credentials from sqlite3, it didn’t work, i thought it was just a problem with sqlite, so I tried to do the same and store my credentials in a config file (.ini), and it doesn’t work either,\nEven if I print the connection url, it is 100% correct, exactly the same I use when typing it manually and works.Any thoughts on this ?", "username": "Abdel" }, { "code": " # Get the MongoDB credentials from the config file\n cparser = ConfigParser()\n cparser.read(self.config)\n USERNAME = quote_plus(cparser[\"MONGODB\"][\"USERNAME\"])\n PASSWORD = quote_plus(cparser[\"MONGODB\"][\"PASSWORD\"])\n CLUSTER = cparser[\"MONGODB\"][\"CLUSTER\"]\n\n # Preparing the database URL\n uri = f'mongodb+srv://{USERNAME}:{PASSWORD}@{CLUSTER}/test?authSource=admin&replicaSet=atlas-itbq89-shard-0&readPreference=primary&ssl=true'\n\n # Connecting to the MongoDB database\n self.client = MongoClient(uri)\n", "text": "Forgot to give an example of the code I use", "username": "Abdel" }, { "code": "", "text": "Hi @Abdel and welcome to the MongoDB Community forums! Does your password happen to have a special character in it? If so you will want to URL encode that value.", "username": "Doug_Duncan" }, { "code": "quote_plus()", "text": "Hi @Doug_Duncan\nThank you for your response, but my password doesn’t have any special character, and even further, I already used the quote_plus() function that encode the username and password in case they contain any special character…", "username": "Abdel" }, { "code": "mongoshmongo", "text": "Well then, since it’s an authentication error, I would verify that the username and password in the config file are indeed correct by trying to connect with the mongosh shell (or the older mongo tool if you installed 5.0 or older).", "username": "Doug_Duncan" }, { "code": " urllib.parse.quote_plus(string, safe='', encoding=None, errors=None)¶\nencoding=", "text": "@Abdel are you providing a value for the encoding= kwarg?", "username": "Jack_Woehr" }, { "code": "quote_plus()", "text": "@Jack_Woehr No, I am only passing the username/password directly to the quote_plus() function as shown in the example", "username": "Abdel" }, { "code": "uri uri = f'mongodb+srv://{USERNAME}:{PASSWORD}@{CLUSTER}/test?authSource=admin&replicaSet=atlas-itbq89-shard-0&readPreference=primary&ssl=true'\nprint(uri)\n", "text": "Okay, this falls into the category of “impossible problems” \nIF you can print out the value uri from your example:AND IF you can then copy-and-paste what is printed to mongosh and it works\nTHEN the entire world is falling apart and broken \nBut you may have to provide a concrete example because I for one cannot reproduce this bug. 
You’re going to have to prove it exists.", "username": "Jack_Woehr" }, { "code": "", "text": "\nimage959×91 5.24 KB\nWELL, idk what to do to be honest, it doesn’t work on mongosh either, so that’s kinda relieving, but when typing this link letter by letter for example, it does work, is it something with the encoding ?", "username": "Abdel" }, { "code": "", "text": "It is hard to say for sure, @Abdel , but I would guess that is the case.\nIt is possible that characters encoded two different ways would appear the same on the screen.", "username": "Jack_Woehr" } ]
Unable to connect to my MongoDB Atlas using pymongo
2022-09-21T17:20:42.715Z
Unable to connect to my MongoDB Atlas using pymongo
8,559
null
[ "indexes" ]
[ { "code": "", "text": "Hi guys,We have a MongoDB Atlas database with a few collections and we need to design the way how to do continuous delivery for the indexes, triggers and inject some data.Until now we build the indexes using the Web UI of Mongo Atlas for our development enviroment but our customer have policies that don’t let us using this UI for production enviroment, so we need to think how to delivery changes of indexes, triggers and inject base data.For example, we use Azure DevOps pipelines for our CI/CD to SQL Server, but we don’t have any idea of how we could build a delivery pipeline for MongoDB Atlas. There is a few ideas we have:¿in your expirience, what approach we should explore for this?Thanks a lot for your help guys.", "username": "Jose_Alejandro_Benit" }, { "code": "mongosh", "text": "Hello @Jose_Alejandro_Benit ,Welcome back to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a desired solution?I think any of the ideas you mentioned could be used for importing data and creating indexes, it really depends on what you are comfortable implementing. For triggers, you can use Realm CLI or Atlas App Services Admin API. You can utilise a combination of API requests, mongosh scripts, etc to create an environment for fresh use every time.You can also refer to this article on “How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions” for more insight. Which method ultimately is best for your use case really depends on your specific situation, and perhaps your existing tooling.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to do continuous delivery to a mongo database
2022-09-15T15:11:38.579Z
How to do continuous delivery to a mongo database
1,976
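For the scripted part of a pipeline like this, one common pattern is to keep an idempotent mongosh script in the repository and run it from the CI job (for example with mongosh "<connection string>" --file indexes.js). The database, collection and index names below are placeholders:

    // indexes.js
    const target = db.getSiblingDB("mydb");
    target.orders.createIndex({ status: 1, createdAt: -1 }, { name: "status_createdAt" });
    target.customers.createIndex({ email: 1 }, { unique: true, name: "email_unique" });
    // createIndex is a no-op when an identical index already exists,
    // so the same script can safely run on every deployment

The same job can then call the Realm/App Services CLI or the Admin API for triggers, as suggested above.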
null
[ "monitoring" ]
[ { "code": "", "text": "Hi There,We have a Mongo setup with 4 shards, I wanted to know about actual storage. Through queries I am getting the size on a single shard, but not on each server of shard. How to get the actual size on the server , also I need the size of config and others, not just the collection data. Overall idea is to get storage used on each server.Thanking in advance.Regards,\nKarthik.", "username": "Karthik_Reddy1" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Size of db on each server instead of the shard
2022-09-26T04:08:42.372Z
Size of db on each server instead of the shard
1,646
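One way to get per-server figures for the question above is to connect to each mongod directly (every shard member and each config server) rather than through mongos, and ask that process for its own on-disk sizes; a mongosh sketch:

    // run while connected directly to a single shard member or config server
    const result = db.adminCommand({ listDatabases: 1 });
    result.databases.forEach(d => print(`${d.name}: ${d.sizeOnDisk} bytes`));
    print(`total: ${result.totalSize} bytes`);

Because this runs on the individual node, it includes local, config and other system databases that collection-level stats gathered through mongos do not show.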
null
[ "aggregation", "queries" ]
[ { "code": "client_datatypetype1type2type3// \"client_data\" collection\n\n{\n\t_id: ...\n\ttype: \"type1\"\n\t...\n}\n{\n\t_id: ...\n\ttype: \"type2\"\n\t...\n}\n{\n\t_id: ...\n\ttype: \"type3\"\n\t...\n}\n\n// ... more docs\nusers// \"users\" collection\n\n{\n\t_id: ...\n\tname: \"\"\n\temail: \"\"\n\t...\n\n\tpermissions: [\n\t\t\"type1\", \"type2\", \"type3\"\n\t]\n}\npermissionsclient_datapermissionsclient_dataautocomplete", "text": "Let me explain my objective with an example:Let’s say I have a collection client_data where each document has a type property with a value of type1 or type2 or type3.I also have a users collection that contains user objects.The permissions array that contains types that this user has access to.Now if I want to run a search query on the client_data collection for this user, I’d have to ensure that I only return those documents which the user has access to based on their permissions.Now, I could filter the results in the app layer, but that would not be efficient when client_data has a massive number of documents and these types of searches are very frequent - think autocomplete.What I am looking for is a way to structure this at the level of search index and at the level of aggregation search query so that most of the heavy lifting is done by MongoDB.", "username": "Sid_J" }, { "code": "client_dataautocomplete$search$searchMeta$lookup$search$searchMeta$lookup$merge$outheavy liftingraw speedefficiency", "text": "Hello @Sid_J ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, could you help me with below details?Now, I could filter the results in the app layer, but that would not be efficient when client_data has a massive number of documents and these types of searches are very frequent - think autocomplete .As you mentioned autocomplete, do you specifically want to use $search?If you are using MongoDB v6.0 in Atlas, you can specify the Atlas Search $search or $searchMeta stage in the $lookup pipeline to search collections on the Atlas cluster. The $search or the $searchMeta stage must be the first stage inside the $lookup pipeline. It helps in querying from different collections together.Although this might help in your use-case but if you are doing this frequently then doing a $lookup for every single request might get expensive and might not be very performant so as an alternative you may be interested in checking out Materialised views which is a pre-computed aggregation pipeline result. On-demand materialized views are typically the results of a $merge or $out stage. It could provide a performance boost at the cost of extra storage space, and you will also need a way to trigger the data refresh so that the view remains up to date.most of the heavy lifting is done by MongoDB.Also, what is the heavy lifting you are looking for? Is it raw speed, efficiency in terms of index usage? Could you please help me with this?Regards,\nTarun", "username": "Tarun_Gaur" } ]
Atlas search contextual results filtering
2022-08-31T20:57:10.259Z
Atlas search contextual results filtering
1,361
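A hedged sketch of what the permission-aware search could look like: read the user's permissions array first (one query on users), then pass it into the $search stage as a filter clause so unauthorised documents are excluded inside the search itself. This assumes the type field is indexed as a searchable string and that a name field has an autocomplete mapping, neither of which is stated in the thread:

    const perms = ["type1", "type3"];   // fetched from the users collection beforehand
    db.client_data.aggregate([
      {
        $search: {
          compound: {
            must: [{ autocomplete: { query: "searchTerm", path: "name" } }],
            filter: [{ text: { query: perms, path: "type" } }]
          }
        }
      },
      { $limit: 10 }
    ])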
https://www.mongodb.com/…_2_1024x527.jpeg
[ "java", "kafka-connector" ]
[ { "code": "", "text": "We released an updated version of our MongoDB Kafka for Apache Kafka V1.8!As the adoption of the connector has increased over time, there have been many requests for better monitoring of the MongoDB Kafka Connector. Since the connector is written in Java it made sense to expose metrics through Java’s native monitoring, JMX. For more details on this release read our blog post announcement . The MongoDB documentation will be available shortly that will enumerate all the metrics available.\nimage1238×638 106 KB\nDownload the latest from Confluent Hub. MongoDB Connector (Source and Sink) | Confluent Hub", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Connector for Apache Kafka V1.8 released
2022-09-25T20:40:22.451Z
MongoDB Connector for Apache Kafka V1.8 released
1,628
null
[ "flutter" ]
[ { "code": "// file schemas.dart\nimport 'package:realm/realm.dart';\n\npart 'schemas.g.dart';\n\n@RealmModel()\nclass _Song {\n late String? songNumber;\n late String? fileName;\n late String? englishTitle;\n late String? year;\n late String? key;\n late String? author;\n}\nflutter pub run realm generateflutter pub run realm generate\n[INFO] Generating build script...\n[INFO] Generating build script completed, took 429ms\n\n[INFO] Precompiling build script......\n[WARNING] /C:/<path>/flutter/.pub-cache/hosted/pub.dartlang.org/realm_generator-0.3.1+beta/lib/src/pseudo_type.dart:12:7: Error: The non-abstract class 'PseudoType' is missing implementations for these members:\n - DartType.element2\nTry to either\n - provide an implementation,\n - inherit an implementation from a superclass or mixin,\n - mark the class as abstract, or\n - provide a 'noSuchMethod' implementation.\n\nclass PseudoType extends TypeImpl {\n ^^^^^^^^^^\n/C:/<path>/flutter/.pub-cache/hosted/pub.dartlang.org/analyzer-4.6.0/lib/dart/element/type.dart:52:16: Context: 'DartType.element2' is defined here.\n Element? get element2;\n ^^^^^^^^\n[INFO] Precompiling build script... completed, took 1.4s\n\n[SEVERE] Failed to precompile build script .dart_tool/build/entrypoint/build.dart.\nThis is likely caused by a misconfigured builder definition.\n", "text": "I’m trying to generate a schema from a file with this content;On running flutter pub run realm generate I get the following error message.What am I doing wrong?\nI know the error message has suggestions on how to solve it. I’ve generated similar models in other projects successful.I’m on Windows 11 x64, using Flutter 3.0.5 and Realm .0.3.1+beta", "username": "Tembo_Nyati" }, { "code": "", "text": "I tried the same project on Ubuntu 22.04 x64 and got exactly the same error message.", "username": "Tembo_Nyati" }, { "code": "librealm-windowsrealm_dart.dll", "text": "Hi @Tembo_Nyati,\nWe have this issue from last Friday. The commands that you run are correct. But there are recently released breaking changes in the packages that we depends on. This problem is already fixed in PR757. We are publishing it as soon as possible.\nWill it help you if you download librealm-windows artefact from the last CI run and replace your realm_dart.dll with the new one until we have an official release?\nSorry for the inconvenience.", "username": "Desislava_Stefanova" }, { "code": "", "text": "Here is the full link to librealm-windows", "username": "Desislava_Stefanova" }, { "code": "", "text": "@Desislava_Stefanova, I’m totally OK with this approach, but the problem I’ve just faced is the same error message. I do not know at what time should I do the replace, is it before running 'flutter pub run realm generate` or before. What i did was before.", "username": "Tembo_Nyati" }, { "code": "", "text": "Yes @Tembo_Nyati , you are right.\nUnfortunately, the changes in dart_realm.dll are not enough. The most important changes are in the package and its dependencies.\nI suggest you to wait for our published release.\nMeanwhile, you can also send your dart model to [email protected]. I can try to generate the realm objects for you.", "username": "Desislava_Stefanova" }, { "code": "// file schemas.dart\nimport 'package:realm/realm.dart';\n\npart 'schemas.g.dart';\n\n@RealmModel()\nclass _Song {\n late String? songNumber;\n late String? fileName;\n late String? englishTitle;\n late String? year;\n late String? key;\n late String? 
author;\n}\n", "text": "@Desislava_Stefanova, my dart model is the one in the first post. Here it is, in terms of models I’ve nothing more in this project.", "username": "Tembo_Nyati" }, { "code": "", "text": "@Desislava_Stefanova, I’ve posted in [email protected] as well, thank you.Amani.", "username": "Tembo_Nyati" }, { "code": "// GENERATED CODE - DO NOT MODIFY BY HAND\n\npart of 'schemas.dart';\n\n// **************************************************************************\n// RealmObjectGenerator\n// **************************************************************************\n\nclass Song extends _Song with RealmEntity, RealmObject {\n Song({\n String? songNumber,\n String? fileName,\n String? englishTitle,\n String? year,\n String? key,\n String? author,\n }) {\n RealmObject.set(this, 'songNumber', songNumber);\n RealmObject.set(this, 'fileName', fileName);\n RealmObject.set(this, 'englishTitle', englishTitle);\n RealmObject.set(this, 'year', year);\n RealmObject.set(this, 'key', key);\n RealmObject.set(this, 'author', author);\n }\n\n Song._();\n\n @override\n String? get songNumber =>\n RealmObject.get<String>(this, 'songNumber') as String?;\n @override\n set songNumber(String? value) => RealmObject.set(this, 'songNumber', value);\n\n @override\n String? get fileName => RealmObject.get<String>(this, 'fileName') as String?;\n @override\n set fileName(String? value) => RealmObject.set(this, 'fileName', value);\n\n @override\n String? get englishTitle =>\n RealmObject.get<String>(this, 'englishTitle') as String?;\n @override\n set englishTitle(String? value) =>\n RealmObject.set(this, 'englishTitle', value);\n\n @override\n String? get year => RealmObject.get<String>(this, 'year') as String?;\n @override\n set year(String? value) => RealmObject.set(this, 'year', value);\n\n @override\n String? get key => RealmObject.get<String>(this, 'key') as String?;\n @override\n set key(String? value) => RealmObject.set(this, 'key', value);\n\n @override\n String? get author => RealmObject.get<String>(this, 'author') as String?;\n @override\n set author(String? value) => RealmObject.set(this, 'author', value);\n\n @override\n Stream<RealmObjectChanges<Song>> get changes =>\n RealmObject.getChanges<Song>(this);\n\n static SchemaObject get schema => _schema ??= _initSchema();\n static SchemaObject? _schema;\n static SchemaObject _initSchema() {\n RealmObject.registerFactory(Song._);\n return const SchemaObject(Song, 'Song', [\n SchemaProperty('songNumber', RealmPropertyType.string, optional: true),\n SchemaProperty('fileName', RealmPropertyType.string, optional: true),\n SchemaProperty('englishTitle', RealmPropertyType.string, optional: true),\n SchemaProperty('year', RealmPropertyType.string, optional: true),\n SchemaProperty('key', RealmPropertyType.string, optional: true),\n SchemaProperty('author', RealmPropertyType.string, optional: true),\n ]);\n }\n}\n\n", "text": "@Tembo_Nyati I answered to your email. Here is the model. Please save it into a file with name schemas.g.dart.I hope it will help.", "username": "Desislava_Stefanova" }, { "code": "pubspec.yamldependencies: \n analyzer: '>=4.0.0 <4.6.0'\n", "text": "A workaround (until we release a new version) is to avoid analyzer package version 4.6.0. 
Add a constraint in your pubspec.yaml like this:", "username": "Kasper_Nielsen1" }, { "code": "", "text": "I received it, thank you.", "username": "Tembo_Nyati" }, { "code": "", "text": "@Kasper_Nielsen1, this works as well, thank you so much.", "username": "Tembo_Nyati" }, { "code": "", "text": "this works, thank you so much.", "username": "12_12" }, { "code": "", "text": "@Tembo_Nyati , @12_12\nThe new release is already available:", "username": "Desislava_Stefanova" } ]
Realm schema generation error: The non-abstract class 'PseudoType' is missing implementations
2022-08-14T16:14:06.353Z
Realm schema generation error: The non-abstract class 'PseudoType' is missing implementations
5,299
null
[ "queries", "compass" ]
[ { "code": "", "text": "Hello,\nI am unable to get the data based on the date range.\nTool used: Compass\nMessage: \" No results\" .\nQuery : {createdAt:{$gte:ISODate(“2021-01-01”),$lt:ISODate(“2022-09-25”)}}date format in document as below.\ncreatedAt:2021-10-27T14:42:35.344+00:00thanks in advance\nvijay", "username": "vijay_karki" }, { "code": "createdAt", "text": "Can you check if your createdAt property is storing values in string or date format?", "username": "NeNaD" } ]
Selecting data based on date range
2022-09-25T16:06:25.449Z
Selecting data based on date range
1,039
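As the reply above suggests, the behaviour usually comes down to whether createdAt is stored as a BSON date or as a string; a quick type check and the two corresponding range queries (collection name assumed):

    // what type is actually stored?
    db.events.aggregate([{ $project: { t: { $type: "$createdAt" } } }, { $limit: 1 }])

    // if it is a real date
    db.events.find({ createdAt: { $gte: ISODate("2021-01-01"), $lt: ISODate("2022-09-25") } })

    // if it is an ISO-8601 string, compare strings instead
    // (lexicographic order matches chronological order for this format)
    db.events.find({ createdAt: { $gte: "2021-01-01", $lt: "2022-09-25" } })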
null
[ "aggregation", "node-js" ]
[ { "code": "", "text": "Hi, I would like to select only the fields in an object.This object with the fields to be selected is returned from a $function, however mongo dB is treating it as a literal how to I force it to perform the projection and stop setting the field to the literal object returned from the function.", "username": "John_Kennedy_Kalu" }, { "code": "", "text": "Hi @John_Kennedy_Kalu and welcome to the MongoDB Community forums! It would be helpful to see sample documents that you are working with and any code that is having problems. Without seeing what you’re seeing it’s really hard to understand the problem and provide any suggestions.", "username": "Doug_Duncan" } ]
How do I make mongodb select fields specified in an object in an aggregate pipeline?
2022-09-23T16:13:04.415Z
How do I make mongodb select fields specified in an object in an aggregate pipeline?
1,265
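Without the sample documents it is hard to be specific, but one pattern that matches this description is to let $function build the object and then promote it with $replaceWith, so its keys become the document's top-level fields instead of staying a nested literal. Everything here (collection, field names, the function body) is a guess:

    db.items.aggregate([
      {
        $set: {
          picked: {
            $function: {
              body: "function(doc) { return { a: doc.a, b: doc.b }; }",
              args: ["$$ROOT"],
              lang: "js"
            }
          }
        }
      },
      { $replaceWith: "$picked" }   // keys of the returned object become the output fields
    ])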
https://www.mongodb.com/…eff5ed421acc.png
[]
[ { "code": "", "text": "When connecting to VS Code, when I enter my password, which has a % in it, but I receive ‘URI malformed’ error.\nI tried to change my password, but I was receiving the following error in my terminal.\nmongo use products\ndb.changeUserPassword(“myUserAdmin”, passwordPrompt())zsh: parse error near `)’Any guidance would be a great help!", "username": "Shawn_Wilborne" }, { "code": "", "text": "It could be quotes around your user\nUse double quotes.Your quotes look different", "username": "Ramachandra_Tummala" }, { "code": "", "text": "mongo use products\ndb.changeUserPassword(“myUserAdmin”, passwordPrompt())Thanks for the feedback! I tried double quotes = \" - and also tried single quote ’ - no dice any other suggestions?", "username": "Shawn_Wilborne" }, { "code": "", "text": "\nScreen Shot 2022-09-23 at 10.24.43 AM846×388 79.8 KB\n", "username": "Shawn_Wilborne" }, { "code": "", "text": "Don’t use pwd prompt.Add your password directly in the command and see if it works", "username": "Ramachandra_Tummala" }, { "code": "mongo", "text": "Why are you trying to run MongoDB command from your terminal prompt? The error in the screnshot is coming from the zsh shell not understanding what you’re typing.You want to run mongo by itself to start the older mongo shell. After that starts and you are connected to the server then you can run your other commands.", "username": "Doug_Duncan" } ]
Connection to VS Code
2022-09-23T12:14:15.883Z
Connection to VS Code
1,605
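Coming back to the original %-in-password problem: connection strings treat % as the start of an escape sequence, so the character itself has to be written as %25 wherever the password appears in the URI pasted into the VS Code extension (user, password and host below are placeholders):

    // password literal:  pa%word   ->   written in the URI as:  pa%25word
    mongodb+srv://appUser:pa%25word@cluster0.example.mongodb.net/test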
https://www.mongodb.com/…aa69a05f609b.png
[]
[ { "code": "", "text": "Hey there guys,I’ve been working with MongoDB for a uni project, long story short, we’re doing things with weather data and we’re using Mongo to store historical data for a select few locations.I’ve noticed that when I check the data store, it’s randomly deleted objects from the db.Screenshot 2021-03-16 at 16.51.49744×606 104 KB\nAs you can see from the screenshot above, we might have one hour from 2016-01-01, and then it skips to 2016-01-03. This isn’t how the data was uploaded, in looked something more akin to this:\ndate: 01/01/2016 hour: 0\ndate: 01/01/2016 hour: 1\ndate: 01/01/2016 hour: 2\ndate: 01/01/2016 hour: 3\netc…This has happened once before, but I reuploaded our data via Compass and thought it was just a one off. I’ve looked into TTL, however, I don’t think that is the cause of the issue since we never established any TTL when initially uploading.If anyone has any ideas on what’s happening or how it can be stopped, please let me know! And let me know if you have any questions, I’ll try and answer them quickly Thanks!", "username": "Luke_Coleman" }, { "code": "", "text": "randomly deleted objects from the dbIt is very unlikely.The data might not be in order. For example, the hour field being a string, the natural order will not be 0, 1, 2, 3, 4, … it will be most likely “0” , “1” , “11” , “12” , “13”, as strings are not sorted like numbers.What is your data source? How do you ensure that you have all the data in the source?If you keep dateString, despite being wasteful because you have date, I would suggest that you, at least, keep it in the ISO-8601 standard. See ISO - ISO 8601 — Date and time format for some reasons.", "username": "steevej" }, { "code": "", "text": "I’ll quickly try and answer some things here The data might not be in order. For example, the hour field being a string, the natural order will not be 0, 1, 2, 3, 4, … it will be most likely “0” , “1” , “11” , “12” , “13”, as strings are not sorted like numbers.So with this I believe it’s already ordered by the ISO date format, and when we initially uploaded the data, it did show it with the hour (albeit being a string) in order. It should be worth mentioning too that we’re using GraphQL for this, which does support Int and Float, so I’ll take a look and see if we can change to those. I’ve also just re-queried our DB which still shows certain hours as missing. Screenshot 2021-03-16 at 17.30.41543×647 18.5 KBWhat is your data source? How do you ensure that you have all the data in the source?Our data source is Meteostat, where we gather all data from a particular weather station, then keep anything before 01/01/2016. The method of checking everything is there sadly isn’t particularly advanced, but we’ve done random spot checks over a variety of days to make sure what we expect to be there is there.If you keep dateString , despite being wasteful because you have dateAnd I’ll also take a look into this - this was implemented quickly to display on our frontend, however, I’m sure we can use a function to change this to a more “traditional” format. ", "username": "Luke_Coleman" }, { "code": "date2016-01-02", "text": "Welcome to the MongoDB Community @Luke_Coleman!As you can see from the screenshot above, we might have one hour from 2016-01-01, and then it skips to 2016-01-03. This isn’t how the data was uploadedWhat sort order are you specifying for your query? 
It sounds like you are expecting the order of result documents to match insertion order, which is only guaranteed for the special case of a capped collection (see: What is the default sort order when none is specified?).If you are sorting based on date components as string values, the lexicographic order will be be based on string comparisons (characters and length) as @steevej suggested.However, it looks like you have a proper date field you could use for sorting (which should also obviate the need to duplicate the data information in various string formats).To confirm this isn’t an issue with the results returned by your query, you could also try searching for the documents presumed missing from 2016-01-02.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Sounds like the documentation is misleading here?The documents are returned in insertion order: (…)\nNo capped collection here.", "username": "Marc_Knaup" } ]
MongoDB randomly deleting objects in collection
2021-03-16T16:51:21.229Z
MongoDB randomly deleting objects in collection
4,195
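A small illustration of the point made above: request the order explicitly and sort on the real date field rather than on the string fields; the collection name and the optional one-off conversion of hour to a number are assumptions:

    // explicit ordering instead of relying on "natural" (insertion-like) order
    db.weather.find().sort({ date: 1 })

    // optional: turn the string hour into an int so it sorts 0, 1, 2, ... rather than "0", "1", "10", ...
    db.weather.updateMany({}, [{ $set: { hour: { $toInt: "$hour" } } }])
    db.weather.find().sort({ date: 1, hour: 1 })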
null
[ "node-js", "connecting" ]
[ { "code": "[nodemon] restarting due to changes...\n[nodemon] starting `node app.js`\nC:\\Users\\류 광섭\\Desktop\\00-starting-project\\node_modules\\mongodb\\lib\\sdam\\topology.js:291\n const timeoutError = new error_1.MongoServerSelectionError(`Server selection timed out after ${serverSelectionTimeoutMS} ms`, this.description);\n ^\n\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (C:\\Users\\류 광섭\\Desktop\\00-starting-project\\node_modules\\mongodb\\lib\\sdam\\topology.js:291:38)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 50395518,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED ::1:27017\n at connectionFailureError (C:\\Users\\류 광섭\\Desktop\\00-starting-project\\node_modules\\mongodb\\lib\\cmap\\connect.js:387:20)\n at Socket.<anonymous> (C:\\Users\\류 광섭\\Desktop\\00-starting-project\\node_modules\\mongodb\\lib\\cmap\\connect.js:310:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16) {\n errno: -4078,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '::1',\n port: 27017\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n\nNode.js v18.7.0\n[nodemon] app crashed - waiting for file changes before starting...\n[nodemon] restarting due to changes...\n[nodemon] starting `node app.js`\nC:\\Users\\류 광섭\\Desktop\\00-starting-project\\node_modules\\mongodb\\lib\\sdam\\topology.js:291\n const timeoutError = new error_1.MongoServerSelectionError(`Server selection timed out after ${serverSelectionTimeoutMS} ms`, this.description);\n ^\n\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (C:\\Users\\류 광섭\\Desktop\\00-starting-project\\node_modules\\mongodb\\lib\\sdam\\topology.js:291:38)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 50466197,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED ::1:27017\n at connectionFailureError (C:\\Users\\류 광섭\\Desktop\\00-starting-project\\node_modules\\mongodb\\lib\\cmap\\connect.js:387:20)\n at Socket.<anonymous> (C:\\Users\\류 
광섭\\Desktop\\00-starting-project\\node_modules\\mongodb\\lib\\cmap\\connect.js:310:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16) {\n errno: -4078,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '::1',\n port: 27017\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n\nNode.js v18.7.0\n[nodemon] app crashed - waiting for file changes before starting...\n\napp.js\n\nconst path = require('path');\n\nconst express = require('express');\n\nconst blogRoutes = require('./routes/blog');\n\nconst app = express();\n\nconst db = require(\"./data/database\");\n\n// Activate EJS view engine\n\napp.set('view engine', 'ejs');\n\napp.set('views', path.join(__dirname, 'views'));\n\napp.use(express.urlencoded({ extended: true })); // Parse incoming request bodies\n\napp.use(express.static('public')); // Serve static files (e.g. CSS files)\n\napp.use(blogRoutes);\n\napp.use(function (error, req, res, next) {\n\n // Default error handling function\n\n // Will become active whenever any route / middleware crashes\n\n console.log(error);\n\n res.status(500).render('500');\n\n});\n\ndb.connectToDatabase().then(function () {\n\n app.listen(3000);\n\n});\n\ndatabase\n\nconst mongodb = require(\"mongodb\");\n\n const MongoClient = mongodb.MongoClient;\n\n let database;\n\n async function connect() {\n\n const client = await MongoClient.connect(\"mongodb://localhost:27017\");\n\n database = client.db(\"blog\");\n\n }\n\n function getDb() {\n\n if (!database) {\n\n throw { message: \"Database connection not establisehd!\" };\n\n }\n\n return database;\n\n }\n\n module.exports = {\n\n connectToDatabase: connect,\n\n getDb: getDb\n\n };\n", "text": "Hi I have been trying to figure this error out for two whole days, googling, reading relevant forums in here, but seems I can not solve this error to make my code to fully function. I would appreciate any helpthe first bit stands for the description of the error and the later bits are the codes . Again thank you in advance.", "username": "paulryu1998" }, { "code": "", "text": "Try 127.0.0.1 instead of localhost\nCheck this thread\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017", "username": "Ramachandra_Tummala" } ]
Connect ECONNREFUSED ::1:27017
2022-09-25T05:59:01.203Z
Connect ECONNREFUSED ::1:27017
4,212
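The change suggested above, applied inside the connect() function from the question; newer Node.js versions may resolve localhost to the IPv6 address ::1, which a default mongod is not listening on, so pointing the driver at the IPv4 loopback avoids the refused connection:

    const client = await MongoClient.connect("mongodb://127.0.0.1:27017");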
null
[ "developer-hub" ]
[ { "code": "", "text": "Hi,I have been trying out the tutorial Gatsby and MongoDB: Build a Modern Blog with Gatsby and MongoDB | MongoDBEverything works fine until adding the file “gatsby-node.js”. Ideally, upon adding this file, it should generate the pages for 400 odd books from the database, but it isn’t.There isn’t any error with the code. Am I missing anything?Any help would be appreciated.", "username": "Akhil_Kintali" }, { "code": "", "text": "Hi Akhil - do you have a repo I can take a look at to help you further diagnose the issue?", "username": "ado" }, { "code": "", "text": "I’m doing the same tutorial, how do you parse the text from the mongodb document to render with the line breaks (or any other formatting) in gatsby?", "username": "Jfy" }, { "code": "", "text": "Hi Jfy,Do you have an example of the text you’re trying to render?", "username": "ado" }, { "code": "", "text": "A post was split to a new topic: MongoServerSelectionError: getaddrinfo ENOTFOUND", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Trying MongoDB + Gatsby tutorial - fails to generate pages
2020-06-15T14:07:52.209Z
Trying MongoDB + Gatsby tutorial - fails to generate pages
4,240
null
[ "crud" ]
[ { "code": "exports = async function(authEvent) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const users = mongodb.db(\"sportrank\").collection(\"users\");\n\n const { user, time } = authEvent;\n const newUser = { ...user, eventLog: [ { \"created\": time } ] };\n \n await users.updateOne({ id: newUser.id },\n { $set:\n {\n \"custom_data.active\": true,\n \"custom_data.description\":{level: '', comment: '' },\n \"custom_data.alternate_emai\": \"\",\n \"custom_data.nickname\": \"\",\n \"custom_data.ownerOf\": [{}],\n \"custom_data.memberOf\": [ {}]\n }\n }\n )\n await users.insertOne(newUser);\n}\n", "text": "I’m attempting this with the following code:I have created an authentication trigger that runs this code on authentication.A new user is created via the client app, but the custom_data fields are not set.Is there another/better approach (another trigger?)?How should I set these fields to default (empty) values on a new user creation? thanks …", "username": "freeross" }, { "code": "exports = async function(authEvent) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const users = mongodb.db(\"<my_db>\").collection(\"<user_collection>\");\n\n const { user, time } = authEvent;\n const newUser = { ...user, eventLog: [ { \"created\": time } ] };\n \n await users.insertOne(newUser);\n await users.updateOne({ id: newUser.id },\n { $set:\n {\n \"custom_data.active\": true,\n \"custom_data.description\":{level: '', comment: '' },\n \"custom_data.alternate_email\": \"\",\n \"custom_data.nickname\": \"\",\n \"custom_data.ownerOf\": [{}],\n \"custom_data.memberOf\": [ {}]\n }\n }\n )\n}\n", "text": "I had the update before the insert. Once corrected it worked:", "username": "freeross" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm: creating a new user document
2022-09-23T03:24:05.299Z
Realm: creating a new user document
1,230
null
[]
[ { "code": "", "text": "Hi mongoDB, thanks for the opportunity to share my startup experience at the Punjab User Group in Lovely Professional University. It was a nice experience, and I also learnt a lot from your team.", "username": "Samson_Kwaku_Nkrumah" }, { "code": "", "text": "Welcome to the MongoDB Community, Samson! \nGlad to know you had a good time sharing your story and great to see you join the community. We are sure the community will learn a lot from you and your experience.", "username": "Harshit" } ]
Hi MongoDB and MongoDB Punjab User Group
2022-09-17T08:04:35.073Z
Hi MongoDB and MongoDB Punjab User Group
2,005
null
[ "indexes" ]
[ { "code": "{\n\t\"_id\" : ObjectId(\"5ec2dbe1ad29e4000c272d16\"),\n\t\"Name\" : \"aaa\",\n\t\"Age\" : \"56\",\n\t\"Status\" : \"Draft\",\n\t\"Direction\" : \"forward,\n <Other Fields>\n}\n{name:\"aaa\", Age:56, Direction:\"Forward\"}\n{name:\"aaa\", Status:\"Draft\", Direction:\"Forward\"}\n{Age:56, Status:\"Draft\"}\n", "text": "Hi,I have a collection with more than 2 million documents with multiple fields, sample document:What I want to achieve is filtering out with a number of fields, the problem is the filtering fields can be in any combination or any number of fields, eg, it can be either of:and any other combinationsSince I do not have compound indexes for every combination, querying is extremely slowHow do I create compound indexes to make this querying fast?", "username": "Ishan_Roy" }, { "code": "", "text": "I’m facing the same scenario. Did you get the answer for this question", "username": "Yasir_Asarudheen" }, { "code": "", "text": "Currently i was achiving this by multiple compound index.\nConsider, I’ve five fields(a, b, c, d, e). All the fields used for search filter. So, I will create compound index like below.Mongo will choose the compound index by first key of index. If your search will be like this (a, d, e). It will select (1) index for quering.If anyone know, better solution. Please reply here.", "username": "Yasir_Asarudheen" }, { "code": "", "text": "i have got a simple collection with 50 fields and more than 1M docs.and we run queries with any filter combinations on 10 fields and then we sort it with on one field.the question is, if i want to filter on a,b,c,d,e,f,g,h,i,j and then sort on k field, how many compound indexes should i create, since more than 60 indexes are not allowed as i know.i know that i should follow the rule of “Equality - Sort - Range” in ordering the fields of an indexand i know if i have an index like {a,b,c,d,e} , it means that i have prefix indexes as well like {a} , {a,b} , {a,b,c} , {a,b,c,d} , {a,b,c,d}correct me if i’m wrong, but i think if we have an index like {a,b,c,d,e} and if our query is based on a,c,e fields, it means that it will only use this index as {a} index, because we have not provided values for b and d in the query?now, how should i create my indexes and how many should i create?as @Yasir_Asarudheen suggests, we can create compound indexes starting with each of fields.but what about the combinations of each?for example for {b,c,d,e} index what if we neede {b,d} or {b,e} ?", "username": "Masoud_Naghizade" }, { "code": "", "text": "hi @Masoud_Naghizade did you find the answer? I’m facing the same issue", "username": "Hieu_Ha" } ]
Efficient indexing for filtering with multiple fields
2020-05-18T19:21:50.655Z
Efficient indexing for filtering with multiple fields
7,757
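There is no index layout that covers every combination of ten filter fields plus a sort, so the usual compromise is a small set of compound indexes built around the most frequent filter shapes (each ending with the sort field), sometimes backed by a wildcard index for the long tail of ad-hoc single-field filters. A sketch with the placeholder field names from the post:

    // a few "workhorse" compound indexes for the most common filter combinations,
    // each finishing with the sort field k
    db.coll.createIndex({ a: 1, b: 1, k: 1 })
    db.coll.createIndex({ c: 1, d: 1, k: 1 })

    // a wildcard index picks up ad-hoc filters on any single field,
    // but it will generally not be used to satisfy the sort on k
    db.coll.createIndex({ "$**": 1 })

Which combinations deserve their own compound index is a workload question: the ones that appear most often and are most selective earn a slot, the rest fall back to the wildcard index plus an in-memory sort.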
null
[ "kafka-connector" ]
[ { "code": "apiVersion: kafka.strimzi.io/v1beta2\n\nkind: KafkaConnect\n\nmetadata:\n\n name: my-mongo-connect\n\n annotations:\n\n strimzi.io/use-connector-resources: \"true\"\n\nspec:\n\n image: STRIMZI KAFKA CONNECT IMAGE WITH MONGODB PLUGIN\n\n version: 3.2.1\n\n replicas: 1\n\n bootstrapServers: my-cluster-kafka-bootstrap:9092\n\n logging:\n\n type: inline\n\n loggers:\n\n connect.root.logger.level: \"INFO\"\n\n config:\n\n group.id: my-cluster\n\n offset.storage.topic: mongo-connect-cluster-offsets\n\n config.storage.topic: mongo-connect-cluster-configs\n\n status.storage.topic: mongo-connect-cluster-status\n\n key.converter: org.apache.kafka.connect.json.JsonConverter\n\n value.converter: org.apache.kafka.connect.json.JsonConverter\n\n key.converter.schemas.enable: true\n\n value.converter.schemas.enable: true\n\n config.storage.replication.factor: -1\n\n offset.storage.replication.factor: -1\n\n status.storage.replication.factor: -1\napiVersion: kafka.strimzi.io/v1beta2\n\nkind: KafkaConnector\n\nmetadata:\n\n name: mongodb-sink-connector\n\n labels:\n\n strimzi.io/cluster: my-cluster\n\nspec:\n\n class: com.mongodb.kafka.connect.MongoSinkConnector\n\n tasksMax: 2\n\n config:\n\n topics: my-topic\n\n connection.uri: \"MONGO ATLAS CONNECTION STRING\"\n\n database: my_database\n\n collection: my_collection\n\n post.processor.chain: com.mongodb.kafka.connect.sink.processor.DocumentIdAdder,com.mongodb.kafka.connect.sink.processor.KafkaMetaAdder\n\n key.converter: org.apache.kafka.connect.json.JsonConverter\n\n key.converter.schemas.enable: false\n\n value.converter: org.apache.kafka.connect.json.JsonConverter\n\n value.converter.schemas.enable: false\n", "text": "Hi,I am new kafka space and I have setup Strimzi cluster operator, Kafka bootstrap server, entity operator, and kafka connect in Kubernetes following the below guidelines:Deploying and Upgrading (0.33.0)How do I setup kafka mongo sink connector for strimzi kafka connect cluster ?I have the official mongodb connector plugin. Can I use this plugin to connect to atlas mongodb ?Most of the forums have explanation on confluent kafka but not strimzi kafka.Below is my kafka connect config:Below is my sink connector config:But the above setup is not working though my kafka server is up and running producer-consumer example works.Is the official mongodb plugin (Maven Central Repository Search) appropriate for this ? or do I use debezium mongodb connector ?If anyone can shed some light on step-by-step guideline with this regard, that would of great help.Thanks in advance.", "username": "Chirag_Mukkati" }, { "code": "", "text": "The mongodb connector just talks with Kafka Connect so it doesn’t matter much where Kafka itself is running. K8S is fine, you should be good to go.By not work, is there an error message? Is both the source and sink not working? Is the MongoDB connector stopped or is it running? check the Kafka Connect Logs", "username": "Robert_Walters" }, { "code": "kubectl get kafkaconnectors -n kafkaNAME CLUSTER CONNECTOR CLASS MAX TASKS READY\nmongodb-sink-connector my-cluster com.mongodb.kafka.connect.MongoSinkConnector 2\n", "text": "There is no error message in the kafka-connect logs. How do I verify if the MongoDB connector is up and running ?kubectl get kafkaconnectors -n kafka shows:The ready column is empty. 
How do I make sure this is running?", "username": "Chirag_Mukkati" }, { "code": "echo \"\\nKafka topics:\\n\"\n\ncurl --silent \"http://localhost:8082/topics\" | jq\n\necho \"\\nThe status of the connectors:\\n\"\n\ncurl -s \"http://localhost:8083/connectors?expand=info&expand=status\" | \\\n jq '. | to_entries[] | [ .value.info.type, .key, .value.status.connector.state,.value.status.tasks[].state,.value.info.config.\"connector.class\"]|join(\":|:\")' | \\\n column -s : -t| sed 's/\\\"//g'| sort\n\necho \"\\nCurrently configured connectors\\n\"\ncurl --silent -X GET http://localhost:8083/connectors | jq\n\necho \"\\n\\nVersion of MongoDB Connector for Apache Kafka installed:\\n\"\ncurl --silent http://localhost:8083/connector-plugins | jq -c '.[] | select( .class == \"com.mongodb.kafka.connect.MongoSourceConnector\" or .class == \"com.mongodb.kafka.connect.MongoSinkConnector\" )'\n\n", "text": "Here is a script I use to enumerate, you may have to tweak the hostnames to your environment and network situation.", "username": "Robert_Walters" } ]
Atlas MongoDB sink connector for strimzi kafka setup
2022-09-22T12:52:37.401Z
Atlas MongoDB sink connector for strimzi kafka setup
3,359
null
[ "java", "swift", "android" ]
[ { "code": "class RelevantDO: Object {\n \n @Persisted var name: String? = \"\"\n @Persisted var count: Int = 0\n let percentage = RealmProperty<Float?>()\n...\n...\n if (version == 53) {\n schema.get(RelevantDO.class.getSimpleName())\n .addField(\"percentage\", Float.class)\n .transform(obj -> obj.set(\"percentage\", null));\n version++;\n }\n", "text": "Hi.I’m using the same realm definition in my Android and iOS app and while on Android, the migration block easily allows adding new optional fields, I struggle to find a solution for how to achieve that in Swift.This is how my realm object looks like, the last property is the one I want to add in the new version. It does not automatically get added to the table like @Persisted properties do. It cannot be tagged with @Persisted either. How do I solve this problem and add the column to the table?My migration block in Java looks like this:", "username": "BlueCobold_N_A" }, { "code": "@Persisted@objc dynamicRealmOptionalRealmProperty@Persisted var name: String? = \"\"\n@Persisted var count: Int = 0\n@Persisted var percentage = 0.0 //for example. Or @Persisted var percentage: Float", "text": "A couple of thingsNew in version 10.10.0 : The @Persisted declaration style replaces the @objc dynamic , RealmOptional , and RealmPropertySo you shouldn’t be using RealmProperty at this point.Also additive changes do not require a migration at all, only destructive changes.class RelevantDO: Object {", "username": "Jay" }, { "code": "@Persisted var percentage: Float? = null\n", "text": "So to be 100% exact, it’s just", "username": "BlueCobold_N_A" }, { "code": "", "text": "@BlueCobold_N_AYes! And that’s an OPTIONAL option as well.", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migrating realm, adding a RealmProperty<Float?>
2022-09-23T06:11:27.146Z
Migrating realm, adding a RealmProperty<Float?>
1,954
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 5.0.13-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.12. The next stable release 5.0.13 will be a recommended upgrade for all 5.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.13-rc0 is released
2022-09-23T18:16:49.761Z
MongoDB 5.0.13-rc0 is released
1,978
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.4.17-rc2 is out and ready for testing. This is a release candidate containing only fixes since 4.4.16. The next stable release 4.4.17 will be a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.4.17-rc2 is released
2022-09-23T18:13:53.855Z
MongoDB 4.4.17-rc2 is released
1,970
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 6.0.2-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.1. The next stable release 6.0.2 will be a recommended upgrade for all 6.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.2-rc1 is released
2022-09-23T18:10:35.705Z
MongoDB 6.0.2-rc1 is released
1,846
null
[ "swift" ]
[ { "code": "", "text": "Hey!What would be the best way to move objects between 2 distinct Realm instances ?I tried to manually fetch all objects from one instance and then add them all in the other instance, but, I end up getting an error saying that the object is already managed by other realm.Object is already managed by another Realm. Use create instead to copy it into this Realm.I also tried to only declare the object type only in the instance that I want to move the object into, however, in that scenario, I can’t query the objects from the old realm because the object is not declared in there anymore…Would appreciate some help!\nThanks in advance ", "username": "Tiago_Bastos" }, { "code": "", "text": "From my understanding, this is not possible. Personally, I convert them to an internal business-logic model and back to realm DTOs. For an easy way, you could use some de/serializer for this task, but it won’t be very efficient.", "username": "BlueCobold_N_A" }, { "code": "class PersonClass: Object {\n....\nlet person = realm.objects.... //get the person(s) from realmlet unmanagedPerson = PersonClass(value: person)try! differentRealm.write {\n differentReam.add(unmanagedPerson)\n}\nrealmrealm", "text": "Super simple - In a nutshell, instantiate an unmanaged object based on the object and write that to the different realmSome pseudo code:Get the object(s)\nlet person = realm.objects.... //get the person(s) from realmCreate an unmanaged version\nlet unmanagedPerson = PersonClass(value: person)write it outThe key is than an unmanaged object (one that has NOT been written) has it’s realm property set to nil so you can do whatever you want with it. Once’s it’s written, it’s managed and the realm property will not be nil.", "username": "Jay" } ]
Move objects between Realms
2022-09-07T10:47:59.111Z
Move objects between Realms
1,838
null
[ "server", "installation" ]
[ { "code": "brew services run mongodb-communityBootstrap failed: 5: Input/output error\nTry re-running the command as root for richer errors.\nError: Failure while executing; `/bin/launchctl bootstrap gui/501 /usr/local/opt/mongodb-community/homebrew.mxcl.mongodb-community.plist` exited with 5.\n/usr/local/var/log/mongodb{\"t\":{\"$date\":\"2022-09-13T17:27:48.638+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.643+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.648+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.666+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.673+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.673+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.673+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.673+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.673+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":21736,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"DSs-MacBook-Pro.local\"}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.673+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.673+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.673+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\"},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.674+02:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.674+02:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1120}}\n{\"t\":{\"$date\":\"2022-09-13T17:27:48.674+02:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "I have installed mongodb with homebrew following the guidelines here, but for some reason I cannot start my mongo instance I get the following error when running brew services run mongodb-communitybefore I didn’t have that error, but some type of update happened that I can no longer run my mongo instancemongo logs inside /usr/local/var/log/mongodb", "username": "D_S1" }, { "code": "", "text": "It says permission denied on that TMP file\nCheck ownership of this file\nls -lrt /tmp/mongodb-27017.sock\nCheck if you have any mongod running\nps -ef|grep mongod or try mongo/mongosh depending on your shell\nIf you can connect means mongod is up\nLooks like you brought up mongod as root\nIf TMP file is owned by root shutdown all mongods and remove that TMP.sock file and start your service again", "username": "Ramachandra_Tummala" }, { "code": "mongodrootrootmongod", "text": "In addition to the advice that Ramachandra provided, if mongod had been started by root it’s highly probable that the data and log files/directories are also owned by the root user. 
You will need to check those as well and change the ownership of those before mongod will run as your normal user.", "username": "Doug_Duncan" }, { "code": "ls -lrt /tmp/mongodb-27017.socksrwx------ 1 root wheel 0 Sep 15 11:37 /tmp/mongodb-27017.sockrootmongodb-27017.socksrwx------ 1 ds wheel 0 Sep 15 11:48 /tmp/mongodb-27017.sock{\"t\":{\"$date\":\"2022-09-15T11:48:25.621+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.627+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.639+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.639+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.643+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.643+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.643+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.643+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.643+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":4834,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"DSs-MacBook-Pro.local\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.644+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.644+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.644+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\"},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.645+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.645+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/usr/local/var/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:25.646+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3584M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.070+02:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1663235306:69670][4834:0x11c7a5600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.072+02:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1663235306:72132][4834:0x11c7a5600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.072+02:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1663235306:72436][4834:0x11c7a5600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.072+02:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. 
This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.072+02:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"13: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.072+02:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":702}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.072+02:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\nmongod.confsystemLog:\n destination: file\n path: /usr/local/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /usr/local/var/mongodb\nnet:\n bindIp: 127.0.0.1\ndrwxrwxr-x 7 ds admin 224B Sep 13 14:13 .\ndrwxr-xr-x 18 root wheel 576B Sep 13 14:13 ..\ndrwxr-xr-x 3 ds admin 96B Nov 5 2020 cache\ndrwxrwxr-x 4 ds admin 128B Feb 24 2020 homebrew\ndrwxr-xr-x 3 ds admin 96B Nov 16 2020 log\ndrwxr-xr-x 136 ds admin 4.3K Sep 15 11:37 mongodb\ndrwxr-xr-x 3 ds admin 96B Sep 13 14:13 run\n", "text": "ls -lrt /tmp/mongodb-27017.sockThanks @Ramachandra_Tummala and @Doug_Duncan for the reply I’m a frontend developer so I don’t have too much experience with this sort of thing, so I might need your help a bit more.When I do ls -lrt /tmp/mongodb-27017.sockI get the following srwx------ 1 root wheel 0 Sep 15 11:37 /tmp/mongodb-27017.sock\nso I guess the owner of this is rootI deleted the mongodb-27017.sock and restarted the service (I have stopped all services before this)When I check the ownership again, now I get srwx------ 1 ds wheel 0 Sep 15 11:48 /tmp/mongodb-27017.sock (ds == my user)But I’m still unable to start the mongo-community though.I get the following error:This is my mongod.conf I have the following insideAnd this are the permissions for `usr/local/var/…", "username": "D_S1" }, { "code": "{\"t\":{\"$date\":\"2022-09-15T11:48:26.070+02:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1663235306:69670][4834:0x11c7a5600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.072+02:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1663235306:72132][4834:0x11c7a5600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-09-15T11:48:26.072+02:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1663235306:72436][4834:0x11c7a5600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n/uar/local/var/mongodb/rootls -alh 
/usr/local/var/mongodb/*rootbrewmongodrootroot", "text": "The above lines show that you’re still having permission problems. This time on files in the data directory (/uar/local/var/mongodb/). My guess is that the files under this path are owned by the root user as well. To check run ls -alh /usr/local/var/mongodb/*. If things are owned by the root user, then you would either need to delete them or change the ownership of the files.I’m assuming you installed via brew and then started the mongod as the root user and that’s why all the permissions are messed up. You should never run (almost) any service as the root user, and only then if you understand, and are willing to accept, the risks involved with doing so.", "username": "Doug_Duncan" }, { "code": "ls -alh /usr/local/var/mongodb/*-rw------- 1 ds admin 47B Nov 16 2020 /usr/local/var/mongodb/WiredTiger\n-rw------- 1 ds admin 21B Nov 16 2020 /usr/local/var/mongodb/WiredTiger.lock\n-rw------- 1 root admin 1.3K Sep 15 11:37 /usr/local/var/mongodb/WiredTiger.turtle\n-rw------- 1 ds admin 424K Sep 15 11:37 /usr/local/var/mongodb/WiredTiger.wt\n-rw------- 1 ds admin 12K Sep 15 11:37 /usr/local/var/mongodb/WiredTigerHS.wt\n-rw------- 1 ds admin 44K Sep 15 11:37 /usr/local/var/mongodb/_mdb_catalog.wt\n-rw------- 1 ds admin 32K Sep 15 11:37 /usr/local/var/mongodb/collection-0--5484561422099317879.wt\n-rw------- 1 ds admin 4.0K Aug 16 17:50 /usr/local/var/mongodb/collection-0--8938186976542024012.wt\n-rw------- 1 ds admin 44K Aug 30 12:20 /usr/local/var/mongodb/collection-0-6646607318980365885.wt\n-rw------- 1 ds admin 36K Aug 4 10:27 /usr/local/var/mongodb/collection-0-828309438715058860.wt\n-rw------- 1 ds admin 4.0K Aug 16 17:50 /usr/local/var/mongodb/collection-1--8938186976542024012.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-177-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-178-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-179-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-180-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-185-997841842072881824.wt\n-rw------- 1 ds admin 36K Aug 31 13:35 /usr/local/var/mongodb/collection-186--6760981820974790918.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-186-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-187-997841842072881824.wt\n-rw------- 1 ds admin 36K Aug 31 13:46 /usr/local/var/mongodb/collection-188--6760981820974790918.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-188-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-193-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/collection-194-997841842072881824.wt\n-rw------- 1 ds admin 52K Aug 4 10:28 /usr/local/var/mongodb/collection-2--5484561422099317879.wt\n-rw------- 1 ds admin 4.0K Aug 16 17:50 /usr/local/var/mongodb/collection-2--8938186976542024012.wt\n-rw------- 1 ds admin 60K Aug 31 13:46 /usr/local/var/mongodb/collection-2-236702729281165765.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-2-7829836989468010820.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-237--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 
/usr/local/var/mongodb/collection-238--3421962067213660962.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-24-4461830987773809247.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-240--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-241--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-243--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-244--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-245--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-246--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-247--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-248--3421962067213660962.wt\n-rw------- 1 ds admin 36K Aug 4 10:27 /usr/local/var/mongodb/collection-3--4968984876227865773.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-30--7761360505503786663.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-32--7761360505503786663.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-34--7761360505503786663.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-3771--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-3772--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jun 15 15:44 /usr/local/var/mongodb/collection-3778--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jun 15 15:44 /usr/local/var/mongodb/collection-3779--3383466982208816242.wt\n-rw------- 1 ds admin 12K Aug 4 10:27 /usr/local/var/mongodb/collection-3798--3383466982208816242.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-3800--3383466982208816242.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-3802--3383466982208816242.wt\n-rw------- 1 ds admin 12K Sep 1 14:39 /usr/local/var/mongodb/collection-4--5484561422099317879.wt\n-rw------- 1 ds admin 12K Aug 4 10:27 /usr/local/var/mongodb/collection-4-8395434167465204535.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-442-285365223316158836.wt\n-rw------- 1 ds admin 3.3M Aug 4 10:27 /usr/local/var/mongodb/collection-454--3592809970845984021.wt\n-rw------- 1 ds admin 3.3M Aug 4 10:27 /usr/local/var/mongodb/collection-456--3592809970845984021.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/collection-6-8395434167465204535.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-761-6865724622955516652.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-765-6865724622955516652.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-767-6865724622955516652.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/collection-769-6865724622955516652.wt\n-rw------- 1 ds admin 44K Aug 31 13:46 /usr/local/var/mongodb/collection-8-7829836989468010820.wt\n-rw------- 1 ds admin 7.2M Aug 4 10:27 /usr/local/var/mongodb/collection-80--7761360505503786663.wt\n-rw------- 1 ds admin 32K Sep 15 11:37 /usr/local/var/mongodb/index-1--5484561422099317879.wt\n-rw------- 1 ds admin 36K Aug 30 12:20 /usr/local/var/mongodb/index-1-6646607318980365885.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 
/usr/local/var/mongodb/index-1-828309438715058860.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-10--4968984876227865773.wt\n-rw------- 1 ds admin 36K Aug 31 13:35 /usr/local/var/mongodb/index-118-6865724622955516652.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-181-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-182-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-183-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-184-997841842072881824.wt\n-rw------- 1 ds admin 36K Aug 31 13:35 /usr/local/var/mongodb/index-187--6760981820974790918.wt\n-rw------- 1 ds admin 36K Aug 31 13:46 /usr/local/var/mongodb/index-189--6760981820974790918.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-189-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-190-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-191-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-192-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-196-997841842072881824.wt\n-rw------- 1 ds admin 4.0K Jun 21 17:24 /usr/local/var/mongodb/index-197-997841842072881824.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-2-828309438715058860.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-239--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-242--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-249--3421962067213660962.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-25-4461830987773809247.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-250--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-251--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-252--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-253--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-254--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-255--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-256--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-257--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-258--3421962067213660962.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-259--3421962067213660962.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-26-4461830987773809247.wt\n-rw------- 1 ds admin 36K Aug 4 10:28 /usr/local/var/mongodb/index-3--5484561422099317879.wt\n-rw------- 1 ds admin 44K Aug 31 13:46 /usr/local/var/mongodb/index-3-236702729281165765.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-3-7829836989468010820.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-31--7761360505503786663.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-33--7761360505503786663.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-35--7761360505503786663.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 
/usr/local/var/mongodb/index-3773--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-3774--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-3775--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-3776--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-3777--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jun 15 15:44 /usr/local/var/mongodb/index-3780--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jun 15 15:44 /usr/local/var/mongodb/index-3781--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jun 15 15:44 /usr/local/var/mongodb/index-3782--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jun 15 15:44 /usr/local/var/mongodb/index-3783--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Jun 15 15:44 /usr/local/var/mongodb/index-3784--3383466982208816242.wt\n-rw------- 1 ds admin 12K Aug 4 10:27 /usr/local/var/mongodb/index-3799--3383466982208816242.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-3801--3383466982208816242.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-3803--3383466982208816242.wt\n-rw------- 1 ds admin 4.0K Aug 16 17:50 /usr/local/var/mongodb/index-4--8938186976542024012.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-443-285365223316158836.wt\n-rw------- 1 ds admin 492K Aug 4 10:27 /usr/local/var/mongodb/index-455--3592809970845984021.wt\n-rw------- 1 ds admin 492K Aug 4 10:27 /usr/local/var/mongodb/index-457--3592809970845984021.wt\n-rw------- 1 ds admin 12K Sep 1 14:39 /usr/local/var/mongodb/index-5--5484561422099317879.wt\n-rw------- 1 ds admin 4.0K Aug 16 17:50 /usr/local/var/mongodb/index-5--8938186976542024012.wt\n-rw------- 1 ds admin 12K Aug 4 10:27 /usr/local/var/mongodb/index-5-8395434167465204535.wt\n-rw------- 1 ds admin 12K Sep 1 15:14 /usr/local/var/mongodb/index-6--5484561422099317879.wt\n-rw------- 1 ds admin 4.0K Aug 16 17:50 /usr/local/var/mongodb/index-6--8938186976542024012.wt\n-rw------- 1 ds admin 32K Aug 4 10:27 /usr/local/var/mongodb/index-7-8395434167465204535.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-762-6865724622955516652.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-766-6865724622955516652.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-768-6865724622955516652.wt\n-rw------- 1 ds admin 4.0K Jul 6 12:13 /usr/local/var/mongodb/index-770-6865724622955516652.wt\n-rw------- 1 ds admin 508K Aug 4 10:27 /usr/local/var/mongodb/index-81--7761360505503786663.wt\n-rw------- 1 ds admin 44K Aug 31 13:46 /usr/local/var/mongodb/index-9-7829836989468010820.wt\n-rw------- 1 ds admin 0B Sep 15 11:37 /usr/local/var/mongodb/mongod.lock\n-rw------- 1 ds admin 52K Sep 15 11:37 /usr/local/var/mongodb/sizeStorer.wt\n-rw------- 1 ds admin 114B Nov 16 2020 /usr/local/var/mongodb/storage.bson\n\n/usr/local/var/mongodb/diagnostic.data:\ntotal 400800\ndrwx------ 35 ds admin 1.1K Sep 1 15:14 .\ndrwxr-xr-x 136 ds admin 4.3K Sep 15 11:37 ..\n-rw------- 1 ds admin 8.2M Nov 29 2021 metrics.2021-11-18T11-49-09Z-00000\n-rw------- 1 ds admin 1.2M Nov 30 2021 metrics.2021-11-29T14-41-45Z-00000\n-rw------- 1 ds admin 10M Dec 14 2021 metrics.2021-11-30T14-16-00Z-00000\n-rw------- 1 ds admin 10M Jan 1 2022 metrics.2021-12-14T09-18-59Z-00000\n-rw------- 1 ds admin 10M Jan 15 2022 metrics.2022-01-01T16-09-07Z-00000\n-rw------- 1 ds admin 8.9M Jan 25 2022 
metrics.2022-01-15T10-11-43Z-00000\n-rw------- 1 ds admin 3.4M Jan 31 2022 metrics.2022-01-25T16-09-27Z-00000\n-rw------- 1 ds admin 1.7M Feb 1 2022 metrics.2022-01-31T12-45-14Z-00000\n-rw------- 1 ds admin 10M Feb 13 2022 metrics.2022-02-01T13-04-03Z-00000\n-rw------- 1 ds admin 4.0M Feb 16 2022 metrics.2022-02-13T14-12-22Z-00000\n-rw------- 1 ds admin 1.1M Feb 17 2022 metrics.2022-02-16T18-39-49Z-00000\n-rw------- 1 ds admin 7.0M Feb 24 2022 metrics.2022-02-17T13-36-11Z-00000\n-rw------- 1 ds admin 10M Mar 7 2022 metrics.2022-02-24T12-53-15Z-00000\n-rw------- 1 ds admin 4.0M Mar 11 2022 metrics.2022-03-07T10-52-13Z-00000\n-rw------- 1 ds admin 10M Mar 25 08:58 metrics.2022-03-14T11-48-19Z-00000\n-rw------- 1 ds admin 1.2M Mar 28 10:00 metrics.2022-03-25T07-58-11Z-00000\n-rw------- 1 ds admin 10M Apr 7 12:57 metrics.2022-03-28T08-04-37Z-00000\n-rw------- 1 ds admin 10M Apr 16 11:41 metrics.2022-04-07T10-57-17Z-00000\n-rw------- 1 ds admin 10M Apr 26 09:42 metrics.2022-04-16T09-41-27Z-00000\n-rw------- 1 ds admin 8.7M May 5 13:05 metrics.2022-04-26T07-42-58Z-00000\n-rw------- 1 ds admin 90K May 5 13:31 metrics.2022-05-05T11-05-16Z-00000\n-rw------- 1 ds admin 1.8M May 8 22:06 metrics.2022-05-05T13-06-17Z-00000\n-rw------- 1 ds admin 10M May 19 19:38 metrics.2022-05-09T12-10-40Z-00000\n-rw------- 1 ds admin 10M Jun 15 13:19 metrics.2022-05-19T17-38-50Z-00000\n-rw------- 1 ds admin 158K Jun 15 15:44 metrics.2022-06-15T11-19-30Z-00000\n-rw------- 1 ds admin 5.2M Jun 21 17:24 metrics.2022-06-16T07-45-34Z-00000\n-rw------- 1 ds admin 7.8M Jul 6 12:13 metrics.2022-06-27T09-46-37Z-00000\n-rw------- 1 ds admin 13K Jul 6 12:14 metrics.2022-07-06T10-14-28Z-00000\n-rw------- 1 ds admin 909K Jul 19 10:16 metrics.2022-07-18T08-41-51Z-00000\n-rw------- 1 ds admin 10M Aug 22 21:04 metrics.2022-08-04T08-27-31Z-00000\n-rw------- 1 ds admin 10M Sep 1 11:53 metrics.2022-08-22T19-04-17Z-00000\n-rw------- 1 ds admin 449K Sep 1 15:13 metrics.2022-09-01T09-53-05Z-00000\n-rw------- 1 ds admin 10K Sep 1 15:14 metrics.interim\n\n/usr/local/var/mongodb/journal:\ntotal 24\ndrwx------ 5 ds admin 160B Sep 15 11:37 .\ndrwxr-xr-x 136 ds admin 4.3K Sep 15 11:37 ..\n-rw------- 1 root admin 100M Sep 15 11:37 WiredTigerLog.0000000048\n-rw------- 1 root admin 100M Sep 15 11:37 WiredTigerPreplog.0000000001\n-rw------- 1 root admin 100M Sep 15 11:37 WiredTigerPreplog.0000000002\nroot-rw------- 1 root admin 1.3K Sep 15 11:37 /usr/local/var/mongodb/WiredTiger.turtle\n-rw------- 1 root admin 100M Sep 15 11:37 WiredTigerLog.0000000048\n-rw------- 1 root admin 100M Sep 15 11:37 WiredTigerPreplog.0000000001\n-rw------- 1 root admin 100M Sep 15 11:37 WiredTigerPreplog.0000000002\nmongodbrewroot", "text": "ls -alh /usr/local/var/mongodb/*When I run ls -alh /usr/local/var/mongodb/*I get the following:So I have 4 files owned by rootHow can I change the permissions back to my user ?I did installed mongod with brew but don’t remmeber running anything as root mosty likely I types something by mistake without knowing", "username": "D_S1" }, { "code": "chown <$USER> <fileName>", "text": "@Doug_Duncan I guess I can change ownership of a filme with chown <$USER> <fileName> ??", "username": "D_S1" }, { "code": "sudo chown ds /usr/local/var/mongodb/WiredTiger.turtle\nsudo chown ds /usr/local/var/mongodb/journal/WiredTiger*\n", "text": "You would need to run the following commands:", "username": "Doug_Duncan" }, { "code": "mongod{\"t\":{\"$date\":\"2022-09-15T15:52:43.582+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, 
\"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.601+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.601+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.605+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":16054,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"DSs-MacBook-Pro.local\"}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\"},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.615+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid 
argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.616+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/usr/local/var/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:43.617+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3584M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.627+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1010}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.628+02:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.629+02:00\"},\"s\":\"I\", \"c\":\"WT\", \"id\":4366408, \"ctx\":\"initandlisten\",\"msg\":\"No table logging settings modifications are required for existing WiredTiger tables\",\"attr\":{\"loggingEnabled\":true}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.710+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.717+02:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"4.4\\\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). 
If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.717+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.718+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.718+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"initandlisten\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"initandlisten\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"initandlisten\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for 
shutdown\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.719+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.720+02:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"initandlisten\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"initandlisten\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.721+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"initandlisten\",\"msg\":\"Closing 
WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.975+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":254}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.975+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"initandlisten\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.976+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.976+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.976+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":62}}\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.717+02:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"4.4\\\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}brew services listmongodb-community error 15872 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistdsroot", "text": "Ok this work, I changed all file permissions,Howeever still unable to run mongodthe logs tell me{\"t\":{\"$date\":\"2022-09-15T15:52:44.717+02:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"4.4\\\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). 
If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}so don’t know is that’s still the cause or is the fact that when I try to run brew services list I still see mongodb-community error 15872 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist does this file also need to be owned by ds or root ?", "username": "D_S1" }, { "code": "{\"t\":{\"$date\":\"2022-09-15T15:52:43.611+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n...\n{\"t\":{\"$date\":\"2022-09-15T15:52:44.717+02:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"4.4\\\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}\n/usr/local/var/mongodbmongod", "text": "You had an older version of MongoDB installed (4.4) that created the database files and you are now trying to run MongoDB 6.0.1 on your system. Below are the relevant log entries showing this:Do you need the data from the older version? If not you can go into /usr/local/var/mongodb and delete all the files/folders. Those should get recreated the next time you start the mongod process. If you need/want to keep the data, then you would need to install MongoDB 5.0.x to let the FeatureCompatabilityVersion get updated to 5.0, and then you can reinstall MongoDB 6.0.", "username": "Doug_Duncan" }, { "code": "brew services listmongodb-community error 15872 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistdsrootplist", "text": "when I try to run brew services list I still see mongodb-community error 15872 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist does this file also need to be owned by ds or root ?The plist file should be owned by your user. It seems you’ve got you system in a weird state. I am not sure how that would have happened. I don’t think that there should be anything in your home directory that is not owned by you.", "username": "Doug_Duncan" }, { "code": "", "text": "Ok, let me try those 2 things you mentioned and will come here an update.1.deleting all the previous data (I can actually reimported again from a dump)\n2. changing also the permissions on my home folder", "username": "D_S1" }, { "code": "sudo chown osUserName /opt/homebrew/var/mongodb/WiredTiger.turtle\nsudo chown osUserName /opt/homebrew/var/mongodb/journal/WiredTiger* \n", "text": "If you have an m1 your location will be different (see SO post).So:", "username": "Moritz_Wallawitsch" } ]
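To condense the fixes that worked in this thread into one place, here is a rough cleanup sequence for an Intel-Mac Homebrew install (paths under /usr/local; Apple-silicon installs use /opt/homebrew as noted in the last reply). It assumes the only problem is root-owned files and a stale socket — it does not address the featureCompatibilityVersion mismatch, which needs either deleting the old data or stepping through 5.0 first.

```sh
# Stop any running/failed service first
brew services stop mongodb-community

# Remove the stale socket left behind by a root-started mongod
sudo rm -f /tmp/mongodb-27017.sock

# Give the data and log directories back to your own user (Intel Homebrew paths)
sudo chown -R "$(whoami)" /usr/local/var/mongodb /usr/local/var/log/mongodb

# Start the service again as your normal user - no sudo
brew services start mongodb-community
```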
Unable to start mongo instance on Mac OS Monterey 12.6 using homebrew
2022-09-13T15:37:39.560Z
Unable to start mongo instance on Mac OS Monterey 12.6 using homebrew
10,290
null
[ "dot-net" ]
[ { "code": "", "text": "Good day. I’m working on some first steps in my development so mongo is still somewhat complex.\nNevertheless, I got to a point in my C# console application where in it’s simplest format takes data from a local SQL DB and pushes the documents to MongoDB. I believe I’m using an M2 basic elastic environment while I practice so maybe that’s my problem?In one example I have ~190K documents to publish to MongoDB. It can be more or less but this is a case I’m using. at first I tried to push it all, which is several years of data, but that didn’t work. I figured maybe I surpassed the 16MB doc size limit so I broke it down into a throttle by years. In this case it’s 4 years of data. I feel using batching is how I’ll do most of my writes at this point.This is an initial load so it has a larger dataset, which allows me to flesh out these limits. In production the data sets are relatively tiny. i.e 200-300 records a day.Essentially\nI’m generating a list of WriteModels as such:var listWrites = new List<WriteModel>();And publishing one year at a time to throttle it as such:var result = await _salesCollection.BulkWriteAsync(listWrites);A year is roughly 40-55K records(documents).What I noticed first was that 2019 failed with 56K records which was the largest but subsequent (2020, 2021,2022) succeeded? They were all smaller (~40K records).Then I changed from yearly to a throttle number (100K documents) and published chunks.\nIn this case a total of 193,519 documents the first 100K got in as expected, albeit slow, but the next 93,519 didn’t make it with the following exceptions.Exception:\nAn exception occurred while receiving a message from the serverException Inner Message:\nAttempted to read past the end of the stream.Inner Exception.Stack:at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)I haven’t managed to find an answer to help me understand what’s happening. It’s referring to an error “receiving a message”, but I don’t know exactly what the message is or how to trap it and resolve it? It’s talking about CenellationTokens etc but still not clear.It feels like maybe I’m hitting a limit as 100K sounds familiar from documentation but I’d like to understand what sort of throttling I need to implement and on what limits?At this point I’m ok with slow performance but for it to just fail? idk.I feel this is “basic” for the pro’s out here and over time I’ll get more exposure but MAN I’m in trouble if I can’t even do a write without hitting a wall Thanks in advance\nCPT", "username": "Colin_Poon_Tip" }, { "code": "", "text": "UPDATE. I’d still like to understand what I’m asking regarding the limits etc and what these errors represent.I set my throttle down to 20K per batch and it all got through.\n~193K of documents took:\nDone Writting - Elapsed = 00:07:45.4516660in 20,000 document batches.Delighted to hear an experts opinion on how to get that to be more efficient, or is it upgrade or nothing?NOW, lets see how long 7 years takes!!TANX!!\nCPT", "username": "Colin_Poon_Tip" } ]
An exception occurred while receiving a message from the server
2022-09-23T16:20:42.943Z
An exception occurred while receiving a message from the server
1,642
null
[ "replication", "atlas-cluster" ]
[ { "code": "", "text": "Hi All, I am new for the mangodb. i try to connect my db via inbuild console. its showing below error. what was wrong here.\n*** You have failed to connect to a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.Error: connect failed to replica set atlas-cv5ibi-shard-0/ac-drtrgkc-shard-00-01.ehkbpun.mongodb.net:27017,ac-drtrgkc-shard-00-00.ehkbpun.mongodb.net:27017,ac-drtrgkc-shard-00-02.ehkbpun.mongodb.net:27017 :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1", "username": "Bharathi_raja" }, { "code": "", "text": "Where it is saying it is a password issue?\nHave you whitelisted your IP or allowed access from anywhere?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi Ramachandra , I entered my IP only", "username": "Bharathi_raja" }, { "code": "", "text": "HI Actually Now again I created new DB and I Assgined IP as 0.0.0.0/0 now its working, thank you Ram. Early I mentioned some IP address there.\n\nmandob_cluster1544×846 48.4 KB\n", "username": "Bharathi_raja" }, { "code": "", "text": "when I use 0.0.0.0/0 (Allow access from any where its working properly) but when use my current IP its triggered Error\n\nip with Error1868×672 67.2 KB\n", "username": "Bharathi_raja" }, { "code": "", "text": "\nErr21548×777 49.1 KB\n\nAlso Please check below another one image. (per single post it allow only one img so posted one by one)", "username": "Bharathi_raja" }, { "code": "", "text": "IP config\nErr31613×526 25.1 KB\n", "username": "Bharathi_raja" }, { "code": "", "text": "The IP you are whitelisting may not be the public facing one\nCheck with your network team or others with network knowledge can help you better\nDoes ip you have whitelisted match with whatismyipaddress?", "username": "Ramachandra_Tummala" }, { "code": "0.0.0.0/0", "text": "A note here, the IDE for the course is not using your IP address. You need to open up 0.0.0.0/0 unfortunately for using the course. You can make that change temporary (I would recommend this) by toggling the slider in the lower left corner and choosing one of the available time values:\nimage1388×852 67.6 KB\n", "username": "Doug_Duncan" }, { "code": "", "text": "OK thanks Doug_Duncan", "username": "Bharathi_raja" } ]
Connection string in your command line password issue
2022-09-23T09:34:29.959Z
Connection string in your command line password issue
2,500
null
[ "replication" ]
[ { "code": "", "text": "I need to compact a collection in a db that uses replica set. All the documentation I’ve found says to compact the secondaries first. How do I connect to a secondary in order to run the compact command?", "username": "Mark_De_May" }, { "code": "mongomongoshmongosh mongodb://<URI or IP of secondary>:<port> -u <user> -p <other parameters as needed>\n", "text": "Hi @Mark_De_May and welcome to the MongoDB Community forums. To connect to a secondary member you would use mongo (installed 5.0.x and earlier) or mongosh (installed with 6.0.x and later). Something like the following:Note that we are not passing in the password so it will prompt you for that. This is the safe way as your password is not exposed on the command line or in the command history.Once you connect, and authenticate to the secondary node you can do your compaction.", "username": "Doug_Duncan" }, { "code": "", "text": "Note that we are not passing in the password so it will prompt you for that. This is the safe way as your password is not exposed on the command line or in the command history.This is a very good advice.", "username": "steevej" }, { "code": "", "text": "Thanks for the help. I tried this and got a MongoServerSelectionError error. Any thoughts on what’s wrong?", "username": "Mark_De_May" }, { "code": "", "text": "Can you post a screenshot showing the error as I’m not familiar with that one. Please blur out host name and user name (if visible) as that’s not needed and will help protect your instance.", "username": "Doug_Duncan" }, { "code": "", "text": "\nScreen Shot 2022-09-23 at 9.18.16 AM947×74 21.5 KB\n", "username": "Mark_De_May" }, { "code": "--tls", "text": "Thanks for the screenshot Mark. Add --tls on to the command as Atlas forces secure connections.", "username": "Doug_Duncan" }, { "code": "", "text": "That worked!! Thanks again for the help.", "username": "Mark_De_May" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Compacting a replica set
2022-09-20T20:08:59.534Z
Compacting a replica set
2,808
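Once connected directly to a secondary as shown in the thread above, the compaction itself is just a database command. A minimal sketch with placeholder database and collection names:

```javascript
// Run against the secondary you connected to (add --tls on Atlas, as noted above).
db = db.getSiblingDB("mydb");                 // "mydb" / "mycollection" are placeholders
db.runCommand({ compact: "mycollection" });   // blocks until done, returns { ok: 1, ... }
// Repeat on each secondary, then rs.stepDown() the primary and compact it last.
```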
null
[ "aggregation", "queries", "atlas-search" ]
[ { "code": "arrayFieldOfObjectIds : [ObjectId(62ff26c349a3c47656765434), ObjectId(62ff26c349a3c47656765435)]arrayFieldOfObjectIds : []1. Either arrayFieldOfObjectIds doesn't exist.\n2. If arrayFieldOfObjectIds does exist, it must be empty.\n3. If arrayFieldOfObjectIds does exist, it must be equal to some specified value.\n{\n \"compound\": {\n \"should\": [\n {\n \"compound\": {\n \"mustNot\": [\n {\n \"exists\": {\n \"path\": \"arrayFieldOfObjectIds1\"\n }\n }\n ]\n }\n },\n {\n \"equals\": {\n \"value\": ObjectId(\"62ff26c349a3c47656765434\"),\n \"path\": \"arrayFieldOfObjectIds1\"\n }\n }\n ],\n \"minimumShouldMatch\": 1\n }\n}\narrayFieldOfObjectIds$search$match$search", "text": "I needed some help with the atlas search aggregation query.\nI want to use $search syntax while doing the atlas search :\nI have one collection; inside which I have one array field of ObjectIds like so :arrayFieldOfObjectIds : [ObjectId(62ff26c349a3c47656765434), ObjectId(62ff26c349a3c47656765435)]Now the array fields can also be empty in some cases like so :\narrayFieldOfObjectIds : []I have defined the correct index mapping for this field.\nMy goal is to get all the documents that met the below conditions:The query I have for arrayFieldOfObjectIds is :This query doesn’t give me those documents where arrayFieldOfObjectIds does exist and is empty.\nNote: I need to use $search syntax and I also don’t want to combine $match with $search as it kills the query performance altogether.Thanks in advance.", "username": "pawan_saxena1" }, { "code": "", "text": "Hi there,Atlas Search currently does not have an operator which checks for empty arrays. A suggested workaround is to add logic to your application which inserts a default value (e.g. boolean value false) to empty arrays, which you can check for using the equals operator, as you are already doing to check for ObjectIds.I would encourage you to provide some feedback about your needs here so that others can also vote for it, which will help us drive it forward!Hope this helps.", "username": "amyjian" } ]
Search for empty arrays in mongodb atlas
2022-08-20T21:49:38.065Z
Search for empty arrays in mongodb atlas
2,714
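The workaround suggested above — writing a sentinel value (e.g. `false`) into arrays that would otherwise be empty — makes the "empty" case matchable with `equals`. A hedged sketch of what the query side could then look like (the field name is taken from the question, the collection name is a placeholder):

```javascript
db.myCollection.aggregate([
  {
    $search: {
      compound: {
        should: [
          // 1. field missing entirely
          { compound: { mustNot: [{ exists: { path: "arrayFieldOfObjectIds" } }] } },
          // 2. "empty" array, represented by the sentinel the application writes
          { equals: { path: "arrayFieldOfObjectIds", value: false } },
          // 3. array contains the requested id
          { equals: { path: "arrayFieldOfObjectIds", value: ObjectId("62ff26c349a3c47656765434") } }
        ],
        minimumShouldMatch: 1
      }
    }
  }
]);
```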
https://www.mongodb.com/…e_2_1024x512.png
[ "node-js", "realm-web" ]
[ { "code": " at Runtime._loadModule (node_modules/jest-runtime/build/index.js:1218:29)\n at bindings (node_modules/bindings/bindings.js:112:48)\n at getRealmConstructor (node_modules/realm/lib/index.js:28:37)\n", "text": "Hi All,We have started the integration testing for our atlas functions as per the below link.const { app_id } = require(“…/…/realm_config.json”);\nconst Realm = require(“realm”);\nconst app = new Realm.App(app_id);We have already installed realm sdk.\nNow when we execute the command ‘npm test’ then we get bellow error message.Can you help on this ?Error:\\?\\C:\\Dev-Code\\Triggers\\node_modules\\realm\\build\\Release\\realm.node is not a valid Win32 application. \\?\\C:\\Dev-Code\\Triggers\\node_modules\\realm\\build\\Release\\realm.nodeThanks", "username": "passion_km_mongatlas" }, { "code": "", "text": "It’s working fine with ‘npm install [email protected]’.", "username": "passion_km_mongatlas" } ]
Realm.node is not a valid Win32 application - During Integration Testing
2022-09-23T08:55:55.865Z
Realm.node is not a valid Win32 application - During Integration Testing
2,996
https://www.mongodb.com/…_2_1024x833.jpeg
[]
[ { "code": "", "text": "I have a collection called “activities” the activity in each collection needs to be assign automaticaly to collection “listings, contacts and requests” like so based on an array of ids.if I have an activiy or more that containts the fields “listingsIds: [1, 10, 90, 3]”, “contactsIds: [c1, c9]” , “requestsIds: [req2, re3, req9]” the documents with the corresponding ids need to be updated in each collection with an array field like so “activitiesId: [a1]” in wichi later I’ll more activities. How can I achieve that?what I’ve done so far, but I’m not sure it’s okay is this.\n\nScreenshot 2022-09-22 at 15.30.571504×1224 205 KB\n", "username": "Mingo" }, { "code": "", "text": "Rather than describing your documents, please provide real sample documents we can cut-n-paste. It help us being more efficient in helping you.Same thing with your code. A text version we can cut-n-paste in our test and replies is more usefull than an image.", "username": "steevej" } ]
Insert / Update find in document in multiple collections
2022-09-22T12:31:28.316Z
Insert / Update find in document in multiple collections
790
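In the absence of sample documents, one hedged reading of the question above is: after inserting an activity, push its _id into every referenced listing, contact and request. The field and collection names below are taken from the description and are assumptions; the _id types must match whatever the ids in the arrays actually are.

```javascript
const activity = db.activities.findOne({ _id: activityId }); // activityId: the new activity's _id (placeholder)

db.listings.updateMany(
  { _id: { $in: activity.listingsIds } },
  { $addToSet: { activitiesId: activity._id } }   // $addToSet keeps the array duplicate-free
);
db.contacts.updateMany(
  { _id: { $in: activity.contactsIds } },
  { $addToSet: { activitiesId: activity._id } }
);
db.requests.updateMany(
  { _id: { $in: activity.requestsIds } },
  { $addToSet: { activitiesId: activity._id } }
);
```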
null
[ "atlas-functions", "react-native", "app-services-hosting" ]
[ { "code": "", "text": "Hi guys, i have a question about architecture,\nMy project are 2 react apps, similar to ecommerce shopify, the store admin app (aka app-admin) for store owners and the public store itself (aka app-customer), Im starting to work on the app-customer app and i need to use few of endpoints from app-admin then ideally i would like to use same realm app, but im not sure how to do that with the realm hosting, i need to deploy two differents apps and i would like to set a subdomain store.mydomain.com for the app-customer, is that possible?thanks.", "username": "Juan_Jose_N_A" }, { "code": "", "text": "Juan,Basically you really just need to distinguish between regular users and admin users. Upon login an admin user would go down one pathway, while a regular user would go down another. Presumably, you will have a user collection that records whether a user is a regular user or an admin user. At the beginning, you could just set the admin property for a user in Atlas. Later on, when you want to get fancy you could add an admin panel to set this property directly in a UI.", "username": "Richard_Krueger" }, { "code": "", "text": "The only distinction that should be made is between regular users and admin users. A regular user would follow one pathway after logging in, while an admin user would follow another. You probably have a user collection that tracks whether a user is an admin user or an ordinary user. In the beginning, you could only modify a user’s admin property in Atlas. However, if you are in Greater Kailash 1 (new delhi) and looking for Spa then, visit: Spa In GK1", "username": "Zyur_Thaispa" }, { "code": "", "text": "In contrast to an ordinary user, the user would take one pathway. You probably have a user collection that tracks whether a user is an admin user or an ordinary user. In the beginning, you could only modify a user’s admin property in Atlas. However, if you are in Greater kailash 2 and looking for spa then, visit: best spa in GK2", "username": "Zyur_Thaispa" }, { "code": "", "text": "A regular user would follow one pathway after logging in, while an admin user would follow another. You probably have a user collection that tracks whether a user is an admin user or an ordinary user. However, if you are in Safdarjung Enclave New Delhi and looking for Spa centre then, visit: Spa centre near me", "username": "Abhay_N_A1" } ]
2 SPA apps in one realm app?
2020-12-27T16:48:48.272Z
2 SPA apps in one realm app?
3,839
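A minimal sketch of the role check Richard describes, as it might look inside an Atlas Function shared by both front ends; the `isAdmin` custom-data field is an assumption — use whatever your user collection actually stores:

```javascript
exports = async function (payload) {
  // context.user.custom_data is populated from the linked user collection
  const isAdmin = context.user.custom_data && context.user.custom_data.isAdmin === true;
  if (!isAdmin) {
    throw new Error("This endpoint is restricted to store admins.");
  }
  // ... admin-only work for the app-admin front end goes here ...
  return { ok: true };
};
```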
null
[ "server", "configuration" ]
[ { "code": "", "text": "Good morning\nI enabled mongo authentication by editing the /etc/mongod.conf file by adding the following piecesecurity:\nauthorization: enabledBecause every time I reboot the machine I have to run the following command mongod -f /etc/mongod.conf\notherwise the authentication doesn’t work?What’s wrong?Thanks Alessio", "username": "Alessio_Rossato" }, { "code": "", "text": "I think you are starting your mongod manually\nIf you want the changes effect everytime you reboot you have to run your mongod as a service", "username": "Ramachandra_Tummala" }, { "code": "", "text": "mongo I set it with sudo systemctl enable mongod but it doesn’t go anywaywhat commands should i launch?Thanks", "username": "Alessio_Rossato" }, { "code": "", "text": "You enabled it but did you start?\nsudo systemctl start mongod\nThen check status\nBefore starting the service make sure you don’t have any mongod running which you might have started from command line", "username": "Ramachandra_Tummala" } ]
Authentication error security
2022-09-23T09:52:13.311Z
Authentication error security
2,136
null
[ "server", "storage" ]
[ { "code": "022-08-29T16:07:27.164+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3356M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],\n2022-08-29T16:07:27.784+0200 I STORAGE [initandlisten] WiredTiger message [1661782047:784353][1732977:0x7f886f9cdc00], txn-recover: Recovering log 1 through 3\n2022-08-29T16:07:27.784+0200 E STORAGE [initandlisten] WiredTiger error (0) [1661782047:784401][1732977:0x7f886f9cdc00], txn-recover: WT_COMPRESSOR.decompress: **stored size exceeds source size** Raw: [1661782047:784401][1732977:0x7f886f9cdc00], txn-recover: WT_COMPRESSOR.decompress: stored size exceeds source size\n2022-08-29T16:07:27.784+0200 E STORAGE [initandlisten] WiredTiger error (-31802) [1661782047:784431][1732977:0x7f886f9cdc00], txn-recover: __wt_txn_recover, 710: Recovery failed: WT_ERROR: non-specific WiredTiger error Raw: [1661782047:784431][1732977:0x7f886f9cdc00], txn-recover: __wt_txn_recover, 710: Recovery failed: WT_ERROR: non-specific WiredTiger error\n2022-08-29T16:07:27.784+0200 E STORAGE [initandlisten] WiredTiger error (0) [1661782047:784490][1732977:0x7f886f9cdc00], connection: __wt_cache_destroy, 346: cache server: exiting with 1 pages in memory and 0 pages evicted Raw: [1661782047:784490][1732977:0x7f886f9cdc00], connection: __wt_cache_destroy, 346: cache server: exiting with 1 pages in memory and 0 pages evicted\n", "text": "Hi,I am using MongoDB version 4.2 and it was working fine. But recently when I try to start the server it is failing with below error. Please can you let me know what the issue is?Thanks,\nAkshaya Srinivasan", "username": "Akshaya_Srinivasan" }, { "code": "WT_COMPRESSOR.decompress: stored size exceeds source sizedbPath", "text": "Hi @Akshaya_SrinivasanThe error WT_COMPRESSOR.decompress: stored size exceeds source size basically means that when WiredTiger tries to decompress data, it found that the uncompressed size is smaller than the compressed size. This is not supposed to happen, and it is pretty unexpected.Although there is no guarantee that this can be fixed other than restoring from a backup, could you provide more details:Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks Kevinadi.\nThis issue was seen when I tried to restore data from my backup.\nMy MongoDB server version is 4.2.14.\nI took backup from same machine and restored to the same machine.\nI took backup of dbPath with fsyncLock acquired and then unlocked it. Other than that there was no change made manually in the dbPath.\nAny steps to proceed further? Thanks in advance.Akshaya Srinivasan", "username": "Akshaya_Srinivasan" } ]
MongoDB server fails to start with Wiredtiger error stored size exceeds source size
2022-09-23T05:51:25.909Z
MongoDB server fails to start with Wiredtiger error stored size exceeds source size
2,442
null
[ "data-modeling", "atlas-device-sync", "storage" ]
[ { "code": "", "text": "Hi,I know the question in a similar form has already been asked. Nevertheless I want to ask it myself because I couldn’t get my head around it.I have an app wich uses a Team partition strategy for chats, a private (user) partition strategy for personal data and a “public” partition value for data that everyone can see but only the owner can write/change.My first question is about the user. If I don’t want to share all the data from the user in the “public” partition, I would need to create two similar collections in my schema (one collection for the publicUser and on for the privateUser). Is there a simpler way than creating two collections that are actually the same (beside the values)?I understand Realm that every Realm partition I open in the app, saves all the data from that partition on the device. That would mean for my public partition, that every user that creates data in the public partition, would force all the other users to save the created data on their local device as soon as the app opens the “public” Realm.If that’s correct, then how is an app with Realm Sync and MongoDB like Airbnb possible, which has a lot of public data that can’t be stored locally on devices. Is there a solution to have an only Online partition OR to never save data on the device when there is only a read permission?Thanks!", "username": "Jannis_Gunther" }, { "code": "downloadBeforeOpen .never", "text": "I actually have a very similar question… I have a realm with a PUBLIC partition, accessible by all users. Should I expect an issue in the future if this realm grows significantly in size? I set the downloadBeforeOpen parameter, while opening the realm, on .never but I am not 100% sure it alleviates the potential problem.", "username": "Sonisan" }, { "code": "", "text": "Hi @Sonisan,Since the original question was posted, Flexible Sync has been introduced, that solves the general scenario, please have a look at its features.", "username": "Paolo_Manna" }, { "code": "", "text": "Hi @Paolo_MannaThank you for your reply!\nI see… I’ll take a look. Hopefully the transition from partitions is not too painful. ", "username": "Sonisan" }, { "code": "onAppear().OnDisappear()", "text": "Hi @Paolo_Manna,I managed to transition to flexible sync (which is great by the way!). I am now wondering what would be the “best” way to remove subscriptions in a SwiftUI app.For example, I have an iOS app where one tab is showing user-generated content (the PUBLIC part I was referring to earlier) that I would like to purge from user’s local storage at some point. At the moment, I am trying to add subscriptions through the MyPublicTab.onAppear() and remove them with MyPublicTab.OnDisappear(). But unfortunately, it looks like removing subscriptions takes a while, which puts the app in an inconsistent state, if the user navigates quickly through the app. The worst case is to have a “clear the cache” option, but any better suggestion is appreciated. ", "username": "Sonisan" }, { "code": "awaitPUBLIC", "text": "Hi @Sonisan,Yes, adding and removing subscriptions that move potentially large data sets is not ideal for performance… You can await for the subscription to be updated, but that’s not great user experience either.Perhaps you should devise a subscription that can trickle data in and out in small chunks, that would go (almost) unnoticed. 
For example, you can always be subscribed to the PUBLIC part, but put a time limit (say, since last day, or last visit), and update the limit (up or down) only when the user gets near to it. Nudging subscriptions, instead of removing and creating them, should be much more efficient.Take all the above with a grain of salt, of course, as Flexible Sync is relatively new, in its current form: for more information, you can read the technical articles from one of the engineers that built it, there are a lot of hints you can take from there!", "username": "Paolo_Manna" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Having a lot of data in one "public" partition Realm
2021-10-16T15:36:25.290Z
Having a lot of data in one &ldquo;public&rdquo; partition Realm
3,512
null
[ "dot-net", "crud" ]
[ { "code": "static void Main(string[] args)\n{\n var cnx = new MongoClient();\n var db = cnx.GetDatabase(\"mq\");\n db.DropCollection(\"lock\");\n var col = db.GetCollection<BsonDocument>(\"lock\");\n\n ObjectId id = ObjectId.GenerateNewId();\n BsonDocument t = new BsonDocument().Add(\"hello\", \"world\");\n Random rnd = new Random();\n var fb = Builders<BsonDocument>.Filter;\n var ub = Builders<BsonDocument>.Update;\n var ud = ub.CurrentDate(\"dt\").Inc(\"run\", 1).SetOnInsert(\"data\", t);\n\n List<Task> tasks = new List<Task>();\n List<double> mean = new List<double>();\n\n for (int i = 0; i < 100; i++)\n {\n int j = i;\n tasks.Add(Task.Run(async () =>\n {\n await Task.Delay(rnd.Next(10, 100));\n DateTime start = DateTime.Now;\n BsonDocument d = await col.FindOneAndUpdateAsync(\n fb.Eq(\"_id\", id),\n ud,\n new FindOneAndUpdateOptions<BsonDocument>()\n { IsUpsert = true, ReturnDocument = ReturnDocument.After }\n );\n mean.Add((DateTime.Now - start).TotalMilliseconds);\n\n if (d[\"run\"].AsInt32 == 1)\n {\n Console.WriteLine($\"Process {j} have the lock\");\n await Task.Delay(rnd.Next(10, 100));\n await col.DeleteOneAsync(fb.Eq(\"_id\", id));\n }\n }));\n }\n try\n {\n Task.WaitAll(tasks.ToArray());\n }\n catch (Exception ex)\n {\n do\n {\n Console.WriteLine(ex.Message);\n ex = ex.InnerException;\n }\n while (ex != null);\n }\n\n Console.WriteLine($\"Average = {mean.Average()}, Min = {mean.Min()}, Max = {mean.Max()}\");\n Console.WriteLine(\"Press return to exit.\");\n Console.ReadLine();\n}\n", "text": "Hi,I’m trying to use mongodb to have a multi-process lock system. I use FindOneAndUpdate atomicity for that.\nThis is not to be use for high speed lock, I use this sometime in my code.\nI’m using the latest C# driver from Github and mongodb for Windows version v3.6.10.Sometime I get exception “E11000 duplicate key error collection”.\nIt’s quite easy to reproduce, run the process 2 or 3 time to get it.Why is it possible to get this exception ?\nHow to fix this ?RemiProcess 88 have the lock\nProcess 60 have the lock\nOne or more errors occurred.\nCommand findAndModify failed: E11000 duplicate key error collection: mq.lock index: id dup key: { : ObjectId(‘632d4d5ed0371a56dcdd187a’) }.\nAverage = 200,778591919192, Min = 59,0069, Max = 313,0022\nPress return to exit.", "username": "Remi_Thomas" }, { "code": "", "text": "I reply to myself.The problem is the version of mongodb engine I have to use.\nWith a more recent version I don’t have the problem.Remi", "username": "Remi_Thomas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why this code create an exception?
2022-09-23T06:14:17.549Z
Why this code create an exception?
1,259
null
[ "dot-net", "compass" ]
[ { "code": "string SSHServerUserName = \"ubuntu\";\nstring SSHServerHost = \"x.xx.xxx.xxx\"; //public IP of ubuntu server as per AWS\nPrivateKeyFile keyFile = new PrivateKeyFile(@\"H:\\MongoDB.pem\", \"filePass\"); \nPrivateKeyAuthenticationMethod authenticationMethod = new PrivateKeyAuthenticationMethod(SSHServerUserName, keyFile);\n\nConnectionInfo connectionInfo = new ConnectionInfo(SSHServerHost, SSHServerUserName, authenticationMethod); //uses DefaultPort = 22\n\nSshClient sshClient = new SshClient(connectionInfo);\nsshClient.ErrorOccurred += delegate { Debug.WriteLine(\"SSH ERROR OCCURRED\"); };\nsshClient.HostKeyReceived += delegate { Debug.WriteLine(\"SSH HOST KEY RECEIVED\"); };\nsshClient.Connect();\n\nif (sshClient.IsConnected) {\n\n string MongoDBHost = \"xx.xx.xx.xx\"; // **PRIVATE IP OF UBUNTU AWS EC2? \n uint MongoDBPort = 27017;\n\n ForwardedPortLocal forwardedPortLocal = new ForwardedPortLocal(\"127.0.0.1\", 5477, MongoDBHost, MongoDBPort);\n\n forwardedPortLocal.Exception += delegate { Debug.WriteLine(\"FORWARDED PORT LOCAL EXCEPTION\"); };\n forwardedPortLocal.RequestReceived += delegate { Debug.WriteLine(\"FORWARDED PORT REQUEST RECEIVED\"); };\n\n sshClient.AddForwardedPort(forwardedPortLocal);\n forwardedPortLocal.Start();\n\n MongoClientSettings mongoSettings = new MongoClientSettings();\n mongoSettings.Server = new MongoServerAddress(\"localhost\", 5477); //IS THIS RIGHT?\n MongoClient mongoClient = new MongoClient(mongoSettings);\n\n var iMongoDatabase = mongoClient.GetDatabase(\"test\");\n\n var profiles = iMongoDatabase.GetCollection<IConvertibleToBsonDocument>(\"profiles\");\n\n Debug.WriteLine(\"GOT THE COLLECTION:\"); //debugs out okay...\n\n var document = new BsonDocument {\n { \"name\", \"userName\" },\n { \"type\", \"newUser\" },\n { \"count\", 1 },\n { \"info\", new BsonDocument\n {\n { \"x\", 203 },\n { \"y\", 102 }\n } }\n };\n\n profiles.InsertOne(document);\n\n Debug.Write(\"ATTEMPTED TO INSERT ONE\"); //does not debug out\n}\nsshClient.IsConnected()ATTEMPTED TO INSERT ONEFORWARDED PORT REQUEST RECEIVED", "text": "I have two AWS EC2 instances:I am attempting to write a program in C# .NET that will be able to connect to the Ubuntu server and its MongoDB when run from my local computer or from the Windows Server.I am using Renci SSH.NET and the standard MongoDB driver. For networking, I found one old guide here which helped me get part way. I have so far:I am getting sshClient.IsConnected() as true. However, I do not know if my forwarded port is working correctly or to what extent it is or isn’t connecting to the Mongo Database. I can see from MongoDB Compass (GUI database navigator) nothing is being inserted, and I can’t get the ATTEMPTED TO INSERT ONE debug code to write so it’s certainly breaking somewhere before that point.Questions:This is my first database and EC2 configuration. Thanks for any help.", "username": "MikeM" }, { "code": "", "text": "I see a thread here where someone says they are trying to do the same thing and they were told it is impossible. Similarly, here they were told it’s impossible.However that was 8 years ago. Is it still impossible to connect to a MongoDB by C#? I can get an SSH connection to my EC2 instance as shown. However, I can’t see any method to connect to the MongoClient. I have TLS/SSL disabled and no username or password when I connect in Compass.If we are not able to connect to a MongoDB by C#, how are we supposed to connect? 
C# is my preferred language but if I have to learn another language to use MongoDB I will grudgingly do so.Any solution or ideas? How do you set up a server or program that can insert documents or manipulate a MongoDB?", "username": "MikeM" }, { "code": "SSH tunnelssh", "text": "Hello @MikeM ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, then you can try achieving your goal using SSH tunnel which could be setup outside the script instead of trying to do so programatically via C#. You can set up a forward tunnel from the Windows machine to the Ubuntu machine, or a reverse tunnel from the other direction. These can be set up using the ssh command in Ubuntu (for a reverse tunnel), or Putty in Windows (for a forward tunnel).Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connecting from Windows EC2 server to a Ubuntu MongoDB EC2 server via SSH and port forwarding?
2022-09-10T12:00:55.126Z
Connecting from Windows EC2 server to a Ubuntu MongoDB EC2 server via SSH and port forwarding?
1,936
null
[]
[ { "code": "", "text": "Hello;We have authorized developers to create indexes in our system, but we want to be informed when they create an index.Do we have an option to get an alarm when an index is created on the database?I’m open to your suggestions on this subject.Thaks", "username": "Sercan_Ersan" }, { "code": "", "text": "Hi @Sercan_Ersan - Welcome to the community Can you provide the following information:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hello @Jason_Tran, thank you for your answer.Regards,\nSercan", "username": "Sercan_Ersan" }, { "code": "\ndb = db.getSiblingDB(\"admin\");\ndbs = db.runCommand({ \"listDatabases\": 1}).databases;\n\n\ndbs.forEach(function(database) {\ndb = db.getSiblingDB(database.name);\ncols = db.getCollectionNames();\n\ncols.forEach(function(col) {\n\n indexes = db[col].getIndexes();\n\n indexes.forEach(function(idx) {\n print(\"Database:\" + database.name + \" | Collection:\" +col+ \" | Index:\" + idx.name);\n printjson(indexes);\n });\n\n\n });\n\n});\n", "text": "@Jason_TranI think there is no development where we can get alerts in this way, at least for mongodb 4.4… versions.I also wanted to do a work like this, but for now I have a problem there too.With the above code block, we can see the indexes in all databases under the cluster.Our query output is like this;\nIs there a way for me to insert this output into a collection?", "username": "Sercan_Ersan" }, { "code": "", "text": "Hi @Sercan_Ersan,Thanks for clarifying those details. There currently isn’t an Atlas alert that can be created for when an index creation is submitted. If you would like this feature to be added, I would suggest you to file a feature request via feedback.mongodb.com being sure to include all your use case details. From that platform, you will be able to interact with our Product Management team, keep track of the progress of your request, and make this visible to other users.Is there a way for me to insert this output into a collection?One idea based off what you have provided from the output screenshot is to log the namespace details & index details (from the output) on the application side and insert them to a collection so that you can use to monitor indexes. This could then be made to be a CRON / scheduled job so that you can check every now and then to verify the appropriate indexes are in place. Additionally, you can consider limiting access to this collection to only certain database users. Please see Configure Custom Database Roles for more information on this.However, this leads to another problem - How or when to know when an index is removed?I believe it may be better to solve this from a workflow perspective. One suggestion could be:Allowing your developers to create indexes in the dev environment and have some auditing to pick up when a new index is created. Since you’re on MongoDB Atlas, you could Set up Database Auditing to audit createIndex and dropIndex actions although this feature is only available on M10+ tier clusters. You can then retrieve the mongodb-audit-log’s and filter for if these actions have occurred periodically.Example:\n\nimage832×690 14.5 KB\nDisallow creating indexes in the production environment outside of an approved deployment process. I.e. Only indexes that you’ve approved (through an internal process) from Dev can be created on Production.Of course, all the above would depend on your use case and requirements. 
Would you be able to advise if the indexes are being created manually or via code?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can i get an alarm when an index created?
2022-09-05T06:56:18.942Z
How can i get an alarm when an index created?
1,794
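For the last question in the thread above — persisting the script's output instead of printing it — the same traversal can write one document per index into an inventory collection, which also makes dropped indexes detectable by diffing successive runs. The database and collection names below are placeholders:

```javascript
const inventory = db.getSiblingDB("dbadmin").indexInventory; // placeholder namespace

db.getSiblingDB("admin").runCommand({ listDatabases: 1 }).databases.forEach(function (database) {
  const current = db.getSiblingDB(database.name);
  current.getCollectionNames().forEach(function (col) {
    current[col].getIndexes().forEach(function (idx) {
      inventory.insertOne({
        db: database.name,
        collection: col,
        index: idx.name,
        key: idx.key,
        capturedAt: new Date()
      });
    });
  });
});
```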
null
[ "replication" ]
[ { "code": "\"t\": {\n \"$date\": \"2022-09-07T14:49:27.984+00:00\"\n },\n \"s\": \"I\",\n \"c\": \"NETWORK\",\n \"id\": 4712102,\n \"ctx\": \"ReplicaSetMonitor-TaskExecutor\",\n \"msg\": \"Host failed in replica set\",\n \"attr\": {\n \"replicaSet\": \"mongors\",\n \"host\": \"mongodb3:27017\",\n \"error\": {\n \"code\": 202,\n \"codeName\": \"NetworkInterfaceExceededTimeLimit\",\n \"errmsg\": \"Couldn't get a connection within the time limit of 524ms\"\n },\n \"action\": {\n \"dropConnections\": false,\n \"requestImmediateCheck\": false,\n \"outcome\": {\n \"host\": \"mongodb3:27017\",\n \"success\": false,\n \"errorMessage\": \"NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit of 524ms\"\n }\n }\n }\n", "text": "Keep on getting following error in the log but no impact in accessing the system", "username": "Rekha_Jadala" }, { "code": "", "text": "Welcome to The MongoDB Community Forums @Rekha_Jadala ! Could you please provide more details of your environment including:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Deployment Environment - EC2 instances\nMongodb version 4.4.15\nPreviously working\npatch applied", "username": "Rekha_Jadala" }, { "code": "mongodb3:27017", "text": "Thank you @Rekha_Jadala , could you also share the output for rs.status() and rs.conf()?Also, please correct me If I am wrong in understanding your use-case, you are able to connect to your replica set node mongodb3:27017 without any issue but keep on seeing this message in the logs?", "username": "Tarun_Gaur" } ]
ReplicaSetMonitor-TaskExecutor Host failed in replica set
2022-09-07T15:05:59.719Z
ReplicaSetMonitor-TaskExecutor Host failed in replica set
2,511
null
[ "dot-net", "android", "flexible-sync" ]
[ { "code": "[libc] /buildbot/src/android/ndk-r25-release/toolchain/llvm-project/libcxx/../../../toolchain/llvm-project/libcxxabi/src/abort_message.cpp:72: abort_message: assertion \"terminating with uncaught exception of type realm::LogicError: Binary too big\n[libc] Exception backtrace:\n[libc] <backtrace not supported on this platform>\" failed\n", "text": "Language: C#Nuget Version: 10.15.1Server Type: M2When I the data is synced, the android app crashes and following error is printed in console.", "username": "Ahmad_Pasha" }, { "code": "", "text": "Hi, as the message says you’re trying to insert something that is too big. What kind of data are you trying to store?", "username": "Andrea_Catalini" }, { "code": "", "text": "Thanks for reply,\nI am not uploading anything when this error occurs.\nIt occurs when I subscribe to the topic", "username": "Ahmad_Pasha" }, { "code": "", "text": "Mmmm. This is strange and doesn’t ring any bell.\nCould you show us some code? There should be at leastThanks", "username": "Andrea_Catalini" }, { "code": " public async Task<Realm> GetDbInstanceAsync()\n {\n if (_realmApp != null && _realmApp.CurrentUser != null)\n {\n if (_flexibleSyncConfiguration == null)\n {\n _flexibleSyncConfiguration = new FlexibleSyncConfiguration(_realmApp.CurrentUser);\n Realm.Compact(_flexibleSyncConfiguration);\n }\n\n Realm realm = null ;\n\n // The async call below will only return once sync has run. If we have no internet, it won't return!\n // Therefore, use the async call when we have no database file.\n if (!File.Exists(_flexibleSyncConfiguration.DatabasePath))\n realm = await Realm.GetInstanceAsync(_flexibleSyncConfiguration);\n else\n realm = Realm.GetInstance(_flexibleSyncConfiguration);\n\n return realm;\n }\n\n return null;\n }\n //Check if subscriptions are already been subscribed or not.\n var userSubscriptionName = \"user_id\";\n var userSubscription = realm.Subscriptions.FirstOrDefault(x => x.Name == userSubscriptionName);\n var shouldSubscribeForUser = userSubscription == null;\n\n var chatsterSubscriptionName = \"all_chatsters\";\n var chatstersSubscription = realm.Subscriptions.FirstOrDefault(x => x.Name == chatsterSubscriptionName);\n var shouldSubscribeForChatsters = chatstersSubscription == null;\n\n var conversationsSubscriptionName = \"conversations\";\n var conversationsSubscription = realm.Subscriptions.FirstOrDefault(x => x.Name == conversationsSubscriptionName);\n var shouldSubscribeForConversations = conversationsSubscription == null;\n\n var conversationPrefSubscriptionName = \"conversationsPrefs\";\n var conversationPrefSubscription = realm.Subscriptions.FirstOrDefault(x => x.Name == conversationPrefSubscriptionName);\n var shouldSubscribeForConversationPref = conversationPrefSubscription == null;\n\n var contactEntrySubscriptionName = \"contactEntries\";\n var contactEntriesSubscription = realm.Subscriptions.FirstOrDefault(x => x.Name == contactEntrySubscriptionName);\n var shouldSubscribeForContactEntries = contactEntriesSubscription == null;\n\n var timelineSharedSubscriptionName = \"timelinesShared\";\n var timelineSharedSubscription = realm.Subscriptions.FirstOrDefault(x => x.Name == timelineSharedSubscriptionName);\n var shouldSubscribeForTimelineShared = timelineSharedSubscription == null;\n\n var selfTimelineSubscriptionName = \"selfTimelines\";\n var selfTimelineSubscription = realm.Subscriptions.FirstOrDefault(x => x.Name == selfTimelineSubscriptionName);\n var shouldSubscribeForSelfTimeline = selfTimelineSubscription == null;\n\n if 
(shouldSubscribeForUser || shouldSubscribeForChatsters || shouldSubscribeForConversations || shouldSubscribeForConversationPref || shouldSubscribeForContactEntries || shouldSubscribeForTimelineShared || shouldSubscribeForSelfTimeline)\n {\n realm.Subscriptions.Update(() =>\n {\n try\n {\n if (shouldSubscribeForUser)\n {\n var queryUsers = realm.All<User>().Where(o => o.Id == id);\n realm.Subscriptions.Add(queryUsers, new SubscriptionOptions()\n {\n Name = userSubscriptionName,\n UpdateExisting = true\n });\n }\n //If I Comment out this block, app starts working\n if (shouldSubscribeForChatsters)// This Subscription is causing the crash\n {\n var queryChatster = realm.All<Chatster>();//.Where(o => o.UserName != \"\");\n realm.Subscriptions.Add(queryChatster, new SubscriptionOptions()\n {\n Name = \"all_chatsters\",\n UpdateExisting = true\n });\n }\n\n if (shouldSubscribeForConversations)\n {\n var queryConversation = realm.All<Conversation>().Where(o => o.AuthorID == id);\n realm.Subscriptions.Add(queryConversation, new SubscriptionOptions()\n {\n //Name = \"conversation\",\n Name = \"conversations\",\n UpdateExisting = true\n });\n }\n\n if (shouldSubscribeForConversationPref)\n {\n var queryConversation = realm.All<ConversationPreferences>().Where(o => o.AuthorID == id);\n realm.Subscriptions.Add(queryConversation, new SubscriptionOptions()\n {\n //Name = \"conversation\",\n Name = \"conversationsPrefs\",\n UpdateExisting = true\n });\n }\n\n if (shouldSubscribeForContactEntries)\n {\n var queryConversation = realm.All<ContactEntry>().Where(o => o.AuthorID == id);\n realm.Subscriptions.Add(queryConversation, new SubscriptionOptions()\n {\n //Name = \"conversation\",\n Name = \"contactEntries\",\n UpdateExisting = true\n });\n }\n\n if (shouldSubscribeForTimelineShared)\n {\n var querySharedTimeline = realm.All<TimelineShared>().Where(o => o.AuthorID == id);\n realm.Subscriptions.Add(querySharedTimeline, new SubscriptionOptions()\n {\n Name = \"timelinesShared\",\n UpdateExisting = true\n });\n }\n\n if (shouldSubscribeForSelfTimeline)\n {\n var querySelfTimeline = realm.All<Timeline>().Where(o => o.AuthorID == id);\n realm.Subscriptions.Add(querySelfTimeline, new SubscriptionOptions()\n {\n Name = \"selfTimelines\",\n UpdateExisting = true\n });\n }\n }\n catch (Exception e)\n {\n Console.WriteLine(e.Message);\n }\n });\n\n await realm.Subscriptions.WaitForSynchronizationAsync();\n }\n if(shouldSubscribeForUser)\n NotificationsService.GetInstance().NotifySyncCompleted(objectType: ObjectType.User);\n NotificationsService.GetInstance().NotifySyncCompleted(objectType: ObjectType.Chatster);\n NotificationsService.GetInstance().NotifySyncCompleted(objectType: ObjectType.ConversationPrefs);\n NotificationsService.GetInstance().NotifySyncCompleted(objectType: ObjectType.ContactEntry);\n NotificationsService.GetInstance().NotifySyncCompleted(objectType: ObjectType.Timeline);\n NotificationsService.GetInstance().NotifySyncCompleted(objectType: ObjectType.TimelineShared);\n }\n }\n catch (Exception e)\n {\n Console.WriteLine(e.Message);\n }\n }", "text": "Creation of Configuration and Opening of Realm:And the part where the error is occurring is where the subscription is made:public async Task SetupBasicSubscriptionsAsync()\n{\ntry\n{\nif (_dbService == null)\n{\n_dbService = Ioc.Default.GetService();\n}\nvar realm = await _dbService.GetDbInstanceAsync();//_dbService.GetDbInstance();\nif (realm != null)\n{\nstring id = _dbService.GetRealmApp().CurrentUser.Id;", "username": "Ahmad_Pasha" }, { 
"code": "", "text": "And Another thing I noticed is that the Chatster collection size is more that 16MB as following.\nand I am subscribing to the whole collection.", "username": "Ahmad_Pasha" }, { "code": "", "text": "Hi @Ahmad_Pasha , can you show use how is your schema defined on the app?", "username": "papafe" }, { "code": "", "text": "@papafe Sorry for late reply.", "username": "Ahmad_Pasha" }, { "code": "", "text": "@papafe I would also like to add that, when I changed the subscriptions method to subscribing one-by-one it started working.\nSo from this behavior, I guess there is a limit to how much data we can subscribe to in a single subscription?And also,Hoping you could tell me if there is any limit to the number of subscriptions you can have at one time?", "username": "Ahmad_Pasha" }, { "code": "{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"avatarImage\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"displayName\": {\n \"bsonType\": \"string\"\n },\n \"lastSeenAt\": {\n \"bsonType\": \"date\"\n },\n \"presence\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [\n \"_id\",\n \"presence\",\n \"userName\"\n ],\n \"title\": \"Chatster\"\n}\n", "text": "@papafe Here is the schema for that collection", "username": "Ahmad_Pasha" }, { "code": "", "text": "@Ahmad_Pasha there shouldn’t be any limit to the number of subscriptions you can have.Regarding your problem it would be really useful to see the stack trace, but unfortunately there is an issue with Android regarding producing stack traces. Would it be possible for you to try to run the project on iOS/MacOS and get the stack trace?", "username": "papafe" }, { "code": "", "text": "@papafe thanks for reply,\nI would not be able to get MacOS/iOS stack trace.", "username": "Ahmad_Pasha" }, { "code": "", "text": "@Ahmad_Pasha No worries.\nI will produce a special nuget for you with debug enabled, so we should be able to get more logging.", "username": "papafe" }, { "code": "", "text": "Ok I will wait @papafe .\nThanks alot", "username": "Ahmad_Pasha" }, { "code": "", "text": "Sorry for the delay, it took some time for the build.\nYou can use package with version 10.15.1-pr-3029.526 from our night builds (instructions here).\nLet me know how it goes", "username": "papafe" }, { "code": "", "text": "@papafe Thanks for response.\nSorry, I would check the new nuget package in the next week,\ncause of an unexpected task which needs immediate attention.", "username": "Ahmad_Pasha" } ]
Binary too big Realm Sync
2022-09-07T09:11:23.756Z
Binary too big Realm Sync
4,115
null
[]
[ { "code": "", "text": "Hi,I am currently in the middle of upgrading major MongoDB Version for my production cluster and base from the [upgrade procedure] (https://www.mongodb.com/docs/atlas/tutorial/major-version-change/) I’m stuck at number 7, where I need to test our application against the upgraded staging cluster before I can finally upgrade my production cluster.I wish to know what is the best way to connect the backend realm function to a staging cluster without affecting our production cluster?The purpose is to test if our backend application is working well after the major upgrade version.", "username": "Najwa_Najihah" }, { "code": "realm-cli", "text": "I wish to know what is the best way to connect the backend realm function to a staging cluster without affecting our production cluster?Hi @Najwa_Najihah,I would deploy a copy of your functions in your staging cluster for validation.You can keep functions in sync using GitHub deployment for Atlas App Services or the App Services Command Line Interface (realm-cli).The information on Setting up a CI/CD Pipeline for App Services may be a useful reference.Regards,\nStennie", "username": "Stennie_X" } ]
How to connect application to MongoDB staging cluster after major upgrade cluster
2022-09-23T01:04:40.425Z
How to connect application to MongoDB staging cluster after major upgrade cluster
1,041
null
[ "replication", "java", "mongodb-shell", "transactions", "kafka-connector" ]
[ { "code": " {\n\"name\": \"mongo-sourceV2\",\n\"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"connection.uri\": \"mongodb://mongo1:27017/?replicaSet=rs0\",\n \"database\": \"quickstart\",\n \"collection\": \"transactionV2\",\n \"pipeline\": \"[{\\\"$match\\\":{\\\"operationType\\\": { \\\"$in\\\": [ \\\"update\\\",\\\"insert\\\" ]}}}]\"\n}}\n{\n\"name\": \"mongo-sinkV2\",\n\"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"connection.uri\": \"mongodb://mongo1:27017/?replicaSet=rs0\",\n \"database\": \"quickstart\",\n \"collection\": \"transactionV1\",\n \"topics\": \"quickstart.transactionV2\",\n \"errors.tolerance\": \"all\",\n \"errors.log.enable\": true,\n \"mongo.errors.tolerance\": \"all\",\n \"mongo.errors.log.enable\": true,\n \"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler\"\n}}\n{\"schema\":{\"type\":\"string\",\"optional\":false},\"payload\":\"{\\\"_id\\\": {\\\"_data\\\": \\\"8262A512F7000000012B022C0100296E5A1004195DB8CC822F4A4FAE4ECCE5917B98A946645F6964006462A512A5F74E67E722B3B6760004\\\"}, \\\"operationType\\\": \\\"update\\\", \\\"clusterTime\\\": {\\\"$timestamp\\\": {\\\"t\\\": 1654985463, \\\"i\\\": 1}}, \\\"ns\\\": {\\\"db\\\": \\\"quickstart\\\", \\\"coll\\\": \\\"transactionV2\\\"}, \\\"documentKey\\\": {\\\"_id\\\": {\\\"$oid\\\": \\\"62a512a5f74e67e722b3b676\\\"}}, \\\"updateDescription\\\": {\\\"updatedFields\\\": {\\\"amount\\\": 10001}, \\\"removedFields\\\": [], \\\"truncatedArrays\\\": []}}\"}\n[2022-06-11 22:11:07,195] ERROR Unable to process record SinkRecord{kafkaOffset=9, timestampType=CreateTime} ConnectRecord{topic='quickstart.transactionV2', kafkaPartition=0, key={\"_id\": {\"_data\": \"8262A512F7000000012B022C0100296E5A1004195DB8CC822F4A4FAE4ECCE5917B98A946645F6964006462A512A5F74E67E722B3B6760004\"}}, keySchema=Schema{STRING}, value={\"_id\": {\"_data\": \"8262A512F7000000012B022C0100296E5A1004195DB8CC822F4A4FAE4ECCE5917B98A946645F6964006462A512A5F74E67E722B3B6760004\"}, \"operationType\": \"update\", \"clusterTime\": {\"$timestamp\": {\"t\": 1654985463, \"i\": 1}}, \"ns\": {\"db\": \"quickstart\", \"coll\": \"transactionV2\"}, \"documentKey\": {\"_id\": {\"$oid\": \"62a512a5f74e67e722b3b676\"}}, \"updateDescription\": {\"updatedFields\": {\"amount\": 10001}, \"removedFields\": [], \"truncatedArrays\": []}}, valueSchema=Schema{STRING}, timestamp=1654985467191, headers=ConnectHeaders(headers=)} (com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData)\n\norg.apache.kafka.connect.errors.DataException: Warning unexpected field(s) in updateDescription [truncatedArrays]. {\"updatedFields\": {\"amount\": 10001}, \"removedFields\": [], \"truncatedArrays\": []}. Cannot process due to risk of data loss.\nat com.mongodb.kafka.connect.sink.cdc.mongodb.operations.OperationHelper.getUpdateDocument(OperationHelper.java:99)\nat com.mongodb.kafka.connect.sink.cdc.mongodb.operations.Update.perform(Update.java:57)\nat com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler.handle(ChangeStreamHandler.java:84)\nat com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$3(MongoProcessedSinkRecordData.java:99)\nat java.base/java.util.Optional.flatMap(Optional.java:294)\nat com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$4(MongoProcessedSinkRecordData.java:99)\n", "text": "Hi All,\nIam new to MongoDB Kafka connect. 
I am trying sync the change stream from 1 mongo collection to another using Kafka connectors, both Inserts and updates operations\nUsing the quickstart guide published for kafka-connectors.Source config-Sink Config -Kafka Topic Event for update -My Inserts are streaming fine but the updates are failing on the sink connector side with Exceptionmongodb", "username": "Ambuj_Mehra" }, { "code": "{\n \"name\": \"mongo-simple-source\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"connection.uri\": \"yourMongodbUri\",\n \"database\": \"yourDataBase\",\n \"collection\": \"yourCollection\",\n \"change.stream.full.document\": \"updateLookup\"\n }\n}\n", "text": "I had the same issue here and i could solve that setting up the following configuration at the Source Connector :\"change.stream.full.document\": \"updateLookup\"A Full Exemple:", "username": "Davi_Crystal" } ]
MongoDB Kafka Connect - Sink connector failing for updates
2022-06-12T05:41:11.748Z
MongoDB Kafka Connect - Sink connector failing for updates
3,265
null
[ "java", "change-streams", "kafka-connector" ]
[ { "code": "\"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler\" ERROR Unable to process record SinkRecord{kafkaOffset=3, timestampType=CreateTime} ConnectRecord{topic='quickstart.sampleData', kafkaPartition=0, key={\"_id\": {\"_data\": \"8262A5CD4B000000012B022C0100296E5A1004B80560BF7F114B04962A5F523CEAB5D046645F6964006462A5CC9B84956FD488691BF10004\"}}, keySchema=Schema{STRING}, value={\"_id\": {\"_data\": \"8262A5CD4B000000012B022C0100296E5A1004B80560BF7F114B04962A5F523CEAB5D046645F6964006462A5CC9B84956FD488691BF10004\"}, \"operationType\": \"update\", \"clusterTime\": {\"$timestamp\": {\"t\": 1655033163, \"i\": 1}}, \"ns\": {\"db\": \"quickstart\", \"coll\": \"sampleData\"}, \"documentKey\": {\"_id\": {\"$oid\": \"62a5cc9b84956fd488691bf1\"}}, \"updateDescription\": {\"updatedFields\": {\"hello\": \"moto\"}, \"removedFields\": [], \"truncatedArrays\": []}}, valueSchema=Schema{STRING}, timestamp=1655033166742, headers=ConnectHeaders(headers=)} (com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData)\norg.apache.kafka.connect.errors.DataException: Warning unexpected field(s) in updateDescription [truncatedArrays]. {\"updatedFields\": {\"hello\": \"moto\"}, \"removedFields\": [], \"truncatedArrays\": []}. Cannot process due to risk of data loss.\nat com.mongodb.kafka.connect.sink.cdc.mongodb.operations.OperationHelper.getUpdateDocument(OperationHelper.java:99)\nat com.mongodb.kafka.connect.sink.cdc.mongodb.operations.Update.perform(Update.java:57)\nat com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler.handle(ChangeStreamHandler.java:84)\nat com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$3(MongoProcessedSinkRecordData.java:99)\nat java.base/java.util.Optional.flatMap(Optional.java:294)\n{\"schema\":{\"type\":\"string\",\"optional\":false},\"payload\":\"{\\\"_id\\\": {\\\"_data\\\": \\\"8262A5CD4B000000012B022C0100296E5A1004B80560BF7F114B04962A5F523CEAB5D046645F6964006462A5CC9B84956FD488691BF10004\\\"}, \\\"operationType\\\": \\\"update\\\", \\\"clusterTime\\\": {\\\"$timestamp\\\": {\\\"t\\\": 1655033163, \\\"i\\\": 1}}, \\\"ns\\\": {\\\"db\\\": \\\"quickstart\\\", \\\"coll\\\": \\\"sampleData\\\"}, \\\"documentKey\\\": {\\\"_id\\\": {\\\"$oid\\\": \\\"62a5cc9b84956fd488691bf1\\\"}}, \\\"updateDescription\\\": {\\\"updatedFields\\\": {\\\"hello\\\": \\\"moto\\\"}, \\\"removedFields\\\": [], \\\"truncatedArrays\\\": []}}\"}\ncom.mongodb.kafka.connect.sink.cdc.mongodb.operations.OperationHelper.getUpdateDocument(OperationHelper.java:99)\n", "text": "I am using ChangeStreamHandler in mongo Kafka sink connector to stream changes from mongo source\"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler\"On processing updates events from the source MongoDB change events, the change stream handler is failing with an exceptionBelow is the Change stream event received on the sink sideOn looking at the code in class -It shows that the updateDescription.updatedfields only handles updatedFields & removedFields… support for truncatedArrays is not present. Is this a bug? 
or I need to tune my source connector to somehow stop sending truncatedArrays in changeEvents.\nCan someone from the community please helpMy Inserts and delete events are successfully streaming from source to sink mongoDB.Building a Datapiple\nSource MongoDb → Change events → kafka connect → Sink MongoDb", "username": "Ambuj_Mehra" }, { "code": "", "text": "We have this in the backlog for the next release. https://jira.mongodb.org/projects/KAFKA/issues/KAFKA-165", "username": "Robert_Walters" }, { "code": "", "text": "Thanks, Robert,\nIs there an ETA for this, as this is failing all the update type CDC events on the sink connector side…", "username": "Ambuj_Mehra" }, { "code": "", "text": "Hi Team,\nWe are facing the same issue in our replication right after we upgraded mongodb FCV from 4.4 to 5.0.\nCan you please update us when we can get a new release of Kafka connect?Regards\nAlan", "username": "Alan_Sun" }, { "code": "", "text": "We attempt to release a new version every quarter. As a work around consider setting publish.full.document.only on the source to true.", "username": "Robert_Walters" }, { "code": "{\n \"name\": \"mongo-simple-source\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"connection.uri\": \"yourMongodbUri\",\n \"database\": \"yourDataBase\",\n \"collection\": \"yourCollection\",\n \"change.stream.full.document\": \"updateLookup\"\n }\n}\n", "text": "I had the same issue here and i could solve that setting up the following configuration at the Source Connector:\"change.stream.full.document\": \"updateLookup\"A Full Exemple:", "username": "Davi_Crystal" } ]
MongoDB Kafka connect ChangeStreamHandler do not support truncatedArrays
2022-06-12T12:34:57.135Z
MongoDB Kafka connect ChangeStreamHandler do not support truncatedArrays
3,840
null
[]
[ { "code": "", "text": "Hii, I can see the Atlas App pricing in this link but this does not mention the pricing for users on App Services.Suppose I build an app and have Email/Pass or Google Auth configured, whats the pricing for 10k users signing up on my app? How are these calculated?", "username": "shrey_batra" }, { "code": "", "text": "Hi @Harshit @henna.s , can you tag relevant people please?", "username": "shrey_batra" }, { "code": "", "text": "Hi @shrey_batraI believe it’s best to use this contact form to obtain the information you’re looking for, since we don’t really have visibility into how these things are calculated Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @shrey_batra – We don’t charge for users or authentication today (it’s included for free within the platform) which is why this is not called out within our billing documentation. However it’s worth noting that we’re not trying to provide a full-featured identity management platform and for more advanced features you may still want to integrate something like Cognito, Auth0, or AAD via our JWT authentication provider.", "username": "Drew_DiPalma" } ]
Atlas App Services Pricing - Users and Auth
2022-09-17T15:08:30.612Z
Atlas App Services Pricing - Users and Auth
2,073
null
[ "swift" ]
[ { "code": "", "text": "Is there a way to count the number of un-synced Asymmetric documents on swift? I would like to be able to show the user the number of documents that are not yet synced to the Realm.", "username": "Tyler_Collins" }, { "code": "", "text": "I don’t believe there’s a simple way to do this, @Tyler_Collins . You could try a couple of approaches:Asymmetric Sync was designed to support heavy insert-only workloads, so I don’t believe the design incorporated a simple way to report on that in a client app. If you want to share more about your use case, perhaps we can come up with another suggestion for you.", "username": "Dachary_Carey" } ]
Count un-synced Asymmetric documents
2022-09-22T15:08:16.515Z
Count un-synced Asymmetric documents
1,327
null
[ "queries", "python" ]
[ { "code": "cursor = (col.find().skip(skip_value).limit(100000)){\n _id: ObjectID\n pID: int\n s1: string\n s2: string\n}\n", "text": "Hey,If I fetch data with pymongo and multiprocessing from a database with 1Mio entries 10x100k do I get all entries or can it be that not all are fetched.I have run a test 500 times and checked if all items are included. Until now it has worked every time.But is there a guarantee for this?I fetch my data with skip and limit:cursor = (col.find().skip(skip_value).limit(100000))Where skip_value will always be 0,100k,200k… 900k .Documents are like:", "username": "Marvin_N_A" }, { "code": "", "text": "If you want to rely on the order of documents you must sort.I did not see anywhere in the documentation that the order is deterministic.", "username": "steevej" }, { "code": "", "text": "Ok, thanks I have also found nothing in the documentation. Then I do it with sort().", "username": "Marvin_N_A" }, { "code": "cursor.sort()ordersdb.orders.find()\n_idsort()", "text": "If we look at the cursor.sort() documentation, under the examples section we can find this blurb hidden in there:The following query, which returns all documents from the orders collection, does not specify a sort order:The query returns the documents in indeterminate order:We also see the following earlier in the document around sort constistency:MongoDB does not store documents in a collection in a particular order. When sorting on a field which contains duplicate values, documents containing those values may be returned in any order.If consistent sort order is desired, include at least one field in your sort that contains unique values. The easiest way to guarantee this is to include the _id field in your sort query.If no sort() is supplied then MongoDB will pull back the data in the most efficient manner possible, wether that’s pulling data as stored in memory or disk. After a lot of inserts/updates/deletes that data order could be changed. My guess is during your testing there wasn’t much in the way of activity going on so you might not have seen any difference in the order.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does find always return the data in the same order?
2022-09-22T08:52:47.887Z
Does find always return the data in the same order?
4,598
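To make the takeaway of the thread above concrete, here is a small pymongo sketch that pages through a collection with a deterministic sort on _id instead of relying on natural order with skip(); the URI, database, and collection names are placeholders.

# Deterministic paging: sort on the unique _id field and range on the last id
# seen, which also avoids the cost of large skip() offsets.
from pymongo import ASCENDING, MongoClient

col = MongoClient("mongodb://localhost:27017")["mydb"]["mycol"]

last_id = None
batch_size = 100_000
while True:
    query = {} if last_id is None else {"_id": {"$gt": last_id}}
    batch = list(col.find(query).sort("_id", ASCENDING).limit(batch_size))
    if not batch:
        break
    # ... process the batch here ...
    last_id = batch[-1]["_id"]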
null
[ "dot-net" ]
[ { "code": "", "text": "As in the title, do you know if there is an ADO .Net connector allowing to read a Realm database?The goal is to replace SQLite (currently used) with a Realm database (without synchronizing with a MongoDB server). For reading and writing the Realm SDK would be perfect.\nOn the other hand, another application would need to connect to the Realm database (file) via an ADO .Net connector .By this I mean that with Ado .Net, each data provider (SQL, SqLite, ODBC, ORACLE,…) has its own classes prefixed by his name ( SqliteConnectionStringBuilder , SqliteConnection, … Odbc for odbc, Oracle for Oracle …) I would like to find an equivalent for Realm , in order to be able to communicate with Ado.Net towards a Realm database.Do you know if one exist?Thank you so much already, I hope my question is not to blurry ", "username": "Ines_KA" }, { "code": "", "text": "Oh… I forgot to say hi !!", "username": "Ines_KA" }, { "code": "", "text": "Hey, unfortunately, we don’t have an ADO.NET client. We do have a .NET SDK that allows you to interface with the database, but it doesn’t provide integration with ADO.NET.", "username": "nirinchev" }, { "code": "", "text": "Thank you so much for your response ! I m going to study this and will close subject soon if no update ", "username": "Ines_KA" }, { "code": "", "text": "Unfortunately I can’t settle for .Net SDK.\nI absolutely have to go through Ado .Net.I will therefore study other databases.Thanks alot!", "username": "Ines_KA" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there an ADO.Net connector allowing to read a Realm database?
2022-09-22T13:59:29.951Z
Is there an ADO.Net connector allowing to read a Realm database?
1,242
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "I want to remove all elements in results that doesn’t match the id in the array [ObjectId(“605a3a82c8bbb404f4e6b123”),ObjectId(“605a3a82c8bbb404f4e6b125”)]{\n_id: 1,\nresults: [\n{_id: ObjectId(“605a3a82c8bbb404f4e6b643”), item: “A”, score: 5, },\n{_id: ObjectId(“605a3a82c8bbb404f4e6b123”), item: “B”, score: 8 }\n]\n}\n{\n_id: 2,\nresults: [\n{_id: ObjectId(“605a3a82c8bbb404f4e6b643”), item: “A”, score: 5, },\n{_id: ObjectId(“605a3a82c8bbb404f4e6b123”), item: “B”, score: 8 },\n{_id: ObjectId(“605a3a82c8bbb404f4e6b124”), item: “D”, score: 2, },\n{_id: ObjectId(“605a3a82c8bbb404f4e6b125”), item: “C”, score: 1 }\n]\n}", "username": "Kumar_K" }, { "code": "", "text": "I tried addFields with no luck, i am a bit confused{\n$addFields: {\nresults: {\n$filter: {\n“input”: “$results”,\n“as”: “r”,\n“cond”: {\n$in: [\n“$r._id”,\n$removalList\n]\n}\n}\n}\n}\n}", "username": "Kumar_K" }, { "code": "", "text": "{\n$in: [\n“$r._id”,\n$removalList\n]\n}This will be true for all elements that you want to remove. If you look at $filter’s documentation, you will see that cond:true is used to specify elements you want to keep.If removalList is not a field in your document, you should simply used removalList rather than $removalList.", "username": "steevej" } ]
I have an Array of objectId's and would like to get all documents except these ids in an array element in object
2022-09-21T23:40:21.913Z
I have an Array of objectId&rsquo;s and would like to get all documents except these ids in an array element in object
2,955
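Putting steevej's correction from the thread above into runnable form, the sketch below negates the $in so that $filter keeps only the elements whose _id is not in the removal list; the connection details are placeholders, and the ObjectIds are the ones from the question.

# Keep only array elements whose _id is NOT in removal_list.
from bson import ObjectId
from pymongo import MongoClient

col = MongoClient("mongodb://localhost:27017")["mydb"]["mycol"]

removal_list = [
    ObjectId("605a3a82c8bbb404f4e6b123"),
    ObjectId("605a3a82c8bbb404f4e6b125"),
]

pipeline = [
    {"$addFields": {
        "results": {
            "$filter": {
                "input": "$results",
                "as": "r",
                # $filter keeps elements for which cond is true, so negate $in.
                "cond": {"$not": [{"$in": ["$$r._id", removal_list]}]},
            }
        }
    }}
]

for doc in col.aggregate(pipeline):
    print(doc)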
null
[]
[ { "code": "MongooseServerSelectionError: connect ECONNREFUSED 13.38.168.163:27017", "text": "Hello everyone, so i’m trying to launch my website on my Cpanel but can’t quite manage to make it.here is the error it gives me :MongooseServerSelectionError: connect ECONNREFUSED 13.38.168.163:27017I already opened the PORT 27017 for this IP ( 13.38.168.163 ) on my Cpanel but it still doesn’t work.\nI already allowed all IP’s on my cloud atlas as well.I actually don’t know what else to do hereIf someone has ideas, it would be wonderful, thank you !", "username": "Antoine_Pascual" }, { "code": "", "text": "I forgot to add, that in local, everything is fine.\nSorry for the double post", "username": "Antoine_Pascual" } ]
Can't connect to cloud atlas from my Cpanel
2022-09-22T10:01:53.521Z
Can&rsquo;t connect to cloud atlas from my Cpanel
1,214
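One way to narrow down an ECONNREFUSED like the one in the thread above is to check, from the hosting server itself, whether outbound traffic to the Atlas port is allowed at all. The sketch below is a plain TCP probe in Python; the hostname is a placeholder for one of your cluster's shard hosts, not a value from the thread.

# Probe outbound TCP connectivity to a cluster node; shared hosts often block
# non-standard outbound ports such as 27017.
import socket

host, port = "cluster0-shard-00-00.xxxxx.mongodb.net", 27017  # placeholder host
try:
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as exc:
    print(f"TCP connection to {host}:{port} failed: {exc}")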
null
[ "python", "production" ]
[ { "code": "", "text": "We are pleased to announce the 4.2.0 release of PyMongo - MongoDB’s Python Driver. This release adds support for MongoDB 6.0.See the changelog for a high level summary of what’s new and improved or see the 4.2.0 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!", "username": "Shane" }, { "code": "", "text": "2 posts were split to a new topic: Djongo NotImplementedError: Database objects do not implement truth value testing or bool()", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
PyMongo 4.2.0 Released
2022-07-20T23:27:41.480Z
PyMongo 4.2.0 Released
3,677
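A quick way to confirm the upgrade announced above took effect is to print the driver and server versions; this assumes a reachable deployment at the placeholder URI.

# Report the installed PyMongo version and the MongoDB server version.
import pymongo
from pymongo import MongoClient

print("PyMongo:", pymongo.version)
client = MongoClient("mongodb://localhost:27017")
print("MongoDB:", client.server_info()["version"])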
https://www.mongodb.com/…511fca8524f3.png
[]
[ { "code": "Principal IT Architect for both Sine Nomine Associates and Direct Systems Support", "text": "2022-10-26T17:00:00Z (Wed, Oct 26, 2022 1:00 PM EDT)Event Type: Online webcastRegister: Automating MongoDB Deployments on MainframesCan MongoDB deployments be automated to run on IBM LinuxONE machines? In this session you will hear about the problems we were able to solve along with the solution that turned into an IBM Redbook making the code available to everyone. I will also touch on other mainframe services offered by SNA that continue to help make these systems more open and available.Principal IT Architect for both Sine Nomine Associates and Direct Systems Support**Kurt Acker joined Direct Systems Support in Jan of 2022 while continuing to represent Sine Nomine Associates (SNA) as their Principal IT Architect in July of 2020.", "username": "Jack_Woehr" }, { "code": "", "text": "FYI about the platform:Mainframes conjure up pictures of punch card machines and green screen terminals, but they really are very modern (huge) machines with awesome CPU core count, net connectivity, storage capacity, speed, security, serviceability, and reliability. LinuxONE is an IBM z-architecture mainframe running a hypervisor (typically KVM) that can field an unbelievable number of virtual Linux s390 instances and/or containers. You can try out a LinuxONE free virtual server instance for 120 days at the LinuxONE Community Cloud.", "username": "Jack_Woehr" }, { "code": "", "text": "Woh what a journey.\nI was leading some parts of the MongoDB (and others) workload testing, spinning up hundreds of VMs with MongoDB in parallel running excessive loads. Amazing power.\nThe date is saved, Michael", "username": "michael_hoeller" } ]
IBM Webcast: Automating MongoDB Deployments on Mainframes (Wed, Oct 26, 2022 1:00 PM EDT)
2022-09-21T12:46:27.186Z
IBM Webcast: Automating MongoDB Deployments on Mainframes (Wed, Oct 26, 2022 1:00 PM EDT)
2,489
null
[]
[ { "code": "", "text": "", "username": "Harinder_Singh1" }, { "code": "M0M2M5{\n collection: <-name of the collection you are tracking->, \n last_updated: <-update this field every time you update the collection->\n}\nupdatedAt{\n ...existing fields and values in the document... ,\n updatedAt: ISODate(\"2022-05-18T14:10:30Z\")\n}\n", "text": "Hi @Harinder_Singh1, welcome to the community.\nThere’s no in-built way(API) that provides the last modified time for a collection or a document.\nHowever, from the top of my head I can think of the following 3 ways to get the last modified time of the collection:To track a document’s last updated time, you can consider adding a new field such as updatedAt which can contain the timestamp of the time it was updated at.\nAn example document would look like this:", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I want to find out if there have been any modifications done to a collection or document in MongoDB
2022-08-04T21:45:46.229Z
I want to find out if there have been any modifications done to a collection or document in MongoDB
8,033
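The two approaches from the reply above, sketched with pymongo; the database and collection names are placeholders, and the change stream part assumes the deployment is a replica set or sharded cluster.

# 1) Stamp an updatedAt field on every modification with $currentDate.
# 2) Tail a change stream to observe modifications as they happen.
from pymongo import MongoClient

col = MongoClient("mongodb://localhost:27017")["mydb"]["mycol"]

col.update_one(
    {"_id": 1},
    {"$set": {"status": "done"}, "$currentDate": {"updatedAt": True}},
)

with col.watch() as stream:
    for change in stream:
        print(change["operationType"], change.get("documentKey"))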
null
[ "java", "spark-connector" ]
[ { "code": "", "text": "https://jira.mongodb.org/browse/JAVA-4551upgrade to 4.6.0+ java driver will fix this ?", "username": "camper42_N_A" }, { "code": "", "text": "Yes, looks like you are hitting the same bug as JAVA-4551", "username": "Ross_Lawley" }, { "code": "", "text": "use mongo-spark-connector-10.0.4.jar with mongodb-driver-sync-4.7.1.jar (download to classpath) fix thisbut pyspark --packages org.mongodb.spark:mongo-spark-connector:10.0.4 will download mongodb-driver-sync-4.5.1.jar by default.waiting a bug fix version", "username": "camper42_N_A" } ]
RejectedExecutionException when use spark mongo connector v10.0.4
2022-09-20T23:49:58.152Z
RejectedExecutionException when use spark mongo connector v10.0.4
2,267
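A hedged PySpark sketch of the workaround discussed above: pin the newer Java driver next to the connector when building the session rather than relying on the transitive 4.5.1 driver. The connection URI is a placeholder, the artifact coordinates are the ones named in the thread, and depending on your environment you may still need to place the jars on the classpath manually, as the original poster did.

from pyspark.sql import SparkSession

packages = ",".join([
    "org.mongodb.spark:mongo-spark-connector:10.0.4",
    "org.mongodb:mongodb-driver-sync:4.7.1",
])

spark = (
    SparkSession.builder.appName("mongo-read")
    .config("spark.jars.packages", packages)
    .config("spark.mongodb.read.connection.uri",
            "mongodb://localhost:27017/mydb.mycol")  # placeholder
    .getOrCreate()
)

df = spark.read.format("mongodb").load()
df.printSchema()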
null
[]
[ { "code": "", "text": "at 1.3% complete:\nFailed: connection(\n433000 document(s) imported successfully. 0 document(s) failed to import.", "username": "Joe_Shea" }, { "code": "", "text": "How big is your doc?Check this link", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Failed: connection(cluster0-shard-00-02.vwxk3a.mongodb.net:27016[-16]) unable to write wire message to network: write tcp : write: broken pipe and failed to connect to my server.js why", "username": "omoyajowo_taiwo" }, { "code": "", "text": "Did it work before or this is the first time you are facing this error\nCan you connect by shell?\nIt could be with your env settings/ js code", "username": "Ramachandra_Tummala" } ]
Failed: connection(cluster0-shard-00-02.vwxk3a.mongodb.net:27016[-16]) unable to write wire message to network: write tcp : write: broken pipe
2021-07-06T00:45:33.016Z
Failed: connection(cluster0-shard-00-02.vwxk3a.mongodb.net:27016[-16]) unable to write wire message to network: write tcp : write: broken pipe
4,098
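When mongoimport keeps dropping the connection partway through, one hedged fallback is to load the file yourself in small, unordered batches so a failed batch can be retried without restarting the whole import. The file, URI, and collection names below are placeholders; the file is assumed to contain one extended-JSON document per line.

from bson import json_util
from pymongo import MongoClient

col = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")["mydb"]["mycol"]

batch, batch_size = [], 1000
with open("data.json") as f:
    for line in f:
        if not line.strip():
            continue
        batch.append(json_util.loads(line))
        if len(batch) == batch_size:
            col.insert_many(batch, ordered=False)
            batch.clear()
if batch:
    col.insert_many(batch, ordered=False)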
null
[ "100daysofcode" ]
[ { "code": "picoCTF{...}", "text": "Trying something a bit different for my #100DaysOfCode, which is to make it #100DaysOfSecurity instead. I’m a member of the MongoDB Security Champions program (not to be confused with the MongoDB Community Champions program, which I run), and thought it might be fun to delve into security topics.https://picoctf.org/ has a number of practice hacking challenges. My goal is to beat as many of them as possible, writing about the thought process + tools involved, and hopefully educate both myself and others about security along the way! (Note: These will most definitely NOT be 100 sequential days But I shall do my best to get a post out once a week or so!)picoCTF is an example of a Capture The Flag challenge. Somewhere hidden in the challenge is a string that looks like this:picoCTF{...}Your goal is to use cunning and curiosity and security smarts to find it. ", "username": "webchick" }, { "code": "cvpbPGS{arkg_gvzr_V'yy_gel_2_ebhaqf_bs_ebg13_hyLicInt}<?php echo str_rot13(\"cvpbPGS{arkg_gvzr_V'yy_gel_2_ebhaqf_bs_ebg13_hyLicInt}\"); ?>\n", "text": "Today’s challenge is Mod 26. This is a cryptography challenge.Cryptography is about modifying a communication in some way to make it harder (or ideally, impossible) to read by snoopy third parties. Its use goes back even to Ancient Rome (a clue on how to solve this one :)).You’re given the string:cvpbPGS{arkg_gvzr_V'yy_gel_2_ebhaqf_bs_ebg13_hyLicInt}Looks like gibberish, right? How do we approach solving this one?Hint:The hint given by the puzzle itself, “Cryptography can be easy, do you know what ROT13 is?” is actually quite good!ROT13 (“rotate” by 13 places) refers to a special flavour of the Caesar Cipher, where the alphabet is “shifted” by a number of letters to mask a message’s contents.Walkthtrough:This puzzle just uses a straight ROT13 cipher, which shifts the alphabet 13 letters to the right. This means:You could do this by hand with enough time, but it’s a lot easier to use either a web-based tool or use a programming language for this.For example, this PHP one-liner can solve the puzzle:For bonus points, the solution contains a joke — do you get it? Learn more: picoCTF Primer: Substitution ciphers", "username": "webchick" }, { "code": "exif <cc:license rdf:resource='cGljb0NURnt0aGVfbTN0YWRhdGFfMXNfbW9kaWZpZWR9'/>===<?php echo base64_decode('cGljb0NURnt0aGVfbTN0YWRhdGFfMXNfbW9kaWZpZWR9'); ?>", "text": "Today’s challenge is information. This is a forensics challenge (with some bonus crypto too; there’s a clue ;)).Digital Forensics is a branch of forensic science encompassing the recovery, investigation, examination and analysis of material found in digital devices.This challenge will be some of that on a smaller scale: trying to look at a single picture and find the flag that’s somehow hidden within.You’re given the following ADORABLE image: cat.jpgThis image clearly has both fur and tech, but WHERE is the flag…? HintEXIF ( Exchangeable image file format) is a standard for storing metadata about an image. It’s commonly used to document things like the date and time of its creation, what camera settings were used, and specified copyright information about any given photo.WalkthroughInterestingly, if you try and view the metadata with a standard EXIF viewer tool such as exif or macOS Finder, it chokes on invalid input. 
I found two ways around this:In any event, you’ll see that the “license” property is set to an interesting-looking string: <cc:license rdf:resource='cGljb0NURnt0aGVfbTN0YWRhdGFfMXNfbW9kaWZpZWR9'/>This is suspicious because you’d expect this to be a human-readable string; something like “Public domain” or “CC BY-NC.” This indicates the use of some kind of encoding.A common type of encoding used on the web, especially for binary objects such as images, is Base64. It encodes binary data into text so it can more easily be sent around (for example, as an email attachment). If you’re ever doing a challenge that has a similar string of gobbledygook (alphanumeric characters, and the number of characters is divisible by 4), and especially if that gobbledygook ends in = or ==, it’s a good bet it’s Base64 encoding.However, that which can be encoded can also be decoded. Once again, a PHP one-liner can solve this one:<?php echo base64_decode('cGljb0NURnt0aGVfbTN0YWRhdGFfMXNfbW9kaWZpZWR9'); ?>Or, you can use a web-based tool such as https://www.base64decode.org/", "username": "webchick" }, { "code": "<link rel=\"stylesheet\" type=\"text/css\" href=\"mycss.css\"><script type=\"application/javascript\" src=\"myjs.js\"></script>", "text": "Today let’s tackle Insp3ct0r. This is a Web Exploitation challenge, where you go for attacks that are unique to the magic of the World Wide Web. This one is pretty chill, and you don’t need any special tools (a hint ) to solve it.You’re given a URL to a simple website. Can you poke around and find the flag?\nScreen Shot 2022-04-16 at 4.09.58 PM1790×644 10.2 KB\nHintUse the source, Luke. WalkthroughIf you view the page source in your browser, and inspect the code, you’ll find the website consists of three files:HTML, CSS, and JavaScript each have the ability to add code comments that don’t show up in the visual view.Look for those lines, and ye shall find the flag. I know some of you out there might roll your eyes at the relative low difficulty level of this challenge, but this type of “hidden in plain sight” exploit happens far more often than you’d think. A couple of prominent examples:", "username": "webchick" }, { "code": "/* How can I keep Google from indexing my website? */User-agent: *\nDisallow: /private/\n# I think this is an apache server... can you Access the next flag?httpd.conf# I love making websites on my Mac, I can Store a lot of information there.", "text": "Let’s keep on the Web Exploitation track, and look at Scavenger Hunt.At first glance you may say to yourself, “Why, self! This looks EXACTLY the same as Day 3’s challenge. This will be a cinch!”And indeed it starts the same way—with what even looks like the exact same web page!—but this one requires a bit more poking around.HintFor this one, you’ll need knowledge about other common files found on web servers, not just those embedded in the page itself.Beyond that, read the clues the puzzle gives you carefully; each one contains a distinct hint to point you in the right direction.WalkthroughJust like yesterday’s challenge, you can start piecing the flag together by viewing source on the HTML and CSS files and looking at the code comments.However, you’ll hit a wall when you get to the JS file. Instead of the comment giving you a part of the flag string like before, it will instead ask a cryptic question:/* How can I keep Google from indexing my website? 
*/There is a Robots exclusion standard that exists as a means to communicate with (well-behaving, non-malicious) web crawlers about which areas of the website should and should not be processed or scanned.An example file might look like the following, if it wanted to tell ALL robots not to scan the “private” directory:(Ironically, Google’s own documentation states in bold, red letters: \" Warning : Don’t use a robots.txt file as a means to hide your web pages from Google search results.\" A better approach is a noindex metatag, as that removes the page even if it’s linked to from somewhere else vs. crawled by Google.)ANYWAY. Once you load that file, you’re given another piece of the flag, as well as another cryptic clue:# I think this is an apache server... can you Access the next flag?Apache is a very common web server, and this clue refers to an Apache configuration file that lets you make configuration changes on a per-directory basis, overriding the default Apache configuration found in httpd.conf. You can do things in there such as require a password to access the directory contents or re-write URLs.Once you load THAT file, you’re given another piece of the flag, as well as another cryptic clue:# I love making websites on my Mac, I can Store a lot of information there.Unlike the others, this one isn’t actually a common file found on web servers… at least, not on purpose. Desktop Services Store files are found inside every directory accessed by macOS Finder, and they contain information about the containing folder, including what file names are inside it [!], which can be parsed and crawled by an attacker to find files they ought not have access to.They are also the bane of many web developers’ existence, because they are dotfiles, which means they are hidden by default and thus easily accidentally committed to version control or uploaded to a web server. At any rate, throw that file name at the end of the URL and you’ve got the final part of your flag. ", "username": "webchick" }, { "code": "zsh: exec format error: ./warm$ wget https://mercury.picoctf.net/static/f95b1ee9f29d631d99073e34703a2826/warm$ ./warm-bash: ./warm: Permission denied$ chmod u+x warm./warm", "text": "Wave a flag is less of a hacking challenge and more testing your knowledge of Linux commands. (Hint. :))Your task: extract the flag from this binary file.First, I’d use picoCTF’s built-in webshell for this; when I tried to execute this on my Mac, I received the error:zsh: exec format error: ./warmSecond, if you’re not already familiar with running basic Linux commands and how file permissions work, check out the resources at the bottom.First, you’ll need to download the file to your shell. wget is a useful utility for doing just that! (Failing that, curl can be fun.)$ wget https://mercury.picoctf.net/static/f95b1ee9f29d631d99073e34703a2826/warmNormally, you run an executable file like the following:$ ./warmIf you try that here, you’ll get the error:-bash: ./warm: Permission deniedThis is because the file won’t actually be able to be executed unless you make it executable.The easiest way to do that is with chmod to make the file executable by your user:$ chmod u+x warmNow if you try ./warm again, you should receive better results. Follow the instructions to receive your flag. 
", "username": "webchick" }, { "code": "$ python ende.py\nUsage: ende.py (-e/-d) [file]\n-hende.pyelif sys.argv[1] == \"-d\":\n if len(sys.argv) < 4:\n sim_sala_bim = input(\"Please enter the password:\")\n else:\n sim_sala_bim = sys.argv[3]\n\n ssb_b64 = base64.b64encode(sim_sala_bim.encode())\n c = Fernet(ssb_b64)\n\n with open(sys.argv[2], \"r\") as f:\n data = f.read()\n data_c = c.decrypt(data.encode())\n sys.stdout.buffer.write(data_c)\n-h$ python ende.py -d flag.txt.en ac9bd0ffac9bd0ffac9bd0ffac9bd0ff\n", "text": "(Bah. I broke my streak yesterday because I was out with a cold. A COLD. In 2022. After all of this… [gestures broadly at everything]. I am SO ANNOYED. )Python Wrangling is another challenge that’s less about hacking, and more about your knowledge of how Linux commands work. And your ability to get a working Python setup, which can be a challenge all on its own. There are three files involved here:Your task is to combine them together in order to decrypt the flag!Another one that’s easiest using the webshell, since it already has Python and the required modules all ready to go.When you run the command the output is quite cryptic:However, you can pass the handy -h flag from yesterday’s challenge and get more precise instructions!What the challenge is asking you to do is put those three files together in a single Linux command.We want to decrypt a file, so we’ll need to pass the -d flag into ende.py. Let’s check out that branch of code:A couple of things to point out here:It looks like the flag is encrypted with Fernet (symmetric encryption).Though the -h flag doesn’t point this out, you can apparently pass in a 3rd argument to ende.py which is the password from pw.txt itself!Put it all together:…and you’ve got your flag. ", "username": "webchick" }, { "code": "(async () => {\n await new Promise((e => window.addEventListener(\"load\", e))), document.querySelector(\"form\").addEventListener(\"submit\", (e => {\n e.preventDefault();\n const r = {\n u: \"input[name=username]\",\n p: \"input[name=password]\"\n },\n t = {};\n for (const e in r) t[e] = btoa(document.querySelector(r[e]).value).replace(/=/g, \"\");\n return \"YWRtaW4\" !== t.u ? alert(\"Incorrect Username\") : \"cGljb0NURns1M3J2M3JfNTNydjNyXzUzcnYzcl81M3J2M3JfNTNydjNyfQ\" !== t.p ? alert(\"Incorrect Password\") : void alert(`Correct Password! Your flag is ${atob(t.p)}.`)\n }))\n})();\nfor (const e in r) t[e] = btoa(document.querySelector(r[e]).value).replace(/=/g, \"\");\nr.u = \"input[name=username]\"'r.p = \"input[name=password]\"btoa()=replace\"\"btoa()return \"YWRtaW4\" !== t.u ? alert(\"Incorrect Username\") : \n\"cGljb0NURns1M3J2M3JfNTNydjNyXzUzcnYzcl81M3J2M3JfNTNydjNyfQ\" !== t.p ? alert(\"Incorrect Password\")\nYWRtaW4cGljb0NURns1M3J2M3JfNTNydjNyXzUzcnYzcl81M3J2M3JfNTNydjNyfQ", "text": "(Technically this is posted on the same day, but hey, it’s midnight somewhere. )Today, let’s get back into some Web Exploitation fun with the login challenge.You’re given a very simple-looking website with a username and password field.\nA form with username and password fields, and a submit button1560×994 8.76 KB\nBut hoooowwww to extract the flag from this extremely secure system that your dog-sitter’s brother made? Interestingly, when you submit the form, the error comes back as a JavaScript dialog:\nJavaScript alert: Incorrect Username898×262 11.6 KB\nThis is a clue that client-side form validation is in use. 
If this is the only input validation, that is good news for you, intrepid hacker, because that means everything you need to defeat the challenge is right there in your browser. The JavaScript alert must be coming from somewhere, so View The Source, Luke, to find a pointer to index.js.\nA bunch of jumbled up source code1388×332 41.2 KB\nLook at this ugly JavaScript code all-smashed-together-on-one-line. Yuck! Let’s run it through a pretty-printer, such as https://beautifier.io/ :Ok, that’s more like it. What is this code doing? One important bit is here:This essentially says: for both username (r.u = \"input[name=username]\") and password ('r.p = \"input[name=password]\"), run its value through the btoa() function and strip out any = signs (replace them with an empty string \"\").Hm. Equal signs? Didn’t we talk about this somewhere before…? And what the heck is btoa() anyway?Why, look, it stands for binary-to-ASCII and uses our good friend, Base64 decoding. These lines then:…indicate that whatever the username is, it needs to become YWRtaW4 when base 64 encoded. And the password needs to become cGljb0NURns1M3J2M3JfNTNydjNyXzUzcnYzcl81M3J2M3JfNTNydjNyfQ.Can you use the tools from the other day to solve this one? ", "username": "webchick" }, { "code": "# Hiding this really important number in an obscure piece of code is brilliant!\n\n# AND it's encrypted!\n\n# We want our biggest client to know his information is safe with us.\n\nbezos_cc_secret = \"A:4@r%uL`M-^M0c0AbcM-MFE07b34c`_6N\"\n\ndecode_secret()# Reference alphabet\nalphabet = \"!\\\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ\"+ \\\n \"[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\"\n\ndecode_secret(bezos_cc_secret)\n", "text": "Today, let’s head into our first Reverse Engineering challenge with crackme.py.If you execute this program you’ll see that it’s quite simple and tells you the bigger of two numbers:\nCLI asks for first number, 3 second number, 9 and says the one with the largest positive magnitude is 91484×288 42.4 KB\nThat’s all well and good, but how do we find the flag…?Peek inside the crackme.py file, and you will find an interesting surprise. Seems suspicious. All that’s left to do is decode it, right?Remember learning about ROT-13 back on Day 1? Well here, if the decode_secret() function is to be believed, we appear to be using ROT-47, which is the same deal, except moving ahead 47 places instead of 13.How is that possible, when the alphabet itself only has 26 letters? Because here, we’re using a special alphabet:Now. You could painstakingly do the work of taking each character in Bezos’s secret credit card number and counting 47 places ahead in the above string. Or, use an online tool like CyberChef.Or, you could be super lazy, like me, and just toss the following near the bottom of the file:…and let our good friend Python do the hard work for you. ", "username": "webchick" }, { "code": "username_trial = \"FRASER\"\nbUsername_trial = b\"FRASER\"\n \nkey_part_static1_trial = \"picoCTF{1n_7h3_|<3y_of_\"\nkey_part_dynamic1_trial = \"xxxxxxxx\"\nkey_part_static2_trial = \"}\"\nkey_full_template_trial = key_part_static1_trial + key_part_dynamic1_trial + key_part_static2_trial\ndef enter_license():\n user_key = input(\"\\nEnter your license key: \")\n user_key = user_key.strip()\n \n global bUsername_trial\n \n if check_key(user_key, bUsername_trial):\n decrypt_full_version(user_key)\n else:\n print(\"\\nKey is NOT VALID. 
Check your data entry.\\n\\n\")\ncheck_key(user_key, bUsername_trial)truedef check_key(key, username_trial):\n \n global key_full_template_trial\n \n if len(key) != len(key_full_template_trial):\n return False\nkey_full_template_trialpicoCTF{1n_7h3_|<3y_of_xxxxxxxx}\n # Check static base key part --v\n i = 0\n for c in key_part_static1_trial:\n if key[i] != c:\n return False\n\n i += 1\nkey_part_static1_trialpicoCTF{1n_7h3_|<3y_of_ if key[i] != hashlib.sha256(username_trial).hexdigest()[4]:\n return False\n else:\n i += 1\n\n if key[i] != hashlib.sha256(username_trial).hexdigest()[5]:\n return False\n else:\n i += 1\n92d7ac3c9a0cf9d527a5906540d6c59c80bf8d7ad5bb1885f5f79b5b24a6d387check_key()key_full_template_trial", "text": "Next on our Reverse Engineering quest, let’s look at keygenme.py. This is essentially a “trialware” game, and one of the options is locked unless you enter a valid license key:\nArcane Calculator menu, offering options to estimate astral projection. It&#39;s marked as the trial version.1710×1156 107 KB\nYour task: determine the license key to use and you’ll get your flag!If we take a peek under the hood, there’s a lot more code here than in yesterday’s challenge. There’s code to generate the program menu, do the arcane calculations, deal with license keys, and write out the full version of the program if the key is found to be correct.Up near the top of the file, we see a few interesting pieces that stand out:Almost a whole flag right there, we now just need to figure out what the xxxxxs are.The money seems to be at:So if check_key(user_key, bUsername_trial) returns true, we’re in business.Let’s jump over there.First check: is the key we entered equal in length to key_full_template_trial from up above?If we recall, key_full_template_trial just smooshed together all of those flaggy-looking pieces, so right now it’s:That’s 32 characters. Which means our key needs to be exactly that long.Ah, but wait, the next check:… indicates that not just ANY 32 character string will do; it has to start with exactly the same characters as there are in key_part_static1_trial, which MEANS the first few characters are picoCTF{1n_7h3_|<3y_of_ That’s a great start!OK what’s next? A bunch of lines like this:This is moving one character at a time through the next part of the key (those Xs), and comparing its value. To what? We recall from above that username_trial is “FRASER”. This code takes that name, runs it through a SHA-256 hash function to turn it into a hexadecimal string, then finds the character in the Nth position. (And actually, N + 1 since indexes in Python, like many other languages, start counting from zero.)If you run “FRASER” through a tool like SHA256 Online, you’ll see it results in the following string:92d7ac3c9a0cf9d527a5906540d6c59c80bf8d7ad5bb1885f5f79b5b24a6d387(Note: SHA-256 hashing will always result in the same output for any given input. This is why it’s extremely important to “salt” your hashes so that they cannot be easily reverse-engineered, e.g. when used for things like one-way encrypting passwords.)Reading through the remainder of the check_key() function, you can see that it wants the character in the hash string in the 5th position (“a”), then the 6th (“c”), then the 4th (“7”), … and so on.Once you’ve figured that out, replace “xxxxx” in key_full_template_trial with what you derived, and you have both your flag and your license key! 
", "username": "webchick" }, { "code": "mercury.picoctf.net21135<?php\n// Open a connection to the server and store its response.\n$fp = fsockopen(\"mercury.picoctf.net\", 21135);\nfwrite($fp, \"\\n\");\n$numbers = fread($fp, 1000);\nfclose($fp);\n\n// Extract numbers from a long vertical string to an array.\n$numbers = explode(PHP_EOL, $numbers);\n\n// Loop through each ASCII value and convert to character.\nforeach ($numbers as $ascii) {\n if (is_numeric($ascii)) {\n $ascii = trim($ascii);\n echo chr($ascii);\n }\n}\n", "text": "Woohoo! Made it to day 10! It’s the weekend, and yesterday’s was a bit of a doozy, so let’s do a bit more chill challenge this time: Nice netcat…\n(Pro tip: This is NOT the kind of ”net cat” they’re referring to. But rather, netcat, which is a computer networking utility for reading from and writing to network connections.)The challenge directs you to connect to mercury.picoctf.net on port 21135 and decode the bunch of numbers that get returned:\nentered on the shell, returning 112, 105, 99, 111...1624×1264 118 KB\nThe numbers have nothing to do with math. Try and notice patterns. Are the numbers within a certain range? Do any of the numbers repeat? What might those correlate to?Under the hood, computers can’t inherently deal with text, only with numbers. So when we want to send a text character, such as the letter “a” or the symbol “_”, that needs to be encoded so that the computer can read and transmit it.A very common method of encoding is ASCII (abbreviated from American Standard Code for Information Interchange). Each character is assigned a numeric value from 32-126. (Why starting at 32? Because the numbers prior to that are for non-printable control characters such as “end of file” or “line break.”)Once again, this challenge can be solved by either manual conversion of numbers to characters found in an ASCII table, or, you can use an online tool such as Convert ASCII Codes to Characters.(Warning: The line breaks between the different numbers can really throw off results in automated converters in my experience. If it ends up a garbled mess in one tool, try another.)And/or, here’s a simple PHP script to get the job done:If you made it through this challenge, also pick up what’s a net cat? for a bonus 100 points! ", "username": "webchick" }, { "code": "", "text": "This is awesome @webchick Many Congratulations on making it to Day 10 You are now a Code Wrangler Are you enjoying Cryptic puzzles? I remember @Stennie_X telling he enjoys cryptography too Are you following the puzzles from the link you shared, picoctf.org? At one glance, this looks like a lot to me, I guess maybe for the next relay of 100 I will do Python Wish you a fun, cryptic Sunday Cheers ", "username": "henna.s" }, { "code": "", "text": "Woohoo!! Thanks so much!! Yeah, I’ve loved playing around with cybersecurity since I was a teenager. One of the first communities I helped manage was a hacking challenge site way back in the day, actually! I find them interesting because they expose you to so many facets of how computers, networks, operating systems, programming languages, encryption, and more work together, and really challenge you to think “outside the box.”If you’re interested in a similar thing for learning Python programming, Solve Python | HackerRank seems to be a similar setup!", "username": "webchick" }, { "code": "", "text": "So far we haven’t done a Binary Exploitation challenge. 
Let’s change that today with CVE-XXXX-XXXX !This one is more on general security knowledge than actually breaking into anything, but it’s extremely useful knowledge to have!The challenge is asking you to find a “CVE” for the first recorded remote code execution (RCE) vulnerability in 2021 in the Windows Print Spooler Service.Can you research this one to find the flag?CVE is short for “Common Vulnerabilities and Exposures.” Every publicly disclosed cybersecurity vulnerability, dating back to the early days of the Internet, is given a unique CVE ID, to make it easier for cybersecurity professionals to coordinate fixes and ensure they’re discussing the same vulnerability.There’s a handy keyword search for CVE records. Let’s try searching for “windows print spooler”:\nList of CVE records for Windows Print Spooler2112×1344 481 KB\nYou can see, there have been 58 vulnerabilities reported in Windows Print Spooler since 1999. How can we possibly find the one needle we need in all of that haystack?It’s useful to know that a CVE ID takes the form of:CVE-[YEAR]-[UNIQUE, SEQUENTIAL #]Such as: CVE-2022-12345Since we know that the vulnerability was disclosed in 2021, that reduces our haystack to more like 15 records.Another clue is that the challenge is asking for a “remote code execution (RCE) vulnerability.” Each CVE comes with a description that summarizes the issue. Other common types of vulnerabilities are Elevation of Privilege, Information Disclosure, and Broken Access Control. See https://cwe.mitre.org/ for a comprehensive list.Looks like 2021 was a bad year for Windows Print Spooler, because there are 4 such CVE Records for 2021. The one with the lowest number is the one that was found first.", "username": "webchick" }, { "code": "", "text": "Wow… That is soo awesome… I sometimes wish I was introduced to computers early in the age but I got acquainted when I started going to university… On Communities, I dint know they existed until 6 years ago when I moved to Ireland But I have had the best of experiences, so I am good… Thank you for sharing your resources for Python My husband automates using python, I wish I could have 1 common skill with him, as I have none atm Hope to see you kick-starting this again Happy Sunday… ", "username": "henna.s" }, { "code": "", "text": "This topic was automatically closed after 180 days. New replies are no longer allowed.", "username": "system" } ]
The Journey of #100DaysOfSecurity (@webchick)
2022-04-14T16:56:04.712Z
The Journey of #100DaysOfSecurity (@webchick)
7,144
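For readers following along, here is a Python take on the Day 10 "Nice netcat…" decode from the thread above (the original walkthrough used PHP). The host and port are the ones quoted in the challenge; this assumes the challenge server closes the connection after sending its numbers.

# Read the stream of decimal numbers and map each ASCII code to its character.
import socket

with socket.create_connection(("mercury.picoctf.net", 21135)) as s:
    data = b""
    while chunk := s.recv(4096):
        data += chunk

flag = "".join(chr(int(n)) for n in data.split() if n.isdigit())
print(flag)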
null
[]
[ { "code": "", "text": "Hello\nI want to know the meaning of this messageConnections % of configured limit has gone above 80", "username": "offers_awtar" }, { "code": "Connections % of configured limit80%", "text": "Hello @offers_awtar ,Welcome to The MongoDB Community Forums! Connections % of configured limit has gone above 80 Connections % of configured limit is an alert that occurs if the number of open connections to the host exceeds the specified percentage. In your case, this limit seems to be 80%.For example: If we have set the maximum number of allowable connections to a MongoDB process to 100 and this connection percentage is set to 80, then, once the limit of 80 connections is reached, this alert will be triggered and once the connection limit of 100 is reached then no new connections can be opened until the number of open connections drops down below the limit.To fix the problem immediately, one can restart the application which will terminates all existing connections opened by the application and allow your cluster to resume normal operations. In general, connection alerts could be a symptom of many things which includes but are not limited toPlease go through Fix Connection Issues to learn more about this issue.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "import { MongoClient } from \"mongodb\";\n\n/**\n * Global is used here to maintain a cached connection across hot reloads\n * in development. This prevents connections growing exponentiatlly\n * during API Route usage.\n * https://github.com/vercel/next.js/pull/17666\n */\nglobal.mongo = global.mongo || {};\nexport async function getMongoClient() {\n if (!global.mongo.client) {\n global.mongo.client = new MongoClient(process.env.MONGODB_URI);\n }\n // It is okay to call connect() even if it is connected\n // using node-mongodb-native v4 (it will be no-op)\n // See: https://github.com/mongodb/node-mongodb-native/blob/4.0/docs/CHANGES_4.0.0.md\n await global.mongo.client.connect();\n return global.mongo.client;\n}\n\nexport default async function database(req, res, next) {\n if (!global.mongo.client) {\n global.mongo.client = new MongoClient(process.env.MONGODB_URI);\n }\n req.dbClient = await getMongoClient();\n req.db = req.dbClient.db(); // this use the database specified in the MONGODB_URI (after the \"/\")\n return next();\n}\n", "text": "Thank you very much for the replyNow I understand the problem, but I’m using MongoDB 5.0 and tire M10 and I think it’s 1500 max connections and the number of users is low.Does that mean I have Flaw in connection code in which connections are opened but never closed ?in this code? And how do I correct it?", "username": "offers_awtar" }, { "code": "", "text": "You have to close connection using method provided in link with .close() method.", "username": "vishwanath_kumbi" }, { "code": "", "text": "As I am not a javascript expert, so I cannot confirm if this piece of code is or is not the root cause of the connection issue you are facing. However, it is possible that repeatedly running the script may lead to multiple copies of the script to open connections to the database and thus trigger the warning. Are you seeing any kind of pattern when this warning was triggered, e.g. 
an especially busy time, during certain development phase of the app, or other patterns that may coincide with the warning?Also, for MongoDB, we generally recommend not to open/close connections after every use instead use connection pooling to maintain a cache of open, ready-to-use database connections maintained by the driver. Your application can seamlessly get connections from the pool, perform operations, and return connections back to the pool. Connection pools are thread-safe.Please refer this documentation for more information on connection pooling.Tarun", "username": "Tarun_Gaur" }, { "code": "import { MongoClient } from \"mongodb\";\n\nexport async function getMongoClient() {\n /**\n * Global is used here to maintain a cached connection across hot reloads\n * in development. This prevents connections growing exponentiatlly\n * during API Route usage.\n * https://github.com/vercel/next.js/pull/17666\n */\n if (!global.mongoClientPromise) {\n const client = new MongoClient(process.env.MONGODB_URI);\n // client.connect() returns an instance of MongoClient when resolved\n global.mongoClientPromise = client.connect()\n }\n return global.mongoClientPromise;\n}\n\nexport async function getMongoDb() {\n const mongoClient = await getMongoClient();\n return mongoClient.db();\n}\n", "text": "Thank you very much for your helpI think the problem was really in the connection string\nAfter I changed it to the following method, I no longer received that alertThanks", "username": "offers_awtar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the meaning of the message "Connections % of configured limit has gone above 80"?
2022-09-18T14:32:34.475Z
What is the meaning of the message &ldquo;Connections % of configured limit has gone above 80&rdquo;?
2,881
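To see how close a deployment actually is to the limit behind the alert discussed above, a small pymongo check of serverStatus can help; the URI is a placeholder, and on Atlas the user may need a role such as clusterMonitor to run the command.

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net",
                     maxPoolSize=50)  # bound the pool per application process
conn = client.admin.command("serverStatus")["connections"]
current, available = conn["current"], conn["available"]
limit = current + available
print(f"{current}/{limit} connections in use ({100 * current / limit:.1f}%)")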
https://www.mongodb.com/…9_2_458x1024.png
[ "react-native", "android" ]
[ { "code": " const {useRealm, useQuery, useObject} = TaskContext;", "text": "\nScreen Shot 2022-05-05 at 12.59.02 PM463×1035 174 KB\n\nGetting RealmObject configuration issue when using @realm/react. It happens when using one of the functions, const {useRealm, useQuery, useObject} = TaskContext;. This issue is observed only on android, for iOS it works well. I referred the document template as specified in the link.", "username": "Tejas_Sadrani" }, { "code": "", "text": "@Tejas_Sadrani Were you able to solve this issue? I am also facing the same.", "username": "Ali_Goher_Shabir" }, { "code": "", "text": "Im having the same problems any updates ?", "username": "karl_gandhi" }, { "code": "", "text": "@karl_gandhi Hello, I was able to resolve the issue by downgrading react-native-reanimated from v2 to v1. I also had to downgrade one more package because it was using reanimated v2. I found the solution here. ", "username": "Ali_Goher_Shabir" }, { "code": "", "text": "I have the same problem, on Android. Once I get the result from useQuery, and the trying to use it as an Array, there is the problem.\nMaybe this Realm object must be converted into an Array?", "username": "Bruno_Ravizzini" }, { "code": "", "text": "It is not only that, it’s a problem with realm, Error: RealmObject cannot be called as a functionkeeps happening all the time, even with the template downloaded frommaster/templates/expo-template-jsRealm is a mobile database: an alternative to SQLite &amp; key-value storesJust using the template and has the same problem.", "username": "Bruno_Ravizzini" }, { "code": "", "text": "It appears to be a problem with the sync and the user, after terminate and restart, it’s working fine. But it is not clear the error when it gives this error: RealmObject cannot be called as a function", "username": "Bruno_Ravizzini" } ]
RealmObject cannot be called as a function - Only Android
2022-05-05T18:23:27.046Z
RealmObject cannot be called as a function - Only Android
4,533
null
[ "dot-net", "android" ]
[ { "code": " AsyncContext.Run(async () =>\n {\n if (user == null || user.State == Realms.Sync.UserState.LoggedOut)\n {\n user = await app.LogInAsync(credentials);\n }\n var config = new PartitionSyncConfiguration(Constants.Partition, user);\n config.ClientResetHandler =\n new DiscardLocalResetHandler()\n {\n OnBeforeReset = HandleBeforeResetCallback,\n OnAfterReset = HandleAfterResetCallback,\n ManualResetFallback = HandleManualResetCallback\n };\n var realm = await Realm.GetInstanceAsync(config);\n});\n", "text": "I am building a Xamarin app for iOS and Android. The Realm on a login page is instantiated with the correct credentials from the ContentPage. I used Nito.AsyncEx to manage the context to get the realm, as follows……which has been working fine thus far. But doing it this way doesn’t allow me to SubscribeForNotifications to any collections in the sync’d realm.This works well for initially getting the realm, especially from a new installed version of the app, as it takes time to sync. However, using it looks like Nito.AsyncEx instantiates the Realm in a different Context, and any SubscribeForNotifications won’t receive any notifications, which I need in my app. I have tried lots of Task.Run, Task.Run.ContinueWith, and I’m having zero luck. Is there a way to instantiate the Realm within a NitoAsyncEx Context, then later subscribe for notifications to the Realm in a way that I do receive them? Or is there a way to instantiate the Realm without Nito.AsyncEx, but the LoginAsync, PartitionSyncConfiguration , GetInstanceAsync can be done (via a synchronous wrapper, perhaps?) in the proper order and the results from the two async methods (LoginAsync, GetInstanceAsync) are returned without a thread deadlock on the UI Context?", "username": "Josh_Whitehouse" }, { "code": "Nito.AsyncEx", "text": "I may be missing something, but why is it a problem to just call all this code on the main thread instead of using Nito.AsyncEx? You’re mentioning something about a deadlock, but it’s not clear to me what could be causing it since those are all async methods.", "username": "nirinchev" }, { "code": " private void GetRealmCommand(Credentials credentials)\n {\n#if DEBUG\n Debug.WriteLine(\"Entering GetRealmCommand\");\n#endif\n try\n {\n var app = Realms.Sync.App.Create(Constants.MongodPhotoEventsAppID);\n var user = app.CurrentUser;\n AsyncContext.Run(async () =>\n {\n if (user == null || user.State == Realms.Sync.UserState.LoggedOut)\n {\n user = await app.LogInAsync(credentials);\n }\n var config = new PartitionSyncConfiguration(Constants.Partition, user);\n config.ClientResetHandler =\n new DiscardLocalResetHandler()\n {\n OnBeforeReset = HandleBeforeResetCallback,\n OnAfterReset = HandleAfterResetCallback,\n ManualResetFallback = HandleManualResetCallback\n };\n\n var realm = await Realm.GetInstanceAsync(config);\n MongoRealmServices.User = user;\n MongoRealmServices.Config = config;\n MongoRealmServices.Realm = realm;\n });\n }\n catch (Exception e)\n {\n#if DEBUG\n Debug.WriteLine(\"EXCEPTION DURING CONNECTION TO REALM: \" + e.ToString());\n#endif\n MessagingCenter.Send<LoginPage, string>(this, \"GetRealmCommand\",\n\"Exception:\\n\" + e.ToString());\n }\n }\nAfter this function is called by the event handler (for the guest login, for instance) then Shell.Current.GoToAsync() is called to go to the Main Page, and in the constructor of the Main Page, attempts to access the realm fail. 
If I setup a property like this in a class, set it from the login page (but only when I use Nito.AsyncEx, as a way to use the realm in the Main Page, it works, but SubscribeToNotifications won't for the Main Page.\nnamespace StellaEvents.Services\n{\n public class MongoRealmServices\n {\n private static Realm realm;\n static public Realm Realm\n {\n get { return realm; }\n set { realm = value; }\n }\n}\n", "text": "I’m not. sure what the cause is, exactly myself. The login screen comes up in the app, and when the user logs in - email, apple sign, or guest, creates the credentials (based on whether the login is thru email, apple sign in, or guest, then I use the function here to to retrieve the realm.If I try an use Realm.GetInstanceAsync directly in the Main Page constructor it’s null when the method trying to use Realm.All<> tries to access the Realm. I’m not sure how to get a valid Realm in the MainPage constructor at this point.", "username": "Josh_Whitehouse" }, { "code": "private void GetRealmCommand(Credentials credentials)\n{\n GetRealmCommandAsync();\n}\n\nprivate async Task GetRealmCommandAsync()\n{\n // Show some loading indicator in the UI\n // Set RealmCommand.CanExecute to false\n try\n {\n var app = App.Create(Constans.MongodPhotoEventsAppID);\n var user = app.CurrentUser;\n if (user == null)\n {\n user = await app.LogInAsync(credentials);\n }\n\n var config = ...\n \n var realm = await Realm.GetInstanceAsync(config);\n }\n catch (Exception e)\n {\n\n }\n finally\n {\n // Hide loading indicator\n // Set RealmCommand.CanExecute to true\n }\n}\nMongoRealmServices.Realmclass MainPage()\n{\n private Realm _realm;\n\n public MainPage()\n {\n _realm = Realm.GetInstance(MongoRealmServices.Config);\n }\n}\n", "text": "So there are several things here. First, you’re using Nito.AsyncEx to make an async method synchronous. That should not be necessary and is probably freezing your app while the login/download is happening. Instead what you could do is something like:That way you setup everything on the UI thread and can use it in your pages/viewmodels. If you don’t want to do that, then you can still use Nito.AsyncEx, but don’t set the Realm you just downloaded to your MongoRealmServices.Realm. Instead, in the page/viewmodel, you just use the config to open a Realm instance synchronously.If you want a more thorough deep dive in best practices when using Realm in Xamarin.Forms (although that’s applicable to any xaml-based UI framework), you can check out this blog post by @papafe. It goes into a lot of detail about structuring your app but also compares and contrasts to a SQLite-based app, so if you have experience with that, it will help even more. It does focus on the local database + an http-based API, but the main principles still hold as after the authentication part, there’s pretty much no difference between using a local or a synchronized Realm.", "username": "nirinchev" }, { "code": "", "text": "you can close this one, the static properties is what was killing it!", "username": "Josh_Whitehouse" }, { "code": "", "text": "Thanks, I will read and apply it going forward.!", "username": "Josh_Whitehouse" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Xamarin C# iOS and Android App, GetRealmAsync and SubscribeForNotifications issue
2022-09-20T17:28:44.935Z
Xamarin C# iOS and Android App, GetRealmAsync and SubscribeForNotifications issue
2,179
null
[ "transactions" ]
[ { "code": "try! realm.write {\n topic.questions.append(q)\n }\ntry! realm.write {\n let newUnit = question()\n newUnit.owner_id = \"some owner id\"\n newUnit.displayName = \"this is a test question\"\n $questions.append(newUnit)\n topic.questions.append(q)\n }\n022-09-19 23:26:59.592035-0700 sample_app_dev[34695:1405600] *** Terminating app due to uncaught exception 'RLMException', reason: 'Cannot modify managed RLMArray outside of a write transaction.'\n*** First throw call stack:\n(\n\t0 CoreFoundation 0x0000000111950604 __exceptionPreprocess + 242\n\t1 libobjc.A.dylib 0x000000010f4aaa45 objc_exception_throw + 48\n\t2 sample_app_dev 0x000000010b198de6 _ZL10throwErrorP15RLMManagedArrayP8NSString + 134\n\t3 sample_app_dev 0x000000010b199e20 _ZL15translateErrorsIZL11changeArrayIZL11changeArrayP15RLMManagedArray16NSKeyValueChangemU13block_pointerFvvEE4$_22EvS2_S3_S5_OT_EUlvE_EDaS8_ + 64\n\t4 sample_app_dev 0x000000010b199be8 _ZL11changeArrayIZL11changeArrayP15RLMManagedArray16NSKeyValueChangemU13block_pointerFvvEE4$_22EvS1_S2_S4_OT_ + 88\n\t5 sample_app_dev 0x000000010b1943ad _ZL11changeArrayP15RLMManagedArray16NSKeyValueChangemU13block_pointerFvvE + 77\n\t6 sample_app_dev 0x000000010b193a67 _ZL15RLMInsertObjectP15RLMManagedArrayP11objc_objectm + 327\n\t7 sample_app_dev 0x000000010b1938e0 -[RLMManagedArray addObject:] + 64\n\t8 sample_app_dev 0x000000010b48444c $s10RealmSwift4ListC6appendyyxF + 220\n\t9 sample_app_dev 0x000000010b012834 $s14sample_app_dev21relate_topic_questionV4bodyQrvg7SwiftUI4ViewPAEE9listStyleyQrqd__AE04ListL0Rd__lFQOyAE0M0Vys5NeverOAE7ForEachVy05RealmH07ResultsVyAA0F0CGs6UInt64VAgEE4task8priority_QrScP_yyYaYbctFQOyAgEE11environmentyQrs15WritableKeyPathCyAE17EnvironmentValuesVqd__G_qd__tlFQOyAA0d5_row_e1_F0V_AP0Q0VQo__Qo_GG_AE05InsetmL0VQo_yXEfU0_A10_yXEfU_A9_ATcfU_yyYaYbcfU_yyXEfU_ + 468\n\t10 sample_app_dev 0x000000010afcd5df $ss5Error_pIgzo_ytsAA_pIegrzo_TR + 15\n\t11 sample_app_dev 0x000000010b0189b4 $ss5Error_pIgzo_ytsAA_pIegrzo_TRTA + 20\n\t12 sample_app_dev 0x000000010b4e37c3 $s10RealmSwift0A0V5write16withoutNotifying_xSaySo20RLMNotificationTokenCG_xyKXEtKlF + 275\n\t13 sample_app_dev 0x000000010b0124d2 $s14sample_app_dev21relate_topic_questionV4bodyQrvg7SwiftUI4ViewPAEE9listStyleyQrqd__AE04ListL0Rd__lFQOyAE0M0Vys5NeverOAE7ForEachVy05RealmH07ResultsVyAA0F0CGs6UInt64VAgEE4task8priority_QrScP_yyYaYbctFQOyAgEE11environmentyQrs15WritableKeyPathCyAE17EnvironmentValuesVqd__G_qd__tlFQOyAA0d5_row_e1_F0V_AP0Q0VQo__Qo_GG_AE05InsetmL0VQo_yXEfU0_A10_yXEfU_A9_ATcfU_yyYaYbcfU_TY0_ + 354\n\t14 sample_app_dev 0x000000010b0183c1 $s14sample_app_dev21relate_topic_questionV4bodyQrvg7SwiftUI4ViewPAEE9listStyleyQrqd__AE04ListL0Rd__lFQOyAE0M0Vys5NeverOAE7ForEachVy05RealmH07ResultsVyAA0F0CGs6UInt64VAgEE4task8priority_QrScP_yyYaYbctFQOyAgEE11environmentyQrs15WritableKeyPathCyAE17EnvironmentValuesVqd__G_qd__tlFQOyAA0d5_row_e1_F0V_AP0Q0VQo__Qo_GG_AE05InsetmL0VQo_yXEfU0_A10_yXEfU_A9_ATcfU_yyYaYbcfU_TATQ0_ + 1\n\t15 SwiftUI 0x0000000112a435e3 $s7SwiftUI13_TaskModifierV05InnerD033_293A0AF83C78DECE53AFAAF3EDCBA9D4LLV4body7contentQrAA05_ViewD8_ContentVyAFG_tFyycfU_yyYaYbcfU_TQ0_ + 1\n\t16 SwiftUI 0x0000000112a49a1a $s7SwiftUI13_TaskModifierV05InnerD033_293A0AF83C78DECE53AFAAF3EDCBA9D4LLV4body7contentQrAA05_ViewD8_ContentVyAFG_tFyycfU_yyYaYbcfU_TATQ0_ + 1\n\t17 SwiftUI 0x0000000112a45cd5 $sxIeghHr_xs5Error_pIegHrzo_s8SendableRzs5NeverORs_r0_lTRyt_Tg5TQ0_ + 1\n\t18 SwiftUI 0x0000000112a48ec1 $sxIeghHr_xs5Error_pIegHrzo_s8SendableRzs5NeverORs_r0_lTRyt_Tg5TATQ0_ + 1\n\t19 
libswift_Concurrency.dylib 0x000000010f8dc941 _ZL23completeTaskWithClosurePN5swift12AsyncContextEPNS_10SwiftErrorE + 1\n)\n", "text": "I can’t figure out the root cause of issue here. I really could use some help.When I call .append on the questions: List property of my topic instance, it is stating that I “Cannot modify managed RLMArray outside of a write transaction.” However, you can clearly see I am in a write transaction. I’ve spent a lot of time on this and have run out of ideas what is wrong.Given I have other code that is working, I ended up taking a piece and fuzzing it next to the same line of code. The $questions.append call here actually succeeds and does not claim it it outside of a write transaction. The following line still errors out.’Could there be a bug with List<> functionality?", "username": "Joseph_Bittman" }, { "code": "item.isFavorite.toggle()$settry! realm.write {\n$topic.questions.append(q)\n}\n", "text": "Well, forums for the win.A “similar topic” on this thread was displayed for:From Jason on that thread:\n\" item.isFavorite.toggle() will not work because you need to call the projected value (using the dollar sign). Using the $ will effectively open a write transaction, enabling the set behaviour previously mentioned.\"In my code, I added a $ and now it works.@Jason_Flax in the context of my issue, why would adding a $ fix it, given I had already been using the realm.write syntax to open a write transaction?", "username": "Joseph_Bittman" }, { "code": "", "text": "Hm, still an issue with a different permutation.I gave simplified code. In my full use case, I am passing topic.questions as a RealmSwift.List parameter to a child view named “topic_questions”. This child view is calling topic_questions.append(q). This still throws the same error. Trying to add a $ on the child view doesn’t compile. If I pass the entire topic object to the child view, and then call $topic.questions.append in the child view, that works.Any way to be able to pass a realm List to a child view and modify it?", "username": "Joseph_Bittman" }, { "code": "", "text": "So this topic is marked as solved. Is it or is there more to it?", "username": "Jay" }, { "code": "", "text": "I marked it solved given a workaround is listed in the comment.I do have two remaining questions though:@Jason_Flax in the context of my issue, why would adding a $ fix it, given I had already been using the realm.write syntax to open a write transaction?How can I update a RealmSwift.List from a child view as a parameter? I get the same error that it needs to be within a write transaction, although it is. The $ workaround from #1 does not work for this case.", "username": "Joseph_Bittman" }, { "code": ".write", "text": "I think some further context is needed.When coding in SwiftUI, the property wrappers like @ObservedResults make it so writes can can be done without explicitly opening a write transaction. In the initial question, .write is being called. So are you not using SwiftUI? Are you/are you not using @ObservedResults and other wrappers?Can you provide a short and complete coding example so we understand your use case? I am not sure $ is the answer here.", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error: cannot update obj outside of write transaction, while in a write transaction
2022-09-20T06:33:17.621Z
Error: cannot update obj outside of write transaction, while in a write transaction
3,759
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "", "text": "Hi there,\nI am working on a nodejs project and have setup mongodb. I need help with finding the data from mongodb based on certain conditions. The condition is that, in “stations” collection, there is “stationContent” array having objects in it. Each object has a type property. I want to read this type and fetch the values based on it. For example, if the type = “Explanation Module”, I want to remove the “options” property from this specific object. I want to do the same conditional checks with all of the objects “stationContent” array.\nYour help would greatly be appreciated.", "username": "Haseeb_Udeen" }, { "code": "", "text": "please provide sample input documents and sample results. to come up with a solution we need to experiment with real documents that we can cut-n-paste into our system", "username": "steevej" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and update your documents so that we can use them.", "username": "steevej" }, { "code": "", "text": "your quotes are still wrong", "username": "steevej" }, { "code": "{\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bbffa\"\n\n },\n\n \"content\": {\n\n \"$oid\": \"632035cff65b9178d8386e64\"\n\n },\n\n \"stationContent\": [\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bbffb\"\n\n },\n\n \"type\": \"Explanation Module\",\n\n \"question\": \"This is a test question\",\n\n \"options\": [],\n\n \"pairs\": [],\n\n \"single_strings\": [],\n\n \"single_strings_two\": []\n\n },\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bbffc\"\n\n },\n\n \"type\": \"Multiple choice\",\n\n \"question\": \"This is 2nd test question\",\n\n \"options\": [\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bbffd\"\n\n },\n\n \"trueFalse\": true,\n\n \"description\": \"This is description of first option\",\n\n \"createdAt\": {\n\n \"$date\": {\n\n \"$numberLong\": \"1663227892923\"\n\n }\n\n }\n\n },\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bbffe\"\n\n },\n\n \"trueFalse\": false,\n\n \"description\": \"This is description of second option\"\n\n }\n\n ],\n\n \"comment\": {\n\n \"comment_by\": true,\n\n \"comment_text\": \"this is comment text\"\n\n },\n\n \"pairs\": [],\n\n \"single_strings\": [],\n\n \"single_strings_two\": []\n\n },\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bbfff\"\n\n },\n\n \"type\": \"Fill in the blank\",\n\n \"question\": \"question for fill in blank\",\n\n \"true_false\": false,\n\n \"comment\": {\n\n \"comment_by\": true,\n\n \"comment_text\": \"test text for fill blank\"\n\n },\n\n \"options\": [],\n\n \"pairs\": [],\n\n \"single_strings\": [],\n\n \"single_strings_two\": []\n\n },\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bc000\"\n\n },\n\n \"type\": \"True/False question\",\n\n \"question\": \"quesitno for true/false\",\n\n \"question_type_true\": true,\n\n \"true_false\": false,\n\n \"options\": [],\n\n \"pairs\": [],\n\n \"single_strings\": [],\n\n \"single_strings_two\": []\n\n },\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bc001\"\n\n },\n\n \"type\": \"Choose the blanks\",\n\n \"question\": \"question for choose blanks\",\n\n \"true_false\": true,\n\n \"string_one\": \"sdlfjsdlkfsdlk\",\n\n \"options\": [],\n\n \"pairs\": [],\n\n \"single_strings\": [],\n\n \"single_strings_two\": []\n\n },\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bc009\"\n\n },\n\n \"type\": \"Pairs\",\n\n \"question\": \"test question for pairs type\",\n\n \"pairs\": [\n\n {\n\n \"_id\": {\n\n 
\"$oid\": \"6322d7f4acdcb979e04bc00a\"\n\n },\n\n \"string_one\": \"sdlkfjsdlkf\",\n\n \"string_two\": \"sdklfsdfkjlk\",\n\n \"createdAt\": {\n\n \"$date\": {\n\n \"$numberLong\": \"1663227892923\"\n\n }\n\n },\n\n \"updatedAt\": {\n\n \"$date\": {\n\n \"$numberLong\": \"1663227892923\"\n\n }\n\n }\n\n },\n\n {\n\n \"_id\": {\n\n \"$oid\": \"6322d7f4acdcb979e04bc00b\"\n\n },\n\n \"string_one\": \"dfsdfsfsd\",\n\n \"string_two\": \"fgdfgdfgdfgf\",\n\n \"createdAt\": {\n\n \"$date\": {\n\n \"$numberLong\": \"1663227892923\"\n\n }\n\n },\n\n \"updatedAt\": {\n\n \"$date\": {\n\n \"$numberLong\": \"1663227892923\"\n\n }\n\n }\n\n }\n\n ],\n\n \"options\": [],\n\n \"single_strings\": [],\n\n \"single_strings_two\": [],\n\n \"createdAt\": {\n\n \"$date\": {\n\n \"$numberLong\": \"1663227892923\"\n\n }\n\n },\n\n \"updatedAt\": {\n\n \"$date\": {\n\n \"$numberLong\": \"1663227892923\"\n\n }\n\n }\n\n }\n\n ],\n\n \"__v\": 0\n\n}\n", "text": "Hi, sorry for the wrong syntax. Now I have verified and corrected the json. Please check it.\nAs required, here is the sample document of “stations” collection that I am referring to in the question:Basically, I want to query for all the documents of the “stations” collection. And in each document of stations collection, there is “stationContent” array; in that array, there are objects, each object has a type property, and this is what I want to base my conditions on.\nSo for example, if the object has type as “Explanation Module”, I want to deselect the options, pairs, single_strings, and single_strings_two properties. Because these are empty. I don’t want to send the empty data to the frontend.\nSimilarly, if the object has type “Multiple choice”, I want to deselect the pairs, single_strings, and single_strings_two properties.\nHope I explained it as expected.\nThank you so much", "username": "Haseeb_Udeen" }, { "code": "", "text": "What you need is an $addFields stage. This stage will use $map on the stationContent array. The expression will use a $cond on the type field to map $$this to a new object that exclude the fields you want to get remove.Personally, I leave this type of data cosmetic to the front end or application layer. Especially, when it is only to remove empty data.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Finding certain data based on certain conditions
2022-09-15T08:00:26.129Z
Finding certain data based on certain conditions
4,587
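A minimal mongosh sketch of the $addFields / $map / $cond approach suggested in the thread above — the collection name `stations` and the per-type projection are assumptions; keep whichever fields matter for each type and add one $cond branch per content type:

```js
db.stations.aggregate([
  {
    $addFields: {
      stationContent: {
        $map: {
          input: "$stationContent",
          as: "item",
          in: {
            $cond: [
              { $eq: ["$$item.type", "Explanation Module"] },
              // keep only the fields that are meaningful for this type
              { _id: "$$item._id", type: "$$item.type", question: "$$item.question" },
              "$$item" // every other type passes through unchanged
            ]
          }
        }
      }
    }
  }
])
```

As noted in the thread, if the trimming is purely cosmetic it can just as well be done in the application layer before sending the response to the frontend.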
null
[ "upgrading" ]
[ { "code": "", "text": "When upgrading my M5 cluster to M10, I am getting below error\n“Configuring analytics nodes specific auto-scaling is not yet supported.”", "username": "Adarsh_Madrecha1" }, { "code": "", "text": "It’s been more than 3 days since I have contacted support team via Chat. There is no reply.\nDon’t have paid support plan. So no access to Support Ticketing", "username": "Adarsh_Madrecha1" }, { "code": "", "text": "Hi @Adarsh_Madrecha1 welcome to the community!Sorry your experience has been sub-optimal. This has been raised internally and hopefully a resolution can be achieved soon. Thanks for your patience!Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "The issue got resolved.For future visitor: This was an internal issue with Atlas. The same was fixed by support team. There was no action required from my part.", "username": "Adarsh_Madrecha1" } ]
Not able to upgrade from M5 to M10
2022-09-17T09:10:06.998Z
Not able to upgrade from M5 to M10
2,470
null
[ "data-modeling" ]
[ { "code": "", "text": "Hey all, I’m fairly new to MongoDB coming from SQL and relational land. We’re working on a game that allows users to create a clan, which can hold an arbitrary number of users (we’re thinking 25 initially but should be able to scale up to 100+). The thing we’re torn on is whether we should create a clan document that has an array of users as ids, or whether to embed the clan id reference inside the user then when we need all users we just query all users that have that matching clan id.What’s the best practice way to go here? In terms of querying, we’ll need know an entire clan’s users often as well as the “active” playing user in that clan.", "username": "Charles_Kelly" }, { "code": "", "text": "Is doing both a valid approach?", "username": "Charles_Kelly" } ]
Best Schema Design for mapping a Clan
2022-09-21T13:52:16.315Z
Best Schema Design for mapping a Clan
1,058
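A rough mongosh sketch of the two patterns being weighed in the thread above — field names and ids are hypothetical:

```js
// Option A: parent reference — each user stores the clan it belongs to
db.users.insertOne({ _id: "user1", displayName: "Alice", clanId: "clan42" })
db.users.createIndex({ clanId: 1 })            // keeps "all members of a clan" fast
db.users.find({ clanId: "clan42" })            // entire clan roster in one query

// Option B: child references — the clan stores an array of member ids
db.clans.insertOne({ _id: "clan42", name: "Night Watch", members: ["user1", "user2"] })
const clan = db.clans.findOne({ _id: "clan42" })
db.users.find({ _id: { $in: clan.members } })  // resolve the roster in a second query
```

Doing both (a members array plus a clanId back-reference) is possible, but it means keeping two places in sync on every join or leave, typically inside a transaction.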
null
[ "aggregation", "atlas-search", "text-search" ]
[ { "code": "{'index': 'default',\n 'compound': \n {'should': [\n {'text': \n {'query': 'artificial', 'path': 'child',\n 'fuzzy': {'maxEdits': 1, 'prefixLength': 4,\n 'maxExpansions': 512}}},\n {'text': \n {'query': 'intelligence', 'path': 'child',\n 'fuzzy': {'maxEdits': 1, 'prefixLength': 4,\n 'maxExpansions': 512}}},\n {'text': \n {'query': 'articlies', 'path': 'child',\n 'fuzzy': {'maxEdits': 1, 'prefixLength': 4,\n 'maxExpansions': 512}}}]},\n 'highlight': { 'path': 'child'}\n}\nchild: \"articles\"\nhighlights: Object\nscore: 1.408543348312378\npath: \"child\"\ntexts:\nArray: {value:\"articles\"}\n", "text": "Hi everyone, I stumbled on this issue, I have a search query composed of multiple should clauses:This returns this match from the collection:Ok now I know that I matched articles, but how can I know that I matched ‘articlies’ (expecially fuzzy matches) in the original query?", "username": "Leo_Pret" }, { "code": "", "text": "Quick clarifying question: was this the only result returned?", "username": "Elle_Shwer" }, { "code": "", "text": "No there were multiple matches, this was just an example.\nI have a collections of entities and I was using atlas search to implement a fast fuzzy matching algorithm.\nThe issue is then I need to remove the matched words from the original query.", "username": "Leo_Pret" }, { "code": "", "text": "I’m not sure I’m following the goal. Can you share more about your use case? Are you by chance looking specifically for spelling errors to correct them?", "username": "Elle_Shwer" } ]
Atlas Search highlight query matches, not content matches
2022-09-20T14:46:17.752Z
Atlas Search highlight query matches, not content matches
2,597
null
[ "server", "containers" ]
[ { "code": "systemLog:\n destination: file\n path: /opt/homebrew/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /opt/homebrew/var/mongodb\nnet:\n bindIp: 127.0.0.1,172.17.0.1\n ipv6: false\n{\"t\":{\"$date\":\"2022-09-21T15:35:17.125+03:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Can't assign requested address\"}}}\nbrew services restart mongodb-community && brew services listmongodb-community error 12288 ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist", "text": "I’m trying to use my local mongodb inside my docker container.To allow access, I have edited my mongo.conf as follows:but when running mongo db restart, I get this in the mongodb.log:Can’t assign requested addressand running brew services restart mongodb-community && brew services list", "username": "rawand_ahmad" }, { "code": "Can't assign requested addressmongodmongodbindIPmongodbrew services rrestart mongodb-communityhost.docker.internalmongoshhost.docker.internalmongod", "text": "Hi @rawand_ahmad and welcome to the MongoDB Community forums! Can't assign requested addressThis means that you’re trying to bind the mongod process to an IP address that it can’t bind to. 172.17.0.1 belongs to the docker networking stack for containers to talk to each other. The local mongod cannot bind to that IP as that IP space is not accessible outside of Docker. Remove that from the list of bindIPs and you should be able to restart the mongod process using brew services rrestart mongodb-community.You can use the host.docker.internal address to connect from your Docker container to processes running on your local machine. The below screenshot shows this in action:\nimage1667×519 64.3 KB\nIn the screenshot we see the following:", "username": "Doug_Duncan" }, { "code": "", "text": "host.docker.internalThank you, thank you, you saved me hours of work.", "username": "rawand_ahmad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb: assigning docker gateway to mongo.conf
2022-09-21T12:43:24.126Z
Mongodb: assigning docker gateway to mongo.conf
2,860
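For reference, a minimal Node.js sketch of connecting from inside a container to the mongod running on the host via host.docker.internal, as shown in the answer above — the database name and script structure are illustrative:

```js
const { MongoClient } = require("mongodb");

async function main() {
  // host.docker.internal resolves to the Docker host from inside the container,
  // so no bindIp changes are needed on the host's mongod
  const client = new MongoClient("mongodb://host.docker.internal:27017");
  await client.connect();
  console.log(await client.db("admin").command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);
```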
null
[]
[ { "code": " $category_amount = array(\n array(\"UserName\"=>\"User1\",\"Leads\"=>32000,\"Closed Won\"=>20000,\"Closed Lost\"=>12000),\n array(\"UserName\"=>\"User2\",\"Leads\"=>43000,\"Closed Won\"=>36000,\"Closed Lost\"=>7000),\n array(\"UserName\"=>\"User3\",\"Leads\"=>54000,\"Closed Won\"=>39000,\"Closed Lost\"=>15000),\n array(\"UserName\"=>\"User4\",\"Leads\"=>23000,\"Closed Won\"=>18000,\"Closed Lost\"=>5000),\n array(\"UserName\"=>\"User5\",\"Leads\"=>12000,\"Closed Won\"=>6000,\"Closed Lost\"=>6000),\n );\nusername,stage_en,price <== Columns Header\nAshraf Shehadeh,Closed Lost,58\nMohammad Allan,Closed Lost,580\nJamil Mahmoud ,Closed Lost,556.8\nAshraf Shehadeh,Closed Won,406\nLeen Ibrahim,Closed Lost,0\nAshraf Shehadeh,Closed Lost,928\nMohammad Allan,Closed Lost,928\nMohammad Allan,Closed Won,522\nLeen Ibrahim,Closed Lost,0\nAgent 16,Closed Lost,0\n", "text": "Hi,I’m new to MongoDB and I have the challenge to convert CSV files to the below format.CSV Sample:What I need is to convert the list to a formatted array to be able to apply it as a Chart datasource.Thanks,", "username": "Ahmad_Abujoudeh" }, { "code": "mongoimport --db=users --type=csv --headerline --file=/opt/backups/contacts.csv\n", "text": "Hi @Ahmad_Abujoudeh and welcome to the MongoDB community!!If I try to understand correctly, are you trying to import a CSV file to the data source in MongoDB charts?\nYou can import the csv file directly to the collection using CSV import using mongoimport and further use the collection as datasource.For further understanding, please refer to the documentation on Data Source in MongoDB charts which would be a good reference for more details.Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Prepare data to Charts
2022-08-28T14:09:52.683Z
Prepare data to Charts
908
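Once the CSV is loaded with mongoimport as suggested above, one possible way to shape the rows into per-user totals for a chart data source is a $group with conditional sums — the collection name `sales` is an assumption, and since the thread does not say how "Leads" is derived from the stages, it is omitted here:

```js
db.sales.aggregate([
  {
    $group: {
      _id: "$username",
      closedWon:  { $sum: { $cond: [{ $eq: ["$stage_en", "Closed Won"]  }, "$price", 0] } },
      closedLost: { $sum: { $cond: [{ $eq: ["$stage_en", "Closed Lost"] }, "$price", 0] } }
    }
  },
  { $project: { _id: 0, UserName: "$_id", "Closed Won": "$closedWon", "Closed Lost": "$closedLost" } }
])
```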
null
[ "aggregation" ]
[ { "code": "{\n \"day\": 1,\n \"title\": \"Lorem ipsum dolor sit amet\",\n \"data\": [\n {\n \"type\": \"body\",\n \"content\": \"Lorem ipsum dolor sit amet\"\n },\n {\n \"type\": \"content_list\",\n \"content\": [\n {\n \"id\": \"6312b5bd0fb68141c6bdc4d0\",\n },\n {\n \"id\": \"6311c6c50fb68141c6b97710\",\n },\n ]\n },\n {\n \"type\": \"body\",\n \"content\": \"Lorem ipsum dolor sit amet\"\n },\n {\n \"type\": \"content_list\",\n \"content\": [\n {\n \"id\": \"6312b5bd0fb68141c6bdc4d0\",\n },\n {\n \"id\": \"6311c6c50fb68141c6b97710\",\n },\n ]\n },\n", "text": "Hello, so I stumbled on this issue. I have this kind of document:There is a list of different content types in data, if the content type is “content_list” then I would like to do a $lookup using the id on another collection.\nIs it possible to do a conditional $lookup ?", "username": "Leonardo_Pratesi" }, { "code": "$unwinddata$matchdata.type", "text": "I believe you could $unwind your data array, then $match on data.type for the value you’re looking for and then perform the necessary join.It is hard, however, to say if that’s the right approach without seeing data from the other collection and know what the final output your looking for.", "username": "Doug_Duncan" }, { "code": "", "text": "I’ve shared the question also on stackoverflow, I leave here the link.\nI need the structure to stay the same so there were some more trasformations to be made", "username": "Leonardo_Pratesi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Conditional $lookup on multiple fields in the same document
2022-09-16T07:41:11.087Z
Conditional $lookup on multiple fields in the same document
2,793
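A sketch of the $unwind / $match / $lookup idea suggested above — the source collection `docs`, the joined collection `contents`, and the assumption that `data.content.id` values have the same type as `_id` in that collection are all placeholders:

```js
db.docs.aggregate([
  { $unwind: "$data" },
  { $match: { "data.type": "content_list" } },   // only join for this content type
  {
    $lookup: {
      from: "contents",
      localField: "data.content.id",             // matches each id in the content array
      foreignField: "_id",
      as: "resolvedContent"
    }
  }
])
```

If the original document shape has to be preserved, the unwound documents would need to be regrouped afterwards (for example with $group and $push).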
https://www.mongodb.com/…b_2_1024x512.png
[]
[ { "code": "newOrder = await newOrder.save();\n\n //console.log(newOrder);\n\n console.log(\"ID: \" + newOrder.id);\n console.log(\"orderId 1: \" + newOrder.orderId);\n\n let updatedOrder = await Order.findById(newOrder.id).populate({\n path: 'orderItems',\n populate: { path: 'menuItem', select: \"name\" }//description\n });\n\n console.log(\"orderId 2: \" + updatedOrder.orderId);\n\n updatedOrder = await Order.findById(newOrder.id).populate({\n path: 'orderItems',\n populate: { path: 'menuItem', select: \"name\" }//description\n });\n\n updatedOrder = await Order.findById(newOrder.id).populate({\n path: 'orderItems',\n populate: { path: 'menuItem', select: \"name\" }//description\n });\n\n console.log(\"orderId 3: \" + updatedOrder.orderId);\n\nID: 61f8358055301a3970af47f4\n\norderId 1: undefined\n\norderId 2: undefined\n\norderId 3: 68\n", "text": "Hi,I’m prototyping an node.js app, using the free tier, using Mongoose library (I don’t think this is the cause of the issue though).I followed instructions here:Learn how to implement auto-incremented fields with MongoDB Atlas triggers following these simple steps.to create an auto increment orderId field in my orders collection documents upon create.When I save() the object, I get the object back from the call, the orderId is not present.If I go check the the object in Atlas, it is there. I decided then to retrieve the object by Id again, to see if it would appear. It would appear some times.I then retrieved the object a 2nd time, and it would appear more often.I then retrieved the object a 3rd time, and now I see the orderId consistently.I’m not familiar with the workings of mongodb and how they do these triggers, so I assume there is some sort of timing issue.Upon save I’m returned the object before the trigger has executed. I tried to do a {j; true} option to see if it would return properly, but was not successful.I saw that if the trigger is called a lot, it might be put on an execution queue, but I’m just testing out the api, so that should not be the issue.Could it be because I’m on the free tier?Am I missing something here?Thanks for your help,\nRichOutput", "username": "Richard_Cook" }, { "code": "const client = new MongoClient(uri);\n\n await client.connect();\n const database = client.db('ordering');\n const orders = database.collection('orders');\n\n const testOrder = { location: 3, completed: false };\n \n const result = await orders.insertOne(testOrder);\n\n const newId = result.insertedId;\n\n console.log(\"object Id: \" + newId);\n\n const query = { _id: newId };\n const latestOrder = await orders.findOne(query);\n\n console.log(\"orderId test: \" + latestOrder.orderId);\nobject Id: 61f842807d4e78c006ca75a3\norderId test: undefined\n", "text": "Also, to double check that it was not an issue with Mongoose, I went ahead and added some test code that used the mongodb node driver directly. I see the same issue. orderId is undefined when I retrieve the order immediately after inserting it. If I check atlas, I can see it has one. 
I suspect if I keep retrieving the object, it will eventually appear.Rich", "username": "Richard_Cook" }, { "code": "object Id: 61f843ba2d5c561249524ec9\n\norderId test: undefined\n\norderId test: 82\n\norderId test: 82\n\norderId test: 82\n\norderId test: 82\n\nobject Id: 61f843c82d5c561249524f65\n\norderId test: undefined\n\norderId test: undefined\n\norderId test: undefined\n\norderId test: 84\n\norderId test: 84\n", "text": "Additionally, I tried doing muliple lookups and you can see the behavior varies.Thanks,\nRich", "username": "Richard_Cook" }, { "code": "", "text": "Anyone have any clue on this? Support from Mongo said they’d get back to me. Haven’t heard from them.", "username": "Richard_Cook" }, { "code": "", "text": "As an experiment, I upgraded my cluster to an M10, thinking that this would resolve the issue. I thought maybe because it was underpowered, the trigger was not working in a timely fashion.It did not.I don’t understand how mongo can return the document upon insert to me with the value NOT.Am I missing something here?Thanks for any help,\nRich", "username": "Richard_Cook" }, { "code": "", "text": "I don’t understand how mongo can return the document upon insert to me with the value NOT.That should be:I don’t understand how mongo can return the document upon insert to me with the value NOT set.", "username": "Richard_Cook" }, { "code": "", "text": "Did you ever figure this out? I’m having the same issue. Thanks.", "username": "Mary_Luksetich" }, { "code": "", "text": "I never did resolve this issue. I ended up not using the auto-increment feature. I find it really weird that this function is not working.Every SQL DB that I know of can us an auto increment function for creating primary keys (Identity). I would think this is something that MongoDB could handle as well.", "username": "Richard_Cook" } ]
OrderId field populated by Trigger does not appear on returned object from initial save(). I'm forced to retrieve object multiple times before field appears
2022-01-31T19:18:53.896Z
OrderId field populated by Trigger does not appear on returned object from initial save(). I'm forced to retrieve object multiple times before field appears
4,238
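For anyone landing here later: the gap exists because the Atlas trigger runs after the insert is acknowledged. One alternative — not what the trigger does, and not the resolution reached in this thread — is to generate the sequence number synchronously with a counters collection before inserting. A hedged Node.js sketch:

```js
async function nextOrderId(db) {
  // Atomically increment a counter document; on current Node drivers this resolves to the
  // updated document (on 4.x/5.x drivers, read result.value instead)
  const counter = await db.collection("counters").findOneAndUpdate(
    { _id: "orderId" },
    { $inc: { seq: 1 } },
    { upsert: true, returnDocument: "after" }
  );
  return counter.seq;
}

async function createOrder(db) {
  const orderId = await nextOrderId(db);
  return db.collection("orders").insertOne({ location: 3, completed: false, orderId });
}
```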
null
[ "aggregation", "queries", "atlas-search" ]
[ { "code": "{\n restaurantName: String, //SAMPLE 'The Serenity'\n restaurantTitle: String, //SAMPLE 'Gorgeous vegan restaurant with river views'\n restaurantDesc: String, //SAMPLE 'The Serenity is a three-michelin-starred restaurant in the heart of Soho...'\n restaurantType: String //SAMPLE 'Chinese'\n area: String //SAMPLE 'Mayfair'\n city: String // SAMPLE 'London'\n country: String // SAMPLE 'UK\"\n}\n$search: {\n index: 'defaultRestaurantSearch',\n compound: {\n should: [\n {\n text: {\n query: req.query.searchTerm,\n path: 'restaurantTitle',\n fuzzy: {},\n score: {\n boost: {\n value: 9,\n },\n },\n },\n },\n {\n text: {\n query: req.query.searchTerm,\n path: 'restaurantDescription',\n fuzzy: {},\n score: {\n boost: {\n value: 8,\n },\n },\n },\n },\n ],\n },\n },\n{\n \"mappings\": {\n \"dynamic\": true\n }\n}\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"restaurantName\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"string\"\n }\n ],\n \"restaurantTitle\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"string\"\n }\n ],\n ...//OTHER FIELDS HERE\n }\n }\n}\n", "text": "Hello MongoDB,I would like to run a compound search query which returns relevant results, but at the moment my query is returning every result and I don’t understand why. My issue/problem is that the search-term may be contained in a number of fields, and therefore I can’t stipulate a ‘must’ contain in the query.Let’s say my data looks like this:At the moment my query is as follows (once this bit works I will obviously build it out). I’m just trying with two fields and a ‘should’ condition, I thought meaning return a result if the searchTerm is in either:In the app there’s already a filtering system to allow users to filter restaurants. However, we would like to have an open text field search where we can basically expect users to type anything, from the restaurant name to something such as ‘chinese in Soho’.At the moment, if I just enter ‘West End’ into the field, all records are returned, even those with no mention of Soho in the name, title, description, area etc…Is there an obvious error in the code? Or is this even possible? I have been studying the sample Restaurant app at https://www.atlassearchrestaurants.com/ to get a feel for the compound query (even though the use-cases are slightly different) but I can’t get it to work.I thought the problem may be with the Index Definition, and have tried bothandBut neither have made a difference.\nI’ve used Atlas search a number of times before, but more simple searches on a single name field. Any help would be very much appreciated.Cheers,\nMatt", "username": "Matt_Heslington1" }, { "code": "", "text": "Hiya! Can you tell me more about the results and your expectations?Are there results returning with Soho at all? Are they boosted to the top? Or are there other results appearing at the top that don’t include Soho at all?Is the issue that there are results that don’t include SoHo appearing?", "username": "Elle_Shwer" }, { "code": "", "text": "Hello Elle,Thank you for your reply.I’ve just edited the original question to change the search term to ‘West End’, as a bit more investigating this end has shown it’s something to do with spaces in the searchTerm.I would like back an array of records where, in the above example, ‘West End’ is in either the restaurantTitle or restaurantDescription, but I’m getting an array of all the records in the collection. 
Further down the line I would like to expand the search to the ‘area’ field for example, which would be a more natural fit for ‘West End’, but I’m just trying to get it to work with two fields at the moment.EDIT: Further testing has revealed if I search for ‘West’ or ‘End’ instead of ‘West End’ it does return only those records where the phrase ‘West End’ is in the restaurantTitle or restaurantDescription, but ‘West End’ with the space returns all the records.Cheers,\nMatt", "username": "Matt_Heslington1" }, { "code": "minimumShouldMatchshouldphrasetextslopfuzzy", "text": "Hi Matt, thanks for sharing your newest findings!Can you try adding the minimumShouldMatch parameter to the compound query to ensure that the results match at least one 1 of the should clauses?Additionally, if you want an exact match on the query, that can be achieved using the phrase operator instead of text in your query. The slop parameter achieves a similar effect to fuzzy. You can also read more about exact matches here.Hope this helps!", "username": "amyjian" }, { "code": "", "text": "Hello Amyjian,Thank you for your reply, this has worked. The ‘phrase’ operator is exactly what I was looking for, which I wasn’t aware off, and becomes really powerful when used with the ‘slop’ parameter.Thanks again,\nMatt", "username": "Matt_Heslington1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Returning Scored Results from Atlas Search Across Multiple Fields
2022-09-20T14:38:48.392Z
Returning Scored Results from Atlas Search Across Multiple Fields
2,068
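A condensed sketch of the query shape the thread above settles on — phrase with slop for multi-word input, plus minimumShouldMatch — using the thread's index name and an illustrative search term:

```js
{
  $search: {
    index: "defaultRestaurantSearch",
    compound: {
      should: [
        { phrase: { query: "West End", path: "restaurantTitle",       slop: 2, score: { boost: { value: 9 } } } },
        { phrase: { query: "West End", path: "restaurantDescription", slop: 2, score: { boost: { value: 8 } } } }
      ],
      minimumShouldMatch: 1   // require at least one clause to match
    }
  }
}
```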
null
[ "aggregation", "queries", "time-series" ]
[ { "code": "{\n timestamps: number\n metadata: {\n [key: metricName]: { // an array of measurement with different sensor for a same metric\n value: number,\n units: string,\n sensor: {\n name: string,\n height: {\n value: number,\n units: string\n }\n }\n }[]\n }\n}\n{\n \"metadata.MEASURE1\": {\n $elemMatch: {\n 'sensor.height.value': 0,\n 'sensor.height.units': 'm'\n }\n }\n}\n", "text": "Hi here\nOn mongo 6.0.0 and 6.0.1\nI have a timeseries with a schema like thatIf I try to queryDoesnt work.\nIf I do a similar request on non timeseries. Works fine.\nSomeont met this issues on timeseries ?", "username": "ACHACHE_FRANCOIS" }, { "code": "[\n {\n timestamp: ISODate(\"2021-05-18T00:00:00.000Z\"),\n metadata: { sensorId: 5578, type: 'temperature' },\n _id: ObjectId(\"632aa2743a5ebcfafa682a85\"),\n results: [ 1, 2, 3 ],\n temp: 12\n },\n {\n timestamp: ISODate(\"2021-05-18T00:00:00.000Z\"),\n metadata: { sensorId: 5578, type: 'temperature' },\n _id: ObjectId(\"632aa27e3a5ebcfafa682a86\"),\n results: [ 82, 85, 88 ],\n temp: 12\n },\n {\n timestamp: ISODate(\"2022-09-20T10:54:17.944Z\"),\n results: [ 82, 85, 88 ],\n _id: ObjectId(\"63299bd941f051910f8be559\")\n },\n {\n timestamp: ISODate(\"2022-09-20T10:54:26.328Z\"),\n results: [ 1, 2, 3 ],\n _id: ObjectId(\"63299be241f051910f8be55a\")\n }\n]\ndb.test.find( { results: { $elemMatch: { $gte: 80, $lt: 85 } } })\n[\n {\n timestamp: ISODate(\"2021-05-18T00:00:00.000Z\"),\n metadata: { sensorId: 5578, type: 'temperature' },\n _id: ObjectId(\"632aa27e3a5ebcfafa682a86\"),\n results: [ 82, 85, 88 ],\n temp: 12\n },\n {\n timestamp: ISODate(\"2022-09-20T10:54:17.944Z\"),\n results: [ 82, 85, 88 ],\n _id: ObjectId(\"63299bd941f051910f8be559\")\n }\n]\n", "text": "Hi @ACHACHE_FRANCOIS and welcome to the MongoDB community!!Could you please help me with the sample document for the above schema mentioned which would help me to reproduce the problem on the local environment.In the meantime, I created my own sample documents to check for $elemMatch functionality in a time series collection:\nSample DocumentQuery using $elemMatchand it works as expected in the MongoDB version 6.0.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed after 180 days. New replies are no longer allowed.", "username": "system" } ]
elemMatch in timeseries
2022-09-16T14:01:22.213Z
elemMatch in timeseries
1,580
null
[ "queries", "node-js", "mongoose-odm", "graphql", "typescript" ]
[ { "code": "async function createServer() {\n try {\n // Create mongoose connection\n await createSession()\n\n // Create express server\n const app = express()\n\n // Allow CORS from client app\n const corsOptions = {\n origin: 'http://localhost:4000',\n credentials: true,\n }\n app.use(cors(corsOptions))\n\n // Allow JSON requests\n app.use(express.json())\n\n // Initialize GraphQL schema\n const schema = await createSchema()\n\n // Create GraphQL server\n const apolloServer = new ApolloServer({\n schema,\n context: ({ req, res }) => ({\n req,\n res,\n connectionName: 'graphql1',\n }),\n introspection: true,\n // Enable GraphQL Playground with credentials\n plugins: [\n process.env.NODE_ENV === 'production'\n ? ApolloServerPluginLandingPageProductionDefault({ footer: false })\n : ApolloServerPluginLandingPageGraphQLPlayground({\n endpoint: '/graphql1',\n settings: {\n 'request.credentials': 'include',\n },\n }),\n ],\n })\n\n const apolloServerTrading = new ApolloServer({\n schema,\n context: ({ req, res }) => ({\n req,\n res,\n connectionName: 'graphql2',\n }),\n introspection: true,\n // Enable GraphQL Playground with credentials\n plugins: [\n process.env.NODE_ENV === 'production'\n ? ApolloServerPluginLandingPageProductionDefault({ footer: false })\n : ApolloServerPluginLandingPageGraphQLPlayground({\n endpoint: '/graphql2',\n settings: {\n 'request.credentials': 'include',\n },\n }),\n ],\n })\n\n await apolloServer.start()\n await apolloServerTrading.start()\n\n apolloServer.applyMiddleware({ app, cors: corsOptions, path: '/graphql1' })\n apolloServer.applyMiddleware({ app, cors: corsOptions, path: '/graphql2' })\n\n // Start the server\n app.listen({ port }, () => {\n info(\n `🚀 GraphQL 1 running at http://localhost:${port}${apolloServer.graphqlPath}`,\n )\n info(\n `🚀 GraphQL 2 running at http://localhost:${port}${apolloServer.graphqlPath}`,\n )\n })\n } catch (err) {\n error(err)\n }\n}\n\ncreateServer()\n", "text": "Hello friends,\nSorry to bother you but after an afternoon of trying to find a solution, I just found a brain on fire >_<What am I trying to do?\nI’d like to query data on my GraphQL Apollo server from the 2 databases.So far I have managed to get the entities created on my second database. But the queries and mutation are not displayed in my graphql playground. I thought it was normal, my apollo server is probably connected to the 1st db only.But then how to do ?The only documentation I found was with Ben Amad here. Unfortunately, I can’t really adapt it to my stack (Mongoose, typegraphql, typegoose, express, apollo), in other word: it doesn’t work for me Here is my code for my server if you want to see what I tried:Thank you so much Have a good night dev!", "username": "Mael_LE_PETIT" }, { "code": "", "text": "Did you get this working ? im looking to build something similar", "username": "Patrick_O_Sullivan" } ]
Create Single GraphQL Server with Multiple Databases
2022-02-10T23:39:45.715Z
Create Single GraphQL Server with Multiple Databases
4,073
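One common Mongoose pattern for the two-database setup asked about above is to create two independent connections and register models on each — a minimal sketch only; the environment variable names are placeholders, and the wiring into the type-graphql resolvers (e.g. picking a model based on context.connectionName) is left out:

```js
const mongoose = require("mongoose");

const primaryConn = mongoose.createConnection(process.env.PRIMARY_DB_URI);
const tradingConn = mongoose.createConnection(process.env.TRADING_DB_URI);

const questionSchema = new mongoose.Schema({ title: String });

// Models are bound to a connection, so each one talks to its own database
const QuestionPrimary = primaryConn.model("Question", questionSchema);
const QuestionTrading = tradingConn.model("Question", questionSchema);
```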
null
[ "java" ]
[ { "code": "java.lang.IllegalStateException: Cannot load configuration class: mflix.config.MongoDBConfiguration\nat org.springframework.context.annotation.ConfigurationClassPostProcessor.enhanceConfigurationClasses(ConfigurationClassPostProcessor.java:414) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanFactory(ConfigurationClassPostProcessor.java:254) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:284) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:128) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:694) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:532) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.boot.SpringApplication.refresh(SpringApplication.java:762) ~[spring-boot-2.0.4.RELEASE.jar:2.0.4.RELEASE]\nat org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:398) ~[spring-boot-2.0.4.RELEASE.jar:2.0.4.RELEASE]\nat org.springframework.boot.SpringApplication.run(SpringApplication.java:330) ~[spring-boot-2.0.4.RELEASE.jar:2.0.4.RELEASE]\nat org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:139) ~[spring-boot-test-2.0.3.RELEASE.jar:2.0.3.RELEASE]\nat org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:108) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:190) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:132) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:246) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:227) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) ~[junit-4.12.jar:4.12]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat 
org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:246) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) ~[junit-4.12.jar:4.12]\nat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) ~[junit-4.12.jar:4.12]\nat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) ~[junit-4.12.jar:4.12]\nat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) ~[junit-4.12.jar:4.12]\nat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) ~[junit-4.12.jar:4.12]\nat org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.junit.runners.ParentRunner.run(ParentRunner.java:363) ~[junit-4.12.jar:4.12]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) ~[surefire-junit4-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) ~[surefire-junit4-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) ~[surefire-junit4-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) ~[surefire-junit4-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383) ~[surefire-booter-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344) ~[surefire-booter-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) ~[surefire-booter-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417) ~[surefire-booter-2.22.0.jar:2.22.0]\nCaused by: java.lang.ExceptionInInitializerError: null\nat org.springframework.context.annotation.ConfigurationClassEnhancer.newEnhancer(ConfigurationClassEnhancer.java:122) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.annotation.ConfigurationClassEnhancer.enhance(ConfigurationClassEnhancer.java:110) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.annotation.ConfigurationClassPostProcessor.enhanceConfigurationClasses(ConfigurationClassPostProcessor.java:403) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\n… 38 common frames omitted\nCaused by: org.springframework.cglib.core.CodeGenerationException: java.lang.reflect.InaccessibleObjectException–>Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not “opens java.lang” to unnamed module @3c60b7e7\nat org.springframework.cglib.core.ReflectUtils.defineClass(ReflectUtils.java:464) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat 
org.springframework.cglib.core.AbstractClassGenerator.generate(AbstractClassGenerator.java:336) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.AbstractClassGenerator$ClassLoaderData$3.apply(AbstractClassGenerator.java:93) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.AbstractClassGenerator$ClassLoaderData$3.apply(AbstractClassGenerator.java:91) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.internal.LoadingCache$2.call(LoadingCache.java:54) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]\nat org.springframework.cglib.core.internal.LoadingCache.createEntry(LoadingCache.java:61) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.internal.LoadingCache.get(LoadingCache.java:34) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.AbstractClassGenerator$ClassLoaderData.get(AbstractClassGenerator.java:116) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:291) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.KeyFactory$Generator.create(KeyFactory.java:221) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.KeyFactory.create(KeyFactory.java:174) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.KeyFactory.create(KeyFactory.java:153) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.proxy.Enhancer.(Enhancer.java:73) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\n… 41 common frames omitted\nCaused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not “opens java.lang” to unnamed module @3c60b7e7\nat java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354) ~[na:na]\nat java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297) ~[na:na]\nat java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199) ~[na:na]\nat java.base/java.lang.reflect.Method.setAccessible(Method.java:193) ~[na:na]\nat org.springframework.cglib.core.ReflectUtils$1.run(ReflectUtils.java:61) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat java.base/java.security.AccessController.doPrivileged(AccessController.java:569) ~[na:na]\nat org.springframework.cglib.core.ReflectUtils.(ReflectUtils.java:52) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:243) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.cglib.core.AbstractClassGenerator.generate(AbstractClassGenerator.java:329) ~[spring-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]\n… 53 common frames omitted\n\n2022-04-15 13:05:26.574 INFO 1520 — [ main] o.s.w.c.s.GenericWebApplicationContext : Closing org.springframework.web.context.support.GenericWebApplicationContext@6d07a63d: startup date [Fri Apr 15 13:05:25 IST 2022]; root of context hierarchy\n2022-04-15 13:05:26.581 ERROR 1520 — [ main] 
o.s.test.context.TestContextManager : Caught exception while allowing TestExecutionListener [org.springframework.test.context.web.ServletTestExecutionListener@6f46426d] to prepare test instance [mflix.api.daos.ConnectionTest@110844f6]\n\njava.lang.IllegalStateException: Failed to load ApplicationContext\nat org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:125) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:108) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:190) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:132) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:246) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:227) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) ~[junit-4.12.jar:4.12]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:246) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) ~[junit-4.12.jar:4.12]\nat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) ~[junit-4.12.jar:4.12]\nat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) ~[junit-4.12.jar:4.12]\nat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) ~[junit-4.12.jar:4.12]\nat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) ~[junit-4.12.jar:4.12]\nat org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.junit.runners.ParentRunner.run(ParentRunner.java:363) ~[junit-4.12.jar:4.12]\nat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) ~[surefire-junit4-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) ~[surefire-junit4-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) ~[surefire-junit4-2.22.0.jar:2.22.0]\nat 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) ~[surefire-junit4-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383) ~[surefire-booter-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344) ~[surefire-booter-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) ~[surefire-booter-2.22.0.jar:2.22.0]\nat org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417) ~[surefire-booter-2.22.0.jar:2.22.0]\nCaused by: java.lang.IllegalStateException: Cannot load configuration class: mflix.config.MongoDBConfiguration\nat org.springframework.context.annotation.ConfigurationClassPostProcessor.enhanceConfigurationClasses(ConfigurationClassPostProcessor.java:414) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanFactory(ConfigurationClassPostProcessor.java:254) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:284) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:128) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:694) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:532) ~[spring-context-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.boot.SpringApplication.refresh(SpringApplication.java:762) ~[spring-boot-2.0.4.RELEASE.jar:2.0.4.RELEASE]\nat org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:398) ~[spring-boot-2.0.4.RELEASE.jar:2.0.4.RELEASE]\nat org.springframework.boot.SpringApplication.run(SpringApplication.java:330) ~[spring-boot-2.0.4.RELEASE.jar:2.0.4.RELEASE]\nat org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:139) ~[spring-boot-test-2.0.3.RELEASE.jar:2.0.3.RELEASE]\nat org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\nat org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117) ~[spring-test-5.0.7.RELEASE.jar:5.0.7.RELEASE]\n… 27 common frames omitted\nCaused by: java.lang.ExceptionInInitializerError: null\n", "text": "Hi guys, I’m posting this help in case others go through the same situation okSolution for error in Intellij IDE:\nFile > Project Structure > Project > then change SDK to 1.8 > Then apply.That was my problem the SDK was in a different version than the project.", "username": "Rafael_Vieira" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error when performing the first application tests in Java (Correction)
2022-09-20T11:19:47.300Z
Error when performing the first application tests in Java (Correction)
3,910
null
[ "queries", "mongodb-shell", "database-tools" ]
[ { "code": "sort -n /path/date.txt | tail -1 --query '{\n\n \"memberCard.cardMbrDtlsUpdatedTms\": {\n\n \"$gte\": $x\n", "text": "start_dt=sort -n /path/date.txt | tail -1\nI am converting this date to ISODate beacause it’s is in ISODate formate in mongodb\nx=‘ISODate(\"’$start_dt’\")’mongoexport --ssl --sslCAFile $sslCAFile --host $host -u $username -p $password --collection $collectionMember --db $database --limit 5 \\}}’ \\–out $outputI am getting the output with 0 records when we pass the dates manually It’s fetching 5 records based on limt.", "username": "Yugandhar_Prathi" }, { "code": "date{\"$gte\": {\"$date\": \"2015-01-01T00:00:00.000+0000\"}}\n", "text": "Hello @Yugandhar_Prathi ,I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, then you can try below.ISODate() is a javascript function, hence will not work here.x=‘ISODate(“’$start_dt’”)’According to the mongoexport docThe query must be in Extended JSON v2 format (either relaxed or canonical/strict mode), including enclosing the field names and operators in quotes:So you can try using the date likeLet me know if you have any more questions.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I am reading date from one .txt file and need to compared inside the mongoexport query
2022-09-13T06:34:32.812Z
I am reading date from one .txt file and need to compared inside the mongoexport query
1,442
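For completeness, a hedged sketch of the suggestion above applied to the original script — it assumes the dates in date.txt are already in an ISO-8601 form that Extended JSON's $date accepts:

```sh
start_dt=$(sort -n /path/date.txt | tail -1)   # e.g. 2015-01-01T00:00:00.000+0000

mongoexport --ssl --sslCAFile "$sslCAFile" --host "$host" -u "$username" -p "$password" \
  --db "$database" --collection "$collectionMember" --limit 5 \
  --query '{"memberCard.cardMbrDtlsUpdatedTms": {"$gte": {"$date": "'"$start_dt"'"}}}' \
  --out "$output"
```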
null
[ "aggregation", "atlas-search", "text-search" ]
[ { "code": "", "text": "Hello,We’re using MongoDB Atlas, and we’re already using ElasticSearch.However, we face a lot of problems with the added burden of syncing data and schema changes between MongoDB and ElasticSearch, plus the extra cost of maintaining an ElasticSearch self-hosted instance.We’re seeking the best full text-search across multiple collections, with suggestions and auto-complete functionality.We are now considering MongoDB Atlas Search.Here are the main issues we face with Atlas Search:1 - Atlas Search requires Searching to happen as the first stage in an aggregation pipeline. This puts a limitation because sometimes we want to do an aggregation first (lookups, match, etc) then do the search on the data from the previous stages. Does anyone know if this is still a limitation or has things changed ? And what is the best approach in this case ? i.e. How can we search across multiple collections for example using the same search query ?2 - Does Atlas Search provide the same auto-complete functionality as ElasticSearch ?3 - Does Atlas Search provide the same search-suggestions functionality as ElasticSearch ?Many thanks for your time and support", "username": "Mohamed_Heiba" }, { "code": "$unionWith$search$lookupmoreLikeThis", "text": "Hi @Mohamed_Heiba - Welcome to the community 1 - Atlas Search requires Searching to happen as the first stage in an aggregation pipeline. This puts a limitation because sometimes we want to do an aggregation first (lookups, match, etc) then do the search on the data from the previous stages. Does anyone know if this is still a limitation or has things changed ? And what is the best approach in this case ? i.e. How can we search across multiple collections for example using the same search query ?As of MongoDB version 6.0, you will be able to run cross-collection searches. You may find the following documentation / pages useful regarding this:2 - Does Atlas Search provide the same auto-complete functionality as ElasticSearch ?As per the autocomplete operator documentation:The autocomplete operator performs a search for a word or phrase that contains a sequence of characters from an incomplete input string. You can use the autocomplete operator with search-as-you-type applications to predict words with increasing accuracy as characters are entered in your application’s search field.Were there more specifics regarding what functionality you were after specifically or does the above described (or the documentation as well) cover the core functionality you are after with Atlas search specific to auto-complete?3 - Does Atlas Search provide the same search-suggestions functionality as ElasticSearch ?Could you provide some further use case details regarding this question? E.g. Is the search input “Naw York” and the expected output (or suggestion) “New York”?There are a few features which may provide the functionality you are after but it will depend on the use case:Having said those, I would like to state that I’m not an expert in ElasticSearch (we are a MongoDB forum after all ). Regarding Atlas Search, one resource that might be of interest to you is the tutorials on Atlas Search that should provide a general overview of Atlas Search’s capabilities as of today.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ElasticSearch vs MongoDB Atlas Search?
2022-09-18T05:33:22.698Z
ElasticSearch vs MongoDB Atlas Search?
2,966
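To make the cross-collection point above concrete, a small sketch of $search combined with $unionWith (MongoDB 6.0+) — collection names, index names and the searched field are hypothetical, and each collection needs its own Atlas Search index:

```js
db.restaurants.aggregate([
  { $search: { index: "default", text: { query: "sushi", path: "name" } } },
  {
    $unionWith: {
      coll: "bars",
      pipeline: [
        // $search must also be the first stage of the sub-pipeline
        { $search: { index: "default", text: { query: "sushi", path: "name" } } }
      ]
    }
  }
])
```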
null
[ "monitoring" ]
[ { "code": "", "text": "Hello ,On altlas for CPU which matrics we should concern ?i gone through with mongo Doc and they suggest to check normalize System CPU and normalize process CPU .on my servers normalize System CPU shows almost ident between 10 -20 % while normalize process CPU is upto 60% . should it is concern?Regards,Daksh", "username": "Dasharath_Dixit" }, { "code": "", "text": "ers normalize System CPU shows almost ident between 10 -20 % while normalize process CPU is upto 60% . shHi @Dasharath_Dixit,Thank you for your question! System CPU and Normalized Process CPU are both useful in estimating processor requirements. Of the two, Normalized Process CPU may be easier to understand as it already factors in the number of CPU cores and presents the data as a percentage between 0 and 100%.Regarding your normalized process CPU, our default threshold on this metric is currently 95%. I don’t believe 60% is a concern unless you see that it continues to grow and nears 95% or you see that the maximum value regularly exceeds 95%.Thanks,\nFrank", "username": "Frank_Sun" }, { "code": "", "text": "Hello @Frank_Sun ,Thanks for the response!got it . but mostly normalize system CPU is between 10-20 % and normalize process CPU is much higher .second for both metrics it is mention that , the value is divide by number of core . so the metrics on GUI is already shown the value after the calculation based on VCPU ?Regards\nDash", "username": "Dasharath_Dixit" }, { "code": "", "text": "Hi @Dasharath_Dixit,Ah sorry I misread your original question. Yes, both the normalized system CPU and normalized process CPU metrics do divide by the number of vCPUs. If you are seeing normalized system CPU lower than your normalized process CPU, could you please open a ticket and include your group ID so we can further investigate?Thanks,\nFrank", "username": "Frank_Sun" }, { "code": "\nPlease note that the **Normalized Process CPU** displays the following information:\n\n* `user` displays the percentage of time that the CPU spent servicing the **MongoDB process** , scaled to a range of 0-100% by dividing by the number of CPU cores.\n* `kernel` displays the percentage of time the CPU spent servicing operating system calls for the **MongoDB process** , scaled to a range of 0-100% by dividing by the number of CPU cores.\n\nWhile the **Normalized System CPU** to monitor CPU usage as this displays the CPU usage of **all processes on the node** , scaled to a range of 0-100% by dividing by the number of CPU cores.\n\nGenerally, for best clarity, we recommend that you use the **Normalized System CPU** as this displays the CPU usage of **all processes on the node** , scaled to a range of 0-100% by dividing by the number of CPU cores that the cluster has.\n\n", "text": "lt threshold on this metric is currently 95%. I don’t believe 60% is a concern unless you see thahello @Frank,Yeah i already raised a ticket to mongo and got the update as belowI normally noticed that my all cluster server has normalize system CPU around 10 -25% while nomalize process CPU is higher . 
which I guess is normal, but I really don't understand why there are multiple CPU-related metrics and what exactly the difference between them is …regards\nDash", "username": "Dasharath_Dixit" }, { "code": "", "text": "Hi @Dasharath_Dixit,but I really don't understand why there are multiple CPU-related metrics and what exactly the difference between them is …You can see the definition of each metric within the metrics page of your cluster(s) by selecting the info icon, as detailed here in this post. It is a bit of an older post, but I believe you can see the definition for each in the same way described in it.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
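To make the "divided by the number of CPU cores" point above concrete, here is a small worked example; every number in it is invented purely for illustration, and the only thing it shows is the scaling that turns raw per-core percentages into the 0-100% normalized values discussed in the thread.

```python
vcpus = 4                     # cores available on the cluster node

# Raw usage is summed across cores, so it can exceed 100%.
raw_process_cpu = 240.0       # mongod user + kernel time (illustrative value)
raw_system_cpu = 280.0        # all processes on the node (illustrative value)

# Atlas divides by the core count so both charts fit a 0-100% scale.
normalized_process_cpu = raw_process_cpu / vcpus   # -> 60.0%
normalized_system_cpu = raw_system_cpu / vcpus     # -> 70.0%

print(f"Normalized Process CPU: {normalized_process_cpu:.1f}%")
print(f"Normalized System CPU:  {normalized_system_cpu:.1f}%")
```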
CPU usage on Atlas
2022-09-12T12:44:57.666Z
CPU usage on Atlas
5,965
null
[]
[ { "code": "", "text": "My name is Tomas and I live in Sweden. I have just recently started learning MongoDb and I think it is great.\nI’m currently practicing while making a simple bug tracker.\nLooking forward to reading some interesting threads.", "username": "CazLaz" }, { "code": "", "text": "Hello @CazLaz and welcome to the MongoDB Community forums! If you have not yet found it, MongoDB University offers free developer and admin courses. These are a great way to learn MongoDB.Definitely ask any questions you may have as you progress on your journey to MongoDB mastery. We have some really smart people around here willing to share their knowledge and get you up to speed.", "username": "Doug_Duncan" } ]
Hello all MongoDb-ians
2022-09-20T09:52:27.766Z
Hello all MongoDb-ians
1,675
null
[ "graphql" ]
[ { "code": "query findProducts{\n product(price_gt: 2000)\n {\n id,\n price\n }\n }\nproduct(query: ProductQueryInput): Product\ninput ProductQueryInput {\n price_gt: String\n}\n{\n \"data\": null,\n \"errors\": [\n {\n \"message\": \"Unknown argument \\\"price_gt\\\" on field \\\"product\\\" of type \\\"Query\\\".\",\n \"locations\": [\n {\n \"line\": 33,\n \"column\": 11\n }\n ]\n }\n ]\n}\n", "text": "With MongoDBAtlas I am trying to query products in my database which are greater than a certain price. Here is my query:Here is the structure of the query:Here is the relevant extract of my GraphQL schema:When I run the code I get the error:I am a beginner with mongoDB so may be missing something obvious, but given the query is structured like it is and price_gt is in the schema as a input, I would have assumed it would just work.", "username": "Rupey_N_A" }, { "code": "product(query: {price_gt: 2000}) {\n id,\n price\n}\n", "text": "Try this", "username": "Rafael_Hernandez" } ]
GraphQL Unknown argument "price_gt" on field of type Query
2022-09-06T11:12:18.355Z
GraphQL Unkown argument &ldquo;price_gt&rdquo; on field of type Query
3,096
null
[ "node-js", "compass", "mongodb-shell", "server" ]
[ { "code": "reason: TopologyDescription {\n type: 'Single',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n error: Error: connect ECONNREFUSED 127.0.0.1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1132:16) {\n name: 'MongoNetworkError'\n },\n roundTripTime: -1,\n lastUpdateTime: 58154302,\n lastWriteDate: null,\n opTime: null,\n type: 'Unknown',\n topologyVersion: undefined,\n minWireVersion: 0,\n maxWireVersion: 0,\n hosts: [],\n passives: [],\n arbiters: [],\n tags: []\n }\n },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\nconnect ECONNREFUSED 127.0.0.1:27017mongodmongoshcommand not found: mongoshPATH=\"/usr/local/opt/[email protected]/bin\" mongo MongoDB shell version v4.4.13Error: couldn't connect to server 127.0.0.1:27017,", "text": "I restarted my computer to try to update it, and before restarting, I was also working on another project. Now, my project with a node.js backend is giving me this error:and my mongodb compass is giving me: connect ECONNREFUSED 127.0.0.1:27017 . My operating system is macOS Big Sur.In a post on stackoverflow, someone suggested restarting the mongod process and connecting again. I tried following those instructionsmac-mongodb , but when I go to 'From a new terminal, issue the following: mongosh ’ my terminal gave me command not found: mongosh . I also tried testing it with PATH=\"/usr/local/opt/[email protected]/bin\" mongo MongoDB shell version v4.4.13 and I still got Error: couldn't connect to server 127.0.0.1:27017, with connection refused.Does anyone know how to fix this? I would really appreciate any help or advice. Thank you", "username": "Pullog" }, { "code": "ls /tmp/mongodb-27017*", "text": "I am not an expert, just another user, but I believe this set of steps could help youUpon start it seems to me that Mongodb server generates a lock that blocks another process to take that port (this may be fully false).Test this ls /tmp/mongodb-27017*, because it is a temporary file you can very well remove it.Then restart the server, and see if it connects.", "username": "Mah_Neh" }, { "code": "", "text": "ECONNREFUSED 127.0.0.1:27017Means that NO mongod instance is running at the given host 127.0.0.1 and port 27017. Yes the solution is to start mongod.because it is a temporary file you can very well remove itNever do that unless you know absolutely what you are doing.Executing the command mongosh or mongo is not how you start mongod. Since you seem uncertain about the difference between the mongod server and the mongosh/mongo client, you might one to take the course MongoDB Courses and Trainings | MongoDB University.", "username": "steevej" }, { "code": "", "text": "Why not? Likely the process shut down unexpectedly on the update and if the process is down I see no trouble doing it…also bc OP is testing in localhost…", "username": "Mah_Neh" }, { "code": "", "text": "You seem to know what you are doing.The original poster do not seem to know, so he asked a question.One day he may delete the file to solve another issue. 
Restart the server and corrupts its data.That is why I wrote,Never do that unless you know absolutely what you are doing.If you know what you are doing then yes go ahead and delete it.", "username": "steevej" }, { "code": "", "text": "Was anyone able to give a straight answer on this? I have the exact same problem", "username": "Pedro_Faria" }, { "code": "", "text": "The straight answer isECONNREFUSED 127.0.0.1:27017Means that NO mongod instance is running at the given host 127.0.0.1 and port 27017. Yes the solution is to start mongod.", "username": "steevej" }, { "code": "mongodb-community", "text": "no it’s not, this is no answertry to see the exact problem else we’ll go around and around. I even went to compass and reconnect and it’s refusingif I start mongod in anyway I have tried i got always denials like this one:==> Successfully started mongodb-community (label: homebrew.mxcl.mongodb-commu\nMac-do-Rateiro:Mongodb pedro$ mongo\nMongoDB shell version v5.0.6\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\nMac-do-Rateiro:Mongodb pedro$ sudo mongo\nPassword:\nMongoDB shell version v5.0.6\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\nMac-do-Rateiro:Mongodb pedro$ mongod\n{“t”:{\"$date\":“2022-06-12T14:30:39.923+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:\"-\",“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{\"$date\":“2022-06-12T14:30:39.923+01:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:\"-\",“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.924+01:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{\"$date\":“2022-06-12T14:30:39.924+01:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648602, “ctx”:“main”,“msg”:“Implicit TCP FastOpen in use.”}\n{“t”:{\"$date\":“2022-06-12T14:30:39.926+01:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{\"$date\":“2022-06-12T14:30:39.926+01:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“ns”:“config.tenantMigrationDonors”}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.926+01:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“ns”:“config.tenantMigrationRecipients”}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.926+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{\"$date\":“2022-06-12T14:30:39.926+01:00”},“s”:“I”, 
“c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:4972,“port”:27017,“dbPath”:\"/data/db\",“architecture”:“64-bit”,“host”:“Mac-do-Rateiro”}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.926+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.6”,“gitVersion”:“212a8dbb47f07427dae194a9c75baec1d81d9259”,“modules”:[],“allocator”:“system”,“environment”:{“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.926+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Mac OS X”,“version”:“20.6.0”}}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.926+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{}}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.937+01:00”},“s”:“E”, “c”:“NETWORK”, “id”:23024, “ctx”:“initandlisten”,“msg”:“Failed to unlink socket file”,“attr”:{“path”:\"/tmp/mongodb-27017.sock\",“error”:“Permission denied”}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.937+01:00”},“s”:“F”, “c”:\"-\", “id”:23091, “ctx”:“initandlisten”,“msg”:“Fatal assertion”,“attr”:{“msgid”:40486,“file”:“src/mongo/transport/transport_layer_asio.cpp”,“line”:989}}\n{“t”:{\"$date\":“2022-06-12T14:30:39.937+01:00”},“s”:“F”, “c”:\"-\", “id”:23092, “ctx”:“initandlisten”,“msg”:\"\\n\\n***aborting after fassert() failure\\n\\n\"}orMac-do-Rateiro:Mongodb pedro$ mongoshCurrent Mongosh Log ID: 62a5eaac9f0a57c1d0ed9dd0Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.2MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017", "username": "Pedro_Faria" }, { "code": "", "text": "no it’s not, this is no answerWell, I think it is and what you posted just demontrates it.You try to start mongo twice before trying to start mongod. So you get the error because mongod is not running.You then try to start mongod, but it fails because some permission denied error.You then try to connect with mongosh and still get the error. Of course you get the error mongod was not started. see #2 above.The process mongod must be running before you try to connect.As to why mongod does not start, the it looks like mongod was started by root (or a user different from your current user) and wad not terminated correctly since the shutdown cleanup usually done has not been completed.Try to manually delete the offending file and try to start mongod again. You should use systemctl or something lioe that to start mongod rather than manually as it behaves better when the computer shuts down.", "username": "steevej" }, { "code": "", "text": "Well at least this is more infoSteve, the first time i did it was mongod and it always get the same answer. 
This is a little bit more complicated for sure.Now, your point on mongod started by root can be a logical cause but I am not sure to manually delete the “offending file” as you said, which leads to my question : what do you mean on the shutdown cleanup and how delete a “what is” offending file", "username": "Pedro_Faria" }, { "code": "", "text": "The offending file is the file mentionned in the error message you get when starting mongod.“msg”:“Failed to unlink socket file”,“attr”:{“path”:\"/tmp/mongodb-27017.sock\",“error”:“Permission denied”}}When mongod terminates gracefully, the socket file above and some other lock files that are used to communicate or ensure the integrity of data directories are deleted so that the next startup can proceed. When mongod terminate abruptly, it cannot delete such files and locks.", "username": "steevej" }, { "code": "", "text": "ok, good, so, how should I manually delete this file without making any mistake?", "username": "Pedro_Faria" }, { "code": "", "text": "/tmp/mongodb-27017.sockCheck owner/permissions on this filels -lrt /tmp/mongodb-27017.sockShutdown all mongods if any running and then remove this fileIf it is owned by root you have to use sudo rm\nAfter removing the file start mongod with sysctl as steevejSteeve Juneau suggested", "username": "Ramachandra_Tummala" }, { "code": "", "text": "it worked after doing this:\nsudo rm -rf /tmp/mongodb-27017.sock\n(delete the file)brew services restart [email protected]\n(restart services)brew services list\n(to check services and went “green” started)mongo\n(to start mongo, as well as connect with the compass, everything went ok after this)thank you all for the patience, hope it won’t happen more often", "username": "Pedro_Faria" }, { "code": "", "text": "well for me, all i had to do is search for ‘services’ on my windows. scrolled down to MongoDB services and started it and went back to reconnect it on compass. it worked", "username": "muyiwa_johnson" }, { "code": "sudo systemctl start mongod", "text": "solution is as steevej said, to start monogDB first, and you can do so sudo systemctl start mongod if you’re using ubuntu/linux, for windows give it a google search, or better yet look into mongoDB docs https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/i just faced this and was able to fix this by starting mongoDB, happy learning ", "username": "Asifuzzaman_Bappy" }, { "code": "", "text": "I was getting the same error and I followed your solution and it worked for me as well.\nThanks ", "username": "Haris_Khan" }, { "code": "", "text": "open the setup there is an option to repair in it\nthat worked for me. i have mongodb v 6", "username": "Abhyuday_N_A" }, { "code": "", "text": "Hello Pedro, I’m having the same issue but I dont see such file at all in the tmp folder on our server (Ubuntu 20.04)", "username": "priyatham_ik" }, { "code": "", "text": "there might be some different since I used a MAcOs terminal, but you have to check which file while you list it in the terminal. might not be the same tmp as mine, can be any other that you need to deletethe main problem is that you have a tmp file that is giving some sort of error and your folder can’t just delete it.Can you put here your log? with the error so we can check which is the file that is giving the error?", "username": "Pedro_Faria" } ]
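The recurring point in this thread is that ECONNREFUSED means the client found nothing listening on the port, so before touching Compass or driver code it is worth confirming whether a mongod is actually reachable. A minimal pymongo check, assuming the default localhost:27017 address used throughout the thread:

```python
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Short timeout so the check fails fast instead of hanging.
client = MongoClient("mongodb://127.0.0.1:27017", serverSelectionTimeoutMS=2000)

try:
    client.admin.command("ping")  # cheap round trip to the server
    print("mongod is up and reachable on 127.0.0.1:27017")
except ServerSelectionTimeoutError as exc:
    # Same situation as the errors above: nothing is listening, so the fix is
    # to get the mongod service running again (for example after clearing a
    # stale /tmp/mongodb-27017.sock left behind by an unclean shutdown).
    print("No mongod reachable on 127.0.0.1:27017; start the service first")
    print(exc)
```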
Connect ECONNREFUSED 127.0.0.1:27017 in MongoDB Compass
2022-05-30T04:24:58.108Z
Connect ECONNREFUSED 127.0.0.1:27017 in MongoDB Compass
275,980
https://www.mongodb.com/…_2_472x1024.jpeg
[ "react-native" ]
[ { "code": "", "text": "Hello guys, I have a project in react native, and I have this error when it comes to consulting the schema dataconst realm = await getRealm();\nconst data = realm.objects(‘Manutencao’);I’m trying to get the data from this schema but this is returning the photo error.\nWhatsApp Image 2020-11-04 at 22.51.15738×1600 90.6 KBAnd there is data in that schema, because I’m in the realm studio and I’m seeing the data haha. Can someone help me please? \nI’m using react native in the latest version", "username": "Lucas_Silva" }, { "code": "", "text": "Hi lucas, any update on this issue, were you able to get it working ?", "username": "Pallavi_V" }, { "code": "", "text": "wondering if this is an issue that the realm community can’t handle even after a year.", "username": "Gbenga_Joseph" } ]
Error when trying to bring the schema data (item.toJSON (index.toString (). Cache), 'item.toJSON' is undefined
2020-11-05T02:43:36.262Z
Error when trying to bring the schema data (item.toJSON (index.toString (). Cache), ‘item.toJSON’ is undefined
4,359
null
[]
[ { "code": "", "text": "Hi everyone,\ncould you please tell me the best strategy for the following use case?If I have to migrate a huge collection (1TB) is more efficent in terms of time, to create indexes: after or before the restore of the collection (if is better before i’ll use --noIndexRestore option) on the collecion i’ve 40 Indexes (the idea to have such quantity of indexes is not mine ).Thank you in advice for the help.Best Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Hi @Stennie_X || @steevej ,\nCan you help me?Best regards", "username": "Fabio_Ramohitaj" } ]
Migrate a huge collection (1TB)
2022-09-19T16:20:29.981Z
Migrate a huge collection (1TB)
978
null
[ "aggregation", "queries", "rust" ]
[ { "code": "#[derive(Debug, Deserialize, Serialize, Validate)]\npub struct Bill {\n #[serde(\n skip_serializing_if = \"Option::is_none\",\n serialize_with = \"serialize_object_id\",\n rename(serialize = \"id\")\n )]\n pub _id: Option<oid::ObjectId>,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n #[validate(required)]\n pub frequency: Option<String>,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n #[validate(required)]\n pub amount: Option<i32>,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n pub start_date: Option<DateTime<Utc>>,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n pub end_date: Option<DateTime<Utc>>,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n #[validate(required)]\n pub contact_number: Option<String>,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n pub policy_number: Option<i32>,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n #[validate(required)]\n pub title: Option<String>,\n #[serde(\n skip_serializing_if = \"Option::is_none\",\n serialize_with = \"serialize_object_id\"\n )]\n #[validate(required)]\n pub property_id: Option<oid::ObjectId>,\n}\n\nlet collection = db.collection::<Document>(\"bills\");\n\nlet pipeline: Vec<Document> = vec![doc! {\n \"$match\": {\n \"property_id\": oid::ObjectId::parse_str(id)?\n },\n}];\n\nlet cursor: Cursor<Document> = collection.aggregate(pipeline, None).await?;\nlet bills: Result<Vec<Document>, _> = cursor.try_collect().await;\n\n Ok(bills?)\n\n let collection = db.collection::<Bill>(\"bills\");\n\n let pipeline: Vec<Document> = vec![doc! {\n \"$match\": {\n \"property_id\": oid::ObjectId::parse_str(id)?\n },\n }];\n\n let cursor: Cursor<Bill> = collection.aggregate(pipeline, None).await?;\n let bills: Result<Vec<Bill>, _> = cursor.try_collect().await;\n\n Ok(bills?)\n\n`?` operator has incompatible types\n`?` operator cannot convert from `mongodb::Cursor<mongodb::bson::Document>` to `mongodb::Cursor<Bill>`\nexpected struct `mongodb::Cursor<Bill>`\n found struct `mongodb::Cursor<mongodb::bson::Document>`\n\nlet newBills:Vec<Bill> = bills?.iter().map(|&e| from_document::<Bill>(e)).collect();\n\na value of type `Vec<Bill>` cannot be built from an iterator over elements of type `Result<Bill, mongodb::bson::de::Error>`\nthe trait `FromIterator<Result<Bill, mongodb::bson::de::Error>>` is not implemented for `Vec<Bill>`\nthe trait `FromIterator<T>` is implemented for `Vec<T>`\n\n", "text": "I am trying to use the aggregation pipeline in rust driver to return a vector of a specific type. At the moment I am only able to get Vec Document type. How do I transform this to Vec T type?my current code looks like thisI have tried to change the collection type using turbo fish syntax but that only seems to work with basic query syntax like find but not aggregation. Something like thisbut then I get the following errorMy only other thing I can think of is to map over the bills iterator and use from_document bson function to deserialize to the Bill type but that doesnt seem to be performant and also Im struggling to implement it. 
It currently looks something like this.I then get the following errorAny help would be greatly appreciated", "username": "Dillon_Lee1" }, { "code": "let cursor: Cursor<Document> = collection.aggregate(pipeline, None).await?;\nlet bills: Result<Vec<Document>, _> = cursor.try_collect().await;\n\nlet new_bills: Vec<Bill> = bills?\n .into_iter()\n .map(|e| mongodb::from_document::<Bill>(e).unwrap())\n .collect();\n", "text": "ok so I managed to convert the mongo Documents to Bills by using into_iter() instead which iterates over T instead of &T. Also from_document returns a Result which I needed to unwrap.I still feel like there is a better way of doing this. Collecting a vector then iterating over it doesn’t seem optimal.", "username": "Dillon_Lee1" } ]
Get specific data type from aggregation instead of Document
2022-09-19T10:28:19.428Z
Get specific data type from aggregation instead of Document
2,531
null
[ "java" ]
[ { "code": "", "text": "We are experiencing a similar issue JAVA-3690 in production with MongoDB 3.6.9 and java driver 3.7.2. As per the JIRA ticket this issue is mentioned as applicable from 3.9 and above versions only. But we are experiencing this issue in the old driver 3.7.2 as well. We double checked that the fix is not in 3.7.2 driver code.Please confirm if this fix is applicable to 3.7.2 driver as well. If not are there are any other issues which is resulting \"Timeout waiting for a pooled item after \" errors and all the threads are getting stuck unless the application is restarted.Also is there an easy way to reproduce this issue ?", "username": "Venky_Chowdary" }, { "code": "", "text": "I replied to your similar question in the comments for JAVA-3690. The short answer is that I think whatever you’re experiencing with the 3.7 driver is a different issue. If you post more details on the symptoms, it’s possible that someone will be able to assist.Regards,\nJeff", "username": "Jeffrey_Yemin" } ]
Is JAVA-3690 applicable to MongoDB 3.6.9 and mongodb java driver 3.7.2, we are hitting this issue in this release
2022-09-20T15:14:14.316Z
Is JAVA-3690 applicable to MongoDB 3.6.9 and mongodb java driver 3.7.2, we are hitting this issue in this release
1,030
null
[ "aggregation", "queries", "python", "time-series" ]
[ { "code": "pipeline = [\n { \"$sort\": { \"timestamp\": -1 } },\n { \"$limit\": 10 }\n ]\nquery = mongodb[collection].aggregate(pipeline)\nmongodb[collection].create_index([ (\"timestamp\", -1) ])\n", "text": "Hi,i try to retrieve the most recent X documents inserted into a timeseries collection using pymongo. The query used is the following:This results in a very slow execution time (3-4 seconds) for a ~4GB collection. and about 70ms for a 8MB collection. An explain also shows a COLLSCAN is being performed which would explain the performance hit.\nI have already created indices for the timestamp field like this:i think that this use case is quite common and i don’t get what i am doing wrong.Thank you for your help.", "username": "Daniel_Lux" }, { "code": "system.bucketsdb.<collection>.explain().<query>COLLSCAN....\nexecutionStats: {\n executionSuccess: true,\n nReturned: 11001,\n executionTimeMillis: 16937,\n totalKeysExamined: 0,\n totalDocsExamined: 11001,\n executionStages: {\n stage: 'COLLSCAN',\n nReturned: 11001 \n....\n....\nexecutionStats: {\n executionSuccess: true,\n nReturned: 11001,\n executionTimeMillis: 8607,\n totalKeysExamined: 0,\n totalDocsExamined: 11001,\n executionStages: {\n stage: 'CLUSTERED_IXSCAN',\n filter: {\n '$and': [\n {\n _id: { '$gte': ObjectId(\"5fd343aa0000000000000000\") }\n },\n {\n 'control.max.timestamp': {\n '$_internalExprGte': ISODate(\"2021-01-10T10:02:18.242Z\")\n }\n },\n {\n 'control.min.timestamp': {\n '$_internalExprGte': ISODate(\"2020-12-11T10:02:18.242Z\")\n }\n }\n ]\n },\n nReturned: 11001,\n executionTimeMillisEstimate: 2,\n....\nCOLLSCANexecutionTimeMillisEstimate", "text": "Hello @Daniel_Lux and welcome to the MongoDb community!!MongoDB time series collections are basically a non materialised views under system.buckets backed by the internal collections. 
Querying in the time-series collection, utilises this format and returns results faster.The query optimisation on the time series collection works differently than the normal collection and hence, the db.<collection>.explain().<query> works on the underlying collection rather on the non-materialised views being created.The time series makes use the clustered index created by default and to further increase the performance, you can manually add the secondary indexes to the collection.However, for the above dataset provided, I tried to reproduce the issue on the MongoDB version 6.0 with dataset of around 1 GB of dataset and observed the following:Using the query :db.sample.explain().aggregate([ {$sort:{timestamp:-1}}, {$limit:10} ])This query takes around 16 seconds to respond with the appropriate documents and uses the COLLSCAN to fetch the documents which explains the delay.However, using the range query as:db.sample.explain().aggregate([ {$match:{timestamp:{$gte:ISODate(“2021-01-10T10:02:18.242Z”)}}}, {$sort:{timestamp:-1}}, {$limit:10} ])subsequently reduced the execution time for the query and makes use of the clustered Index.This further explains that, the range query targets the buckets and unpacks and produces the output, however, query without the range, would basically unpack all and give the output documents which explains the COLLSCAN in the execution status.For the above two query, the executionTimeMillisEstimate field value has efficiently reduced from ~16 seconds to ~8 seconds on average.\nTherefore, the recommendation would be make use of the range query to optimise the response for the collection.In conclusion, a time series collection works differently than a normal collection, and as of MongoDB 6.0, the usual query optimisation methods may not work as expected. Note however that this may change in the futureLet us know if you have any further questions.Best regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari, thank you for answering on this topic!the presented solution cuts down execution time indeed. But it seems rather inefficient to lookup all documents when they are sorted by timestamp anyway (even the clustered documents are sorted by “time window” i guess).\nMy naive approach would be to collscan the first clustered document and if the scan does not return the required amount of documents scan the next clustered document.The issue is that the data that i store is not inserted in equal intervals, so i don’t know the time range to filter first. I could store an average insertion interval for each collection i have elsewhere and use that to calculate an approriate filter first.There is one thing i noticed: your first axamples states executionTimeMillis: 16937 which corresponds to your 16 seconds execution time. The later example shows executionTimeMillis: 21277 which is 5 seconds more? 
Could you explain this discrepancy in more detail?Thank you for your help.Regards\nDaniel", "username": "Daniel_Lux" }, { "code": "", "text": "Hi @Daniel_LuxThank you for pointing out the difference in the execution time for both the queries.\nFor the same, I have edited the post with the right execution time which has reduced from 16 seconds to nearly 8 seconds on average after using the range query.Let us know if you have any further queries.Regards\nAasawari", "username": "Aasawari" }, { "code": "project_7> db.data.explain().aggregate([{ $match: { \"ts\" : { $gte: ISODate(\"2022-09-14T16:31:05.187Z\") } } },{ $sort: { \"ts\": -1 } },{ $limit: 20 }])\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'project_7.data',\n indexFilterSet: false,\n parsedQuery: { ts: { '$gte': ISODate(\"2022-09-14T16:31:05.187Z\") } },\n queryHash: '26B52568',\n planCacheKey: '26B52568',\n optimizedPipeline: true,\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'SORT',\n sortPattern: { ts: -1 },\n memLimit: 104857600,\n limitAmount: 20,\n type: 'simple',\n inputStage: {\n stage: 'COLLSCAN',\n filter: { ts: { '$gte': ISODate(\"2022-09-14T16:31:05.187Z\") } },\n direction: 'forward'\n }\n },\n rejectedPlans: []\n },\ndb.create_collection(\"data\", timeseries={ \"timeField\": \"ts\", \"metaField\": \"device\", \"granularity\": \"seconds\" })\nproject_7> db.runCommand({ listCollections: 1, filter: { type: \"timeseries\" } })\n{\n cursor: {\n id: Long(\"0\"),\n ns: 'project_7.$cmd.listCollections',\n firstBatch: []\n },\n ok: 1\n}\n", "text": "I tried to investigate that further. My aggregate does not run an CLUSTERED_IXSCAN but a COLLSCAN instead.Note that i replace “timestamp” with “ts”.Is seems like my collection creation code does not create a timeseries collection.I feel stupid for not checking that earlier.", "username": "Daniel_Lux" } ]
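Since the last post traces the slow plan back to the collection not actually being a time series collection, here is a small pymongo sketch of the working setup, assuming MongoDB 5.0+ and the field names used in the thread (ts, device); the connection string and the one-hour window are placeholders.

```python
from datetime import datetime, timedelta
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")
db = client["project_7"]

# A time series collection must be created explicitly. If a plain collection
# named "data" already exists, create_collection() cannot convert it (it raises
# CollectionInvalid), so drop or rename it first and verify the result with
# listCollections as in the last post above.
if "data" not in db.list_collection_names():
    db.create_collection(
        "data",
        timeseries={"timeField": "ts", "metaField": "device", "granularity": "seconds"},
    )

# Bounding the scan with a time range lets the buckets be targeted before the
# sort and limit, the pattern that produced CLUSTERED_IXSCAN instead of COLLSCAN.
since = datetime.utcnow() - timedelta(hours=1)
pipeline = [
    {"$match": {"ts": {"$gte": since}}},
    {"$sort": {"ts": DESCENDING}},
    {"$limit": 20},
]
for doc in db["data"].aggregate(pipeline):
    print(doc)
```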
TimeSeries last x documents
2022-09-13T08:04:11.170Z
TimeSeries last x documents
2,645