image_url: string (lengths 113-131)
tags: sequence
discussion: list
title: string (lengths 8-254)
created_at: string (length 24)
fancy_title: string (lengths 8-396)
views: int64 (73-422k)
null
[ "queries", "replication" ]
[ { "code": "readWriteenableShardingshardCollectionrootdemousersmy-user-roleenableShardingshardCollectionuse admin \ndb.createRole(\n {\n role: \"my-user-role\",\n privileges: [\n { resource: { cluster: true }, actions: [ \"enableSharding\", \"shardCollection\" ] }\n ],\n roles: [\n { role: \"readWrite\", db: \"demo\" }\n ]\n }\n)\n\nuse admin\ndb.createUser({\n user: 'demo-db-user',\n pwd: 'somepwd',\n roles: [\n {\n role: 'my-user-role', db: 'demo'\n }\n ]\n})\nMongoServerError: Could not find role: my-user-role@demo\n", "text": "Hi everyone,I have setup a shared cluster and now creating a user for a particular db with readWrite permission. What I also want this user can do is, he can create collections, enableSharding, shardCollection but can not delete the collection/db.I could not find any Built-In-role for this exact purpose. I mean I can give root access to that user, but then he will have access to all the databases which I dont want.I tried creating a user from admin db and credentials but failed. For example, my db is demo and collection is usersSo I created role my-user-role with enableSharding and shardCollection actions which can only access demo db (not sure if this is correct) .the role is created, next I want to create a new user (who only has access to demo db) and assign this role to that user:But I get error like this:I assume, I can not have a user with cluster permission on a db? What is the best way to achive this then?", "username": "VISHWAS_B_ANAND" }, { "code": "enableShardingshardCollectionenableShardingdb.createUser({\n user: \"yourUsername\",\n pwd: \"yourPassword\",\n roles: [\n {\n role: \"readWrite\",\n db: db: <DatabaseName>,\n collection: <CollectionName>,\n privileges: [\n { resource: { db: <DatabaseName>, collection: <CollectionName> }, actions: [\"find\", \"update\", \"insert\"] }\n ]\n },\n {\n role: \"read\",\n db: db: <DatabaseName>,\n collection: <CollectionName>,\n privileges: [\n { resource: { db: <DatabaseName>, collection: <CollectionName> }, actions: [\"find\"] }\n ]\n }\n ]\n})\n", "text": "Hi @VISHWAS_B_ANAND and welcome to MongoDB community forums!!he can create collections, enableSharding, shardCollection but can not delete the collection/db.Lets begin with each of the permissions one by one:You can use the command below to create the user:Please reach out in case you have any further questions.Warm regards\nAasawari", "username": "Aasawari" } ]
Role for a db user who can also `enableSharding` and `shardCollection`
2023-10-10T03:28:05.402Z
Role for a db user who can also `enableSharding` and `shardCollection`
263
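A note on the thread above: the error `Could not find role: my-user-role@demo` appears because custom roles are resolved against the database they were created in. Since `my-user-role` was created in `admin`, the user document has to reference it with `db: "admin"`, even though the role itself grants privileges on `demo`. A minimal mongosh sketch; it keeps the privilege definition exactly as posted and only changes the role reference (user name and password are the placeholders from the post):

```js
// Roles created in the admin database must be referenced with db: "admin",
// even when they grant privileges on another database such as "demo".
use admin
db.createRole({
  role: "my-user-role",
  privileges: [
    { resource: { cluster: true }, actions: ["enableSharding", "shardCollection"] }
  ],
  roles: [{ role: "readWrite", db: "demo" }]
})

db.createUser({
  user: "demo-db-user",
  pwd: "somepwd",
  roles: [
    { role: "my-user-role", db: "admin" }   // not db: "demo" - that is what caused "Could not find role"
  ]
})
```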
null
[ "indexes", "storage" ]
[ { "code": "", "text": "For a long time I thought he was using btree until I saw this document link which says WiredTiger maintains a table’s data in memory using a data structure called a B-Tree (B+ Tree to be specific). But I found the source code of the btree link which doesn’t look like a b+tree(because I can’t find the next or pre pointer for leaf node). This make me confused. Can someone give me some pointers?", "username": "vassago_N_A" }, { "code": "", "text": "According to official manual, mongodb uses B+ tree.", "username": "Kobe_W" }, { "code": "", "text": "I found it, but in source code, no evidence can be found that the leaf nodes are connected", "username": "vassago_N_A" }, { "code": "", "text": "https://groups.google.com/g/wiredtiger-users/c/1YHbNXPw-1A", "username": "vassago_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does index use btree or b+tree?
2023-10-07T08:24:39.282Z
Does index use btree or b+tree?
341
null
[ "aggregation", "compass", "atlas-cluster" ]
[ { "code": "{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$geoNearCursor\": {\n \"queryPlanner\": {\n \"namespace\": \"myDatabase.myCollection\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"address.coordinates\": {\n \"$nearSphere\": {\n \"type\": \"Point\",\n \"coordinates\": [\n -39.99999999999996,\n 31.985213484470865\n ]\n },\n \"$maxDistance\": 7738757\n }\n },\n \"queryHash\": \"EEE5089B\",\n \"planCacheKey\": \"D2C8378B\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"GEO_NEAR_2DSPHERE\",\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"indexVersion\": 2,\n \"inputStages\": [\n {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"address.coordinates\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"address.coordinates\": [\n \"[-8484781697966014464, -8484781697966014464]\",\n ...\n \"[5458362748373041152, 5458362748373041152]\"\n ]\n }\n }\n },\n {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"address.coordinates\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"address.coordinates\": [\n \"[-8863084066665136128, -8863084066665136128]\",\n ...\n \"[5980780305148018688, 5980780305148018688]\"\n ]\n }\n }\n },\n {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"address.coordinates\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"address.coordinates\": [\n \"[-9223372036854775807, -9079256848778919937]\",\n ...\n \"[6557241057451442176, 6557241057451442176]\"\n ]\n }\n }\n },\n {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"collation\": {\n \"locale\": \"en\",\n 
\"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"address.coordinates\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"address.coordinates\": [\n \"[-9079256848778919935, -8935141660703064065]\",\n ...\n \"[6557241057451442177, 6593269854470406143]\"\n ]\n }\n }\n }\n ]\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1038653,\n \"executionTimeMillis\": 12643,\n \"totalKeysExamined\": 1038900,\n \"totalDocsExamined\": 1038862,\n \"executionStages\": {\n \"stage\": \"GEO_NEAR_2DSPHERE\",\n \"nReturned\": 1038653,\n \"executionTimeMillisEstimate\": 3730,\n \"works\": 2077585,\n \"advanced\": 1038653,\n \"needTime\": 1038931,\n \"needYield\": 0,\n \"saveState\": 2695,\n \"restoreState\": 2695,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"indexVersion\": 2,\n \"searchIntervals\": [\n {\n \"minDistance\": 0,\n \"maxDistance\": 1745064.5992172293,\n \"maxInclusive\": false,\n \"nBuffered\": 970,\n \"nReturned\": 254\n },\n {\n \"minDistance\": 1745064.5992172293,\n \"maxDistance\": 5235193.797651688,\n \"maxInclusive\": false,\n \"nBuffered\": 1005485,\n \"nReturned\": 929329\n },\n {\n \"minDistance\": 5235193.797651688,\n \"maxDistance\": 6980258.396868917,\n \"maxInclusive\": false,\n \"nBuffered\": 32344,\n \"nReturned\": 93204\n },\n {\n \"minDistance\": 6980258.396868917,\n \"maxDistance\": 7738757,\n \"maxInclusive\": true,\n \"nBuffered\": 63,\n \"nReturned\": 15866\n }\n ],\n \"inputStages\": [\n {\n \"stage\": \"FETCH\",\n \"nReturned\": 970,\n \"executionTimeMillisEstimate\": 2,\n \"works\": 976,\n \"advanced\": 970,\n \"needTime\": 5,\n \"needYield\": 0,\n \"saveState\": 2694,\n \"restoreState\": 2694,\n \"isEOF\": 1,\n \"docsExamined\": 970,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 970,\n \"executionTimeMillisEstimate\": 2,\n \"works\": 976,\n \"advanced\": 970,\n \"needTime\": 5,\n \"needYield\": 0,\n \"saveState\": 2694,\n \"restoreState\": 2694,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"address.coordinates\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"address.coordinates\": [\n \"[-8484781697966014464, -8484781697966014464]\",\n ...\n \"[5458362748373041152, 5458362748373041152]\"\n ]\n },\n \"keysExamined\": 976,\n \"seeks\": 6,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n },\n {\n \"stage\": \"FETCH\",\n \"nReturned\": 1005485,\n \"executionTimeMillisEstimate\": 1970,\n \"works\": 1005498,\n \"advanced\": 1005485,\n \"needTime\": 12,\n \"needYield\": 0,\n \"saveState\": 2693,\n \"restoreState\": 2693,\n \"isEOF\": 1,\n 
\"docsExamined\": 1005485,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 1005485,\n \"executionTimeMillisEstimate\": 825,\n \"works\": 1005498,\n \"advanced\": 1005485,\n \"needTime\": 12,\n \"needYield\": 0,\n \"saveState\": 2693,\n \"restoreState\": 2693,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"address.coordinates\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"address.coordinates\": [\n \"[-8863084066665136128, -8863084066665136128]\",\n ...\n \"[5980780305148018688, 5980780305148018688]\"\n ]\n },\n \"keysExamined\": 1005498,\n \"seeks\": 13,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n },\n {\n \"stage\": \"FETCH\",\n \"nReturned\": 32344,\n \"executionTimeMillisEstimate\": 42,\n \"works\": 32355,\n \"advanced\": 32344,\n \"needTime\": 10,\n \"needYield\": 0,\n \"saveState\": 205,\n \"restoreState\": 205,\n \"isEOF\": 1,\n \"docsExamined\": 32344,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 32344,\n \"executionTimeMillisEstimate\": 21,\n \"works\": 32355,\n \"advanced\": 32344,\n \"needTime\": 10,\n \"needYield\": 0,\n \"saveState\": 205,\n \"restoreState\": 205,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"address.coordinates\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"address.coordinates\": [\n \"[-9223372036854775807, -9079256848778919937]\",\n ...\n \"[6557241057451442176, 6557241057451442176]\"\n ]\n },\n \"keysExamined\": 32355,\n \"seeks\": 11,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n },\n {\n \"stage\": \"FETCH\",\n \"nReturned\": 63,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 71,\n \"advanced\": 63,\n \"needTime\": 7,\n \"needYield\": 0,\n \"saveState\": 25,\n \"restoreState\": 25,\n \"isEOF\": 1,\n \"docsExamined\": 63,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 63,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 71,\n \"advanced\": 63,\n \"needTime\": 7,\n \"needYield\": 0,\n \"saveState\": 25,\n \"restoreState\": 25,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"address.coordinates\": \"2dsphere\"\n },\n \"indexName\": \"address.coordinates\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n 
\"address.coordinates\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"address.coordinates\": [\n \"[-9079256848778919935, -8935141660703064065]\",\n ...\n \"[6557241057451442177, 6593269854470406143]\"\n ]\n },\n \"keysExamined\": 71,\n \"seeks\": 8,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n ]\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 1038653,\n \"executionTimeMillisEstimate\": 11423\n }\n ],\n \"serverInfo\": {\n \"host\": \"atlas-lo6otd-shard-00-02.cjrqe.mongodb.net\",\n \"port\": 27017,\n \"version\": \"6.0.10\",\n \"gitVersion\": \"8e4b5670df9b9fe814e57cb5f3f8ee9407237b5a\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"myCollection\",\n \"pipeline\": [\n {\n \"$geoNear\": {\n \"near\": {\n \"type\": \"Point\",\n \"coordinates\": [\n -39.99999999999996,\n 31.985213484470865\n ]\n },\n \"distanceField\": \"address.distCalculated\",\n \"maxDistance\": 7738757,\n \"spherical\": true\n }\n }\n ],\n \"allowDiskUse\": true,\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n \"$db\": \"myDatabase\"\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1696259256,\n \"i\": 1\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"UKqfJI4ISmrPawnwhP2nqP+kq1c=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": {\n \"$numberLong\": \"7234179876700291074\"\n }\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1696259256,\n \"i\": 1\n }\n }\n}\n", "text": "Hello I have a collection with ~1 million documents and my documents have some geographic data (coordinates).\nThe field containing the coordinates has a 2D sphere index, but when I use a large range in my query, the performance decreases significantly.Is there something I can do to improve my query?Thank you in advance!", "username": "Alina_Bolindu1" }, { "code": " \"nReturned\": 1038653,\n \"executionTimeMillis\": 12643,\n \"totalKeysExamined\": 1038900,\n \"totalDocsExamined\": 1038862,\n", "text": "Hello @Alina_Bolindu1 ,Welcome to The MongoDB Community Forums! Based on shared executionStatsIt appears that large number of documents are being examined and returned. Can you please share below mentioned details for me to understand your use-case better?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "maxDistance", "text": "@Tarun_Gaur thank you for your reply.\nI decided to try to implement guardrails on your queries via the maxDistance parameter.\nIf my implementation has satisfactory results, I’ll get back with a reply here.", "username": "Alina_Bolindu1" } ]
$geoNear performance
2023-10-02T15:27:35.832Z
$geoNear performance
308
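For the `$geoNear` thread above: with `maxDistance: 7738757` (roughly 7,700 km) the query matches and returns about a million documents, so no index can make it cheap; the practical guardrails are a tighter radius and/or a result cap. A sketch against the collection and field names from the explain output (the 50 km radius and the 100-document limit are illustrative values, not recommendations):

```js
db.myCollection.aggregate([
  {
    $geoNear: {
      near: { type: "Point", coordinates: [-39.99999999999996, 31.985213484470865] },
      distanceField: "address.distCalculated",
      maxDistance: 50000,   // metres - a much tighter guardrail than 7,738,757
      spherical: true,
      query: {}             // optional extra filter to shrink the candidate set further
    }
  },
  { $limit: 100 }           // return only the closest matches instead of ~1M documents
])
```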
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hi Guys ,\nNeed Help\nI am having two servers server A and server B\nMy mongodb is installed on server B and mongosh tool is also present there\nAfter that I am connecting server A and server B by using powershell.\nAfter connection when I am trying to connecting mongodb by using mongosh tool I am getting connected but immediately connection getting disconnected and I am coming out of test prompt.\nPlease support if any one having idea regarding this.", "username": "Arshabh_Gujar" }, { "code": "mongosh", "text": "Hi @Arshabh_Gujar and welcome to MongoDB community forums!!I am having two servers server A and server BSince you have mentioned the post in the Atlas category, can you confirm if the above two servers are two different Atlas servers or these are on-prem MongoDB severs ?\nIf these are on prem servers, could you help me with the MongoDB version and deployment configuration.After connection when I am trying to connecting mongodb by using mongosh tool I am getting connected but immediately connection getting disconnected and I am coming out of test prompt.Based on the above statement, it would be helpful if you can help me understand the following:immediately connection getting disconnected and I am coming out of test prompt.Do you observe any error message while the connection is getting terminated? Can you share the error logs while you are seeing the disconnect ?Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi Aasawari,\nBoth servers are hosted on GCP cloud servers.\n5985 port is open between server A and server B\nwhere Mongo DB 5.0 is installed in server which is running on win 16 server os.\nMongosh.exe file is also present in server B.\nI am connecting server A and server B by powershell using port 5985 after successful connection i am going at the path where mongosh module is present and after that i am trying to connect to that module in powershell by using connection string,\nafter hitting connection string test prompt is comming and just in fraction of an sec throwing out of the test prompt without any error message.", "username": "Arshabh_Gujar" } ]
Not able to connect to mongodb server using mongosh on powershell
2023-10-11T07:59:16.251Z
Not able to connect to mongodb server using mongosh on powershell
227
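For the PowerShell thread above, one way to separate a mongod problem from a remoting problem is to run a few health checks in mongosh directly on server B first; if these succeed locally but the session still drops when run through the remote PowerShell connection, the issue is in the remoting layer rather than MongoDB. A sketch assuming the default localhost port:

```js
// Started as:  mongosh "mongodb://127.0.0.1:27017"
db.adminCommand({ ping: 1 })                          // expect { ok: 1 }
db.adminCommand({ hello: 1 })                         // how the node sees itself (standalone, primary, ...)
db.adminCommand({ getLog: "global" }).log.slice(-5)   // last few in-memory server log lines
```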
null
[ "compass" ]
[ { "code": "", "text": "Hello, I downloaded MongoDB Compass on my Windows laptop. I want to run MongoDB Compass as a service but I cannot find it in Services. Apparently I would need to reinstall Mongo and to select service option when installing. However, there is no service option upon installation. I am downloading MongoDB Compass from here: https://www.mongodb.com/try/download/compass. The file “mongodb-compass-1.40.3-win32-x64.exe” downloads on my machine. When I click and run this file, MongoDB Compass installs and opens. Of course, when I tried to connect to local, it fails as I need to run it as service. But it cannot be found in Services and no option to select service when installing is provided. Please advice. Thank you very much,", "username": "Astig_Mandalian" }, { "code": "", "text": "Did you install mongodb server inside windows machine.Do check if mongodb properly installed or notOpen cmdTypeMongod --versionIf written internal or external command not found reinstall mongodbOrSet path of mongodb inside Environment Variables", "username": "Bhavya_Bhatt" }, { "code": "", "text": "I want to run MongoDB Compass as a service but I cannot find it in ServicesI do not think that Compass is meant to be run as a service. It is a client application. Perhaps you are confused between Compass, then client, and mongod, the server. The latter can be run as a service. But it is not the same download.Of course, when I tried to connect to localIt fails to connect because mongod, the server, is not running.To install and run mongod, seeIf you still fail to connect, then perhaps, you will be better served with Atlas:Cloud-hosted MongoDB service on AWS, Azure, and GCP", "username": "steevej" }, { "code": "", "text": "Thanks for your reply, indeed it seems that I embarrassingly failed to first install the actual mongod. I went straight for Compass, which is only the UI", "username": "Astig_Mandalian" }, { "code": "", "text": "Of course, it all makes sense now - Compass is merely the UI for mongo! Thanks for pointing that out Steeve, I just installed MongoDB Community which I now can access successfully via the Compass.\nThanks again, Astig", "username": "Astig_Mandalian" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Compass non existing in Services on Windows
2023-10-12T15:48:33.737Z
MongoDB Compass non existing in Services on Windows
240
null
[ "mongoose-odm", "next-js" ]
[ { "code": "`export default async function register(request, response) {\nawait connectDB();\nconst { Firstname, Lastname, Email, Password } = request.body;\nconst stripe = Stripe(process.env.SK_PRIVATE);\nconsole.log(Firstname, Lastname, Email, Password);\ntry {\n //Registro de Usuario de la pagina en la base de datos\n const UserFound = await User.findOne({ Email });\n if (UserFound)\n return response\n .status(400)\n .json({ message: \"El email ya ha sido registrado\" });\n if (!Password || Password.length < 7)\n return response\n .status(400)\n .json({ message: \"At least needs 7 characters\" });\n const hash = await bcrypt.hash(Password, 10);\n\n const newUser = new User({\n firstname: Firstname,\n lastname: Lastname,\n email: Email,\n password: hash,\n });\n\n const userSaved = await newUser.save();\n\n console.log(\"hecho\");\n //Registro de Customer user data en la base de datos\n const customer = await stripe.customers.create({\n name: Firstname,\n email: Email,\n });\n\n const newCustomer = new Customer({\n userId: userSaved._id,\n stripeCustomerId: customer.id,\n });\n\n const customerSaved = await newCustomer.save();\n\n console.log(\"Cliente guardado con éxito:\", customerSaved);\n\n response.json({\n message: \"Se ha registrado correctamente\",\n });\n console.log(\"Datos recibidos en el servidor:\", hash, Email);\n} catch (error) {\n response.status(500).json({ message: error.message });\n console.log(\"Algo a ido mal\", error.message);\n}\n}\n", "text": "I´m having problems when I try to save information of two different schemas in the same project.Context: The project is in Next.js, and I’m using MongoDB as the database provider. It throws the following error: “Something has gone wrong. User validation failed: password: The password is required, email: E-mail is required, lastname: Path lastname is required, firstname: Path firstname is required.” PostData: The user.save() works correctly.My theory is that it is not the best way to fill in schemas in one request. Another aspect is that I’ve deleted the logic of the user to see if the other schemas work, and it seems that it recognizes the request as the User Schema. It could be an exportation problem.", "username": "Juan_N_A" }, { "code": "“Something has gone wrong. User validation failed: password: The password is required, email: E-mail is required, lastname: Path lastname is required, firstname: Path firstname is required.”", "text": "Hey @Juan_N_A,Welcome to the MongoDB Community!Apologies for the late response.“Something has gone wrong. User validation failed: password: The password is required, email: E-mail is required, lastname: Path lastname is required, firstname: Path firstname is required.”It seems that the error you are encountering is likely due to the fact that the required fields (firstname, lastname, email, password) in your User schema are not being populated correctly or are missing in your request. This could be the reason that the variables you are trying to destructure from the request body actually do not exist. Please ensure that the request body contains the properties you are expecting.Also, if you are planning to work with multiple schemas in the same project, make sure to define and handle each schema separately with their respective validation rules and requirements.I hope this helps!Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Troubles doing 'const customerSaved = await newCustomer.save()'
2023-09-21T05:23:00.635Z
Troubles doing 'const customerSaved = await newCustomer.save()'
318
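For the Mongoose thread above, the validation error lists exactly the four required paths, which usually means the destructured request-body keys were undefined when `new User(...)` ran. A sketch of what Kushagra describes, with a schema whose messages match the error and a guard before destructuring (all names are taken from the post; the model-caching line is the usual Next.js hot-reload precaution):

```js
// Hypothetical Mongoose schema matching the field names in the error message.
import mongoose from "mongoose";

const userSchema = new mongoose.Schema({
  firstname: { type: String, required: [true, "Path firstname is required"] },
  lastname:  { type: String, required: [true, "Path lastname is required"] },
  email:     { type: String, required: [true, "E-mail is required"] },
  password:  { type: String, required: [true, "The password is required"] },
});
export const User = mongoose.models.User ?? mongoose.model("User", userSchema);

// In the handler, confirm the capitalised keys really arrive before destructuring:
// const { Firstname, Lastname, Email, Password } = request.body;
// if (!Firstname || !Lastname || !Email || !Password) {
//   return response.status(400).json({ message: "Missing registration fields" });
// }
```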
null
[ "aggregation", "php" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"6516706a81f4dcd6d4e188c1\"\n },\n \"bom_data\": {\n \"loginTime\": \"123\",\n \"description\": \"asdf\",\n \"submit\": \"1\"\n },\n \"business_key\": \"Check_In_Process_e13ijeb9\",\n \"created_at\": {},\n \"taskid\": null,\n \"updated_at\": {\n \"$date\": \"2023-09-29T06:36:26.642Z\"\n }\n}\ndb.sample_data.aggregate([\n { $match: { business_key: 'Hourlyprocess_JWIL_67hh3jj0' } },\n {\n $project: {\n _id: 1,\n loginTime: \"'$bom_data.loginTime'\",\n description: \"'$bom_data.description'\",\n business_key: 1,\n taskid: 1,\n updated_at: 1,\n m: { $month: '$updated_at' },\n },\n },\n { $sort: { updated_at: 1 } },\n]);\ncan't convert from BSON type object to Date", "text": "The below data was inserted through Laravel eloquent, updated_at field is populated automaticallya simple aggregate query for hourlyi got the below errorcan't convert from BSON type object to Datehow to group this", "username": "Thavarajan_M" }, { "code": "", "text": "Seems to work on Mongo Playground, are you sure all the document have well formatted dates?Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell" }, { "code": "", "text": "yes definitely all data in that business key is perfect,", "username": "Thavarajan_M" }, { "code": "", "text": "is there any way to identify the malformed date", "username": "Thavarajan_M" }, { "code": "db.collection.aggregate([\n {\n $group: {\n _id: {\n $type: \"$updated_at\"\n },\n total: {\n $sum: 1\n }\n }\n }\n])\n", "text": "", "username": "John_Sewell" }, { "code": "", "text": "Thanks for the input, i have tested it , few of them are stored as object, and few of them stored as date,\nwhen i try to filter using the type as object i get the perfect date alone, but why it was stored as object, really weird to me,\ni am sure all the dates are valid format", "username": "Thavarajan_M" }, { "code": "", "text": "how should i convert it to date type", "username": "Thavarajan_M" }, { "code": "db.collection.aggregate([\n {\n $match: {\n $expr: {\n $eq: [\n {\n $type: \"$updated_at\"\n },\n \"object\"\n ]\n }\n }\n }\n])\n", "text": "It depends on what the actual value is in there, how many were dates and how many objects?If you run a query something like this you can see what the documents with the malformed element look like:If you see how the wrong ones are, then you can craft an update statement to correct them, and probably hunt for why you have inconsistent data in there.", "username": "John_Sewell" }, { "code": "", "text": "yes i already got that, so for all of them are date only, with the proper time and date format", "username": "Thavarajan_M" }, { "code": "db.getCollection('sample_data')\n .find(\n {\n updated_at: { $type: 'object' },\n },\n {\n _id:1,\n updated_at: 1,\n //updated_on:{$toDate:'$updated_at'}\n }\n );\n", "text": "tried it with a different approach", "username": "Thavarajan_M" }, { "code": "", "text": "If the data type reported is object and not date, then it’s not a date. 
What does one of those documents look like that’s reported to be an object for that field?", "username": "John_Sewell" }, { "code": "", "text": "AS i said earlier, it was auto update prop, there is nothing changed", "username": "Thavarajan_M" }, { "code": "[\n {\n \"_id\": {\n \"$oid\": \"651632ea8a159ebe9cd50dfb\"\n },\n \"updated_at\": {\n \"$date\": \"2023-09-27T11:54:30.868Z\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"651632ea8a159ebe9cd50dfc\"\n },\n \"updated_at\": {\n \"$date\": \"2023-09-27T11:54:30.868Z\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"651632ea8a159ebe9cd50dfd\"\n },\n \"updated_at\": {\n \"$date\": \"2023-09-27T11:54:30.868Z\"\n }\n }\n]\n", "text": "", "username": "Thavarajan_M" }, { "code": "db.collection.aggregate([\n {\n $match: {\n $expr: {\n $eq: [\n {\n $type: \"$updated_at\"\n },\n \"object\"\n ]\n }\n }\n },\n {\n $project: {\n theValue: \"$updated_at\",\n theType: {\n $type: \"$updated_at\"\n }\n }\n }\n])\n", "text": "Can you add the $type output to that query, there must be something going on if the aggregation framework is reporting something is not a date but it’s falling over on date operations when you’re trying to run, something like:", "username": "John_Sewell" }, { "code": "[\n {\n \"_id\": {\n \"$oid\": \"651632ea8a159ebe9cd50dfb\"\n },\n \"theValue\": {\n \"$date\": \"2023-09-27T11:54:30.868Z\"\n },\n \"theType\": \"object\"\n },\n {\n \"_id\": {\n \"$oid\": \"651632ea8a159ebe9cd50dfc\"\n },\n \"theValue\": {\n \"$date\": \"2023-09-27T11:54:30.868Z\"\n },\n \"theType\": \"object\"\n },\n {\n \"_id\": {\n \"$oid\": \"651632ea8a159ebe9cd50dfd\"\n },\n \"theValue\": {\n \"$date\": \"2023-09-27T11:54:30.868Z\"\n },\n \"theType\": \"object\"\n },\n]\n", "text": "", "username": "Thavarajan_M" }, { "code": "db.collection.aggregate([\n {\n $match: {\n $expr: {\n $eq: [\n {\n $type: \"$updated_at\"\n },\n \"object\"\n ]\n }\n }\n },\n {\n $project: {\n theValue: \"$updated_at\",\n theType: {\n $type: \"$updated_at\"\n },\n m: { $month: '$updated_at' }\n }\n }\n])\n", "text": "That’s weird, so adding this will break it?", "username": "John_Sewell" }, { "code": " $type: \"$updated_at\"\n },\n m: { $month: '$updated_at' }\n }\n }\n])\n", "text": "yes, it happens is it possible to update the records of the underlying type", "username": "Thavarajan_M" }, { "code": "", "text": "Looks like the same or related date vs object as Weired Problem with Date field query returning zero results only using Atlas Data API - #9 by steevej.We will never know for sure as the author of the thread I shared, @S_F, never came back for the followup.8-(", "username": "steevej" }, { "code": "", "text": "The only other think I can think of is it the data was somehow stored by the ORM with an actual $date as a field name as opposed to an actual date, in which case it could look like a date when it was not.I crafted this:image705×205 4.02 KBThe first document has an actual date, and the second, I created using $date as a field name, if I export using mongoexport it looks like this:\nimage764×51 2.59 KBI wonder if this is what happened?", "username": "John_Sewell" }, { "code": "db.getCollection(\"Test\").aggregate([\n {\n $match: {\n $expr: {\n $eq: [\n {\n $type: \"$updated_at\"\n },\n \"object\"\n ]\n }\n }\n },\n {\n $project:{\n corrected:{\n $cond:{\n if:{$eq:['date', {$type:'$updated_at'}]},\n then:{$toString:'$updated_at'},\n else:{$objectToArray:'$updated_at'}\n }\n }\n }\n},\n])\n", "text": "What does this show? 
With my funky data I get this as the output:\n", "username": "John_Sewell" } ]
Unable to use aggregate with date hour
2023-10-03T15:06:09.163Z
Unable to use aggregate with date hour
718
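Following the thread above: if John's theory is right and the odd documents contain a subdocument with a literal `"$date"` key instead of a real BSON date, a pipeline update can convert them in place. This is only a sketch under that assumption (it needs MongoDB 5.0+ for `$getField`, and `$literal` is required because the key starts with `$`); try it on a copy of the data first:

```js
// Assumes the malformed values are subdocuments of the form { "$date": "2023-09-27T11:54:30.868Z" }
// rather than real BSON dates (John's field-named-"$date" theory).
db.sample_data.updateMany(
  { updated_at: { $type: "object" } },
  [
    {
      $set: {
        updated_at: {
          $toDate: { $getField: { field: { $literal: "$date" }, input: "$updated_at" } }
        }
      }
    }
  ]
)
```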
https://www.mongodb.com/…9_2_1023x185.png
[ "node-js", "mongoose-odm", "next-js" ]
[ { "code": "Cannot find module './common'\nRequire stack:\n- /var/task/server/node_modules/debug/src/node.js\n- /var/task/server/node_modules/debug/src/index.js\n- /var/task/server/node_modules/mquery/lib/mquery.js\n- /var/task/server/node_modules/mongoose/lib/query.js\n- /var/task/server/node_modules/mongoose/lib/index.js\n- /var/task/server/node_modules/mongoose/index.js\nDid you forget to add it to \"dependencies\" in `package.json`?\nError: Runtime exited with error: exit status 1\nRuntime.ExitError\nnode_modules/mongoose", "text": "Hey everyone,\nI’ve got a web application that I’m trying to host on vercel. The front end works, but when I’m trying to run my server, I get an error. I believe the error is related to mongoDB in some way, because the error log says:I checked the files inside node_modules, and the only thing that I see that could potentially be an error is this:\nScreenshot 2023-09-27 at 10.33.36 PM1558×282 37.8 KB\nI see this in every require statement in node_modules/mongoose. However, it seems to be a Typescript issue, and my project is Javascript, so I’m not sure this is what’s causing the problems. [It works locally, so it must be okay…]\nI realize this may not be the appropriate place for this question, but I’ve been banging my head against the wall for a couple of days with this, and am very desperate at this point.If you could please share any insights or guidance Thank you!!", "username": "Sam_N_A1" }, { "code": "Did you forget to add it to \"dependencies\" in `package.json`?\npackage.jsonnpm inode_modules/mongoose", "text": "Hi @Sam_N_A1,Welcome to the MongoDB Community!Apologies for the late response.Would you kindly verify whether all the necessary dependencies are correctly added to the package.json file? Also, please delete the ‘node_modules’ folder and then execute the npm i command. This will initiate the download of all the dependencies specified in the package.json file.I see this in every require statement in node_modules/mongoose.I suspect there may be a version compatibility issue. Please ensure that you are using the latest version of Mongoose. Additionally, I would advise reaching out to Vercel community or support for any environment-specific issues that might be causing this error in the deployment.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Cannot find module './common' when hosting on vercel
2023-09-28T06:19:55.337Z
Cannot find module './common' when hosting on vercel
409
null
[ "sharding", "mongodb-shell" ]
[ { "code": "sudo apt-get install -y mongodb-org=7.0.2 mongodb-org-database=7.0.2 mongodb-org-server=7.0.2 mongodb-mongosh=7.0.2 mongodb-org-mongos=7.0.2 mongodb-org-tools=7.0.2E: Version '7.0.2' for 'mongodb-mongosh' was not found", "text": "I’m following the instructions found here to upgrade to Mongo Community edition from 6.0 → 7.0.When running:\nsudo apt-get install -y mongodb-org=7.0.2 mongodb-org-database=7.0.2 mongodb-org-server=7.0.2 mongodb-mongosh=7.0.2 mongodb-org-mongos=7.0.2 mongodb-org-tools=7.0.2from the instructions I get the error:E: Version '7.0.2' for 'mongodb-mongosh' was not foundI looked at the package it’s pulling from: https://repo.mongodb.org/apt/ubuntu/dists/jammy/mongodb-org/7.0/multiverse/binary-amd64/and I agree, I don’t see mongodb-mongosh_7.0.2 in there. I’m trying to understand what the correct command is to install this.", "username": "Alex_Scott" }, { "code": "", "text": "Download the mongosh externallyThe MongoDB Shell is a modern command-line experience, full with features to make it easier to work with your database. Free download. Try now!Download and install form this link", "username": "Bhavya_Bhatt" }, { "code": "mongodb-mongosh=7.0.2mongodb-mongosh=2.0.1", "text": "I guess the documentation needs to be updated. Changing mongodb-mongosh=7.0.2 to mongodb-mongosh=2.0.1 seems to work.", "username": "Alex_Scott" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upgrading a standalone installation to Mongo 7.0.2 on Ubuntu 22.04 results in Version '7.0.2' for 'mongodb-mongosh' was not found error
2023-10-13T03:21:49.865Z
Upgrading a standalone installation to Mongo 7.0.2 on Ubuntu 22.04 results in Version '7.0.2' for 'mongodb-mongosh' was not found error
420
null
[ "queries", "replication", "monitoring" ]
[ { "code": "", "text": "Hello, I need to know how to consult the number of connections in my secondary nodes from the primary node, I have a replica set composed of 3 nodes (one primary node and two secondary nodes), the development colleagues need to consult the databases but it is necessary guarantee that certain reads are performed on secondary nodes only (Read Preference:secondary), how can I know if connections from development users are reaching my secondary nodes? I would need to show the IP or some element that indicates that the origin of the connection is from the development colleagues,\nGreetings!", "username": "Arnaldo_Raxach" }, { "code": "", "text": "how can I know if connections from development users are reaching my secondary nodes?Why you want to know this? if those dev users are using secondary as read preference, they will read from a secondary node. (assume mongdb has no bug in this logic).", "username": "Kobe_W" }, { "code": "", "text": "If you use Altas you can monitore the secondary cluster\nDisk query depth metrics", "username": "Bhavya_Bhatt" } ]
Query connections on secondary nodes from the primary node
2023-10-11T14:52:03.020Z
Query connections on secondary nodes from the primary node
242
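For the read-preference thread above: each mongod only reports its own connections, so the check has to run against the secondary itself (connect to it directly, or pin a connection to that host). A sketch that lists who is connected there, assuming the account has the `inprog` privilege to see other users' operations:

```js
// Run while connected directly to one of the secondaries (not through the primary).
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: true, idleSessions: true } },
  { $match: { client: { $exists: true } } },
  { $project: { client: 1, appName: 1, effectiveUsers: 1, desc: 1 } }
])

// A quick total is also available from:
db.getSiblingDB("admin").serverStatus().connections
```

If the developers set an `appName` in their connection string, it shows up in this output, which makes their sessions easy to spot alongside the client IP.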
null
[ "time-series" ]
[ { "code": "Starting in MongoDB 6.3, MongoDB automatically creates a compound index on the metaField and timeField fields for new collections.", "text": "According to the official documentation of Time Series MongoDB.Starting in MongoDB 6.3, MongoDB automatically creates a compound index on the metaField and timeField fields for new collections.Is their a way to disable this compound index creation behaviour?", "username": "Yashasvi_Pant" }, { "code": "", "text": "Hi @Yashasvi_Pant ,Is their a way to disable this compound index creation behaviour?Currently, we do not have any such feature in 6.3 and later version to disable automatic creation of compound index on the metaField and timeField fields for new collections in time series.May I know, why would you like to disable this?As per documentation on Time Series Secondary Index, one can always create additional secondary indexes to improve query performance for any type of query operations.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Disable default behaviour of Time Series MongoDB of creating compound index
2023-10-12T11:45:00.589Z
Disable default behaviour of Time Series MongoDB of creating compound index
187
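To make Tarun's answer above concrete, here is a sketch with illustrative names: on 6.3+ the compound metaField/timeField index appears as soon as the time series collection is created and cannot be switched off, but additional secondary indexes are still allowed:

```js
// Illustrative names; on MongoDB 6.3+ the { meta: 1, ts: 1 } compound index
// is created automatically and cannot be turned off.
db.createCollection("sensorReadings", {
  timeseries: { timeField: "ts", metaField: "meta", granularity: "minutes" }
})

db.sensorReadings.getIndexes()   // shows the automatic index on metaField + timeField

// Additional secondary indexes for common query shapes can still be added:
db.sensorReadings.createIndex({ "meta.sensorId": 1, ts: -1 })
```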
null
[ "aggregation", "java" ]
[ { "code": "[\n{\n \"appId\": 3,\n \"totalAmount\": 11\n},\n{\n \"appId\": 7,\n \"totalAmount\": 100\n},\n{\n \"appId\": 5,\n \"totalAmount\": 2\n}\n]\n", "text": "Hi,i tried but still can’t figure it out.\nThe requirement is:I put my example as below for validation before coding:Mongo playground: a simple sandbox to test and share MongoDB queries onlineThe result would be:Any hints are welcome.\nThank you in advance.", "username": "Anderson_Lin" }, { "code": "", "text": "Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "chris" }, { "code": "", "text": "Thank you, Chris!But it seems not solve the problem.", "username": "Anderson_Lin" }, { "code": "db.collection.aggregate([\n {\n $match: {\n year: 2023,\n month: 9,\n day: 15,\n newUser: true\n }\n },\n {\n $group: {\n _id: null,\n distinctUserId: {\n $addToSet: \"$userId\"\n }\n }\n },\n {\n $lookup: {\n from: \"collection\",\n localField: \"distinctUserId\",\n foreignField: \"userId\",\n pipeline: [\n {\n $project: {\n _id: 0,\n appId: 1,\n amount: 1\n }\n }\n ],\n as: \"result\"\n }\n },\n {\n $unset: [\n \"_id\",\n \"distinctUserId\"\n ]\n },\n {\n $unwind: {\n path: \"$result\",\n preserveNullAndEmptyArrays: false\n }\n },\n {\n $project: {\n _id: 0,\n appId: \"$result.appId\",\n amount: \"$result.amount\"\n }\n },\n {\n $group: {\n _id: {\n appId: \"$appId\"\n },\n paidNewUserIncome: {\n $sum: \"$amount\"\n }\n }\n }\n])\n", "text": "Hi,It seems that below script is working:Mongo playground: a simple sandbox to test and share MongoDB queries onlineBut it uses self-lookup,\nI don’t know how the performance while the documents grow.If you have any good idea,\nplease kindly share with me.Thank you.", "username": "Anderson_Lin" }, { "code": "db.collection.aggregate([\n {\n $match: {\n year: 2023,\n month: 9,\n day: 15,\n newUser: true\n }\n },\n {\n $group: {\n _id: null,\n distinctUserId: {\n $addToSet: \"$userId\"\n }\n }\n },\n {\n $lookup: {\n from: \"collection\",\n localField: \"distinctUserId\",\n foreignField: \"userId\",\n pipeline: [\n {\n $project: {\n _id: 0,\n appId: 1,\n amount: 1\n }\n }\n ],\n as: \"result\"\n }\n },\n {\n $unset: [\n \"_id\",\n \"distinctUserId\"\n ]\n },\n {\n $unwind: {\n path: \"$result\",\n preserveNullAndEmptyArrays: false\n }\n },\n {\n $replaceWith: \"$result\"\n },\n {\n $group: {\n _id: {\n appId: \"$appId\"\n },\n paidNewUserIncome: {\n $sum: \"$amount\"\n }\n }\n }\n])\n", "text": "Hi,Another way:", "username": "Anderson_Lin" } ]
Grouping in the same collection after got distinct value
2023-10-12T07:54:58.266Z
Grouping in the same collection after got distinct value
215
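One more option for the thread above that avoids the self-`$lookup`: group by `userId` first, flag the users who qualify as new paid users for the target day, then re-group by `appId`. It is a sketch, not a guaranteed win - `$push` holds every user's purchases in memory, so on large collections it may need `allowDiskUse` and can easily be slower than the indexed `$lookup`:

```js
db.collection.aggregate([
  {
    $group: {
      _id: "$userId",
      // true if any of this user's documents matches the new-paid-user criteria for 2023-09-15
      isNewPaidUser: {
        $max: {
          $and: [
            { $eq: ["$year", 2023] },
            { $eq: ["$month", 9] },
            { $eq: ["$day", 15] },
            { $eq: ["$newUser", true] }
          ]
        }
      },
      purchases: { $push: { appId: "$appId", amount: "$amount" } }
    }
  },
  { $match: { isNewPaidUser: true } },
  { $unwind: "$purchases" },
  {
    $group: {
      _id: { appId: "$purchases.appId" },
      paidNewUserIncome: { $sum: "$purchases.amount" }
    }
  }
])
```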
https://www.mongodb.com/…83a29de314fa.png
[ "queries" ]
[ { "code": "", "text": "02958×513 207 KBWe are unable to connect the same on the destination server through powershell. We are getting connected to DB & immediately getting disconnected within a second.", "username": "Prathamesh_Satam1" }, { "code": "", "text": "Looks like you’re not getting connected. The connection is hanging then you hit the 2-second timeout you set in the connection string. Probably your mongod instance is configured incorrectly and isn’t servicing the loopback interface.", "username": "Jack_Woehr" }, { "code": "", "text": "Check log files to get error", "username": "Bhavya_Bhatt" } ]
Unable to connect MongoDB through Powershell, PFA screenshot of the error
2023-10-11T07:57:14.341Z
Unable to connect MongoDB through Powershell, PFA screenshot of the error
236
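To check Jack's loopback theory from the thread above, the running server can report the settings it actually started with. A small sketch, run from a mongosh session on the database host itself:

```js
const opts = db.adminCommand({ getCmdLineOpts: 1 });
opts.parsed.net   // effective bindIp/port, if they were set via the config file or command line
opts.argv         // the exact arguments mongod was started with
```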
null
[ "node-js", "transactions" ]
[ { "code": "error: Error in runInTransaction:MongoServerError: WiredTigerRecordStore::insertRecord :: caused by :: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction. {\"label\":\"runInTransaction\",\"timestamp\":\"2023-10-03 06:12:20 PM\"}\n", "text": "getting this error while using multi document transaction can some one suggest how to resolve this issue", "username": "biranjan_soni1" }, { "code": "error: Error in runInTransaction:MongoServerError: WiredTigerRecordStore::insertRecord :: caused by :: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction. {“label”:“runInTransaction”,“timestamp”:“2023-10-03 06:12:20 PM”}", "text": "Hi @biranjan_soni1,Welcome to the MongoDB Community error: Error in runInTransaction:MongoServerError: WiredTigerRecordStore::insertRecord :: caused by :: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction. {“label”:“runInTransaction”,“timestamp”:“2023-10-03 06:12:20 PM”}The issue you are facing seems to be related to MongoDB’s document-level concurrency control. MongoDB uses optimistic concurrency control to ensure consistency in its transactions.As you know, if two or more transactions attempt to modify the same document concurrently, MongoDB will throw a WriteConflict error, indicating that one or more transactions failed due to conflicts.To understand more about your error, could you please share the following:Meanwhile, please go through this link to read about In-progress Transactions and Write Conflicts.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Error in runInTransaction:MongoServerError: WiredTigerRecordStore::insertRecord
2023-10-04T10:22:22.532Z
Error in runInTransaction:MongoServerError: WiredTigerRecordStore::insertRecord
288
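For the transaction thread above, the usual remedy is exactly what the error suggests: retry. With the Node.js driver, `session.withTransaction()` already retries the callback when the server labels the error `TransientTransactionError`, which is how a WriteConflict inside a transaction is reported. A minimal sketch (collection and document names are illustrative; it assumes a replica set, since transactions require one):

```js
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
await client.connect();
const orders = client.db("shop").collection("orders");

const session = client.startSession();
try {
  await session.withTransaction(async () => {
    // Keep the transaction short and touch each document once to reduce conflicts.
    await orders.insertOne({ sku: "abc", qty: 1 }, { session });
    await orders.updateOne({ sku: "abc" }, { $inc: { reserved: 1 } }, { session });
  });
} finally {
  await session.endSession();
}
```

Keeping transactions short and avoiding two concurrent writers on the same document are the other levers for reducing how often the retry is needed.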
null
[]
[ { "code": "sudo systemctl restart mongod\nsudo systemctl status mongod\n\n× mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2023-10-11 16:47:01 CST; 51s ago\n Docs: https://docs.mongodb.org/manual\n Process: 5370 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)\n Main PID: 5370 (code=exited, status=1/FAILURE)\n CPU: 91ms\n\n10月 11 16:47:01 ubuntu mongod[5370]: Frame: {\"a\":\"5558F6B41F37\",\"b\":\"5558F49C4000\",\"o\":\"217DF37\",\"s\":\">\n10月 11 16:47:01 ubuntu mongod[5370]: Frame: {\"a\":\"5558F971D477\",\"b\":\"5558F49C4000\",\"o\":\"4D59477\",\"s\":\">\n10月 11 16:47:01 ubuntu mongod[5370]: Frame: {\"a\":\"5558F971D8ED\",\"b\":\"5558F49C4000\",\"o\":\"4D598ED\",\"s\":\">\n10月 11 16:47:01 ubuntu mongod[5370]: Frame: {\"a\":\"5558F6AE371D\",\"b\":\"5558F49C4000\",\"o\":\"211F71D\",\"s\":\">\n10月 11 16:47:01 ubuntu mongod[5370]: Frame: {\"a\":\"5558F68D420E\",\"b\":\"5558F49C4000\",\"o\":\"1F1020E\",\"s\":\">\n10月 11 16:47:01 ubuntu mongod[5370]: Frame: {\"a\":\"7FD9A6429D90\",\"b\":\"7FD9A6400000\",\"o\":\"29D90\",\"s\":\"__>\n10月 11 16:47:01 ubuntu mongod[5370]: Frame: {\"a\":\"7FD9A6429E40\",\"b\":\"7FD9A6400000\",\"o\":\"29E40\",\"s\":\"__>\n10月 11 16:47:01 ubuntu mongod[5370]: Frame: {\"a\":\"5558F6ADEA8E\",\"b\":\"5558F49C4000\",\"o\":\"211AA8E\",\"s\":\">\n10月 11 16:47:01 ubuntu systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE\n10月 11 16:47:01 ubuntu systemd[1]: mongod.service: Failed with result 'exit-code'.\n\n\ncat /lib/systemd/system/mongod.service\n[Unit]\nDescription=MongoDB Database Server\nDocumentation=https://docs.mongodb.org/manual\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nUser=mongodb\nGroup=mongodb\nEnvironmentFile=-/etc/default/mongod\nExecStart=/usr/bin/mongod --config /etc/mongod.conf\nPIDFile=/var/run/mongodb/mongod.pid\n# file size\nLimitFSIZE=infinity\n# cpu time\nLimitCPU=infinity\n# virtual memory size\nLimitAS=infinity\n# open files\nLimitNOFILE=64000\n# processes/threads\nLimitNPROC=64000\n# locked memory\nLimitMEMLOCK=infinity\n# total threads (user+kernel)\nTasksMax=infinity\nTasksAccounting=false\n\n# Recommended limits for mongod as specified in\n# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings\n\n[Install]\nWantedBy=multi-user.target\n\nroot@ubuntu:/var/run# cat /etc/mongod.conf\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n\nwangshuxiang@ubuntu:~$ sudo ls -l /var/lib/mongodb/\ntotal 280\n-rw------- 1 mongodb mongodb 20480 10月 11 15:42 collection-0-4075949709213312323.wt\n-rw------- 1 mongodb mongodb 36864 10月 11 15:42 collection-2-4075949709213312323.wt\n-rw------- 1 mongodb mongodb 4096 10月 11 09:43 collection-4-4075949709213312323.wt\ndrwx------ 2 mongodb mongodb 4096 10月 11 15:42 diagnostic.data\n-rw------- 1 mongodb mongodb 20480 10月 11 
15:42 index-1-4075949709213312323.wt\n-rw------- 1 mongodb mongodb 36864 10月 11 15:42 index-3-4075949709213312323.wt\n-rw------- 1 mongodb mongodb 4096 10月 11 09:43 index-5-4075949709213312323.wt\n-rw------- 1 mongodb mongodb 4096 10月 11 09:44 index-6-4075949709213312323.wt\ndrwx------ 2 mongodb mongodb 4096 10月 11 15:42 journal\n-rw------- 1 mongodb mongodb 20480 10月 11 15:42 _mdb_catalog.wt\n-rw------- 1 mongodb mongodb 5 10月 11 15:42 mongod.lock\n-rw------- 1 mongodb mongodb 36864 10月 11 15:42 sizeStorer.wt\n-rw------- 1 mongodb mongodb 114 10月 11 09:38 storage.bson\n-rw------- 1 mongodb mongodb 50 10月 11 09:38 WiredTiger\n-rw------- 1 mongodb mongodb 4096 10月 11 15:42 WiredTigerHS.wt\n-rw------- 1 mongodb mongodb 21 10月 11 09:38 WiredTiger.lock\n-rw------- 1 mongodb mongodb 1465 10月 11 15:42 WiredTiger.turtle\n-rw------- 1 mongodb mongodb 69632 10月 11 15:42 WiredTiger.wt\n\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.139+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":722}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.139+08:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.143+08:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.143+08:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":65530,\"recommendedMinimum\":102400,\"maxConns\":51200},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.144+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.144+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.144+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.145+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.145+08:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/var/lib/mongodb/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.147+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new 
configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.147+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.148+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.148+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:20.148+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:21.000+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\\n\"}}\n{\"t\":{\"$date\":\"2023-10-11T15:42:21.000+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): Fil\n", "text": "system: ubuntu 22.04.1here is the service filehere is .confi’m try to authorizehere is some logbut still fail to start mongodb…how to fix this issue?", "username": "citrus_Gatsby" }, { "code": "journalctl -u mongod", "text": "Check journalctl -u mongod I think all the good information is in there for this issue.", "username": "chris" } ]
Try to start but get failed (Result: exit-code)
2023-10-11T13:41:10.514Z
Try to start but get failed (Result: exit-code)
233
null
[]
[ { "code": "", "text": "We are looking into using Mongo charts in a web based application. Our customer data is segregated on different data bases. Is there a way to embed a dashboard in an application that reads data from the correct DB based on the user.", "username": "Mark_Peterson" }, { "code": "", "text": "Hi Mark,This is not currently supported out of the box, a potential workaround is to duplicate the dashboard and hook it up with different databases/datasources, and in your application, based on the user, you could choose which embed dashboard to render.If further filter is required within the embed dashboard filter can be applied to each embedded chart: https://www.mongodb.com/docs/charts/filter-embedded-charts/#filter-embedded-charts.Also if you have any ideas on improvement for the SDK or the Charts app in general, please feel free to submit your idea or feedback through here: Charts: Top (194 ideas) – MongoDB Feedback Engine.Many thanks,\nJames", "username": "James_Wang1" }, { "code": "", "text": "Hi @Mark_Peterson, apologies for the delay in responding. You can dynamically apply a filter on a chart based on the user, but there is no way to dynamically select a collection or database. We are aware of this requirement for users who model data in this way, and are looking into options on how to support this in the future.Tom", "username": "tomhollander" }, { "code": "", "text": "@tomhollander any movement on this requirement? Is it any closer?", "username": "Saul_Gowens" }, { "code": "", "text": "Hi @Saul_Gowens, this is something we hope to address next year. Apologies that it’s taken us a while to get to this.", "username": "tomhollander" }, { "code": "", "text": "@tomhollander we would be interested in seeing something like this as well.If the unimplemented proposal is something like just swapping out the cluster and database names by providing authenticated options { ClusterName: ‘foo’, DatabaseName: ‘bar’ } that would be ideal, and the actual Collections would remain the same. At least that’s what I think the OP wants as well.", "username": "Mark_Johnson" }, { "code": "", "text": "Thanks Mark, we definitely understand the importance of this scenario for some users. We’ve seen cases where people want to change any combination of cluster, database and collection. None of these are particularly more difficult than any other; the challenge is how to do this securely. We’ve heard that many users partition different customers’ data into different DBs or clusters, so it wouldn’t be acceptable to have the SDK control the data source on the client. If a user from Pepsi logs in they shouldn’t be able to hack the SDK to view data from Coke’s data source. So we need to find a secure way to do this server side, possibly similar to injected filters.Happy to hear any further ideas or feedback.\nTom", "username": "tomhollander" } ]
Dynamically select DB for Embedded Chart based on user
2022-06-16T14:01:32.684Z
Dynamically select DB for Embedded Chart based on user
2,724
null
[ "flexible-sync" ]
[ { "code": "\n _realm.Subscriptions.Update(() =>\n {\n _realm.Subscriptions.Add(_realm.All<EntryEntity>());\n _realm.Subscriptions.Add(_realm.All<LanguageEntity>());\n _realm.Subscriptions.Add(_realm.All<PhraseEntity>());\n _realm.Subscriptions.Add(_realm.All<SourceEntity>());\n _realm.Subscriptions.Add(_realm.All<TranslationEntity>());\n _realm.Subscriptions.Add(_realm.All<UserEntity>());\n _realm.Subscriptions.Add(_realm.All<WordEntity>());\n });\n", "text": "I am trying to understand the logic behind Subscriptions and query fields in FlexibleSync, as I understand, these are needed to “listen” to changes, so that any change done in that collection will be synced, my predicament however is what kind of subscriptions must I have if I want to listen to all changes?!I tried to configure the app with the following, and as I wrote in adjacent thread, although it worked once, it never worked again and now I am doubting maybe it was because of how I configured the subscriptions…Is this way of configuring, i.e. adding everything to subscriptions is ok? If so, why it’s not a normal behavior, i.e. why would you even need to configure subscriptions if realm can handle all object changes for all objects? If it’s not ok, then how else should I have configured them to listen to changes to any of those entities, regardless of who made the changes and on which field?Thank you.", "username": "Movsar_Bekaev" }, { "code": "", "text": "Hi @Movsar_Bekaev. The way that you’re setting up the subscription seems to be fine, so that should work. If it doesn’t, then we need to investigate what’s happening.Regarding the idea behind flexible sync… The main idea here is that you subscribe only to the objects that you are interested into. Imagine having a todo application with millions of users. Most probably you don’t need to sync all todo items for all users on all devices, because it will be useless, and there won’t be enough space on the device probably. In this case you’d probably add a subscription to sync only the todo items for the current user. This is obviously a simple case, but I hope that you can get the point.\nOverall, the idea here is to sync only the objects that are relevant to the current user at a specific time.Just to add up on that. When you are developing the app probably it’s fine to get all data downloaded, but in a production app this would probably be too much data downloaded, and unnecessary.", "username": "papafe" }, { "code": " await _realm.Subscriptions.WaitForSynchronizationAsync();\n", "text": "I see, so my idea was to ship the local database file to platforms, to reduce first start and then update it because I am building a dictionary app and all the entries must be up to date regardless of who made the changes and who’s viewing them. Is that still going to be a significant problem as to speed?Also, when exactly should I useAs I think I should use it right after start, after acquiring Realm instance and setting up Subscriptions - to download external changes and after I make changes to sync, am I correct?The last one - should I use any other methods like those in SyncSession - WaitForDownload or WaitForUpload etc?My original problem is described here btw, please take a look if you get the chance. 
This is a blocker for me and I am not even trying to make this work now, just want to understand though, for future tries Thank you", "username": "Movsar_Bekaev" }, { "code": "await _realm.Subscriptions.WaitForSynchronizationAsync()WaitForDownloadWaitForUploadWaitForUpload", "text": "Your use case doesn’t seem problematic for sync. You will start to encounter issues only if the amount of data that needs to be synced from/to the device is too big. If you need to move around only few items it shouldn’t be a problem.await _realm.Subscriptions.WaitForSynchronizationAsync() is a method that returns when the subscriptions have been synchronized, and all the relevant data has been downloaded on the device. It is not mandatory to call this method, as those changes will happen automatically anyways on a background thread, but you have no guarantees about the timing exactly. It’s definitely a good practice to call it after you’re changing subscriptions though, as it will ensure that the device has all the relevant data.Regarding WaitForDownload and WaitForUpload, I would say that most of the times you don’t need them. Those methods return when (respectively) all the sync data has been dowloaded or uploaded from/to the device to the sync server. In the majority of cases you don’t need to do that manually, as the data is synced automatically on a background thread. It could be useful in some specific instances, in which you want to be sure that data has been downloaded/uploaded.\nFor example, you could call WaitForUpload when the user logs out from the application, to be sure that all the data available has been synced.Regarding your original problem, I didn’t give a look yet, but I’ll do it soon.", "username": "papafe" }, { "code": "_realm.Subscriptions.Add(_realm.All<EntryEntity>());\n", "text": "Thank you, this is very helpful, I spent 4 days digging into official documentation but your two posts gave more information I’d like to know also:Do we need to remove and add anew Subscriptions each time we get Realm instance? I mean, in async/await, you have to get Realm instance each time it’s needed in a new task, so, does this affect Subscriptions? If I add them as shown before on app start, do I need to add them again each time I require Realm Instance?Coming back to query fields, if I am adding subscriptions like I do withWill it use _id as default query field? or they don’t matter when subscribing to whole collectionDo I need partition_id field on my models, when using FlexibleSync (I thought it’s obviously “no”, but in quick start example it asks you to specify them no matter which method is chosen - maybe it’s just not updated?)Thank you so much!", "username": "Movsar_Bekaev" }, { "code": "PopulateInitialSubscriptions public static class RealmService\n {\n private const string appId = \"****\";\n\n private static Realms.Sync.App app;\n\n public static void Init()\n {\n app = Realms.Sync.App.Create(appId);\n }\n\n public static Realm GetRealm()\n {\n var config = new FlexibleSyncConfiguration(app.CurrentUser)\n {\n PopulateInitialSubscriptions = (realm) =>\n {\n....\n realm.Subscriptions.Add(query);\n }\n };\n\n return Realm.GetInstance(config);\n }\n\n....\n}\n_id", "text": "Also, regarding your other issue, it will be better if you provide us with your client logs and also the app id, so that we can take a look at the server logs and see what’s happening. If you do so, it makes sense to do in the other thread though, otherwise it’s complicated to follow the whole conversation . 
Also, if you prefer not to share your appId (or even the whole project if you want), we can provide a confidential channel where you can send that info (or the project).", "username": "papafe" }, { "code": "", "text": "Thank you! Regarding the other issue - I will post in that thread once I set it up once again.\nI’ll share the app id - no problem with that.You’re awesome, have a great day!", "username": "Movsar_Bekaev" }, { "code": "", "text": "You will start to encounter issues only if the amount of data that needs to be synced from/to the device is too big. If you need to move around only few items it shouldn’t be a problem.What’s the best practice if you have 100k users?", "username": "Alexandar_Dimcevski" }, { "code": "UserIdUserId == currentUserId", "text": "If you have 100k users you can’t obviously sync data for all users for all device. In that case you need to find a way to identify which data belongs to which user, for example, and sync only that. Most times that can be, for instance, adding a UserId variable to che classes and sync on that with something like UserId == currentUserId.", "username": "papafe" }, { "code": "", "text": "And if I want to search for other users in this case, I’d do it server side or with a function?", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "@Alexandar_Dimcevski You could do something as described here. By the way, if you have additional questions/doubts I would suggest to open a new topic, otherwise it gets confusing to follow along.", "username": "papafe" }, { "code": "", "text": "Got it. Sorry for missing threads", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "I’d like to clarify what is expected to happen here, now I’ve spent a while getting my head around FlexibleSync.Seems @Movsar_Bekaev wants toSo the big question is - can you distribute a copy of a Flexible Sync Realm?If yes, then his subscriptions will work fine, there will be some initial reconciliation and it will be only new/changed entries going across then network.If no, there are two choices:", "username": "Andy_Dent" } ]
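For readers coming from another SDK, here is roughly the same "subscribe to a collection, then wait for the data" flow sketched with the Realm JavaScript SDK. The app ID and schema are invented for illustration, and, as discussed above, a production app would normally subscribe to a narrower query than a whole collection.

```javascript
import Realm from "realm";

// Hypothetical app ID and schema; a real app would subscribe to a narrower
// query (for example, only the current user's entries).
const app = new Realm.App({ id: "my-dictionary-app-xxxxx" });
const user = await app.logIn(Realm.Credentials.anonymous());

const realm = await Realm.open({
  schema: [{
    name: "EntryEntity",
    primaryKey: "_id",
    properties: { _id: "objectId", content: "string" },
  }],
  sync: { user, flexible: true },
});

// Add (or replace) the subscription, then wait until the matching objects
// have been downloaded to the device.
await realm.subscriptions.update((mutableSubs) => {
  mutableSubs.add(realm.objects("EntryEntity"), { name: "allEntries" });
});
await realm.subscriptions.waitForSynchronization();

console.log("Entries on device:", realm.objects("EntryEntity").length);
```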
Understanding Realm FlexibleSync
2023-01-12T09:28:04.935Z
Understanding Realm FlexibleSync
2,217
null
[]
[ { "code": "{\n \"subscribers\": {\n \"$elemMatch\": {\n \"$eq\": \"%%user.id\"\n }\n }\n}\n", "text": "Hello,\nI want to give the following write (document level) permission for a collection :The user is denied the write access when removing itself from subscribers. It seems like the write permission is evaluated AFTER the write operation is committed, with the consequence that the operation is reverted because write access is refused.Is this the expected behavior?", "username": "Tanguy_1" }, { "code": "\"%%user.id\"", "text": "Hi @Tanguy_1,Yes, the system is designed to validate the write permissions against the state of the object both before and after the modification (assuming the object is not inserted/deleted). Thus, when the user removes itself from the “subscribers” list, the write permissions check will fail on the updated state because the entry for \"%%user.id\" is no longer present. As you noted, the write is reverted due to the concept of “compensating writes”, which is described here.Let me know if you have any other questions,\nJonathan", "username": "Jonathan_Lee" } ]
Evaluation of write permissions in rules
2023-10-11T12:26:19.164Z
Evaluation of write permissions in rules
213
null
[ "migration" ]
[ { "code": "", "text": "Greetings fine people,I am exploring the migration of an SQL database to MongoDB.One of the first things that I would like to know is if there’s a way to automatically migrate my tables and turn them into collections.I have plenty of data already collected in these tables and I would like to find an easy way to migrate them into MongoDB. I understand the schema of the documents but I just don’t want to manually do the conversion; it’s too many tables, too many records, and too many fields.Any ideas are appreciated.Regards.", "username": "An_Infinite_Loop" }, { "code": "", "text": "Hi @An_Infinite_Loop,\nI think this is the right product:MongoDB Relational Migrator simplifies the process of migrating workloads from relational databases to MongoDB. Download now!Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Thanks @Fabio_Ramohitaj I watched the demo video and it does seem to be what I was looking for. I will give it a try.Regards.", "username": "An_Infinite_Loop" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migrating SQL tables to MongoDB Collections
2023-10-12T18:16:40.441Z
Migrating SQL tables to MongoDB Collections
179
null
[]
[ { "code": "", "text": "Hello, would it be possible to split the final invoice with 2 cards? For example I create the org, invite my business partner, and we split the bill every month.", "username": "M_N_A2" }, { "code": "", "text": "Will anyone be able to help me? That would be great ", "username": "M_N_A2" }, { "code": "", "text": "Hello @M_N_A2 ,Welcome to The MongoDB Community Forums! As per my knowledge, MongoDB Atlas do not currently have a method to achieve what you are describing. However, I would advise you to bring this up with the Atlas support team or connect with support via the in app chat support available in the Atlas UI. They should be able to help you with any payment related queries.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Splitting the Invoice with someone else
2023-10-12T07:13:01.472Z
Splitting the Invoice with someone else
161
https://www.mongodb.com/…8_2_1024x537.png
[]
[ { "code": "", "text": "Hey,As far as I know, AWS removed data transfer costs over VPC peering connections on May 2021.My customer is planning to create VPC peering with different AWS accounts and they are confused who is the payer for data transfer in this situation.\n\nAll accounts are in same AWS Region (Seoul)\n\nA...I see daily 400-500GB of data transfer which I assume most of them for the applications getting data via drivers. So the apps are on the same AWS region and VPC peering is enabled. Why I still see this cost?", "username": "Mete" }, { "code": "", "text": "Hi @Mete and welcome to MongoDB community forums!!As mentioned in the AWS Documentation for VPC pairing, the no cost incurred is implemented when the two accounts (in your case is Atlas and AWS) are in the same availability zone.\nNow even the Atlas cluster cluster is deployed in the region selected, the information about is the AZs(Availability Zone) is not provided.\nHence, in your case the AZs(Availability Zone) between the account A and the account B would be in different.In case you have the paid support, you can reach out to MongoDB support where they can help you in identification for the information on AZ and the pricing in more details.For further details, you can visit the Data Transfer documentation for deeper understanding.Warm regards\nAasawari", "username": "Aasawari" } ]
Data transfer costs while VPC peering
2023-10-11T18:42:24.280Z
Data transfer costs while VPC peering
205
null
[ "indexes" ]
[ { "code": "status{ id: 1, name: 'Draft' }{ id: 2, name: 'Published' }status", "text": "In a collection all documents have a field status and its value is an object. for example: { id: 1, name: 'Draft' } or { id: 2, name: 'Published' }.\nIs it okay to index status field?", "username": "Sachin_Kumar_Verma" }, { "code": "status{ \n \"_id\": \"unique id\",\n \"status\": { \"id\": 1, \"name\": \"Draft\" } \n}\ncreateIndex({ \"status\": 1 })\nfind({ \"status\": { \"id\": 1, \"name\": \"Draft\" } })\nfind({ \"status.id\": 1, \"status.name\": \"Draft\" })\ncreateIndex({ \"status.id\": 1, \"status.name\": 1 });\n", "text": "Hello @Sachin_Kumar_Verma, Welcome to the MongoDB community forum,Is it okay to index status field?It depends on your use case, can you share more details about what is the possible query you want to use the index?If I am correct, consider the below document,If you create below index:To use the index you have to query like this:The below query won’t use above index,To support the index on the about query you have to create the compound index as below,", "username": "turivishal" }, { "code": "find({ \"status\": { \"id\": 1, \"name\": \"Draft\" } })find({ \"status\": { $ne: { \"id\": 1, \"name\": \"Draft\" } } })status", "text": "hi @turivishal, thanks for the reply\nquery on this collection will be like find({ \"status\": { \"id\": 1, \"name\": \"Draft\" } }) or find({ \"status\": { $ne: { \"id\": 1, \"name\": \"Draft\" } } }) etc.\nthat is why I wanted to index on status\nI was just worried that indexing on an object field would not work properly since I did not find any example like that. I don’t actually understand how MongoDB does comparisons while consulting the indexes this is also part of my concern. If you know JS you know how object comparisons are based on object references instead of comparing all key-value pairs in those objects.", "username": "Sachin_Kumar_Verma" }, { "code": "[{ \n \"_id\": \"1\",\n \"status\": { \"id\": 1, \"name\": \"Draft\" } \n},\n{ \n \"_id\": \"2\",\n \"status\": { \"name\": \"Draft\", \"id\": 1 } \n}]\nfind({ \"status\": { \"id\": 1, \"name\": \"Draft\" } })\n[{ \n \"_id\": \"1\",\n \"status\": { \"id\": 1, \"name\": \"Draft\" } \n}]\n{ \"id\": 1, \"name\": \"Draft\" } not equal to { \"name\": \"Draft\", \"id\": 1 }\n", "text": "Hello @Sachin_Kumar_Verma,I was just worried that indexing on an object field would not work properly since I did not find any example like thatI think the index will work perfectly, but the query will not, you have to understand the below things if you don’t know,\nConsider you have documents in the collection:Your query is:You will get a single document, as you can see the below result:Why? 
Because the order of the properties in status’s value does not match the original document in the collection. So you have to make sure that your insert query always stores the status object with its properties in the same order.If you know JS you know how object comparisons are based on object references instead of comparing all key-value pairs in those objects.MongoDB compares objects by value, so it is not like JS!Outside the scope of the question:\nI would suggest you experiment with these things using MongoDB Compass. It’s easy and quick to connect your cluster, insert documents, execute queries (find, aggregate), create indexes, and run the explain command to check whether your query used the index or not.", "username": "turivishal" }, { "code": "{ \"id\": 1, \"name\": \"Draft\" } not equal to { \"name\": \"Draft\", \"id\": 1 }a = { \"id\": 1, \"name\": \"Draft\" }\nb = { \"name\": \"Draft\", \"id\": 1 }\na === b // evaluates to false\na == b // evaluates to false\n\nd = { \"_id\" : 1 ,\n \"status\": { \"id\": 1, \"name\": \"Draft\", \"_x\" : 369 }\n}\ndb.collection.insertOne( d )\ndb.collection.findOne( { \"status\": { \"id\": 1, \"name\": \"Draft\" } } )\n/* will not find document with _id:1 because of the extra field _x */\ndb.collection.findOne( { \"status.id\": 1, \"status.name\": \"Draft\" } )\n/* will find document with _id:1 despite of the extra field _x */\n", "text": "{ \"id\": 1, \"name\": \"Draft\" } not equal to { \"name\": \"Draft\", \"id\": 1 }I think that is also true in JS.So you have to make sure that your insert query always stores the status object with its properties in the same order.And not store any other values in the status object.One feature I use, thanks to the flexible schema nature of MongoDB, is to tag or flag objects or documents with extra information to help me code and debug. I would add a field, say _debug, and whenever I encounter such an object I print out extra information as a log or trace. I would not be able to do that if using object equality rather than dot notation in my queries.", "username": "steevej" }, { "code": "\na = { \"id\": 1, \"name\": \"Draft\" };\nb = { \"id\": 1, \"name\": \"Draft\" };\na === b // still false\na == b // false\n", "text": "I think that is also true in JS.Just to point out, the order of key-value pairs in objects does not matter in JS for equality comparison.Object comparison is based on reference, not on their actual values.", "username": "Sachin_Kumar_Verma" }, { "code": "", "text": "Thanks for the clarification.", "username": "steevej" } ]
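To tie the examples above together, the quickest way to confirm which query shape can use which index is to run both through explain(); the collection name below is a placeholder.

```javascript
// Two indexes discussed above: one on the whole embedded document,
// one compound index on its fields via dot notation.
db.items.createIndex({ status: 1 });
db.items.createIndex({ "status.id": 1, "status.name": 1 });

// Exact match on the embedded document (field order and field set must
// match the stored value exactly) -- served by the { status: 1 } index:
db.items.find({ status: { id: 1, name: "Draft" } })
  .explain("executionStats").queryPlanner.winningPlan;

// Dot-notation query (order-insensitive, tolerant of extra fields) --
// served by the compound dot-notation index:
db.items.find({ "status.id": 1, "status.name": "Draft" })
  .explain("executionStats").queryPlanner.winningPlan;
```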
Indexing on an object value field
2023-10-10T10:45:48.503Z
Indexing on an object value field
251
null
[ "atlas-search" ]
[ { "code": "{\"tags\":[\"aaa\",\"bbb\",\"ccc\"]}{\n \"index\": \"test\",\n \"facet\": {\n \"operator\": {\n \"autocomplete\": {\n \"query\": \"a\",\n \"path\": \"tags\"\n }\n },\n \"facets\": {\n \"titleFacet\": {\n \"type\": \"string\",\n \"path\": \"tags\",\n \"numBuckets\": 100\n }\n }\n }\n }\n{\n \"analyzer\": \"search_keyword_lowercaser\",\n \"searchAnalyzer\": \"search_keyword_lowercaser\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"tags\": [\n {\n \"analyzer\": \"search_keyword_lowercaser\",\n \"searchAnalyzer\": \"search_keyword_lowercaser\",\n \"type\": \"string\"\n },\n {\n \"type\": \"stringFacet\"\n },\n {\n \"analyzer\": \"search_keyword_lowercaser\",\n \"foldDiacritics\": false,\n \"maxGrams\": 15,\n \"minGrams\": 1,\n \"tokenization\": \"nGram\",\n \"type\": \"autocomplete\"\n }\n ]\n }\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"search_keyword_lowercaser\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"keyword\"\n }\n }\n ]\n}\n", "text": "Hi,\nI’m having trouble with facets when working with arrays. The facet returns all elements of matching arrays, and I would like to return only those elements that matched my condition.Given the following collection that contains a single document:{\"tags\":[\"aaa\",\"bbb\",\"ccc\"]}I would like to run the following query and find all tags that contain the character “a”:This is the search index:The problem is that I am getting all tags, instead of only “aaa”.\nWhat can be done to solve this?", "username": "Shai_Binyamin" }, { "code": "Atlas atlas-cihc7e-shard-0 [primary] test> db.sample.find()\n[\n {\n _id: ObjectId(\"6526396b928f922719d4fa65\"),\n tags: [ 'bbb', 'bbb', 'ccc', 'ddd', 'ddd' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa66\"),\n tags: [ 'ddd', 'ccc', 'fff', 'jjj', 'ccc' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa67\"),\n tags: [ 'bbb', 'fff', 'aaa', 'yyy', 'bbb' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa68\"),\n tags: [ 'yyy', 'yyy', 'bbb', 'bbb', 'aaa' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa6a\"),\n tags: [ 'ddd', 'yyy', 'ccc', 'fff', 'yyy' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa6b\"),\n tags: [ 'fff', 'yyy', 'aaa', 'bbb', 'jjj' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa6c\"),\n tags: [ 'aaa', 'fff', 'yyy', 'fff', 'aaa' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa6d\"),\n tags: [ 'jjj', 'ccc', 'fff', 'ccc', 'ddd' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa6e\"),\n tags: [ 'bbb', 'ddd', 'fff', 'ccc', 'ddd' ]\n }\n]\nAtlas atlas-cihc7e-shard-0 [primary] test> db.sample.aggregate([{ $search: { facet: { operator: { autocomplete: { query: \"a\", path: \"tags\", }, }, facets: { titleFacet: { type: \"string\", path: \"tags\", numBuckets: 100, }, }, }, }, }])\n[\n {\n _id: ObjectId(\"6526396b928f922719d4fa6c\"),\n tags: [ 'aaa', 'fff', 'yyy', 'fff', 'aaa' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa67\"),\n tags: [ 'bbb', 'fff', 'aaa', 'yyy', 'bbb' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa6b\"),\n tags: [ 'fff', 'yyy', 'aaa', 'bbb', 'jjj' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa68\"),\n tags: [ 'yyy', 'yyy', 'bbb', 'bbb', 'aaa' ]\n }\n]\n", "text": "Hi @Shai_Binyamin and welcome to MongoDB community forums!!Based on the above information that you have shared, I tried to create some sample data which looks like:I used the same index definition and the search query as:The above query provides me the output for all the tags that contains “a” in the 
list.If this is not what you are seeking for, could you help me with some sample data along with expected output and current output that you are receiving.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari, thanks for your response.\nMy goal is to only return the tag “aaa” in this case, and not to show other tags.\nI would like to return only the matching elements in the array, and not to return non matching tags which are in the same array as matching tags.", "username": "Shai_Binyamin" }, { "code": "[\n {\n _id: ObjectId(\"6526396b928f922719d4fa6c\"),\n tags: [ 'aaa' ]\n },\n {\n _id: ObjectId(\"6526396b928f922719d4fa67\"),\n tags: [ 'aaa']\n },\n....\n]\nAtlas atlas-cihc7e-shard-0 [primary] test> db.sample.aggregate([ { $search: { facet: { operator: { autocomplete: { query: \"a\", path: \"tags\" } }, facets: { titleFacet: { type: \"string\", path: \"tags\", numBuckets: 100 } } } } }, { $addFields: { tags: { $reduce: { input: \"$tags\", initialValue: [], in: { $cond: { if: { $eq: [\"$$this\", \"aaa\"] }, then: { $concatArrays: [ \"$$value\", [\"$$this\"]] }, else: \"$$value\" } } } } } }])\n[\n { _id: ObjectId(\"6526396b928f922719d4fa6c\"), tags: [ 'aaa', 'aaa' ] },\n { _id: ObjectId(\"6526396b928f922719d4fa67\"), tags: [ 'aaa' ] },\n { _id: ObjectId(\"6526396b928f922719d4fa6b\"), tags: [ 'aaa' ] },\n { _id: ObjectId(\"6526396b928f922719d4fa68\"), tags: [ 'aaa' ] }\n]\nAtlas atlas-cihc7e-shard-0 [primary] test> db.sample.aggregate([ { $search: { facet: { operator: { autocomplete: { query: \"a\", path: \"tags\" } }, facets: { titleFacet: { type: \"string\", path: \"tags\", numBuckets: 100 } } } } }, { $addFields: { tags: { $filter: { input: \"$tags\", as: \"thisTag\", cond: { $eq: [\"$$thisTag\", \"aaa\"] } } } } }])\n[\n { _id: ObjectId(\"6526396b928f922719d4fa6c\"), tags: [ 'aaa', 'aaa' ] },\n { _id: ObjectId(\"6526396b928f922719d4fa67\"), tags: [ 'aaa' ] },\n { _id: ObjectId(\"6526396b928f922719d4fa6b\"), tags: [ 'aaa' ] },\n { _id: ObjectId(\"6526396b928f922719d4fa68\"), tags: [ 'aaa' ] }\n]\n", "text": "Thank you for the clarification @Shai_BinyaminIf I understand correctly and based on the example shared above, you would need the output as:and so on, considering the fact that ‘aaa’ occurs only once in the tags array.To accomplish this, there can be more than one way to achieve the desired response.P.S. Please note that the first document has multiple “aaa” in the “tags” array in my sample document.\nAlso, the above query is based on the sample document I have created and would recommend you to go through thorough testing and evaluate in terms of performance before using in the production environment.Please feel free to reach out in case of any further concerns.Warm regards\nAasawari", "username": "Aasawari" } ]
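Since the goal here is "tags containing the typed text" rather than tags equal to a fixed string such as "aaa", the post-processing stage can match a substring instead of hard-coding the value. The sketch below reuses the thread's `sample` collection and `test` index names and drops the facet wrapper for brevity; if the facet counts are still needed, keep the facet form from the earlier reply and only swap the `$filter` condition.

```javascript
db.sample.aggregate([
  {
    $search: {
      index: "test",
      autocomplete: { query: "a", path: "tags" }
    }
  },
  {
    // Keep only the array elements that actually contain the search text.
    $addFields: {
      tags: {
        $filter: {
          input: "$tags",
          as: "t",
          cond: { $regexMatch: { input: "$$t", regex: "a", options: "i" } }
        }
      }
    }
  }
]);
```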
$searchMeta Facets - Return only filtered elements
2023-10-10T09:57:50.612Z
$searchMeta Facets - Return only filtered elements
269
https://www.mongodb.com/…1_2_1024x576.png
[ "aggregation", "dach-virtual-community", "mug-virtual-emea" ]
[ { "code": "oror\nMeeting ID: 97203953948\n\nPasscode: 188795\n\nMongoDB Senior Solutions ArchitectMongoDB ChampionIndependent ConsultantMongoDB ChampionPrincipal SRE Database Engineer at BeameryMongoDB ChampionDeveloper Advocate at RedHat", "text": "\nMUG1920×1080 222 KB\nWelcome to the inaugural Virtual EMEA MongoDB User Group Meetup! We are excited to bring the MongoDB enthusiasts in EMEA together for an afternoon filled with learning and opportunities to interact with fellow MongoDB users in your time zone.To RSVP - Please click on the “✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green highlighted button if you are going. You need to be signed in to access the button.Microservices and serverless offerings can significantly shorten the time between an idea and practical implementation. But what does something like this look like in practice?In the first session, we will look at how an idea can be realized quickly and easily with MongoDB Atlas with @Timo_Lackmann and @michael_hoeller . We will not only look at the database but the complete application stack and show how to implement a MongoDB Aggregation pipeline translator with ChatGPT and MongoDB Atlas.Once you’ve gained a solid understanding of the core concepts, we’ll provide opportunities to break out into different Breakout Rooms to learn new things like using MongoDB Kubernetes Operator from @Arkadiusz_Borucki or Understanding MongoDB Kafka Connector with @hpgrahsl or apply your learnings, participate in:dart: Trivia and Query Language Challenge (win exciting prizes ) or hang out with other attendees in the Hangout Room Event Type: Online Join Zoom Meeting (passcode is encoded in the link)MongoDB Senior Solutions Architect\nMongoDB Champion | Independent Consultant\n\narkadiusz-borucki3072×3072 430 KB\nMongoDB Champion | Principal SRE Database Engineer at BeameryMongoDB Champion | Developer Advocate at RedHat", "username": "Harshit" }, { "code": "", "text": "Looks awesome - can’t wait!", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "Wow, I’m thrilled to be a part of the inaugural Virtual EMEA MongoDB User Group Meetup! It’s great to see the MongoDB community coming together for an exciting afternoon of learning and networking. The topic of microservices and serverless offerings sounds intriguing. I’m eager to understand how they can accelerate idea implementation. Looking forward to the first session with @Timo_Lackmann, where we’ll explore the practical application of MongoDB Atlas and dive into implementing a MongoDB Aggregation pipeline with ChatGPT and MongoDB Atlas. This promises to be an insightful and hands-on session! Can’t wait to learn and connect with fellow MongoDB enthusiasts in the EMEA time zone. See you all there!", "username": "Kaftan_Tomer" }, { "code": "", "text": "Hey @Kaftan_Tomer,\nGlad to know that you are excited about the event. Make sure you RSVP for the eventTo RSVP - Please click on the “✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green highlighted button if you are going. You need to be signed in to access the button.", "username": "Harshit" }, { "code": "", "text": "I was in the process of learning kubernetes and kafka. I am super exited for this event to learn more about it ", "username": "33_ANSHDEEP_Singh" }, { "code": "", "text": "Hey All,\nGentle Reminder: The EMEA MUG Virtual Meetup is tomorrow at 11:00 AM. 
We are thrilled to have you join us. (Zoom meeting link)We want to make sure everyone has a fantastic time, so please join us at 11:00 AM to ensure you don’t miss any of the sessions. We can also have some time to chat before the talks begin.If you have any questions, please don’t hesitate to ask by replying to this thread. Looking forward to seeing you all at the event tomorrow!", "username": "Harshit" }, { "code": "", "text": "Hey Everyone!\nGentle Reminder - We will be starting soon! (Zoom meeting link)", "username": "Harshit" }, { "code": "", "text": "Hey Everyone!\nThank you for being a part of our virtual meetup. For those who couldn’t attend, we’ve got you covered with the recording of the event. Stay tuned, as we’ll be polishing the recordings and getting them ready for a YouTube release very soon.Video Conferencing, Web Conferencing, Webinars, Screen Sharing - Zoom\nPasscode: jRbex@47", "username": "Harshit" }, { "code": "", "text": "Thanks for sharing the recordings, @Harshit, it was a great learning experience; I specifically learned some great things related to Atlas Vector Search and MongoDB features related to data structure. ", "username": "Vishal_Alhat" }, { "code": "", "text": "It’s interesting to see how MongoDB Atlas is being utilized to bring ideas to life quickly and easily. The session seems to offer a comprehensive view of not just the database itself, but the entire application stack. The integration of ChatGPT and MongoDB Atlas, specifically for implementing a MongoDB Aggregation pipeline translator, sounds like an innovative approach. I’m curious to learn more about how these technologies work together and what practical applications can be achieved.@contact:gptnederlands.nl", "username": "Koch_Chris" }, { "code": "", "text": "Hello,I see you are looking for a translation tool that can help you solve your problem.Well I may have a solution for you.Try this amazing AI video translation and dubbing tool:https://wavel.ai/solutions/ai-video-translatorThe best thing about this tool is that it’s 95% accurate and very inexpensive. I tried it myself and was really impressed by the results.Hope this helped you a little bit.Happy translating ", "username": "AI_wavel" } ]
EMEA vMUG: Aggregation Pipeline Translator with ChatGPT, Kubernetes Operator and Kafka Connector!
2023-06-27T23:41:44.404Z
EMEA vMUG: Aggregation Pipeline Translator with ChatGPT, Kubernetes Operator and Kafka Connector!
2,895
null
[ "queries", "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "const { MongoClient, ObjectId } = require('mongodb')\n\nconst main = async () => {\n const uri = \"mongodb+srv://weareandrei:<password>@omega.owkrpxa.mongodb.net/?retryWrites=true&w=majority&appName=AtlasApp\";\n const client = new MongoClient(uri);\n\n try {\n await client.connect();\n console.log('Connected successfully');\n\n const database = client.db('User');\n const collection = database.collection('User');\n\n const result = await collection.findOne({username: 'user'});\n\n if (result) {\n console.log('Retrieved document:', result);\n } else {\n console.log('Document not found');\n }\n } catch (e) {\n console.error('Error:', e);\n } finally {\n await client.close();\n console.log('Connection closed');\n }\n}\n\nmain()\nconst {mongoose} = require('mongoose');\n\nconst main = async () => {\n const mongoString = 'mongodb+srv://weareandrei:<password>@omega.owkrpxa.mongodb.net/?retryWrites=true&w=majority&appName=AtlasApp'\n\n try {\n mongoose.connect(mongoString, { useNewUrlParser: true, useUnifiedTopology: true })\n .then(() => {\n console.log('Database Connected');\n })\n .catch((error) => {\n console.error('Database Connection Error:', error);\n });\n\n mongoose.set(\"strictQuery\", false);\n\n const Schema = mongoose.Schema\n\n const UserModelSchema = new Schema({\n documentationId : {\n type: String\n },\n password : {\n type: String\n },\n username : {\n type: String\n }\n })\n\n const UserModel = mongoose.model(\"User\", UserModelSchema, 'User')\n\n const result = await UserModel.findOne({username: 'user'});\n\n if (result) {\n console.log('Retrieved document:', result);\n } else {\n console.log('Document not found');\n }\n } catch (e) {\n console.error('Error:', e);\n } finally {\n await mongoose.connection.close();\n console.log('Connection closed');\n }\n}\n\nmain()\nConnected successfully\nRetrieved document: {\n _id: new ObjectId(\"652367f3e9a268c7fc1e6efa\"),\n username: 'user',\n password: 'pass',\n documentationId: '6523592fd730e1f9120fbef6'\n}\nConnection closed\nDatabase Connected\nDocument not found\nConnection closed\n{\n \"_id\": {\n \"$oid\": \"652367f3e9a268c7fc1e6efa\"\n },\n \"username\": \"user\",\n \"password\": \"pass\",\n \"documentationId\": \"6523592fd730e1f9120fbef6\"\n}\n", "text": "Hi guys, could somebody give me a hand with understanding where I went wrong with using Mongoose? I am stuck for 2 days already.Here is my code in MongoClient, and it works:And here is a similar code that uses mongoose:If I run the first one, I get :However, the second code returns:Here is the json object of my User.User collection from mongoDB. I believe I defined the model correctly.Thank you in advance!", "username": "Andrei_Mikhov" }, { "code": "test", "text": "Hello @Andrei_Mikhov, Welcome to the MongoDB community forum,It looks like you missed to specify the database name in the Mongoose code. So by default, it will connect with the test database.", "username": "turivishal" }, { "code": "", "text": "Hey @turivishal , thatks for the reply!\nCould you elaborate pls? I can’t se where I missed it. It is specified in URI I guess? 
If you are talking about the model, then in model I included the db and collection name…", "username": "Andrei_Mikhov" }, { "code": "mongodb+srv://weareandrei:<password>@omega.owkrpxa.mongodb.net/<dbName>?retryWrites=true&w=majority&appName=AtlasApp\nconnectmongoose.connect(mongoString, { dbName: \"User\", useNewUrlParser: true, useUnifiedTopology: true })\nconst UserModel = mongoose.model(\"User\", UserModelSchema, 'User')\n", "text": "Hello @Andrei_Mikhov,You have to specify the Database name in the connection string:Or you can specify in options of connect method:https://mongoosejs.com/docs/connections.html#optionsIf you are talking about the model, then in model I included the db and collection name…The first parameter is for the Model name and the last one is for the collection name!\nhttps://mongoosejs.com/docs/6.x/docs/api/mongoose.html#mongoose_Mongoose-modelYou can always refer to the documentation.", "username": "turivishal" }, { "code": "", "text": "@turivishal Thanks a lot! This worked now.", "username": "Andrei_Mikhov" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
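One quick sanity check for this kind of "connects fine but finds nothing" problem is to log which database the Mongoose connection actually resolved to; if it prints `test`, the database name never made it into the URI or the options. The connection string and credentials below are placeholders.

```javascript
const mongoose = require("mongoose");

async function main() {
  // Database name supplied explicitly; the credentials/host are placeholders.
  await mongoose.connect(
    "mongodb+srv://user:pass@cluster0.example.mongodb.net/?retryWrites=true&w=majority",
    { dbName: "User" }
  );

  // Prints the database this connection will read from and write to.
  // If this says "test", the database name never made it into the URI/options.
  console.log("Connected to database:", mongoose.connection.name);

  const User = mongoose.model(
    "User",
    new mongoose.Schema({ username: String, password: String, documentationId: String }),
    "User" // explicit collection name, as in the thread
  );

  console.log(await User.findOne({ username: "user" }).lean());
  await mongoose.disconnect();
}

main().catch(console.error);
```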
Can query MongoDB using MongoClient, but not with Mongoose
2023-10-11T14:30:26.369Z
Can query MongoDB using MongoClient, but not with Mongoose
266
null
[ "aggregation" ]
[ { "code": " \"winningPlan\": {\n \"stage\": \"CLUSTERED_IXSCAN\",\n \"filter\": {\n \"_id\": {\n \"$eq\": 15205202308\n }\n },\n{\n $lookup:\n {\n from: \"ClusteredCollection\",\n localField: \"myId\",\n foreignField: \"_id\",\n as: \"tss\"\n {\n \"$lookup\": {\n \"from\": \"ClusteredCollection\",\n \"as\": \"tss\",\n \"localField\": \"myId\",\n \"foreignField\": \"_id\",\n \"unwinding\": {\n \"preserveNullAndEmptyArrays\": false\n }\n },\n \"totalDocsExamined\": NumberLong(400),\n \"totalKeysExamined\": NumberLong(0),\n \"collectionScans\": NumberLong(200),\n \"indexesUsed\": [],\n \"nReturned\": NumberLong(200),\n \"executionTimeMillisEstimate\": NumberLong(2143)\n },\n", "text": "Hi, I wanted to ask if there are limitations with the Clustered Collections when they are the target of a lookup.If I run a Find with _id I see the index is used correctly:But if I do the same with a lookup, this does not use the index:explain:How can I take advantage of the _id index?Thanks", "username": "Giacomo_Benvenuti" }, { "code": "", "text": "Hello @Giacomo_Benvenuti ,Thanks for reporting the issue. I’m able to reproduce the same issue that you’re seeing and opened SERVER-82079. Feel free to watch/up-vote the ticket to receive notification on it.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Thank you very much Tarun.", "username": "Giacomo_Benvenuti" } ]
Clustered Collections as lookup target
2023-09-29T15:45:23.154Z
Clustered Collections as lookup target
270
null
[ "aggregation", "compass" ]
[ { "code": "", "text": "I lost all my aggregation pipelines and I would want to know why it happened. Some context: MongoDB 7.0 is running on my company’s server and compass 1.39.4 is installed on my windows PC. I am the only user currently and I did not delete any of them. Please correct me if I’m wrong, all pipelines are saved on my PC end and the issue likely is on my PC ? A", "username": "Aaron_Kuang" }, { "code": "", "text": "Hi @Aaron_Kuang. Can you share your log file(s) to help us understand and debug the issue?\nYou can follow these steps to find the log file. Please make sure to remove any sensitive information from the logs.", "username": "Basit_Chonka" } ]
Lost all saved aggregation pipelines in compass 1.39.4
2023-10-11T16:54:32.144Z
Lost all saved aggregation pipelines in compass 1.39.4
207
null
[ "dot-net" ]
[ { "code": "public abstract class TypeA\n {\n // Your polymorphic method\n public abstract void AbtractMethod();\n // Only exposing this for the purpose of demonstration\n public abstract IDependency Dependency { get; }\n }\n public class TypeB : TypeA\n {\n private readonly ISpecializedDependencyForB _dependency;\n public TypeB(ISpecializedDependencyForB dependency)\n {\n _dependency = dependency;\n }\n public override void AbtractMethod()\n {\n // Do stuff with ISpecializedDependencyForB without leaking the dependency to the caller\n }\n // You hopefully won't need this prop\n public override IDependency Dependency\n {\n get { return _dependency; }\n }\n }\n\n public class TypeC : TypeA\n {\n private readonly ISpecializedDependencyForC _dependency;\n public TypeC(ISpecializedDependencyForC dependency)\n {\n _dependency = dependency;\n }\n public override void AbtractMethod()\n {\n // Do stuff with ISpecializedDependencyForC without leaking the dependency to the caller\n }\n public override IDependency Dependency\n {\n get { return _dependency; }\n }\n }\n", "text": "Hello,We need to use the following pattern in our application for our entities since our polymorphic objects have different behaviors needing different external dependencies depending on their type:Json.NET allows injecting constructor dependencies during deserialization through their through their ContractResolver abstraction. However I cannot find an equivalent feature with the MongoDB csharp driver, which I find surprising considering the vast efforts that went into providing best-in-class support for polymorphism in MongoDB.", "username": "R_B" }, { "code": "", "text": "I have the same problem. Did you find the answer? Thanks to AI, search engines are incapable of finding tech resources.", "username": "Nima_Niazmand" }, { "code": "", "text": "Unfortunately no, I haven’t found any solution for this and we have since moved to another database for a variety of reasons.", "username": "R_B" }, { "code": "TypeBTypeCISpecializedDependencyForBISpecializedDependencyForC", "text": "@R_B and @Nima_NiazmandThis defeats the design of the C# driver, the purpose of the driver is to map to BSON data.MongoDB’s C# driver uses class mapping to map BSON documents from the database to C# objects. But the issue is that the Driver doesn’t do constructor injection during deserialization.You deserialize your MongoDB documents into instances of TypeB and TypeC , the DI container will automatically provide the required dependencies. But it doesn’t support the injection, you’d need something else to do that.This can be done with the C# Injection using .NET’s built-in DI framework (or any other DI framework you prefer) to provide instances of ISpecializedDependencyForB and ISpecializedDependencyForC when needed.Does this make sense?I had the same issue when I was building a blockchain based .Net server recently and had to dig in the Drivers source on GitHub.It doesn’t support direct injection and that causes your problem. I used the normal .NET DI Framework to do all this instead.", "username": "Brock_Leonard" }, { "code": "", "text": "No, It’s obviously not possible with built-in DI or any other DI. The purpose of Newtonsoft.Json is to map to JSON. But it provides extendibility. 
Lack of extendibility is a bad design.", "username": "Nima_Niazmand" }, { "code": "", "text": "It’s up to you, but I’d try .Net’s built in DI Framework.", "username": "Brock_Leonard" }, { "code": "", "text": "You should try to learn its not possible.", "username": "Nima_Niazmand" }, { "code": "", "text": "It seems like you may not be understanding the use case we are trying to address. Injecting dependencies in our application generally speaking IS NOT what we are trying to do, injecting dependencies in our entities as they are being deserialized IS what we are trying to do.When a library is properly designed and is extensible like for example JSON.NET, you can do this kind of thing, and you’d typically configure the library to use the built-in DI of .NET to resolve dependencies during deserialization. But the library needs to expose proper extensibility points to make it possible. As you can see, the problem is not about using built-in DI or not, this is besides the point. There are a number of reasons for wanting to inject dependencies at deserialization, the polymorphic dependencies example I posted is one of them, but there are others.", "username": "R_B" }, { "code": "public class MyEntity\n{\n IDependency _dependency => StaticDomainServiceProvider<IDependency>.GetInstance();\n}\nStaticDomainServiceProvider<IDependency>.GetInstance = serviceProvider.Resolve<IDependency>()\n", "text": "@Nima_Niazmand Actually I remember what we did, we used a static ServiceLocator inside our entities instead of doing constructor injection, something looking like:And then somewhere in the entry point of your application (startup.cs or program.cs for example)This is the overally idea. If you use ASP.NET Core, make sure to use an HttpContextAccessor to get the scoped serviceProvider for the HttpContext of the current request instead of hooking into the root scope as I’ve done in the example above, otherwise you may have lifetime issues with your injected dependencies (for example the “scoped” lifetime may not be what you expect it to be since it will not be tied to the lifetime of the request as you’d probably expect).This approach works well and is a good enough compromise, but we were hoping to leave it behind once for all because constructor injection has a number of benefits (no need to add configurations in the entry point of the application, dependencies are more explicit while remaining encapsulated enough - if the deserialization library is able to inject them - etc…)", "username": "R_B" }, { "code": "", "text": "Thank you @R_B\nI wanted to avoid static properties, but it seems it’s the only solution that works. I appreciate your help.", "username": "Nima_Niazmand" }, { "code": "", "text": "@R_BIf you don’t mind me asking, what is the benefit of this approach in your use case?Any pros/cons? And I apologize for the misunderstanding of your use case.I honestly haven’t seen this need before, and honestly I’m a little puzzled at the concept for use case reasons.I would greatly appreciate more feedback on this, as I love learning things that are new.", "username": "Brock_Leonard" }, { "code": "", "text": "You may want to look into Domain Driven Design. The need for the approach above typically arise when you need to deserialize rich domain models rather than basic POCOs because a rich entity may need external services (domain services) to do its job. 
This is one of the use cases of this approach.", "username": "R_B" }, { "code": "", "text": "I’m familiar with DDD, but most often just DI the data as stated above.Is there by chance a sample project I can get a hold of that demonstrates this design that isn’t working so far for you?", "username": "Brock_Leonard" }, { "code": "", "text": "No, unfortunately I do not have time for that; however, the sample included in the original post should already provide you with what you are asking for.My feeling is that at this point, what may actually be lacking is a similar code sample describing the solution you are alluding to. But I’m pretty sure you are not addressing the same problem.", "username": "R_B" } ]
Constructor injection on deserialization with the csharp driver
2022-11-19T14:02:06.258Z
Constructor injection on deserialization with the csharp driver
1,364
null
[ "aggregation", "queries", "python", "change-streams" ]
[ { "code": "import pymongo\n\nclient = pymongo.MongoClient(connection_uri)\n\npipeline = [\n {\n '$match': {\n '$or': [\n {'updateDescription.updatedFields.base_name': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n {'updateDescription.updatedFields.industries': {'$exists': True}},\n {'updateDescription.updatedFields.country': {'$exists': True}},\n ]\n }\n }\n]\n\n# Set up the change stream\nchange_stream = client.change_stream_db.change_stream_collection.watch(pipeline)\n\n# Iterate through the change stream and print each change\nfor change in change_stream:\n print(change)\nbase_namecountriesindustriesmeta_data.statusbase_namecountriesindustriesmeta_datastatuscountriesbase_nameindustriesmeta_data.statusindustriesmeta_data.statusmeta_data.statusfrom bson.son import SONmeta_data.status{'_id': {'_data': '82642A99A8000000072B022C0100296E5A10048332797CF83843B4B13C30209360370E463C5F6964003C7279786A74676E677376000004'}, 'operationType': 'update', 'clusterTime': Timestamp(1680513448, 7), 'ns': {'db': 'change_stream_db', 'coll': 'change_stream_collection'}, 'documentKey': {'_id': 'ryxjtgngsv'}, 'updateDescription': {'updatedFields': {'meta_data.status': 'finished_updated'}, 'removedFields': [], 'truncatedArrays': []}}\n", "text": "Hi,I’m trying to set up a listener to a Mongo DB collection using the inbuilt change streams functionality. I’m coming into issues when I set up my watch on the collection when filtering for nested fields. I’m using PyMongo and my Mongo DB is hosted on Atlas.Here is my code…Essentially I want to look for updates in the base_name, countries, industries & meta_data.status fields. base_name & countries are simple strings, but industries is a field with an array object, and meta_data is a dictionary type object, with the nested field of status. So far, my watch(pipeline) picks up all updates to countries & base_name, but it’s not able to pick up changes to the more complex fields of industries and meta_data.status.I have confirmed that the change to those fields is happening. When I remove all filters on what i’m matching - i.e. watch(), and update those complex fields (industries & meta_data.status) the update flows through in the change streams print. But when I filter for them in the match command within the pipeline object, they do not.I regularly use dot notation with find() and everything seems to work fine (i.e. find(meta_data.status:‘active’) works fine) - but it’s not working in this watch(pipeline) i’ve specified above. Is there a way to do this with dot notation, or do I have to use the from bson.son import SON module? if so, how do I do this?Any advice would be greatly appreciated. 
Thanks!This is an example of when I just have watch() setup and I change the meta_data.status field…When I have the filter in match, there is nothing printed to the console.", "username": "Paul_Chynoweth" }, { "code": " {'updateDescription.updatedFields.base_name': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n {'updateDescription.updatedFields.industries': {'$exists': True}},\n {'updateDescription.updatedFields.country': {'$exists': True}},\n \nreplset [direct: primary] test> db.CS.find()\n[\n {\n _id: ObjectId(\"642d0f15d064d6c29aba53e1\"),\n updateDescription: {\n updatedFields: {\n base_name: 'Acme Corporation',\n meta_data: {\n status: 'active',\n created_at: ISODate(\"2022-01-01T00:00:00.000Z\")\n },\n industries: [ 'technology', 'manufacturing' ],\n country: 'USA'\n }\n }\n }\n]\nimport pymongo\nfrom pymongo import MongoClient\nconn = pymongo.MongoClient(\"localhost:8000\")\ndb = conn[\"test\"]\ncollection = db[\"CS\"]\ncursor = db.CS.watch()\nfor change1 in cursor:\n print(change)\nindustriesmeta_data.status", "text": "Hi @Paul_Chynoweth and welcome to the community forum!!Based on the above pipeline shared, I tried with the following sample data:and with the following change stream code in pythonand I was able to see the changes in the change stream for all updates on all fields.However,, but it’s not able to pick up changes to the more complex fields of industries and meta_data.status.Can you share your sample document which could give more clarity to understand the issue further.Regards\nAasawari", "username": "Aasawari" }, { "code": "{\n \"_id\": \"pnsfnrkoth\",\n \"base_name\": \"mjzzbvxqgk\",\n \"country\": \"ktybyozypo\",\n \"meta_data\": {\n \"cleaning_required\": true,\n \"status\": \"\",\n \"status_domain_keyword_search_scraper\": \"in_process\",\n \"status_domain_scraper\": \"finished\",\n \"status_domain_search\": \"finished\"\n },\n \"industries\": [\n \"industry3\",\n \"industry1\"\n ]\n}\ncollection.watch()", "text": "Hi Aasawari,Thanks for helping out & yes certainly. Here is a sample document…Let me know if that helps you to further understand/debug.And yeah when I run collection.watch() it picks up all changes, including changes to meta_data.status or industries. 
But when I filter specifically for those updated in the pipeline, I don’t see any change stream events coming through.Thanks,\nPaul", "username": "Paul_Chynoweth" }, { "code": "", "text": "@Aasawari (Forgot to tag you directly)", "username": "Paul_Chynoweth" }, { "code": "industriescountrydb.CS.updateOne( { _id: ObjectId(\"642d0f15d064d6c29aba53e1\") }, { $set: { \"meta_data.status\": \"Non active\" } })pipelineX = [\n {\"$match\": {\n \"$or\": [\n {\"updateDescription.updatedFields.base_name\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.meta_data.status\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.industries\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.country\": {\"$exists\": True}}\n ]\n }}\n ]\n\ncursor = db.CS.watch(pipeline=pipelineX)\n\nfor change1 in cursor:\n print(change1)\n", "text": "Hi @Paul_ChynowethBased on the sample data and the match pipeline shared, the fields industries and country are not nested fields and hence the dot notation is not applicable to them.\nCan you confirm if I am missing something to understand it correctly.I tried to use my sample data mentioned above, with the following following update query :db.CS.updateOne( { _id: ObjectId(\"642d0f15d064d6c29aba53e1\") }, { $set: { \"meta_data.status\": \"Non active\" } })and I was able to see the changes in the change stream as:{‘_id’: {‘_data’: ‘8264351662000000012B022C0100296E5A1004239EF740FA834282B31EF07C8525DCCE46645F69640064642D0F15D064D6C29ABA53E10004’}, ‘operationType’: ‘update’, ‘clusterTime’: Timestamp(1681200738, 1), ‘wallTime’: datetime.datetime(2023, 4, 11, 8, 12, 18, 682000), ‘ns’: {‘db’: ‘test’, ‘coll’: ‘CS’}, ‘documentKey’: {‘_id’: ObjectId(‘642d0f15d064d6c29aba53e1’)}, ‘updateDescription’: {‘updatedFields’: {‘meta_data’: {‘status’: ‘Non active’}}, ‘removedFields’: , ‘truncatedArrays’: }}with the following python code:Can you help me with your update command and the sample document shared.Regards\nAasawari", "username": "Aasawari" }, { "code": "$or$orpipeline = [\n {\n '$match': {\n '$or': [\n {'updateDescription.updatedFields.base_name': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n {'updateDescription.updatedFields.industries': {'$exists': True}},\n {'updateDescription.updatedFields.country': {'$exists': True}},\n ]\n }\n }\n]\nindustriesmeta_data.statusindustriesindustriesindustriespipeline = [\n {\n '$match': {\n '$or': [\n {'updateDescription.updatedFields.base_name': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n {'updateDescription.updatedFields.industries.0': {'$exists': True}},\n {'updateDescription.updatedFields.country': {'$exists': True}},\n ]\n }\n }\n]\nmeta_data.statusmeta_data.statuspipeline = [\n {\n '$match': {\n '$or': [\n {'updateDescription.updatedFields.base_name': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status.0': {'$exists': True}},\n {'updateDescription.updatedFields.country': {'$exists': True}},\n ]\n }\n }\n]\n\npipeline = [\n {\n '$match': {\n '$or': [\n {'updateDescription.updatedFields.base_name': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n {'updateDescription.updatedFields.industries': {'$exists': True}},\n {'updateDescription.updatedFields.country': {'$exists': True}},\n {'updateDescription.updatedFields.industries': {'$exists': True}},\n 
{'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n ]\n }\n }\n]\n", "text": "Hi @Paul_Chynoweth,I guess my response from earlier today didn’t post… In the pipeline, you’re using $or operator to match updates on any of the specified fields. However, when using $or operator, you need to wrap each condition in a separate dictionary object. Here’s how the pipeline should look:Regarding your issue with not being able to pick up changes to the industries and meta_data.status fields, I suspect it could be due to the way you’re specifying the nested fields in the pipeline.For the industries field, you’re using dot notation to specify the nested field. However, since industries is an array field, you need to use the array field syntax to match updates to this field. Here’s how you can modify the pipeline to match updates to the industries field:For the meta_data.status field, the dot notation should work fine as long as the nested field exists. If it still doesn’t work, you can try specifying the nested field using the array field syntax as well. Here’s how you can modify the pipeline to match updates to the meta_data.status field:I hope this helps. Let me know if you have any further questions or if you’re still having issues. Also, the below overall would correct your pipeline as it adds your missing fields, and generally may actually solve your issue entirely on its own.A lot of the other stuff isn’t really necessary, it kind of bloats what you’re doing.", "username": "Brock" }, { "code": "client.change_stream_db.change_stream_collection.update_one({\"_id\": \"bshkdhnicr\"}, {\"$set\": {\"meta_data.status\": \"Non active\"}})\npipeline = [\n {\n '$match': {\n '$or': [\n {'updateDescription.updatedFields.base_name': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n {'updateDescription.updatedFields.industries': {'$exists': True}},\n {'updateDescription.updatedFields.country': {'$exists': True}},\n {'updateDescription.updatedFields.industries.0': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status.0': {'$exists': True}},\n ]\n }\n }\n]\n.0meta_data.status", "text": "“meta_data.status”: “Non active”Hi @Aasawari ,Thanks for your response. I have a similar update command in pymongo:But under this pipeline (as suggested by @Brock )There is still no change stream event. When I clear the pipeline and have the watch listening for all changes and I run the same update command, I get an event flowing through the change stream.I have recorded my struggles here in this loom:\n(I’ll delete this loom in a few days for security reasons. There is no credentials or anything but conscious of having a video on a discussion forum)@Brock Thanks for providing advice. I couldn’t quite work out what the difference between your first code block and mine, but I then tried with your suggestion of adding in the .0 after meta_data.status and the updated events still didn’t flow through.", "username": "Paul_Chynoweth" }, { "code": "from pymongo import MongoClient\n\nclient = MongoClient(\"mongodb://localhost:27017/\")\ndb = client[\"mydatabase\"]\ncollection = db[\"mycollection\"]\n\npipeline = [{'$match': {'operationType': 'update'}}]\n\nwith collection.watch(pipeline) as stream:\n for change in stream:\n print(change)\nmycollection", "text": "Hi @Paul_Chynoweth,It looks like you are experiencing issues with the change stream in MongoDB. 
Based on the code you have provided, it seems like you are attempting to update a document and expecting a change stream event to be triggered, but the event is not being received.One thing to check is whether the change stream is actually set up correctly. You can try inserting a new document and see if the change stream event is triggered for that insert. If the insert event is being received but the update event is not, then it’s likely an issue with the update operation.Another thing to check is the filter for the change stream. In your pipeline, you are filtering for updates to specific fields, but if the update is not modifying those fields, then the event will not be triggered. You can try removing the filter to see if the update event is being received at all.Here is an example of setting up a change stream in Python using the pymongo driver:This code sets up a change stream on the mycollection collection and listens for update events. If an update event is received, it will be printed to the console.I hope this helps you in troubleshooting your issue. Let me know if you have any further questions!", "username": "Brock" }, { "code": "# Connect to Mongo\nclient = pymongo.MongoClient(mongo_connection_uri_dev)\nmongodb_db = client[mongo_database_name]\nmongodb_collection = mongodb_db[mongo_collection_name]\n\nchange_stream = client.change_stream_db.change_stream_collection.watch()\n\n# Iterate through the change stream and print each change\nfor change in change_stream:\n print(change)\nclient.change_stream_db.change_stream_collection.insert_one(\n {\n \"_id\": \"pwdhlhfjok\",\n \"base_name\": \"zlskzvugui\",\n \"country\": \"vebeonoigb\",\n \"meta_data\": {\n \"cleaning_required\": False,\n \"status\": \"sdm_triggered\",\n \"status_domain_keyword_search_scraper\": \"finished\",\n \"status_domain_scraper\": \"finished\",\n \"status_domain_search\": \"finished\",\n },\n \"industries\": [\"industry2\"],\n }\n)\n{'_id': {'_data': '826437B02F000000062B022C0100296E5A10048332797CF83843B4B13C30209360370E463C5F6964003C707764686C68666A6F6B000004'}, 'operationType': 'insert', 'clusterTime': Timestamp(1681371183, 6), 'wallTime': datetime.datetime(2023, 4, 13, 7, 33, 3, 69000), 'fullDocument': {'_id': 'pwdhlhfjok', 'base_name': 'zlskzvugui', 'country': 'vebeonoigb', 'meta_data': {'cleaning_required': False, 'status': 'sdm_triggered', 'status_domain_keyword_search_scraper': 'finished', 'status_domain_scraper': 'finished', 'status_domain_search': 'finished'}, 'industries': ['industry2']}, 'ns': {'db': 'change_stream_db', 'coll': 'change_stream_collection'}, 'documentKey': {'_id': 'pwdhlhfjok'}}\nclient.change_stream_db.change_stream_collection.watch() meta_data.status# Code to update the above document...\nclient.change_stream_db.change_stream_collection.update_one({\"_id\": \"pwdhlhfjok\"}, {\"$set\": {\"meta_data.status\": \"On Hold\"}})\n\n# Change Stream event fired...\n{\n \"_id\": {\n \"_data\": \"826437B1860000002E2B022C0100296E5A10048332797CF83843B4B13C30209360370E463C5F6964003C707764686C68666A6F6B000004\"\n },\n \"operationType\": \"update\",\n \"clusterTime\": Timestamp(1681371526, 46),\n \"wallTime\": datetime.datetime(2023, 4, 13, 7, 38, 46, 786000),\n \"ns\": {\"db\": \"change_stream_db\", \"coll\": \"change_stream_collection\"},\n \"documentKey\": {\"_id\": \"pwdhlhfjok\"},\n \"updateDescription\": {\n \"updatedFields\": {\"meta_data.status\": \"On Hold\"},\n \"removedFields\": [],\n \"truncatedArrays\": [],\n 
},\n}\nclient.change_stream_db.change_stream_collection.watch()meta_data.status# Connect to Mongo\nclient = pymongo.MongoClient(mongo_connection_uri_dev)\nmongodb_db = client[mongo_database_name]\nmongodb_collection = mongodb_db[mongo_collection_name]\n\n# Set up the change stream\npipeline = [\n {\n '$match': {\n '$or': [\n {'updateDescription.updatedFields.base_name': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data.status': {'$exists': True}},\n {'updateDescription.updatedFields.meta_data': {'$exists': True}},\n {'updateDescription.updatedFields.industries': {'$exists': True}},\n {'updateDescription.updatedFields.country': {'$exists': True}},\n {'updateDescription.updatedFields.industries.0': {'$exists': True}}, # I have tried removing this too\n {'updateDescription.updatedFields.meta_data.status.0': {'$exists': True}}, # i have tried removing this too\n ]\n }\n }\n]\nchange_stream = client.change_stream_db.change_stream_collection.watch(pipeline)\n\n# Iterate through the change stream and print each change\nfor change in change_stream:\n print(change)\nmeta_data.statusclient.change_stream_db.change_stream_collection.update_one({\"_id\": \"pwdhlhfjok\"}, {\"$set\": {\"meta_data.status\": \"Active\"}})\nbase_name# update to base name\nclient.change_stream_db.change_stream_collection.update_one({\"_id\": \"pwdhlhfjok\"}, {\"$set\": {\"base_name\": \"Base Name 1\"}})\n\n# Event printed in Change Stream...\n{\n \"_id\": {\n \"_data\": \"826437B2FF000000112B022C0100296E5A10048332797CF83843B4B13C30209360370E463C5F6964003C707764686C68666A6F6B000004\"\n },\n \"operationType\": \"update\",\n \"clusterTime\": Timestamp(1681371903, 17),\n \"wallTime\": datetime.datetime(2023, 4, 13, 7, 45, 3, 265000),\n \"ns\": {\"db\": \"change_stream_db\", \"coll\": \"change_stream_collection\"},\n \"documentKey\": {\"_id\": \"pwdhlhfjok\"},\n \"updateDescription\": {\n \"updatedFields\": {\"base_name\": \"Base Name 1\"},\n \"removedFields\": [],\n \"truncatedArrays\": [],\n },\n}\n\nbase_namecountrychange_stream = client.change_stream_db.change_stream_collection.watch()change_stream = client.change_stream_db.change_stream_collection.watch()change_stream = client.change_stream_db.change_stream_collection.watch()meta_data.statusindustrieschange_stream = client.change_stream_db.change_stream_collection.watch(pipeline)change_stream = client.change_stream_db.change_stream_collection.watch(pipeline)meta_data.statusindustries", "text": "Thanks @Brock ,So basically starting from scratch I have this code here:Now when I insert this document…I receive the insert event in my stream This is shown below…So that’s all good. Now keeping my change stream to pick up all events (client.change_stream_db.change_stream_collection.watch() ) When I update the field meta_data.status I receive the below event from the change stream…So I can confirm that when client.change_stream_db.change_stream_collection.watch() is set up, I receive change stream events when documents are inserted and when nested fields (i.e. 
meta_data.status) are updated .But things change when I add in my pipeline filter object…Ok so now I try to update the same document and the same field meta_data.status with the below code…And I receive no change event.So, I then test to see if I change a simple field like base_name which is a simple string, and I receive a change stream event…So when I have the pipeline filter it works for simple fields like base_name (and I’ve tested this on country which is also a simple string field) and the pipeline filter works . BUT when I update complex fields (meta_data.status or industries - which is an array) the filter doesn’t pick it up . There must be something within the pipeline filter for those nested fields. What’s strange is when i don’t have the pipeline filters, and just have the change_stream = client.change_stream_db.change_stream_collection.watch() It has no problem picking up a change event when an update to the meta_data.status field happens.TL;DRLet me know if that’s not clear! Happy to provide more updates ", "username": "Paul_Chynoweth" }, { "code": "", "text": "Hey @Aasawari I might tag you directly here again too. Any advice or help would be amazing!", "username": "Paul_Chynoweth" }, { "code": "withtryloggingimport pymongo\nimport logging\n\n# Set up logging\nlogging.basicConfig(filename='change_stream.log', level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s')\n\n# Connect to MongoDB\nclient = pymongo.MongoClient(\"mongodb://localhost:27017/\")\n\n# Select the database and collection\ndb = client[\"mydatabase\"]\ncollection = db[\"mycollection\"]\n\n# Set up the change stream\npipeline = [\n {\n '$match': {\n 'updateDescription.updatedFields': {\n '$or': [\n {'base_name': {'$exists': True}},\n {'meta_data.status': {'$exists': True}},\n {'meta_data': {'$exists': True}},\n {'industries': {'$exists': True}},\n {'country': {'$exists': True}},\n {'industries.0': {'$exists': True}},\n {'meta_data.status.0': {'$exists': True}},\n ]\n }\n }\n }\n]\n\ntry:\n with collection.watch(pipeline) as stream:\n for change in stream:\n logging.info(f\"Received change event: {change}\")\n # Do something with the change event\nexcept pymongo.errors.PyMongoError as e:\n logging.error(f\"Encountered error: {e}\")\nchange_stream.logpymongo.errors.PyMongoError", "text": "Alright, let’s kick this up a notch… You can wrap the with statement in a try block and catch any exceptions that are raised. You can also use the logging module to log any errors or events that occur.Here’s some code that includes error handling and logging, hopefully this will pull whatever it is that’s going on or at least get visibility of the issue:I modified the code to look for errors. The logs will be written to a file named change_stream.log in the current directory. You can change the filename and log location to wherever you want to.NOTE: You should also handle any errors that occur when connecting to the MongoDB server or selecting the database and collection. These can catch these by using the pymongo.errors.PyMongoError exception.By doing this, we can see if there’s any errors. Which there has to be something going on that we aren’t seeing…To ensure it works correctly, I suggest the following steps:Verify that the pipeline filter object is constructed correctly and all the required fields are included. 
One way to do this is to use the MongoDB Compass GUI to create and test the pipeline filter object.Ensure that the MongoDB server version supports Change Streams and that the necessary permissions are granted for the user account used to connect to the server.Check if there are any errors or exceptions in the Python console when running the code.Try to use a different filter criteria or change the data in the database to see if the change stream events are being captured correctly.If the problem persists, try to recreate the code in a clean environment to eliminate any potential issues with dependencies or configurations.There’s definitely something not being seen that’s going on here, because using the scripts to make similar metadata on a local MDB it’s working just fine… Please verify steps 1 thru 5 to make sure we aren’t missing something.", "username": "Brock" }, { "code": "2. Ensure that the MongoDB server version supports Change Streams and that the necessary permissions are granted for the user account used to connect to the server.3. Check if there are any errors or exceptions in the Python console when running the code.meta_data.statusindustriesbase_namecountry4. Try to use a different filter criteria or change the data in the database to see if the change stream events are being captured correctly.5. If the problem persists, try to recreate the code in a clean environment to eliminate any potential issues with dependencies or configurations.{\n $or:[{base_name:\"ewuvebqyoh\"}, {meta_data.status:\"active\"}]\n}\nI received a syntax related error...\nmeta_data.statusmeta_data.status# Connect to Mongo\nclient = pymongo.MongoClient(mongo_connection_uri_dev)\nmongodb_db = client[mongo_database_name]\nmongodb_collection = mongodb_db[mongo_collection_name]\n\npipeline = [\n {\n \"$match\": {\n \"$or\": [\n {\"operationType\": \"insert\"},\n {\"updateDescription.updatedFields.meta_data\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.industries.0\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.meta_data.status\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.'meta_data.status'\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.base_name\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.'meta_data.status_domain_scraper'\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.industries\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.country\": {\"$exists\": True}}\n ]\n }\n }\n]\n\nlogging.basicConfig(filename='change_stream.log', level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s')\n\n\ntry:\n with mongodb_collection.watch(pipeline) as stream:\n for change in stream:\n print(change)\n logging.info(f\"Received change event: {change}\")\n # Do something with the change event\nexcept pymongo.errors.PyMongoError as e:\n logging.error(f\"Encountered error: {e}\")\nindustries", "text": "Hi @Brock,Thanks for the advice. I think i’ve narrowed it down to a potential syntax problem with Pymongo/Pipeline object - but i’m still not sure. Your point 1 seems to be where the problem is.Essentially from your numbered dot points:But your suggestion to use the Compass GUI (I used the Atlas gui instead) revealed something to me (point 1 of your message). 
Firstly when I tried to run in the Aggregation tab…\nimage1290×727 82.5 KB\nSo, wrapping the meta_data.status in quotes the filtering now works…\n\nimage1429×752 114 KB\nBut why i’m still unsure that this is the key problem boils down to two things:Note: I’ve also tried simplifying the pipeline object to not include so many fields (i.e. just meta_data.status or just meta_data.status & country for example)\nCode:\nimage1431×789 90.8 KB\nIf it was a syntax error, then I would expect to see similar issues in the Atlas gui when I look for changes to the industries field, since in my pymongo script i’m not seeing any change stream events when that field is updated.Do you think this syntax error could all be related somehow?Thanks again for the help. This is a serious head scratcher for me.", "username": "Paul_Chynoweth" }, { "code": " const pipeline = [\n {\n $match: {\n operationType: \"update\",\n \"updateDescription.updatedFields.meta_data.status\": { $exists: true }\n }\n }\n];\nconst pipeline = [\n {\n $match: {\n operationType: \"update\",\n \"updateDescription.updatedFields.industries\": { $exists: true }\n }\n }\n];\n", "text": "This is actually quite bizarre, my on premise deployment of 6.0.5 it’s working without the modification you had to make, but yes, in Atlas GUI I’m having to make the same change you are.But also, something that’s odd is I’m not getting errors either with dummy data I made for this either, this is either a bug, or syntax problem we’re not seeing. But something is executing, it’s going through but not generating what we’re wanting it to.Can I get some kind of sample data to compare to? To my knowledge the Atlas shouldn’t be different from on premise, because it’s just a gateway GUI to MongoDB behind the dashboard. There shouldn’t be any differences or any reasons to have to modify from one platform to another.But yeah, the pipeline is executing, but it’s not generating anything and I didn’t realize that because when I executed it, there were no errors pulled up so just called it good. But yes, I’m realizing it’s not showing things either and I’m wondering if it’s config setup at this point, but then Atlas shouldn’t be having config issues or any other services in the way of the data populating.Overall, there absolutely should be a visible change event, and you’re right, it’s not populating one.@Paul_Chynoweth Let’s take a different approach on this, I think we should try to isolate some things to process of elimination and see if it’s a cause for this behavior.Because I think we have a conflict with meta_data.status field, as I’m realizing we have both “updateDescription.updatedFields.meta_data.status” and “updateDescription.updatedFields.‘meta_data.status’” that’s being considered in the filter criteria, and I’m kind of wondering if this is why we are seeing this issue now.Right now I’m trying to figure out a good way to just eliminate one or the other of these without breaking the pipeline, but I’m really thinking this might be the problem. So I’m thinking eliminate one or the other and see if that’s what we need to do now.Because I strongly believe now it’s just not sorting/filtering this right for the output, so it’s not making the change possibly, this needs more testing but I do think we’re close. As far as why on prem doesn’t throw errors or Atlas throwing errors for this, which tells me it is in fact doing something, just isn’t doing what we want it to do. 
But syntactically is correct otherwise we should get errors.Filtering for just the meta data.For some reason MongoDB connector in VS code removes the quotes, the deployment via GitHub to atlas isn’t throwing back errors, I’m waiting for the change event though.I can check for Industries too, as a control variable to see if there’s a difference by doing:What I’d like to do, is see which field individually isn’t populating what we’re wanting it to populate.", "username": "Brock" }, { "code": "{\n \"_id\": \"pnsfnrkoth\",\n \"base_name\": \"mjzzbvxqgk\",\n \"country\": \"ktybyozypo\",\n \"meta_data\": {\n \"cleaning_required\": true,\n \"status\": \"\",\n \"status_domain_keyword_search_scraper\": \"in_process\",\n \"status_domain_scraper\": \"finished\",\n \"status_domain_search\": \"finished\"\n },\n \"industries\": [\n \"industry3\",\n \"industry1\"\n ]\n}\nbase_namemeta_data.statusindustriesimport os\n\nimport pymongo\nfrom bson.json_util import dumps\nimport logging\n\n\n# === Connect to MongoDB\nmongo_connection_uri_dev = os.environ[\"MONGO_CONNECTION_URI_DEV\"]\nmongo_database_name = os.environ[\"MONGO_DATABASE\"]\nmongo_collection_name = os.environ[\"MONGO_COLLECTION_NAME\"]\n\n\n\n# Connect to Mongo\nclient = pymongo.MongoClient(mongo_connection_uri_dev)\nmongodb_db = client[mongo_database_name]\nmongodb_collection = mongodb_db[mongo_collection_name]\n\n# Set up the change stream\nlogging.basicConfig(filename='change_stream.log', level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s')\n\n# pipeline object\npipeline = [\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"updateDescription.updatedFields.base_name\": { \"$exists\": True }\n }\n }\n];\n\n\ntry:\n with mongodb_collection.watch(pipeline) as stream:\n for change in stream:\n print(change)\n logging.info(f\"Received change event: {change}\")\nexcept pymongo.errors.PyMongoError as e:\n logging.error(f\"Encountered error: {e}\")\nconnection_uri = mongo_connection_uri_dev\nclient = pymongo.MongoClient(connection_uri)\nmongodb_db = client[\"change_stream_db\"]\nmongodb_collection = mongodb_db[\"change_stream_collection\"]\nmongodb_collection.update_one({\"_id\": \"pnsfnrkoth\"}, {\"$set\": {\"base_name\": \"demo\"}})\n{'_id': {'_data': '82643EA4C0000000192B022C0100296E5A10048332797CF83843B4B13C30209360370E463C5F6964003C706E73666E726B6F7468000004'\n }, 'operationType': 'update', 'clusterTime': Timestamp(1681827008,\n 25), 'wallTime': datetime.datetime(2023,\n 4,\n 18,\n 14,\n 10,\n 8,\n 122000), 'ns': {'db': 'change_stream_db', 'coll': 'change_stream_collection'\n }, 'documentKey': {'_id': 'pnsfnrkoth'\n }, 'updateDescription': {'updatedFields': {'base_name': 'demo'\n }, 'removedFields': [], 'truncatedArrays': []\n }\n}\n# pipeline object\npipeline = [\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"updateDescription.updatedFields.meta_data.status\": { \"$exists\": True }\n }\n }\n]\nmongodb_collection.update_one({\"_id\": \"pnsfnrkoth\"}, {\"$set\": {\"meta_data.status\": \"demo\"}})\n# pipeline object\npipeline = [\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"updateDescription.updatedFields.'meta_data.status'\": { \"$exists\": True }\n }\n }\n]\npipeline = [\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"updateDescription.updatedFields.industries\": { \"$exists\": True }\n }\n }\n]\nmongodb_collection.update_one({\"_id\": \"pnsfnrkoth\"}, {\"$set\": {\"industries.0\": \"demo\"}})\n\"$match\": {\n \"operationType\": \"update\",\n }\n{'_id': {'_data': 
'82643EA7FF0000002F2B022C0100296E5A10048332797CF83843B4B13C30209360370E463C5F6964003C706E73666E726B6F7468000004'}, 'operationType': 'update', 'clusterTime': Timestamp(1681827839, 47), 'wallTime': datetime.datetime(2023, 4, 18, 14, 23, 59, 552000), 'ns': {'db': 'change_stream_db', 'coll': 'change_stream_collection'}, 'documentKey': {'_id': 'pnsfnrkoth'}, 'updateDescription': {'updatedFields': {'industries.0': 'demo industry'}, 'removedFields': [], 'truncatedArrays': []}}\n# listener\npipeline = [\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"updateDescription.updatedFields.industries.0\": { \"$exists\": True }\n }\n }\n]\n\n# update\n mongodb_collection.update_one({\"_id\": \"pnsfnrkoth\"}, {\"$set\": {\"industries.0\": \"demo industry a\"}})\nupdateDescription.updatedFields.\n", "text": "Thanks @Brock , it does look more and more likely that it is some sort of syntax error or some sort of issue in the filtering process.So with this document here…\n\nScreen Shot 2023-04-18 at 3.59.11 pm2512×1082 216 KB\nI’m going to make 3 changes to the document using your pipeline code you sent through. I’ll make an update to base_name, meta_data.status & industries\nimage2488×1044 196 KB\nAll good there.Same code just different pipeline object…\nimage2508×1076 200 KB\nNo change stream event recorded.I tried again with this listening code…(updating the meta_data.status to demo2)But still no luck.\n\nimage2482×966 160 KB\nNo change stream event.So it doesn’t look like those pipeline changes are making any different unfortunately.I can confirm that when I just haveIt picks up all of the updates (including updates to meta_data.status & to industries). I sent an update to industries when it was the above match/pipeline and received this change stream event…So I tried again with this pipeline…And there was no change stream event printed to the console.So from my perspective it’s something to do with…and filtering on that object when there is nested objects (i.e. meta_data.status or industries or industries.0)I hope that gives you more data and more context, and allows you to rule out that theory!", "username": "Paul_Chynoweth" }, { "code": "pipeline = [\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"$or\": [\n {\"updateDescription.updatedFields.base_name\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.meta_data.status\": {\"$exists\": True}},\n {\"updateDescription.updatedFields.industries\": {\"$exists\": True}}\n ]\n }\n }\n]\n\nconst MongoClient = require('mongodb').MongoClient;\nconst uri = 'mongodb+srv://<username>:<password>@<cluster-address>/test?retryWrites=true&w=majority';\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n\nclient.connect(err => {\n const collection = client.db(\"test\").collection(\"testColl\");\n const changeStream = collection.watch();\n\n changeStream.on(\"change\", function(change) {\n console.log(\"A document has been \" + change.operationType + \"d.\");\n console.log(\"Updated fields: \", change.updateDescription.updatedFields);\n });\n});\n", "text": "THANK YOU SO MUCH!!! I’M AN IDIOT!!!Try this:I think we needed the $or,I’m a dufus, sorry about that! I forgot to throw in $or into the mix, the importance of $or is that it’s an operator to match updates to any of the fields. I didn’t catch until you mentioned that.Also, someone worth stalking for Aggregations and Pipelines, is a guy named Adam Harrison. The dude is extremely knowledgeable on these things. 
I use a lot of his stuff to figure out aggregation and pipeline issues.That said, also going by MongoDB Aggregation Pipeline | MongoDB and MongoDB: The Definitive Guide: Powerful and Scalable Data Storage ISBN13: 978-1491954461 which grossly needs to be updated for the new upcoming 7.0 since it’s presently on 4.2…I think the following may also be a good way to go about troubleshooting the change streams if this $or doesn’t fix this. Because otherwise, invoking what we have shouldn’t be having issues from the documentation I’ve been reading, unless there’s something benign I’m glossing over. I really don’t see how there could be any issues like are being described.Because the change stream is executing, but not doing anything which is the weird thing. If this persists even after the following example it’s definitely support ticket worthy.If it is something benign, then it definitely needs to go into changes to the docs to reflect it because otherwise this is crazy.But what I\"m trying to do with the above, is make something give an output of some kind, but I’m really edging on the fact it may just be that I’m an idiot and completely forgot the $or.", "username": "Brock" }, { "code": "pipeline = [\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"updateDescription.updatedFields.base_name\": { \"$exists\": True }\n }\n }\n];\n\"updateDescription.updatedFields.base_name\"\"updateDescription.updatedFields.meta_data.status\"", "text": "Hey @Brock,To be honest I don’t think you’re an idiot at all, thanks for helping with this.But i’m still unsure of one thing. So bringing in the $or operator means that i pick up all updates OR updates to that specific field. What i’m ultimately after is updates to specific field. I’d like my pipeline to pick up only updates to the meta_data.status field - and not all updates. If I add in the OR operator, it will pick up all updates regardless of which field is being updated.Also, if it was just the $or, then we would see the same probelm we see for meta_data.status as we would for the simple base_name. But this pipeline below worked perfectly well in picking up updates to base_name (and it didn’t have the $or operator):So unless i’m missing something, I don’t think having the $or solves the issue as to why it works for a simple field like base_name (\"updateDescription.updatedFields.base_name\") but doesn’t work for a nested field like meta_data.status (\"updateDescription.updatedFields.meta_data.status\").Let me know if i’m missing something!Thanks again,\nPaul", "username": "Paul_Chynoweth" }, { "code": "", "text": "@Paul_Chynoweth that logic is sound actually, I’m honestly starting to think there might be an output being generated, but it’s not being put into view.I honestly think this is worth opening a support ticket for the ability to see what’s going on under the hood. In Atlas GUI that in fact will not show anything, but on my local MongoDB I’m getting changes with it coming back to me.I’ve been screensharing with about a dozen DBAs who’ve been using MDB for the last 9 to 11 years as a favor for a corporation I’m doing a quick consult gig for to change some things. and they aren’t finding anything wrong with your query.In fact they are using almost the same pipelines for something else that they have going on (Lead said I can disclose that) and it’s very similar to yours. 
The only difference is everything with this company is on prem with exception to R&D labs they have for a few products to see how Atlas fairs in costs and performance vs their on prem.Being a very large corp that handles over 300,000 concurrent players daily, globally and their last outage for on-prem was almost a year ago, so I’d imagine they have a clue of what they are doing too when it comes to these things.This is worth opening a support ticket on, and requesting why the output isn’t generating or why changesctream isn’t happening. The TSE will be able to use internal tools to look at what’s actually happening under the hood, and why it’s not showing up in Atlas.Hey Paul, could you tell me what version of MongoDB is running on your Atlas? You state above 6.0.5.In a M0 Cluster so it doesn’t cost you anything, could you throw some dummy data down on a 5.X or 4.4 and tell me if you get an output?", "username": "Brock" }, { "code": "", "text": "That’s good thinking @Brock ,I tried to create a new Cluster on version 5.\n\nimage1745×888 85.1 KB\nFrustratingly after it’s created it defaults back to 6…\n\nimage1716×542 41.4 KB\nIs that expected behaviour?And yeah definitely seems like a support ticket. Is that something I create on my end?", "username": "Paul_Chynoweth" }, { "code": "", "text": "Hi @Paul_Chynoweth yes, you would make it yourself. And that’s actually interesting it’s not letting you specify a version other than 6.That’s very interesting actually.", "username": "Brock" }, { "code": "{ $match: {\n 'updateDescription.updatedFields.subscription.enabled': { $exists: true }\n}}\n", "text": "Hey @Paul_Chynoweth did you ever get to the bottom this? I’m experiencing the same issue (v7.0.1). An unfiltered pipeline triggers a the change stream fine, but trying to match a nested field doesn’t trigger.", "username": "christohill" } ]
'match' not operating as expected within watch(pipeline) functionality on nested fields
2023-04-03T09:05:49.266Z
‘match’ not operating as expected within watch(pipeline) functionality on nested fields
2,017
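The event Paul printed above shows why the nested-field filter misses: a dotted $set arrives in the change event as a single literal key, "meta_data.status", inside updateDescription.updatedFields, and dot notation in a $match looks for a nested path rather than that literal key. Below is a minimal pymongo sketch of one possible workaround using $getField (MongoDB 5.0+), which can read a field whose name contains a dot. It is an illustration, not the thread's confirmed resolution; the connection string is a placeholder and the database/collection names are borrowed from the thread.

import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017/")  # placeholder URI
coll = client["change_stream_db"]["change_stream_collection"]

pipeline = [
    {
        "$match": {
            "operationType": "update",
            "$expr": {
                "$ne": [
                    # $getField can address the literal key "meta_data.status"
                    {"$type": {"$getField": {
                        "field": "meta_data.status",
                        "input": "$updateDescription.updatedFields",
                    }}},
                    "missing",  # $type yields "missing" when the key is absent
                ]
            },
        }
    }
]

with coll.watch(pipeline) as stream:
    for change in stream:
        print(change)

The same $expr/$getField pattern should also cover keys such as "industries.0" that arrive with dots in their names, as seen in the update events quoted earlier in the thread.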
null
[ "golang", "transactions" ]
[ { "code": "", "text": "Bulk operations can be run “unordered” which may result in parallel op execution (doc link).mongo.SessionContext is not safe for use in concurrent code, however (doc link).Does this mean I should not perform unordered bulk writes within transactions? Or does the mongo driver handle this and only goroutines generated by the user are unsafe?Thanks!", "username": "Nates" }, { "code": "", "text": "Does this mean I should not perform unordered bulk writes within transactions?No. mongodb officially supports bulk operations within transactions. It only means you shouldn’t use concurrent ops on the same session from your app.parallel bulk operations is a server side thing and has nothing to do with that rule.", "username": "Kobe_W" }, { "code": "", "text": "Gotcha, I see where this is can be inferred now under “Error Handling inside Transactions” - they mention that the bulk write fails fast even when running unordered, which suggests its a supported operation like you say:Inside a transaction, the first error in a bulk write causes the entire bulk write to fail and aborts the transaction, even if the bulk write is unordered.Thanks for the link.", "username": "Nates" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SessionContext is not safe for use in goroutines - does this preclude unordered bulk writes?
2023-10-11T18:12:20.309Z
SessionContext is not safe for use in goroutines - does this preclude unordered bulk writes?
221
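To make the answer above concrete: a bulk write, ordered or unordered, can run inside a transaction as long as the session is only used from one goroutine or thread at a time. A small sketch follows. It uses pymongo rather than the Go driver the question is about, purely for illustration; the connection string, database, and collection names are placeholders, and a replica set is assumed because transactions require one.

from pymongo import MongoClient, InsertOne, UpdateOne

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder
coll = client["demo"]["items"]  # placeholder names

with client.start_session() as session:
    with session.start_transaction():
        coll.bulk_write(
            [
                InsertOne({"_id": 1, "qty": 5}),
                UpdateOne({"_id": 2}, {"$inc": {"qty": 1}}),
            ],
            ordered=False,    # unordered is fine; the first error still aborts the transaction
            session=session,  # the session itself must not be shared across concurrent workers
        )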
null
[ "queries", "python", "atlas-functions" ]
[ { "code": "exports = async function(accountname, password) {\n // assume password already hashed\n const accountsCollection = context.services\n .get(\"OutfitLB\")\n .db(\"OutfitLB\")\n .collection(\"Accounts_mongo\");\n \n const account = await accountsCollection.findOne({ account: accountname });\n\n if (account && account.password === password) {\n return account.data;\n } else {\n throw new Error(\"Invalid username or password.\");\n }\n};\nrequestsaccountsCollection.find().toArray();import requests\n\nGET_URL = \"https://us-east-2.aws.data.mongodb-api.com/app/data-zzkgk/endpoint/getData\"\n\n# Replace with the actual username and password\naccountname = \"test\"\npassword = \"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08\"\n\nresponse = requests.get(\n GET_URL, params={\"accountname\": accountname, \"password\": password}\n)\n\nif response.status_code == 200:\n account_data = response.json()\n print(f\"Account Data for {accountname}: {account_data}\")\nelse:\n print(\"Wrong account details\")\n print(f\"Function call failed: {response.status_code}, {response.text}\")\n", "text": "I have a simple function in Realms that returns the data array from an entry via an HTTPS Endpoint:\nentry = {\n“account”: username,\n“password”: hashed_password,\n“data”: ,\n}However, this function only works properly in the Testing Console. Initially, it worked as well on my Python program using requests as well as online API testing tools. However, it randomly stopped working. Instead of returning the account.data when the username and password are correct, it will always throw the error that the username or password is incorrect. What I noticed is that the function only worked when I set the Testing console to System user. What is odd though is that I can return accountsCollection.find().toArray(); and I will be able to view all the data through my Python program. My Python program can be found below:How do I ensure that clients are able to properly invoke this function? I put a default role of read/write/search, however it did not fix it.", "username": "A_L" }, { "code": "exports = async function({ query, headers, body}, response) {\nconst accountname = query.accountname\nconst password = query.password\n", "text": "Hi A_L,It looks like you may not be accessing the function parameters correctly. You should be passing in a Request and Response object into the endpoint function call, where a Request object represents the HTTP request that called the endpoint.You would want to pass in accountname and password in the query field and access it within the function like so:Let me know if that works for you!", "username": "Kaylee_Won" } ]
Testing console returns different output than elsewhere
2023-09-30T00:17:58.269Z
Testing console returns different output than elsewhere
340
null
[ "atlas-cluster", "containers" ]
[ { "code": "MongoServerSelectionError: 00789A21077F0000:error:0A000438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1605:SSL alert number 80\n\nat Timeout._onTimeout (/opt/mira/node_modules/mongodb/lib/sdam/topology.js:278:38)\nat listOnTimeout (node:internal/timers:568:17)\nat process.processTimers (node:internal/timers:511:7) {\nreason: TopologyDescription {\ntype: 'ReplicaSetNoPrimary',\nservers: Map(3) {\n'ac-jl4g2pn-shard-00-02.mw2ni5a.mongodb.net:27017' => [ServerDescription],\n'ac-jl4g2pn-shard-00-00.mw2ni5a.mongodb.net:27017' => [ServerDescription],\n'ac-jl4g2pn-shard-00-01.mw2ni5a.mongodb.net:27017' => [ServerDescription]\n},\nstale: false,\ncompatible: true,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\nsetName: 'atlas-zogtj8-shard-0',\nmaxElectionId: null,\nmaxSetVersion: null,\ncommonWireVersion: 0,\nlogicalSessionTimeoutMinutes: null\n},\ncode: undefined,\n[Symbol(errorLabels)]: Set(0) {}\n}\n---------------------------------\nAn internal error unrelated to the peer or the correctness of the protocol (such as a memory allocation failure) makes it impossible to continue. This message is always fatal.\n", "text": "Hello there,I'm facing a TLS handshake issue on my AWS Docker container node that I'm not able to reproduce locally:From Stack Overflow, I got the following explanation about TLS "alert number 80": it means "internal_error" (see RFC 5246 Section 7.2). It is sent by the TLS server to the TLS client, meaning the text quoted after the dashes in the output above.Has anyone already run into this issue and found a solution for it?Thank you for your help!", "username": "Mathilde_Ffrench" }, { "code": "", "text": "I found the solution to my problem.Indeed, on my current AWS setup (a quick & dirty public subnet + public IP address), each time I update the ECS service task the ENI changes and, with it, the public IP address.I changed my Atlas firewall setup accordingly just before the task init, and now my services are happily connecting to my Atlas DB.", "username": "Mathilde_Ffrench" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
TLS handshake (alert number 80) issue between AWS ECS instance and Atlas
2023-10-11T18:08:21.403Z
TLS handshake (alert number 80) issue between AWS ECS instance and Atlas
307
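Since the fix above boils down to refreshing the Atlas IP access list whenever the ECS task's public IP changes, here is a rough sketch of automating that at task start-up. The Atlas Administration API path, payload shape, and digest authentication used below are assumptions to verify against the current Atlas API reference; the keys and project ID are placeholders.

import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY = "your-atlas-public-key"    # placeholder
PRIVATE_KEY = "your-atlas-private-key"  # placeholder
PROJECT_ID = "your-project-id"          # placeholder

# The task's public IP changes with the ENI on every deployment.
my_ip = requests.get("https://checkip.amazonaws.com", timeout=5).text.strip()

# Add the IP to the project access list (the endpoint path is an assumption).
resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{PROJECT_ID}/accessList",
    json=[{"ipAddress": my_ip, "comment": "ECS task public IP"}],
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    timeout=10,
)
resp.raise_for_status()
print(f"Added {my_ip} to the Atlas IP access list")

A private endpoint or VPC peering avoids the moving-IP problem altogether and is usually the sturdier option.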
https://www.mongodb.com/…7_2_1024x347.png
[]
[ { "code": "", "text": "Hey,I wonder the difference between these Max Disk IOPS and Disk IOPS. Also which one should I look when I’m using standard IOPS.Since the granularity is 1 minute here (no option to use second), should I assume the Disk IOPS is the average of that minute and Max Disk IOPS is the maximum number of operations in each second in the minute time interval?I’m on M30 (general) with 3000 baseline IOPS in AWS. Disk IOPS graph shows number under 500. The other one shows data nearly on 6000. I wonder if I will have a burst issue soon?image3386×1148 421 KB", "username": "Mete" }, { "code": "", "text": "Hey @Mete,Welcome to the MongoDB Community!I wonder the difference between these Max Disk IOPS and Disk IOPS.Max Disk IOPS is the highest disk IOPS value in that given time period, while the Disk IOPS value is the total input operations per second.Also which one should I look when I’m using standard IOPS.I’m on M30 (general) with 3000 baseline IOPS in AWS. Disk IOPS graph shows number under 500. The other one shows data nearly on 6000. I wonder if I will have a burst issue soon?In the graph above, the Max Disk IOPS is displayed at a 1-hour zoom level, which implies that the minimum granularity is 1 minute. Within that 1-minute interval, there can be many checkpoints that happen (for a very small amount of time), which drives the max disk IOPS value up. Therefore, it is advisable to look at Disk IOPS at a smaller zoom or granularity level for better insights.Hope this answers your question!Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hey @Kushagra_Kesav ,Since there is no second granularity and the metric itself is an interpretation of something in second, I need clarification.So when we look at the minute granularity, each data point shows me a number.\nFor IOPS, is this number showing the average of input output operations in that minute? Like Sum(t1, t2 …, t60) / 60Also for Max Disk IOPS, looking at 1 minute granularity, is this number shows me MAX(t1, t2, …, t60)\nwhich is the maximum IOPS value in one of the seconds inside that minute?", "username": "Mete" } ]
Max Disk IOPS vs Disk IOPS
2023-10-08T11:41:22.916Z
Max Disk IOPS vs Disk IOPS
286
null
[ "aggregation" ]
[ { "code": "{\n $lookup: {\n from: \"staffTimesheet\",\n localField: \"_id\",\n foreignField: \"missionId\",\n as: \"matchingTimesheets\"\n }\n },\n {\n $match: {\n matchingTimesheets: {\n $elemMatch: {\n \"schedule.startingDay\": { $lte: date },\n \"schedule.endingDay\": { $gte: date }\n }\n }\n }\n }\nmatchingTimesheets: {\n $elemMatch: {\n \"schedule.overtime\": true\n }\n }\nconst date = new Date().toJSON();", "text": "Hi,Here are two stages of my aggregate.\nThe “$lookup” works well and puts “matchingTimesheets” (an array of documents from my “Timesheet” collection) inside my current document.\nBut the $match never returns anything.If I try to do:It works well and I get the correct documents, with at least one document inside “matchingTimesheets” that has “schedule.overtime” equal to true.\nSo I think I just don’t understand how to work with date fields…Edit:\nconst date = new Date().toJSON(); gives me “2023-10-11T14:06:02.673Z”\nIn my database, startingDay and endingDay are formatted like “2023-10-08T10:45:28.260+00:00”, maybe it’s part of the problem.", "username": "Jaime_Dos_Santos" }, { "code": "const date = new Date().toJSON();const date = new Date() ;\n", "text": "maybe it’s part of the problem.It is most likely the problem. For values to be comparable, the fields should have the same type. In your query, when you doconst date = new Date().toJSON();you assign a string value to the date variable, which is most likely not comparable with the native dates in your database. Try with simply", "username": "steevej" } ]
Filtering by date field on aggregate
2023-10-11T14:27:12.884Z
Filtering by date field on aggregate
163
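A short pymongo rendering of the fix steevej describes, for illustration only: the value compared against schedule.startingDay and schedule.endingDay must be a native datetime (a BSON date), not the string produced by toJSON(). The connection string and database name are placeholders; the stage layout mirrors the aggregate from the thread.

from datetime import datetime, timezone
import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017/")  # placeholder
missions = client["mydb"]["missions"]                       # placeholder names

date = datetime.now(timezone.utc)  # a real datetime, not an ISO string

pipeline = [
    {"$lookup": {
        "from": "staffTimesheet",
        "localField": "_id",
        "foreignField": "missionId",
        "as": "matchingTimesheets",
    }},
    {"$match": {
        "matchingTimesheets": {
            "$elemMatch": {
                "schedule.startingDay": {"$lte": date},
                "schedule.endingDay": {"$gte": date},
            }
        }
    }},
]
for doc in missions.aggregate(pipeline):
    print(doc)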
null
[]
[ { "code": "", "text": "I’m Anom, a full-stack & generative-ai developer. Nice to meet you all. I’m currently using MongoDB for vector storage. I’d be happy to tell you what I’m working on! Feel free to ask me anytime .", "username": "anom" }, { "code": "", "text": "Hey Anom! Welcome to the MongoDB Community Forums! Glad to have you here! Regards,\nSatyam", "username": "Satyam" } ]
Hello everyoneee!
2023-10-11T13:51:33.086Z
Hello everyoneee!
215
null
[]
[ { "code": "", "text": "My experience with Examity has been extremely frustrating. This platform not only drains your wallet but also wastes your precious time. The instructors’ language proficiency is abysmal, making it nearly impossible to comprehend their instructions. Furthermore, their communication lacks professionalism and respect.To add to the misery, it’s evident that the platform is plagued with technical issues. I had to switch browsers due to numerous bugs, and at one point, I was stuck for a painful 30 minutes before I could even start my exam. The overall testing environment is subpar, and it feels like they haven’t invested in better equipment, including microphones.edit: the support literally called me for a zoom call just to hang up on me later", "username": "Oussema_Sahbeni" }, { "code": "", "text": "Your description of Examity pretty much hits the nail on the head. It’s not just frustrating; it’s expensive, time-consuming, and confusing as heck. And don’t get me started on the language proficiency thing – I’ve been there, trying to decipher what the proctor was saying and feeling like I needed a decoder ring.Switching browsers because of bugs? Been there. Staring at a screen for half an hour, waiting to start the exam? Done that. The technical issues are like a never-ending saga, and it’s baffling how they haven’t sorted them out.As for that support Zoom call that ended in a hang-up, I’m pretty sure that qualifies as a comedy skit in the “How Not to Provide Support” handbook.But hey, it’s good to know we’re not alone in this, right? Misery loves company, they say. Let’s hope MongoDB takes note of our collective woes and helps make things better.edit: just made a post about my experience.", "username": "Business_email_2" }, { "code": "", "text": "Hello @Oussema_Sahbeni We sincerely apologize for your experience with our proctoring service and will pass all your comments to Examity. It is important to us that our users have a smooth and positive testing experience. We take these criticisms seriously and you can be assured that they will be addressed. Meanwhile, I will respond to the email you sent to [email protected]. Thank you.", "username": "Heather_Davis" }, { "code": "", "text": "thank you for this , and i really wish you take this seriously for better user experience for the future mongodb learners", "username": "Oussema_Sahbeni" }, { "code": "", "text": "We absolutely will. We do not want our candidates to have a negative testing experience.\nThank you.", "username": "Heather_Davis" }, { "code": "", "text": "I had to reschule twice regarding the name. Both the support team and the proctors are terrible. I have never had a worse exam experience in my life. Worse, I couldn’t even take the exam.", "username": "Yusuf_Caglar" }, { "code": "", "text": "@Yusuf_Caglar I am sorry to hear about your poor testing experience. I will reach out to you privately to discuss further. Thank you!", "username": "Heather_Davis" } ]
My Experience with Examity
2023-10-08T14:35:59.277Z
My Experience with Examity
351
null
[ "aggregation" ]
[ { "code": "[\n {\n invoice_id: \"INV0010\",\n amount: 1000,\n invoice_date: new Date(\"2023-10-10\"),\n order_date: new Date(\"2023-09-20\")\n },\n {\n invoice_id: \"INV0011\",\n amount: 1000,\n invoice_date: new Date(\"2023-10-10\"),\n order_date: new Date(\"2023-09-20\")\n },\n {\n invoice_id: \"INV0012\",\n amount: 1000,\n invoice_date: new Date(\"2023-10-10\"),\n order_date: new Date(\"2023-09-20\")\n }\n]\ndb.collection.aggregate([\n {\n \"$match\": {\n \"$or\": [\n {\n \"invoice_date\": {\n $gte: {\n $todate: \"2023-10-01\"\n },\n $lt: {\n $todate: \"2023-10-11\"\n }\n }\n },\n {\n \"order_date\": {\n $gte: {\n $todate: \"2023-10-01\"\n },\n $lt: {\n $todate: \"2023-10-11\"\n }\n }\n }\n ]\n }\n },\n {\n $project: {\n _id: 1,\n invoice_id: 1,\n amount: 1\n }\n }\n])\n", "text": "Hi, good evening folks. I have a document with two dates, invoice_date and order_date.\nI need to check whether either of those dates falls within a provided interval and, if so, display the results, and my query is as follows.What am I doing wrong here? There are documents that should match the interval, but I don’t get any results.", "username": "Thavarajan_M" }, { "code": "", "text": "Is there any reason why you are using $todate rather than new Date() in your query?Most likely it is a typo in $todate: it should be $toDate, as documented.One way to debug is to simplify. Start by checking that the invoice_date part works on its own, then check that the order_date part works.That being said, your query seems to work with new Date().", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Checking date between with or
2023-10-11T12:58:50.206Z
Checking date between with or
180
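The corrected query from the exchange above, rendered in pymongo for illustration (the connection details and database name are placeholders): passing native datetime values as the interval bounds makes the comparison type-correct and sidesteps the $todate/$toDate capitalization trap entirely.

from datetime import datetime
import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017/")  # placeholder
invoices = client["mydb"]["invoices"]                       # placeholder names

start = datetime(2023, 10, 1)
end = datetime(2023, 10, 11)

pipeline = [
    {"$match": {"$or": [
        {"invoice_date": {"$gte": start, "$lt": end}},
        {"order_date": {"$gte": start, "$lt": end}},
    ]}},
    {"$project": {"_id": 1, "invoice_id": 1, "amount": 1}},
]
for doc in invoices.aggregate(pipeline):
    print(doc)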
null
[ "node-js" ]
[ { "code": "cron.schedule(\"*/10 * * * * *\", async () => {\n console.log(\"10 sek har gått\");\n try {\n let response = await fetch(\"https://polisen.se/api/events\", {\n method: \"GET\",\n headers: {\n \"User-Agent\": \"Oscar Throedsson\",\n },\n });\n const data = await response.json();\nconsole.log(\"Array? \", Array.isArray(data)); //output: true\n console.log(`Längd: ${data.length}`); //output: 500\nfor (const element of data) {\n let event = createNewDocument(element);\nconst checkEventExistence = await wholeColl.findOne({ _id: element.id }); //using await so the code checks the database before continuing. \n\n console.log(\n `4. ${counter}: Comparing: from API ${\n element.id\n } -> from DB: ${JSON.stringify(checkEventExistence._id)}`\n ); // output: number -> number. To see the comparision. \n counter++; // output: Helps me keep track on which list item is printed out. \n console.log(\"--------------------\");\nif (!checkEventExistence) {\n //# | save in DB\n try {\n added++; //! | delete when we clean up the code\n await event.save();\n console.log(`SAVED - > EventID: ${event.id}`);\n } catch (error) {\n console.error(\"Error while saving event:\", error);\n }\n //\n } else {\n console.log(`NOT SAVED - > EventID: ${event.id}`);\n //! | delete else when we clean up the code\n notAdded++; //! | delete when we clean up the code\n }\n", "text": "I´am fetching data from an API. I want to store that data in my DB.Problem:\nI know I get 500 objects every fetch- that is standard with this API. But when I run it with an empty DB, does not every object get added to my DB.\nAs my understanding, when I use async and await, the code does not continue until the code with the await is done. I have added the console output in a comment below.Inside a crone.scheduale() i do the following with a async callback. → fetch the data → using .json() to make it a json. → Validate that the response is an array and console.log() length of the array. → Create a for of and run every array item in to a schema. array item is objects. → Next is validating if the object is in the database. Every array item comes with a uniqe id, and I use that to confirm if it exist or not. → In an if statement, do I check if findOne() returns falsy or truly (null or object). If it is falsy (null) I add the object to my database of truly i dont add it. I use a try catch to see if the save went well or not.", "username": "Oscar_Throedsson" }, { "code": " let event = createNewDocument(element);\n\n //# | validate if the object exist in DB\n console.log(\"--------------------\");\n console.log(\"1. From API: \", element.id);\n", "text": "Here can you see what my console.logs() and what comes out is not in the correct order… 1. From API: 452571\n server | 2. 467: Comparing: from API 454046 → from DB: {“_id”:454046,“datetime”:“2023-10-09T18:22:03.000Z”,“name”:“09 oktober 19:37, Trafikolycka, Umeå”,“summary”:“Brogatan/Storgatan, Väst på stan. En personbil och en cyklist har kolliderat”,“url”:“/aktuellt/handelser/2023/oktober/9/09-oktober-1937-trafikolycka-umea/”,“type”:“Trafikolycka”,“location”:{“name”:“Umeå”,“gps”:“63.825847,20.263035”},“__v”:0}\nserver | --------------------\nserver | NOT SAVED - > EventID: 454046server | --------------------\n server | 1. From API: 454045\n server | 2. 468: Comparing: from API 452571 → from DB: null\nserver | --------------------\n server | 2. 
469: Comparing: from API 454045 → from DB: {“_id”:454045,“datetime”:“2023-10-09T18:19:14.000Z”,“name”:“09 oktober 20:00, Stöld, Skellefteå”,“summary”:“Under kvällen har polisen larmats till två butiker med anledning av stöld”,“url”:“/aktuellt/handelser/2023/oktober/9/09-oktober-2000-stold-skelleftea/”,“type”:“Stöld”,“location”:{“name”:“Skellefteå”,“gps”:“64.750244,20.950917”},“__v”:0}\nserver | --------------------\nserver | NOT SAVED - > EventID: 454045The 2 is coming after another 2 which mean we have skipped running next object threw my schema (i guess)(below) The 2 is missing before we go to number one again. and the saved ID is not the same as the wone from the current element, but the same as “comparing 468” as abow. The saved ID should be the same as the IDE from \"from API: … \"server | --------------------\n server | 1. From API: 454043\nserver | SAVED - > EventID: 452571\n the 2 is missingserver | --------------------\n vserver | 1. From API: 452570\n ^Cserver | 2. 470: Comparing: from API 454043 → from DB: {“_id”:454043,“datetime”:“2023-10-09T18:00:40.000Z”,“name”:“09 oktober 18:44, Stöld, Sundsvall”,“summary”:“Birsta, polisen larmas till en butik”,“url”:“/aktuellt/handelser/2023/oktober/9/09-oktober-1844-stold-sundsvall/”,“type”:“Stöld”,“location”:{“name”:“Sundsvall”,“gps”:“62.390811,17.306927”},“__v”:0}\nserver | --------------------\nserver | NOT SAVED - > EventID: 454043On Can you see that number 1 or 2 is repeating it self, after 1 should only 2 come, and after 2 should only 1 come. Which it doesnt.", "username": "Oscar_Throedsson" } ]
Not everything is added to my DB
2023-10-11T11:48:50.443Z
Not everything is added to my DB
204
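The question above goes unanswered in the thread, so here is a hedged sketch of one common fix: make each write idempotent with an upsert keyed on the API's id, instead of a findOne-then-save round trip. Check-then-insert is not atomic, and a 10-second cron tick can start before the previous run over 500 documents has finished, which is one plausible explanation for the interleaved log output shown. The sketch is pymongo, purely to illustrate the server-side idea (the thread itself uses Node and mongoose); the connection string and collection names are placeholders.

import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017/")  # placeholder
events = client["mydb"]["events"]                           # placeholder names

def store_events(data):
    """Upsert every fetched event exactly once, keyed on its API id."""
    ops = [
        pymongo.UpdateOne(
            {"_id": element["id"]},     # the API id becomes the document _id
            {"$setOnInsert": element},  # written only when the document is new
            upsert=True,
        )
        for element in data
    ]
    result = events.bulk_write(ops, ordered=False)
    print(f"inserted {len(result.upserted_ids)} new of {len(data)} events")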
null
[ "aggregation", "crud", "time-series" ]
[ { "code": "db.litmusDataPoint.aggregate([{$set:{'Metadata.registerId':'$registerId'}}])\n\ndb.litmusDataPoint.updateMany({}, {$set: {\"Metadata.registerId\": \"$registerId\"}})\n", "text": "Hi,\nI am trying with no luck in Mongo6.0 to add a field to Metadata. For each document, this field is going to have value of another field (called registerId) from the same document. The existing Metadata object contains 2 fields siteId and organizationId. I am looking to add one more field to Metadata called registerId by copying registerId field for each document.Tried multiple ways. All of them add the field registerId to Metadata as expected but the value is string ‘$registerId’ instead of the value of registerId field.Is it not possible to dynamically set value of metaData filed in Mongo 6.0?", "username": "Pari_Dhanakoti" }, { "code": "db.litmusDataPoint.aggregate([{$set:{'Metadata.registerId':'$registerId'}}])", "text": "Hey @Pari_Dhanakoti,Welcome to the MongoDB Community!I am looking to add one more field to Metadata called registerId by copying registerId field for each document.\ndb.litmusDataPoint.aggregate([{$set:{'Metadata.registerId':'$registerId'}}])Let us know if you have any further questions!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "db.litmusDataPoint.updateMany({}, {$set: {\"Metadata.registerId\": \"$registerId\"}})\n", "text": "Hi, Appreciate your insights. Looking at the link provided , $set is allowed. As in my original post, second command I have tried uses update many with $set. However, the results are same. It adds the meta field but sets it to string “$registerId” instead of value of the field registerId.", "username": "Pari_Dhanakoti" }, { "code": "{\n \"_id\": 1,\n \"name\": \"Product A\",\n \"price\": 20,\n}\ndb.collections.updateMany({}, {$set: {\"Metadata.price\": \"$price\"}})\n{\n \"_id\": 1,\n \"name\": \"Product A\",\n \"price\": 20,\n \"Metadata\": {\n \"price\": \"$price\"\n }\n}\nprice'Metadata.price'db.books.updateMany( {},[ {\n $set: { \"Metadata.price\": \"$price\" } } ] )\n{\n \"_id\": 1,\n \"name\": \"Product A\",\n \"price\": 20,\n \"Metadata\": {\n \"price\": 20\n }\n}\n", "text": "Hi @Pari_Dhanakoti,So, when you execute the $set operator even on a regular collection, it will return the same results.For example, let’s consider you have the following documents in regular collection:And when you execute the command:it will return the following result:So, to address this, we have implemented an update with an aggregation pipeline that adds the price value to 'Metadata.price'.returning the following output:But as you know presently this feature is not available for the Time Series Collection.Hope it answers your question.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Time Series Metadata add field - Mongo 6.0 copy from non-meta field
2023-10-10T23:40:39.348Z
Time Series Metadata add field - Mongo 6.0 copy from non-meta field
292
https://www.mongodb.com/…b_2_1024x574.png
[ "node-js" ]
[ { "code": "", "text": "\nimage1878×1053 160 KB\n\nI run this backend on windows and kali linux worked perfectly but when I tried on ubuntu then it showed an error. could anybody help me?", "username": "Abu_Said_Shabib" }, { "code": "", "text": "my code is\n\nimage959×1071 95.7 KB\n", "username": "Abu_Said_Shabib" }, { "code": "", "text": "Hello @Abu_Said_Shabib, Welcome to the MongoDB community forum,I think you are using mongodb npm’s latest driver version 5, and there are Build and Dependency Changes, Just make sure the below thing,Minimum supported Node version\nThe new minimum supported Node.js version is now 14.20.1", "username": "turivishal" }, { "code": "", "text": "Hi, Were you able to find a solution to this problem? I haven’t been lucky so far.\nI have been facing server connection problems due to things (not sure if it’s because of a firewall or vpn or zscaler) but I Started facing this specific problem yesterday after I removed MongoDB from package.json file, deleted package-lock and node-modules and then reinstalled everything, then installed mongo db again.\nI am facing the same error and I haven’t been able to find other questions related to it on the the internet maybe Im looking for the wrong thing in the wrong places.\nplease help.attaching SS of my package.json file.I am using node version 19.8.0\nScreenshot 2023-03-23 at 6.33.00 AM1018×1550 133 KB\n", "username": "Najib_Shah" }, { "code": "", "text": "I can’t solve it but it’s probably problem on Ubuntu network | Firewall | Version or security related issues. Because in other versions of linux it working well. Like I’m tested on kali linux, fedora and zorin os.", "username": "Abu_Said_Shabib" }, { "code": "", "text": "I’m facing this problem on MacOS Ventura 13.1, my colleague with his MacOS is not facing this issue. I don’t understand what the problem could be.Firewall seems like a probable cause since my colleague has been here longer he must have different access than my laptop (he doesn’t remember, I tried asking him what special permissions his laptop might have).", "username": "Najib_Shah" }, { "code": "", "text": "my problem is fixed now,\nit was due to my company’s internet monitoring software blocking the ports I needed to visit. took me a long time to figure it out because the software is newly implemented and the team handling it wasn’t aware that it blocks ports too, not just website.", "username": "Najib_Shah" }, { "code": "", "text": "Reciently I find a solution. The comand “npm i mongoose” today install mongoose version 7.0.1., so I deleted directory “node_modules” and after I rewrite the file “package.json” to replace dependencies as “mongoose”: “^5.1.2”, and ejecute command “npm i dependences” and node.js will reinstall mongoose in a past version. this works fine for my.", "username": "Ramirez_Gomar_Sergio_Jose" }, { "code": " this.options = options ?? {};", "text": "Hello @Najib_Shah / @Abu_Said_Shabib / @Ramirez_Gomar_Sergio_JoseWelcome to the MongoDB Community forums As @turivishal mentioned the minimum supported Node.js version is now 14.20.1 for MongoDB Node.js Driver v5. So, please upgrade the node version to 14.20.1 or higher to resolve the issue.The new minimum supported Node.js version is now 14.20.1However, you can refer to my response here where I’ve explained the cause of the this.options = options ?? 
{}; error.I hope it helps!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you so much.\nmy problem is resolve due to your solution.", "username": "manu_vats" }, { "code": "", "text": "3 posts were split to a new topic: Getting Error This.options = options ? {};", "username": "Kushagra_Kesav" }, { "code": "^5.1.2 ^7.5.2", "text": "This worked for me, I downgraded the mongoose version to ^5.1.2 from ^7.5.2", "username": "Talha_Maqsood" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
This.options = options ? {};
2023-03-04T08:23:45.531Z
This.options = options ? {};
10,595
null
[ "kafka-connector" ]
[ { "code": "", "text": "For the sink connector - if it determines that a record should be updated, is it possible to have the connector only update the record if a field in the kafka message is greater than an existing field in mongo? (i.e. compare timestamps) Else, it would ignore and not update that record in mongo.Thanks.", "username": "Dejan_Katanic" }, { "code": "", "text": "Hi @Dejan_Katanic,Thats a good question, you’d have to write your own custom write model strategy. Returning a null value indicates a no-op, theres an example in the documentation that should help get you started.All the best,Ross", "username": "Ross_Lawley" }, { "code": "", "text": "Hi @Dejan_Katanic,As @Ross_Lawley mentioned you can of course come up with any custom implementation for write model strategies that you might need for a specific use case. However, if I got you right, it might be your lucky day because somebody else - in this case my humble self - has written something for you. Either it fits as is for what you want to achieve or you can take it as a starting point and modify it to your needs.My original sink connector code contains a wm strategy called MonotonicWritesDefaultStrategy which makes use of a conditional update pipeline. Read about the feature here: GitHub - hpgrahsl/kafka-connect-mongodb: **Unofficial / Community** Kafka Connect MongoDB Sink Connector -> integrated 2019 into the official MongoDB Kafka Connector here: https://www.mongodb.com/kafka-connectorThe code for it can be found here: kafka-connect-mongodb/MonotonicWritesDefaultStrategy.java at master · hpgrahsl/kafka-connect-mongodb · GitHub It’s probably not the most beautiful implementation but hey, it did the job for me and others quite well back in 2019 already NOTE: It needs MongoDB version 4.2+ and Java Driver 3.11+ since lower versions of either lack the support for leveraging update pipeline syntax which is needed to perform the conditional checks during write operations.Let me know if it’s helpful for you!", "username": "hpgrahsl" }, { "code": "", "text": "For the sink connector - if it determines that a record should be updated, is it possible to have the connector only update the record if a field in the kafka message is greater than an existing field in mongoHi @hpgrahsl ,Could you provide an usage example for MonotonicWritesDefaultStrategy .", "username": "Yogini_Manikkule" } ]
MongoDB sink connector conditional update
2021-08-10T13:22:08.906Z
MongoDB sink connector conditional update
4,255
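For the sink-connector thread above: the timestamp-guarded write that a custom write model strategy would issue can also be expressed directly as an update with an aggregation pipeline (MongoDB 4.2+). A minimal mongosh sketch, assuming a hypothetical `sensors` collection with `lastModified` and `payload` fields (names are not taken from the thread):

```javascript
// Only overwrite when the incoming event is newer than what is stored.
const incoming = {
  _id: "device-42",
  lastModified: ISODate("2021-08-10T13:00:00Z"),
  payload: { temp: 21.5 }
};

db.sensors.updateOne(
  { _id: incoming._id },
  [
    {
      $set: {
        lastModified: {
          $cond: [{ $gt: [incoming.lastModified, "$lastModified"] }, incoming.lastModified, "$lastModified"]
        },
        payload: {
          $cond: [{ $gt: [incoming.lastModified, "$lastModified"] }, incoming.payload, "$payload"]
        }
      }
    }
  ],
  { upsert: true }
);
```

A custom write model strategy in the connector would build the equivalent update model with a pipeline update; returning null from the strategy, as noted above, skips the record entirely.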
null
[ "flexible-sync" ]
[ { "code": "IObservable<bool>", "text": "We are migrating an enterprise app from Partition Sync to Flexible Sync.One of the features that we have in our app is showing the user the progress of the synchronization. This is a crucial feature as it gives the user a visual feedback that.Both of those points are important. However, the first point is more important. As I understand, detailed progress monitoring is not possible with Flexible sync. However, I really need to find a way to solve the first issue.Essentially, I need to show some loading indicator wheneverWe are heavily using reactive extensions, so, ideally I would need either an event or and IObservable<bool> that will indicate that there’s a current background sync in progress.", "username": "Gagik_Kyurkchyan" }, { "code": "realm.Subscriptions.WaitForSynchronizationAsync()realm.Subscriptions.State", "text": "If you’re only interested in the upload path, then detailed progress notifications work there just fine. It’s only download notifications which are not available due to the unpredictability of the amount of data that needs to be downloaded (though we’re working on an a project to address this).When you add a new subscription, you can call realm.Subscriptions.WaitForSynchronizationAsync() to be notified when the server has sent you the data that matches the new subscription. You could also check realm.Subscriptions.State when you start your application to see whether all the requested data has been synchronized or if you need to tell the user they’re not seeing the complete dataset.", "username": "nirinchev" }, { "code": "return Observable.Timer(TimeSpan.Zero, TimeSpan.FromSeconds(1))\n .Select(_ => realm.Subscriptions.State == SubscriptionSetState.Pending)\n .DistinctUntilChanged()\n .Publish()\n .RefCount();\n", "text": "Hey NikolaThanks for getting back.Indeed, upload progress works fine. Download progress, is the issue.I am checking the “State” property. But the problem is that it is not reactive, nor is there an event I can subscribe to to check whether we are currently “Actively” syncing. To overcome this, I’ve created this nasty timer-based reactive chain:It’d be great to have a means to do this reactively, like with download/upload progress notifications. If we can’t do that now, that’s also fine. But I can suggest something like this as a feature. I realize that due to the nature of data, it’s hard to estimate how much of it you will download; hence, the download progress is not there. 
But, I would assume, telling the fact that there’s something that’s being downloaded/uploaded is a much less complicated issue, and it’d be cool to have a reactive observable or an event just for that.", "username": "Gagik_Kyurkchyan" }, { "code": "Pending -> Complete/Errorpublic static class SubscriptionObserver\n{\n public static event EventHandler<SubscriptionSetState> StateChanged;\n\n public static void UpdateRX(this SubscriptionSet subscriptions, Action update)\n {\n subscriptions.Update(update);\n _ = WaitForSyncAsync(subscriptions);\n }\n\n public static void Initialize(this SubscriptionSet subscriptions)\n {\n if (subscriptions.State == SubscriptionSetState.Pending)\n {\n _ = WaitForSyncAsync(subscriptions);\n }\n }\n\n private static async Task WaitForSyncAsync(SubscriptionSet subscriptions)\n {\n try\n {\n StateChanged?.Invoke(subscriptions, subscriptions.State);\n await subscriptions.WaitForSynchronizationAsync();\n }\n catch\n {\n // Log exceptions if necessary\n }\n finally\n {\n StateChanged?.Invoke(subscriptions, subscriptions.State);\n }\n }\n}\nSubscriptionObserver.Initialize(realm.Subscriptions)realm.Subscriptions.UpdateRX.Update", "text": "I see, this makes sense to me and I filed Have the session emit notifications whenever upload/download starts and completes · Issue #7045 · realm/realm-core · GitHub for the team to prioritize. Regarding the download path - we’re working on a project for this, but that will give you the estimated download progress, not how much data is left to download - do you think that will work for your use case or do you need to have some absolute measure for the amount of data to download (e.g. bytes/number of documents/number of changesets)?Finally, I’m not an expert on RX, but I feel the timer-based solution is a bit of an overkill since the subscription state changes are always deterministic and they go in one direction only - i.e. Pending -> Complete/Error. Not sure if that’s clear from the API/docs, but once a subscription is “Complete”, it will never go back to “Pending” unless you update the subscription set. So if you wanted a non-timer based solution, you could build a thin wrapper around the subscription set that will emit notifications like:The way you would use it is to call SubscriptionObserver.Initialize(realm.Subscriptions) the first time you create a Realm to trigger the initial notification (e.g. if the user updated their subscriptions while offline), then use realm.Subscriptions.UpdateRX instead of .Update. You could then wrap the event into an observable or just replace the eventhandler with an observable entirely.", "username": "nirinchev" }, { "code": "", "text": "Thanks a lot for raising the GitHub issue for this.The estimated download progress is more than enough for me. I just need a way to show the user that they have pending data to sync and give them some absolute measure of how much. They don’t care about the individual bytes.Thanks for the hint about the subscription state determinism. I actually didn’t know that. My assumption was that the subscription becomes “Pending” again once there’s some new data that will be downloaded. This raises a question though. If I have a subscription for Products. I don’t change it, but a new product comes and I need to sync it. In this case, the subscription object won’t help me detect and show “syncing” progress. This means, that whenever I change the subscription that’s the only place I can “wait” for that initial data to sync up and show some progress to the user. 
However, I can’t monitor the ongoing changes and show some indication to the user that there’s some data coming in unless I explicitly call “Synchronize” and wait for it to complete.So, having the feature that you have already created would be a great addition.Thanks, Nikola. We can close this thread for now.", "username": "Gagik_Kyurkchyan" }, { "code": "Completerealm.SyncSession.WaitForDownloadAsync", "text": "When the subscription goes into Complete state, this means the server has sent all data that matches the subscriptions at the time the change happened, but there may be more data coming in after that. If you want to make sure the user is caught up with the latest changes on the server you can use realm.SyncSession.WaitForDownloadAsync, though that will not tell you whether there are changes, it’ll just wait until the server tells it its caught up, which can happen almost immediately if there are no new changes or take some time if there are.", "username": "nirinchev" }, { "code": "realm.SyncSession.WaitForDownloadAsync", "text": "Yep, that makes sense, Nikola. I am already using realm.SyncSession.WaitForDownloadAsync", "username": "Gagik_Kyurkchyan" } ]
How to monitor the state of a flexible sync subscription
2023-10-08T07:00:21.230Z
How to monitor the state of a flexible sync subscription
312
null
[ "attribute-pattern" ]
[ { "code": "", "text": "We currently use attribute pattern with fixed fields. With the release of Mongo 7, I’m curious if we should be looking at switching this index to compound wildcard.Gilberto Velazquez’s talk on attribute pattern from a few years ago is excellent (https://www.youtube.com/watch?v=9eYwrloeM7U) and seems to indicate compound wildcard would be the way to go, but I’d love to know if there is any more recent data on this.In addition to fixed fields, we also have some sort fields indexed (after the attributes). So the query pattern is (fixed fields, attributes, sort). Just mentioning that in case it changes the recommendation (I’ve actually been wondering if it would make sense to have the sort broken out into a separate index, so any pointers there would be helpful as well).Thanks in advance for any pointers/advice/suggestions!", "username": "Andrew_Rothbart1" }, { "code": "", "text": "Hi @Andrew_Rothbart1 and welcome to MongoDB community forums!!With the release of Mongo 7, I’m curious if we should be looking at switching this index to compound wildcard.The performance enhancements would purely depend on your use case and the query you are performing. With the release of 7.0, using compound wildcard indexes would help with solving the issue of creating and maintaining multiple indexes with just a single index.The performance enhancements would purely depend on your use case and the query you are performing. > There are a few considerations to keep in mind when using wildcard indexes.Hence the suggestion would be to use the feature if this helps in improving your performance.Regards\nAasawari", "username": "Aasawari" } ]
Compound wildcard index
2023-09-24T21:05:12.777Z
Compound wildcard index
351
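For the compound wildcard index thread above: a minimal mongosh sketch of the MongoDB 7.0 feature being discussed, with hypothetical field names (`tenantId` as the fixed field, an `attributes` sub-document holding the attribute pattern):

```javascript
// One compound wildcard index instead of one index per attribute key (MongoDB 7.0+).
db.products.createIndex({ tenantId: 1, "attributes.$**": 1 });

// A query over a fixed field plus an arbitrary attribute can then use that single index:
db.products.find({ tenantId: "acme", "attributes.color": "red" });
```

Whether the trailing sort field can usefully be folded into the same index, and how it compares to the existing fixed-field attribute-pattern index, is best checked per workload with `explain("executionStats")`.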
null
[ "backup" ]
[ { "code": "", "text": "Hello Team,How to take incremental backup in Mongodb?\nCan you please guide or share an SOP on that?\nThanks", "username": "Sachin_Baraskale" }, { "code": "", "text": "Hi @Sachin_Baraskale ,Incremental backups are part of MongoDB Enterprise in conjunction with Cloud Manager or Ops Manager.This is not available with Community Edition.", "username": "chris" }, { "code": "", "text": "For Mongodb community version (as said no built-in feature like that), you need to use external tools, e.g. aws ebs snapshot backup.", "username": "Kobe_W" } ]
Incremental backup in MongoDB
2023-10-10T03:39:31.934Z
Incremental backup in MongoDB
341
null
[ "aggregation", "atlas-search" ]
[ { "code": "RestaurantnamealiasnameProductstype Restaurant {\n _id: ObjectId;\n name: String;\n alias: string;\n}\n\ntype Product {\n _id: String;\n name: String;\n restaurantId: ObjectId;\n}\n let aggregatePipeline = [] as PipelineStage[];\n\n if (!!textSearch) {\n aggregatePipeline.push({\n $lookup: {\n from: ProductModel.collection.name,\n localField: '_id',\n foreignField: 'restaurantId',\n pipeline: [\n {\n $search: {\n compound: {\n should: [\n {\n autocomplete: {\n query: textSearch,\n path: 'name',\n },\n },\n {\n autocomplete: {\n query: textSearch,\n path: 'type',\n },\n },\n ],\n },\n },\n }\n ],\n as: 'products',\n },\n });\n\n // I don't know how to do this with $search\n aggregatePipeline.push({\n $match: {\n $or: [\n { products: { $ne: [] } },\n { aliasFilter: { $regex: new RegExp(`.*${textSearch}.*`) } },\n { name: { $regex: new RegExp(`.*${textSearch}.*`) } },\n ],\n },\n });\n }\n", "text": "Hi guys! I’m trying to make a autocomplete feature which needs to search within multiple fields. Some fields are in the same collection but other fields are in another collection. I already have a $search index in these two collections, but I ran intro multi problems.\nBasically I need to retrieve all documents in Restaurant that matches a textSearch by name, alias, and a field name from another collection Products. These 3 searches has to be concatenated by an $or .The model is the following:Right now I have something like this:As you can see I’m doing a $lookup to view all the Product that matches that string and then checking if that result is not an empty array. That will be concatenate with some basic search by $regex because I don’t know how to concatenate the search within products and restaurants.Could someone have a look in case I’m doing completely random? Any hints on how to use $search by fields from other collections will be appreciated!Thanks!", "username": "hot_hot2eat" }, { "code": "", "text": "Hi @hot_hot2eat and welcome to MongoDB community forums!!As mentioned in the MongoDB official documentation, that $search should be used as the first stage of the pipeline and it cannot be used with the $lookup stage. You can refer to the $search documentation for further details.Can you help me with some sample documentation from both the collections and the desired output using which we can assist you better with an aggregation query or suggest the data model change if needed.Warm regards\nAasawari", "username": "Aasawari" } ]
Autocomplete $search with foreign fields
2023-10-08T08:24:44.322Z
Autocomplete $search with foreign fields
259
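For the autocomplete thread above: if `$search` has to be the first stage of its own pipeline, one hedged workaround is to run two searches and combine the results by restaurant id in application code. A sketch using the Node.js driver, assuming `db` is a connected database handle, `textSearch` is the user's input, and both collections have Atlas Search indexes with autocomplete mappings on the fields shown:

```javascript
// Inside an async function.
// 1) Restaurant ids whose products match the text.
const restaurantIds = await db.collection("products").aggregate([
  { $search: { autocomplete: { query: textSearch, path: "name" } } },
  { $group: { _id: "$restaurantId" } }
]).map(doc => doc._id).toArray();

// 2) Restaurants matching on their own fields.
const byOwnFields = await db.collection("restaurants").aggregate([
  { $search: { compound: { should: [
    { autocomplete: { query: textSearch, path: "name" } },
    { autocomplete: { query: textSearch, path: "alias" } }
  ] } } }
]).toArray();

// 3) Restaurants reached through their products; merge and de-duplicate by _id in code.
const byProducts = await db.collection("restaurants")
  .find({ _id: { $in: restaurantIds } })
  .toArray();
```

This trades one extra round trip for predictable index usage on both sides; whether it beats the `$lookup` plus `$regex` fallback above is worth measuring on real data.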
null
[ "java", "spring-data-odm", "time-series" ]
[ { "code": "Test@TimeSeriesTimeSeries(\n timeField = \"timestamp\",\n granularity = Granularity.SECONDS,\n metaField = \"deviceId\")\npublic class Test {\n\n @Id private String id;\n private String deviceId;\n private OffsetDateTime timestamp;\n private int measurement;\n .....\n\n }\nmongoTemplate.createCollection(Test.class)List<Test> testObjectsList = //list of Test objects;\nmongoTemplate.insertAll(testObjectsList);\n@TimeSeries@TimeSeries", "text": "I’m using Spring Data MongoDB to interact with a MongoDB database. In my application, I have a Test class annotated as a time series collection using the @TimeSeries annotation as follows:When I explicitly use the mongoTemplate.createCollection(Test.class) method, the collection is created as a time series collection as expected. However, if I directly insert data without pre-creation like:It results in a standard collection, despite having the class annotated with @TimeSeries.Why does this happen? Why doesn’t MongoDB recognize the @TimeSeries annotation during direct insertion?", "username": "Yashasvi_Pant" }, { "code": "", "text": "Hello, welcome to the MongoDB community!This is actually intentional, this annotation is from Spring Data MongoDB and not something native to MongoDB. When you try to insert data without the collection being created, MongoDB will create a default collection, as the insert is supposed to be quick and light, without examining metadata or any configuration.In this case, always create your collection in advance before inserting data, so that you can be sure that the settings passed were accepted correctly.", "username": "Samuel_84194" } ]
Why doesn't Spring Data MongoDB create a time series collection during direct insertion with @TimeSeries annotation?
2023-10-10T18:47:27.090Z
Why doesn&rsquo;t Spring Data MongoDB create a time series collection during direct insertion with @TimeSeries annotation?
266
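For the Spring Data time-series thread above: since the time-series options belong to the collection itself, the collection has to exist with those options before the first insert. A mongosh sketch of what that pre-creation amounts to on the server, using the thread's field names and an assumed collection name of `test`:

```javascript
db.createCollection("test", {
  timeseries: {
    timeField: "timestamp",
    metaField: "deviceId",
    granularity: "seconds"
  }
});

// Confirm the options were applied before inserting data:
db.getCollectionInfos({ name: "test" });
```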
null
[ "react-native" ]
[ { "code": "", "text": "PLEASE. Provide a working, up-to-date repo/template/example of Realm + Expo that can be used with EAS and runs on windows 10. I did every setup and tried every template, repo, cli tool I could find on the web. Not a single one of them builds successfully and is littered with errors. As far as I can see a lot of people are having the same issue and are not finding a solution as well. This costs so much development time and turns MongoDB app development into mobile sdks bug exploring. I did not post any specific bug because at this point they all might be out of date and pointless.", "username": "Damian_Danev" }, { "code": "", "text": "In order to provide a useful advice, we need to understand which versions of Expo, React Native and Realm you are using (see also realm-js/COMPATIBILITY.md at v11 · realm/realm-js · GitHub for which versions are compatible).Moreover, it could be useful for us to see examples of the error messages.", "username": "Kenneth_Geisshirt" }, { "code": "", "text": "I will take some time to provide such information, but you could simply run the bootstrap expo guide in the docs or pull the expo realm template and you will see that both are broken. Non-expo realm examples and apps work perfectly fine.", "username": "Damian_Danev" }, { "code": "", "text": "@Damian_Danev We just released an expo template that should be working. Let us know if it works for you!Realm Template for Expo. Latest version: 0.5.1, last published: 4 minutes ago. Start using @realm/expo-template in your project by running `npm i @realm/expo-template`. There are no other projects in the npm registry using @realm/expo-template.", "username": "Andrew_Meyer" }, { "code": "", "text": "Now with expo 49. The template is broken again.", "username": "Chongju_Mai" } ]
Realm + Expo currently not possible!
2023-03-15T08:34:29.683Z
Realm + Expo currently not possible!
1,399
null
[ "python", "time-series", "storage" ]
[ { "code": " client = MongoClient()\n db = client[CoinsTimeSeries.get_db().name]\n print(db.list_collection_names(),)\n if 'coins_timeseries' not in db.list_collection_names():\n meta = {\n 'timeseries': {\n 'timeField': 'timestamp',\n 'metaField': 'symbol', \n 'granularity': 'seconds'\n },\n \"storageEngine\" : {\n \"wiredTiger\" : {\n \"configString\" : \"block_compressor=zstd\"\n }\n },\n 'indexes': [\n 'symbol',\n ('timestamp', 'symbol')\n ]\n }\n\n print(\"Creating new timeseries collection...\")\n db.command('create', 'coins_timeseries', timeseries={ 'timeseries': {\n 'timeField': 'timestamp',\n 'metaField': 'symbol', \n 'granularity': 'seconds'\n },\n 'indexes': [\n 'symbol',\n ('timestamp', 'symbol')\n ] },)\n else:\n print(\"Exsits...\")\nprint(db.list_collection_names(),)['system.views', 'coins_timeseries',]", "text": "Hi to all,\niam using pymongo and i need to create timeseries collection, the code iswhen i use this\nprint(db.list_collection_names(),)the result is\n['system.views', 'coins_timeseries',]and it does not show any collection with this name in mongo compos or any other interfacesplease tell me\nwhere is the problemthank you in advanced", "username": "Masoud_N_A" }, { "code": "", "text": "2 possible reasons.You are using the wrong database.orYou are connecting to the wrong server.", "username": "steevej" } ]
Timeseries collection using pymongo doesnt work properly
2023-10-09T16:34:58.265Z
Timeseries collection using pymongo doesnt work properly
248
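For the PyMongo time-series thread above: besides the wrong-database or wrong-server possibilities raised in the reply, note that the create options in the original snippet nest a second 'timeseries' key (and an 'indexes' key) inside the timeseries argument, which is not the shape the server expects; indexes are created separately. A mongosh sketch of the intended shape, shown in shell syntax for illustration:

```javascript
db.createCollection("coins_timeseries", {
  timeseries: { timeField: "timestamp", metaField: "symbol", granularity: "seconds" }
});

// Secondary indexes are a separate step, not part of the timeseries options:
db.coins_timeseries.createIndex({ symbol: 1, timestamp: 1 });
```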
null
[ "queries", "indexes", "performance" ]
[ { "code": "db.requests.find({ createdAt: { $gte: new Date(\"2023-01-01\"), $lt: new Date(\"2023-02-01\") } }).explain(\"executionStats\")executionStats: {\n executionSuccess: true,\n nReturned: 4116735,\n executionTimeMillis: 10913,\n totalKeysExamined: 4116735,\n totalDocsExamined: 4116735,\n executionStages: {\n stage: 'FETCH',\n nReturned: 4116735,\n executionTimeMillisEstimate: 1634,\n works: 4116736,\n advanced: 4116735,\n needTime: 0,\n needYield: 0,\n saveState: 4116,\n restoreState: 4116,\n isEOF: 1,\n docsExamined: 4116735,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 4116735,\n executionTimeMillisEstimate: 711,\n works: 4116736,\n advanced: 4116735,\n needTime: 0,\n needYield: 0,\n saveState: 4116,\n restoreState: 4116,\n isEOF: 1,\n keyPattern: {\n createdAt: 1\n },\n indexName: 'createdAt_1',\n isMultiKey: false,\n multiKeyPaths: {\n createdAt: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n createdAt: [\n '[new Date(1672531200000), new Date(1675209600000))'\n ]\n },\n keysExamined: 4116735,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n}\n", "text": "Hey there,I have a large collection of ~70m documents. I want to query the collection based on a time period. The field does have an index and according to the explain command the index is used.db.requests.find({ createdAt: { $gte: new Date(\"2023-01-01\"), $lt: new Date(\"2023-02-01\") } }).explain(\"executionStats\")The execution stats are:Any idea why the query takes 10 seconds and if there is anything I can do to improve the performance?", "username": "nikolasdas" }, { "code": "", "text": "The index seems to perform correctly.What I suspect is that the mongod server spend most of its time reading the documents from permanent storage. Do you have any disk usage and ram usage numbers?What you can do to improve the performance depends on what you want to do with the 4M documents returned by the query. It also depends on the size of the documents. For example, if you want to compute some values (an average, a sum, a max …) of a given field, then you may add that field to the index so the query becomes a covered query.", "username": "steevej" }, { "code": "", "text": "Thank you for your reply. I don’t see much of a spike in disk or ram usage while running the query on my PC.\nWhat I want to do is: I have this large collection of API requests. 
The documents are small, just some metadata about the request and the user who made the API call.\nWhat I need is: given a large list of users and a timespan, query for each user if there is a request in that period\nCurrently, I’m looping over each user and use findOne to determine if they have requests in the timespan, because doing many (concurrent) smaller database calls seems to be faster.\nIt works, but I would like to find a solution where need to query the database just once, without loosing performance", "username": "nikolasdas" }, { "code": "", "text": "Sounds like a job for https://www.mongodb.com/docs/manual/aggregation/", "username": "chris" }, { "code": "", "text": "Sure thing.\nBut with ~6k users and ~1.4m requests for the users in the given timespan, looping over each user and using findOne is way faster (~14x) than running an aggregation ", "username": "nikolasdas" }, { "code": "", "text": "I don’t see much of a spike in disk or ram usage while running the query on my PCWe are more interested about the disk and ram on the mongod server rather than the disk and ram of the PC where you run your query.If both mongod and the client code are running on the same machine then it is possible that many requests is faster than 1 request because there is no network latency. You might experience a different behavior if the server is on a different network.For some analytic use-cases, it might be better if they run slower in order to leave more bandwidth for other use-cases. Unless you schedule the use-case outside the high usage periods.May be the pipeline is not optimal.In your findOne timing, do you consider the time to get the 6k users from the server to the client. If you only time the local iteration of the users and their findOne() then you are not comparing the same.The cache might also have a role. If you run the single aggregation first and the multiple findOne(), then may be the working set of the use-case is already in memory.But without access to the exact code of both scenario is it really hard to make an assessment.", "username": "steevej" }, { "code": "{ user, days[] }{ day, users[] }{ user, day }", "text": "Yes, these results are from my personal computer, running the server and query on the same machine. So I guess you’re right, in production there would be more network overhead, although the systems are hosted in the same data center.\nI tried my loop variant again after a fresh restart, and the cache seems to be a huge factor, the first run takes a very long time. I guess I have to choose between long cold start and snappy performance after that or more consistent, but longer runtime with the aggregation.The real problem is the data structure of course, but I’m not quite sure yet how to change that. Since I don’t need the information about every single request for my described use case, but only the information if a user was active on a given date, my ideas are:I know this is diverging a bit from the original question, but any suggestion what option to pick or if there is a better solution?", "username": "nikolasdas" }, { "code": "", "text": "About 1. and 2.a) When you update an array the whole array is written back to disk. So these 2 might be slower than 3.b) $addToSet is 0(n) since the array need to be scan to prevent duplicate. It should be better to use $push with an appropriate query as to select 0 document is entry is already present.About 3.a) My preferred without testing. Only testing can pinpoint the best. 
The unique index user:1,day:1 might perform better for updates, while the unique index day:1,user:1 might perform better for the query of the original post.", "username": "steevej" } ]
Poor indexed query Performance in large collection
2023-10-08T11:58:59.948Z
Poor indexed query Performance in large collection
279
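For the query-performance thread above: a minimal sketch of option 3 from the discussion (one document per user/day pair, deduplicated by a unique compound index), with hypothetical collection and field names:

```javascript
// Unique compound index; flip the key order depending on which access pattern dominates.
db.user_activity.createIndex({ userId: 1, day: 1 }, { unique: true });

// Record activity idempotently: the upsert keyed on (userId, day) inserts at most one doc.
db.user_activity.updateOne(
  { userId: "u123", day: "2023-01-15" },
  { $setOnInsert: { firstSeenAt: new Date() } },
  { upsert: true }
);

// "Which of these users were active in the window?" becomes one indexed query:
db.user_activity.distinct("userId", {
  userId: { $in: listOfUserIds },          // listOfUserIds supplied by the caller
  day: { $gte: "2023-01-01", $lt: "2023-02-01" }
});
```

Storing `day` as a zero-padded string is just one choice; a Date truncated to midnight works equally well as long as writes and reads agree on it.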
null
[ "containers" ]
[ { "code": "mongo_mema:\n image: \"mongo:6.0\"\n command: [\"--auth\"]\n restart: unless-stopped\n env_file:\n - .env\n ports:\n - \"${MPORT}:${MPORT}\"\n volumes:\n - mongo_mema_data:/data/db\n networks:\n - memanet\nSERV_MDB=mongo_mema\nMPORT=27017\nMONGODB_SUGGESTIONCACHE_COLLECTION=suggestioncache\nMONGO_INITDB_ROOT_USERNAME=admin\nMONGO_INITDB_ROOT_PASSWORD=-MyLabradorFartsUnderMyDesk\ndocker compose restart mongo_memamongo://admin:-MyLabradorFartsUnderMyDesk@localhost:27017/", "text": "For unknown reasons I cannot connect as admin to my self-hosted MongoDB running in a docker container. This container is a standard 6.0 container launched as part of a group with docker compose. The following is the relevant part of docker-compose.yml:and the .env file contains (passwords are fictitious):I tried the following from the same directory where .env and docker-compose.yml are:\ndocker compose restart mongo_mema\nand would have expected to be able to use the following connection URL:\nmongo://admin:-MyLabradorFartsUnderMyDesk@localhost:27017/\nfrom within the running container (or the host since 27017 is exposed on the host too) but the authentication fails.\nThis container is running a database in use so I need to minimize downtimes.\nWhat error am I doing?\nThanks a lot.", "username": "Robert_Alexander" }, { "code": "--volumes-from--authdocker run --name resetMongoPW --rm -d --volumes-from projectName_mongo_mema mongo:6.0docker exec resetMongoPW mongosh --quiet --eval --quiet 'db.getSiblingDB(\"admin\").changeUserPassword(\"admin\",\"password\")'mongo://admin:-MyLabradorFartsUnderMyDesk@localhost:27017/MONGO_INITDB_ROOT_USERNAME\nMONGO_INITDB_ROOT_PASSWORD\n", "text": "You already know the procedure per your other thread you just need to execute it correctly.In this instance I would stop the container and the start a new one on the command line using the --volumes-from argument to start a mongod without --authdocker run --name resetMongoPW --rm -d --volumes-from projectName_mongo_mema mongo:6.0docker exec resetMongoPW mongosh --quiet --eval --quiet 'db.getSiblingDB(\"admin\").changeUserPassword(\"admin\",\"password\")'and would have expected to be able to use the following connection URL:\nmongo://admin:-MyLabradorFartsUnderMyDesk@localhost:27017/These are only for the initial db, they won’t update after the initialisation.", "username": "chris" }, { "code": "local mema_services_mongo_mema_data\nlocal mema_services_mongodb_data\nlocal mongodb_data\n", "text": "Thanks Chris I appreciate it. Very godd to know those envs are only for the initial setup, I had overlooked that.Never used the --volumes-from option. Shall study a bit. From your example not sure what object you’re referring to with the string projectName_mongo_mema . The volume I use for persistance as per the docker-compose.yml file is mongo_mema_datadocker volume ls gives:Take care.", "username": "Robert_Alexander" } ]
Resetting admin password for a container running MongoDB
2023-10-10T08:21:49.174Z
Resetting admin password for a container running MongoDB
308
https://www.mongodb.com/…f_2_791x1024.png
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "uniquecreate()find()create()[email protected]:nirgluzman/Express-MongoDB-Recap.git", "text": "Hello support !This subject has been discussed heavily in the past.\nFor some reason, unique does not enforce uniqueness on the property and I do not get a duplicate key error.\nI am using create() function to create a new document.\nI would like to avoid using find() for validation before issue a create() request.I am using the latest version 6.0.5, M0 Sandbox (General) on AWS.\nI’ve created a sample application to reproduce the issue: [email protected]:nirgluzman/Express-MongoDB-Recap.gitMongoose - unique option is not enforced1700×2200 185 KBThank you for your support !", "username": "Nir_Gluzman" }, { "code": "uniquecreate(){ \"error\": \"E11000 duplicate key error collection: test.blogs index: email_1 dup key: { email: \\\"[email protected]\\\" }\" }\n", "text": "Hello @Nir_Gluzman,Welcome to the MongoDB Community forums For some reason, unique does not enforce uniqueness on the property and I do not get a duplicate key error.\nI am using create() function to create a new document.I tested your Git repository code base, and it worked perfectly fine for me. I received the following error in Postman when I tried to insert a document with the same email ID:\nimage2496×1072 177 KB\nTo better understand the issue, could you share the workflow you followed to insert the document in MongoDB?This subject has been discussed heavily in the past.Could you please elaborate on what you meant by the above statement?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "index: trueindex:truecreateIndexunique: truecreateIndex", "text": "Hi Kushagra, thank you for helping.Just a few hours ago, I updated the GitHub repo and added index: true to email attribute in the Schema.\nThis fix seems to solve the issue.\n\nBlogSchema1069×725 45.6 KB\nI found out that index:true enforces Mongoose to call createIndex for each defined index in the Schema when the collection is initially created - to be confirmed.\nWithout this option (having only unique: true), Mongoose does not issue createIndex.https://mongoosejs.com/docs/faq.html#unique-doesnt-work\nhttps://mongoosejs.com/docs/guide.html#indexesI am not sure that this is the best solution.", "username": "Nir_Gluzman" }, { "code": "", "text": "Hello @Nir_Gluzman,As mentioned in the Moongoose documentation, it is not an index management solution. Additionally, if you run this application against a populated database with duplicate data, it will not enforce a duplicate key constraint on the server-side.I am not sure that this is the best solution.I believe those are mongoose’s design decision. 
If you feel that this is not optimal for your use case, please open a Github issue on the mongoose repository.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "npm start", "text": "Hi Kushagra,Here are the steps to reproduce the issue.Start Node.js application with npm start and establish connection to DB.\nDrop database on MongoDB console\nSend multiple POST with same email - no error received.My question - what should I do on the Node.js side when I drop the database.Many thanks\nNir", "username": "Nir_Gluzman" }, { "code": "npm start", "text": "Hey @Nir_Gluzman,Start the Node.js application with npm start and establish a connection to DB.\nDrop database on MongoDB console\nSend multiple POST with the same email - no error received.\nMy question is - what should I do on the Node.js side when I drop the database?Could you please clarify whether dropping the database is part of the application workflow? If it is, you need to ensure that the collection is recreated with all the necessary indexes and constraints.Dropping the database or collection will also remove any indexes, including the one that enforces the unique constraint. Therefore, if you drop the database after starting the node application server, please restart it again. Otherwise, ensure that the database or collection does not already exist. I believe Mongoose will recreate it with the specified options (i.e. including the unique constraints).Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Just Drop the collection.Most likely you updated the schema for “unique” after creating some records. So “unique” is not working as per expectation. I fetched a similar issue and resolved it that way.", "username": "Musadul_Islam" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Express+Mongoose - unique option not working
2023-05-07T23:29:12.662Z
Express+Mongoose - unique option not working
2,129
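For the Mongoose unique-index thread above: a hedged startup sketch that makes the server-side unique index exist before the app accepts writes, so dropping the database between restarts cannot leave the constraint missing. `BlogModel`, `app`, and the connection-string variable are assumed names from the application, not from the thread:

```javascript
const mongoose = require("mongoose");

async function start() {
  await mongoose.connect(process.env.MONGO_URI);

  // Build whatever indexes the schema declares (including `unique: true`)
  // on the server before serving traffic.
  await BlogModel.syncIndexes(); // or BlogModel.createIndexes()

  app.listen(3000);
}

start().catch(console.error);
```

Remember that the unique index only rejects new duplicates; documents that were already duplicated before the index existed have to be cleaned up first.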
null
[]
[ { "code": "for (const auto& shardEntry : allShards) {\n auto swShard = shardRegistry->getShard(opCtx, shardEntry.getName());\n if (!swShard.isOK()) {\n return swShard.getStatus();\n }\n\n const auto& shard = swShard.getValue();\n\n auto swDropResult = shard->runCommandWithFixedRetryAttempts(\n opCtx,\n ReadPreferenceSetting{ReadPreference::PrimaryOnly},\n nss.db().toString(),\n dropCommandBSON,\n Shard::RetryPolicy::kIdempotent);\n\n if (!swDropResult.isOK()) {\n return swDropResult.getStatus().withContext(\n str::stream() << \"Error dropping collection on shard \" << shardEntry.getName());\n }\n\n auto& dropResult = swDropResult.getValue();\n\n auto dropStatus = std::move(dropResult.commandStatus);\n auto wcStatus = std::move(dropResult.writeConcernStatus);\n if (!dropStatus.isOK() || !wcStatus.isOK()) {\n if (dropStatus.code() == ErrorCodes::NamespaceNotFound && wcStatus.isOK()) {\n // Generally getting NamespaceNotFound is okay to ignore as it simply means that\n // the collection has already been dropped or doesn't exist on this shard.\n // If, however, we get NamespaceNotFound but also have a write concern error then we\n // can't confirm whether the fact that the namespace doesn't exist is actually\n // committed. Thus we must still fail on NamespaceNotFound if there is also a write\n // concern error. This can happen if we call drop, it succeeds but with a write\n // concern error, then we retry the drop.\n continue;\n }\n\n errors.emplace(shardEntry.getHost(), std::move(dropResult.response));\n }\n\n", "text": "MongoDb Version :4.0.3 CommunityI have a mongodb cluster with dozens shard, and the data size in each shard reach TB level, the “db.collection.drop” takes me hours to complete。I found the drop command is a serial operation. Is there any possibility that this could be changed to parallel ?", "username": "zhangruian1997" }, { "code": "", "text": "Hi @zhangruian1997 and welcome to the MongoDB Community forum!!MongoDb Version :4.0.3 CommunityThe MongoDB version you’ve mentioned is currently 4 yrs old and has reached end of life in April 2022. I would recommend you test & upgrade the sharded deployment to the latest versions for bug fixes and new features.I have a mongodb cluster with dozens shard, and the data size in each shard reach TB level, the “db.collection.drop” takes me hours to complete。As per the server ticket, there have been a few bug fixes which have been made in the newer version.If however, after the upgrade, if you are still facing issues, could you help me with some details about the deployment.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Dear @Edward_Culllen,I have flagged your post as SPAM. I have started to follow you to pickup your next attempt at spamming sooner.", "username": "steevej" }, { "code": "", "text": "Dear @Edward_Culllen,it looks like you have not read my reply to one of your early post.I have started to follow you to pickup your next attempt at spamming sooner.Congratulation for integrating your SPAM into a ChatGPT generated answer. Your post is not flagged as SPAM because I want other SPAM haters to know which iOS app developers in Atlanta to boycott.", "username": "steevej" } ]
Why drop a collection command is not a parallel operation?
2023-02-16T09:02:24.738Z
Why drop a collection command is not a parallel operation?
1,329
null
[ "python", "thailand-mug" ]
[ { "code": "", "text": "Hello everyone, My name is James from Thailand. I’m THMUG leader.\nI fell in love with low-level stuff of computers such as memory, compiler, and OS.\nBefore joining THMUG, I spent time with MongoDB on several projects.\nOne of my favorite languages is Python. Now, I’m learning Python’s driver and aiming to contribution to it.Feel free to connect and collaborate more in this community.", "username": "Kanin_Kearpimy" }, { "code": "", "text": "Welcome to the MongoDB Community, @Kanin_Kearpimy! We are thrilled to have someone with your experience and passion leading the community. We look forward to all the exciting things you have planned.", "username": "Harshit" } ]
Greeting from Thailand! James is nice to see you
2023-10-09T15:49:55.206Z
Greeting from Thailand! James is nice to see you
271
null
[ "aggregation", "spark-connector" ]
[ { "code": " {\n \"$match\": {\n \"_id\": {\n \"$lt\": \"(ObjectId of the previous hour)\"\n }\n }\n }, \n", "text": "I have collections with around 11M to 70Million documents that get read from secondaries, the aggregation pipeline’s $match contains the current date and an “hour” field which uses a compound index on all date fields in that $match to retrieve the previous hour’s set of data, but for some reason when looking at how it comes through on mongo - there is a predefined $match before anything else:My aggregation pipeline runs after this $match - therefore creating slow, long running queries due to it reading all documents prior to the previous hour, which could be up to 70 million at times.is this due to the spark connectors default partitioning field being on _id? and if i were to change the partitioning would it have to be a unique field or could i use a field that is being used, and that is indexed such as the date field being 202310 for example?", "username": "Gareth_Furnell" }, { "code": "aggregation.pipeline{\"$match\": {\"closed\": false}}closed:false", "text": "Also, with regards to the documentation:\nin the aggregation.pipeline setting, can someone explain the {\"$match\": {\"closed\": false}} meaning and if the closed:false is necessary in the config/code - or how the implementation works", "username": "Gareth_Furnell" } ]
MongoDB Spark connector partition field
2023-10-10T08:04:26.694Z
MongoDB Spark connector partition field
238
null
[]
[ { "code": "", "text": "exports = function(arg){\nvar collection = context.services.get(“Cluster0”).db(“Database”).collection(“alldata”);return collection.find({});\n};error:\nTypeError: Cannot access member ‘db’ of undefinedI keep getting this error. How can I fix this?", "username": "tkdgy_dl" }, { "code": "", "text": "I’ve also tried “mongo-atlas” too.", "username": "tkdgy_dl" }, { "code": "", "text": "Cannot access member ‘db’ of undefinedCheck this link", "username": "Ramachandra_Tummala" }, { "code": "mongodb-atlas", "text": "Hi @tkdgy_dl ,You should try mongodb-atlas instead.If that doesn’t work can you go into function UI and copy paste the URL in the browser hereThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@tkdgy_dl\nI find the answer your problem.\nIn Atlas UI > Triggers, you have to press “Link” button after choice your Link Data Source(s).The reason for the error in TypeError is not connected “Link Data Source(s)”, so occur ‘db’ of undefined.", "username": "_BE_Austin" }, { "code": "", "text": "yoo!, Thank You bro.", "username": "M4A1_N_A" } ]
> error: TypeError: Cannot access member 'db' of undefined
2021-11-23T21:57:33.943Z
&gt; error: TypeError: Cannot access member &lsquo;db&rsquo; of undefined
7,184
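For the trigger/function thread above: a minimal Atlas Function sketch with the pieces the answers point at: the argument to `context.services.get()` is the linked data source's name (often `mongodb-atlas`), and the cluster must actually be linked before the call can succeed.

```javascript
exports = async function () {
  // Use the exact name shown under Linked Data Sources, not the cluster name.
  const collection = context.services
    .get("mongodb-atlas")
    .db("Database")
    .collection("alldata");

  return collection.find({}).toArray();
};
```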
null
[ "cxx" ]
[ { "code": "", "text": "This jira issue concludes by suggesting not to use minPoolSize in mongocxx::uri object instantiation for a mongocxx::pool. When used irrespective, it throws a deprecation warning and along the line I get a SegFault. I’m not sure the SegFault is related but if it is, I think these should be documented.Steps to reproduce:\nMongoCXX 3.8\nInstantiate mongocxx::uri with url having query param minPoolSize.", "username": "Chukwujiobi_Canon" }, { "code": "", "text": "Hi @Chukwujiobi_CanonWelcome to the MongoDB community and thanks for reporting!\nI tried to mimic the steps but unable to reproduce the segfault.\nCould you please share the environment details, call stack and smallest reproducible code that I could give a try on my end?Thanks!", "username": "Rishabh_Bisht" } ]
SegFault when using minPoolSize URI param
2023-10-09T17:29:02.261Z
SegFault when using minPoolSize URI param
224
null
[ "node-js" ]
[ { "code": "import { ObjectId } from \"bson\"\n\nlet movies\nlet mflix\nconst DEFAULT_SORT = [[\"tomatoes.viewer.numReviews\", -1]]\n\nexport default class MoviesDAO {\n static async injectDB(conn) {\n if (movies) {\n return\n }\n try {\n mflix = await conn.db(process.env.MFLIX_NS)\n movies = await conn.db(process.env.MFLIX_NS).collection(\"movies\")\n this.movies = movies // this is only for testing\n } catch (e) {\n console.error(\n `Unable to establish a collection handle in moviesDAO: ${e}`,\n )\n }\n", "text": "I’ve opened an issue on the mongo remix example here.I've got a remix app with the same setup as in this repo. In each request I do t…his:\n\n```\n let db = await mongodb.db(\"sample_mflix\");\n let collection = await db.collection(\"movies\");\n...\n```\nAnd that's all setup in my db.ts file\n```\nif(process.env.NODE_ENV === \"production\") {\n mongodb = new MongoClient(connectionString);\n} else {\n if(!global.__db) {\n global.__db = new MongoClient(connectionString);\n }\n mongodb = global.__db;\n}\n```\nFunctionally it's all fine and working well, but I'm seeing a lot of connections on my production atlas nodes.\n\nYou can see in the screenshot the number of connections seems to drop when I deploy. The throughput is still tiny (<1/s/node).\n\n<img width=\"607\" alt=\"image\" src=\"https://github.com/mongodb-developer/remix/assets/26863411/cd8cbb2f-e1bd-42ee-8801-eccfb7eba242\">\n\nI'm finding it hard to confirm if I'm using the mongo driver correctly.\n\nShould I move these bits also into global singletons? Do I then have to worry about retries if the connections drop?\n\n```\n let db = await mongodb.db(\"sample_mflix\");\n let collection = await db.collection(\"movies\");\n```\n\nThe slowly increasing simultaneous connections really worry me as we scale up soon. Thanks a lot!After not hearing back on the issue, I contacted support who pointed me here.Is anyone able to confirm if I’m using the driver correctly? It’s a serious concern for me that the connections increase until a new deployment.I can’t find a definitive answer on whether I should be storing the client in memory, like I am, storing the “db” in memory, or even storing each “collection” in memory.In contrast to the remix example, the mongo university example stores the “collection” in memory.Any help would be greatly appreciated. Thanks.", "username": "Will_Smith1" }, { "code": "", "text": "Hey @Will_Smith1,Welcome to the MongoDB Community!Is anyone able to confirm if I’m using the driver correctly?You can refer to official MongoDB documentation as well as the MongoDB university course - Connecting to MongoDB in Node.js to learn the right approach to utilize the Nodejs driver within your application to connect to MongoDB Atlas.It’s a serious concern for me that the connections increase until a new deployment.May I ask what specific issues you are facing with the increased number of connections? 
Also, could you further clarify “connections increase until a new deployment”?However, here the drivers maintain a connection pool, i.e., monitoring threads, rather than a single connection for all the operations, so there can be multiple connections to the cluster from a client.I can’t find a definitive answer on whether I should be storing the client in memory, like I am, storing the “db” in memory, or even storing each “collection” in memory.In contrast to the remix example, the Mongo University example stores the “collection” in memory.It depends on your application’s use case and your desired approach for connection reuse. If your application frequently interacts with a particular database, it can be beneficial to store it in a variable and access it throughout the application. This concept extends to the client connection as well. By storing and reusing these connections, you can optimize your application’s performance and resource usage. Eventually, the decision should align with your application’s use case and requirements.For your reference, here is example code snippet of reusing the connection from the StackOverflow response, which might be helpful to you.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Too many connections using node driver with atlas
2023-09-05T18:59:02.314Z
Too many connections using node driver with atlas
380
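For the connection-count thread above: a sketch of the single-client pattern being discussed, assuming a Remix-style `db.server.js` module and environment variable names that may differ from the real app:

```javascript
import { MongoClient } from "mongodb";

let client;

if (process.env.NODE_ENV === "production") {
  client = new MongoClient(process.env.CONNECTION_STRING);
} else {
  // Survive module re-evaluation during dev hot reloads.
  if (!global.__mongoClient) {
    global.__mongoClient = new MongoClient(process.env.CONNECTION_STRING);
  }
  client = global.__mongoClient;
}

// db() and collection() are cheap, synchronous handle lookups on the shared
// client; they do not open extra connections, so calling them per request is fine.
export function getMoviesCollection() {
  return client.db("sample_mflix").collection("movies");
}
```

With one client per process, the remaining connections seen in Atlas are largely the driver's pool and monitoring connections per node, which is expected; a steadily growing count usually means new clients are being constructed somewhere on each request or deploy.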
null
[ "data-modeling", "time-series" ]
[ { "code": "bucketMaxSpanSecondsbucketRoundingSecondsdb.createCollection( \"weather24h\", { timeseries: { timeField: \"timestamp\", metaField: \"metadata\", bucketMaxSpanSeconds: 300, bucketRoundingSeconds: 300 } } )\nTimeseries 'bucketMaxSpanSeconds' is not configurable to a value other than the default of 3600 for the provided granularity", "text": "I am using MongoDB version 7.0.2 (upgraded from 6.0.3) which offers the feature of setting bucketMaxSpanSeconds and bucketRoundingSeconds according to the official document. But when I execute the following commands:I am getting the error:Timeseries 'bucketMaxSpanSeconds' is not configurable to a value other than the default of 3600 for the provided granularityCan it be due to the upgrade? Any help regarding this would be appreciated.", "username": "Yashasvi_Pant" }, { "code": "", "text": "Hi @Yashasvi_PantI am using MongoDB version 7.0.2 (upgraded from 6.0.3)You did not complete the upgrade correctly, follow the upgrade procedure in the release notes. Then the above command will complete correctly.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problem setting bucketMaxSpanSeconds and bucketRoundingSeconds in MongoDB timeseries collection
2023-10-09T18:16:20.638Z
Problem setting bucketMaxSpanSeconds and bucketRoundingSeconds in MongoDB timeseries collection
215
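For the bucketing-parameters thread above: since the root cause was an incomplete upgrade, a mongosh sketch of the check-and-finish sequence, run with sufficient privileges (the `confirm` flag is required on recent server versions):

```javascript
// See which feature compatibility version the 7.0 binaries are still running with:
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

// After validating the upgrade, complete it:
db.adminCommand({ setFeatureCompatibilityVersion: "7.0", confirm: true });

// The custom bucketing parameters are then accepted:
db.createCollection("weather24h", {
  timeseries: {
    timeField: "timestamp",
    metaField: "metadata",
    bucketMaxSpanSeconds: 300,
    bucketRoundingSeconds: 300
  }
});
```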
null
[ "replication", "atlas-cluster", "containers", "kafka-connector" ]
[ { "code": "ENV CONNECT_BOOTSTRAP_SERVERS=\"weavix.servicebus.windows.net:9093\"\nENV CONNECT_SECURITY_PROTOCOL=\"SASL_SSL\"\nENV CONNECT_SASL_MECHANISM=\"PLAIN\"\nENV CONNECT_SASL_JAAS_CONFIG=\"<my connection string>\";\"\n{\n \"name\": \"weavix-dev-master-mongodb-source\",\n \"config\": {\n \"tasks.max\": \"1\",\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"connection.uri\": \"mongodb+srv://<user>:<password>@<my db>.mezxs.mongodb.net/test?authSource=admin&replicaSet=atlas-8mkql5-shard-0&readPreference=primary&ssl=true\",\n \"database\": \"master\",\n \"topic.prefix\": \"weavix-dev\",\n \"collection\": \"accounts\",\n \"topic.separator\": \"-\",\n \"startup.mode\": \"copy_existing\"\n }\n}\n", "text": "I have setup a kafka connect docker image that is successfully configured to use Azure event hub as the broker. I was able to verify that connect was configured correctly when I saw the new event hubs created by my connect instance. In order for my kafka connect instance to talk to Azure, I had to set its configuration with these settings:My MongoDB cluster is hosted by Mongo.I installed mongodb/kafka-connect-mongodb:1.11.0 connector and sent this payload for a new connector:{The connection string I use is usable with MongoCompass.Unfortunately, I am getting in the logs is a repetitive set of messages:[2023-10-09 21:09:16,558] INFO [weavix-dev-master-mongodb-source|task-0] [Producer clientId=connector-producer-weavix-dev-master-mongodb-source-0] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient:937)\n[2023-10-09 21:09:16,558] INFO [weavix-dev-master-mongodb-source|task-0] [Producer clientId=connector-producer-weavix-dev-master-mongodb-source-0] Cancelled in-flight API_VERSIONS request with correlation id 4112 due to node -1 being disconnected (elapsed time since creation: 52ms, elapsed time since send: 52ms, request timeout: 30000ms) (org.apache.kafka.clients.NetworkClient:341)\n[2023-10-09 21:09:16,558] WARN [weavix-dev-master-mongodb-source|task-0] [Producer clientId=connector-producer-weavix-dev-master-mongodb-source-0] Bootstrap broker weavix.servicebus.windows.net:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1065)It looks like a networking issue with the connector but I don’t see any settings for the connector for setting up broker authentication. 
Anybody have any suggestions?", "username": "Guy_Swartwood" }, { "code": "", "text": "When I review the logs when I add the connector, I see this:2023-10-09 16:34:16 [2023-10-09 21:34:16,296] INFO [weavix-dev-master-mongodb-source|task-0] ProducerConfig values:\n2023-10-09 16:34:16 acks = -1\n2023-10-09 16:34:16 batch.size = 16384\n2023-10-09 16:34:16 bootstrap.servers = [weavix.servicebus.windows.net:9093]\n…\n2023-10-09 16:34:16 sasl.client.callback.handler.class = null\n2023-10-09 16:34:16 sasl.jaas.config = null\n2023-10-09 16:34:16 sasl.kerberos.kinit.cmd = /usr/bin/kinit\n2023-10-09 16:34:16 sasl.kerberos.min.time.before.relogin = 60000\n2023-10-09 16:34:16 sasl.kerberos.service.name = null\n2023-10-09 16:34:16 sasl.kerberos.ticket.renew.jitter = 0.05\n2023-10-09 16:34:16 sasl.kerberos.ticket.renew.window.factor = 0.8\n2023-10-09 16:34:16 sasl.login.callback.handler.class = null\n2023-10-09 16:34:16 sasl.login.class = null\n2023-10-09 16:34:16 sasl.login.connect.timeout.ms = null\n2023-10-09 16:34:16 sasl.login.read.timeout.ms = null\n2023-10-09 16:34:16 sasl.login.refresh.buffer.seconds = 300\n2023-10-09 16:34:16 sasl.login.refresh.min.period.seconds = 60\n2023-10-09 16:34:16 sasl.login.refresh.window.factor = 0.8\n2023-10-09 16:34:16 sasl.login.refresh.window.jitter = 0.05\n2023-10-09 16:34:16 sasl.login.retry.backoff.max.ms = 10000\n2023-10-09 16:34:16 sasl.login.retry.backoff.ms = 100\n2023-10-09 16:34:16 sasl.mechanism = GSSAPI\n2023-10-09 16:34:16 security.protocol = PLAINTEXT\n2023-10-09 16:34:16 security.providers = null\n2023-10-09 16:34:16 send.buffer.bytes = 131072These messages seems to indicate to me that the connector’s producer may not be configured correctly, alas I don’t see a way to set its configuration.", "username": "Guy_Swartwood" } ]
Unable to get MongoDB source connector to talk to azure event hub
2023-10-09T21:23:33.797Z
Unable to get MongoDB source connector to talk to azure event hub
278
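For the Event Hubs thread above: the CONNECT_SASL_* settings configure the worker's own connection, but the producer created for a source task reads the producer-prefixed worker settings, which is consistent with the `security.protocol = PLAINTEXT` and `sasl.jaas.config = null` lines in the ProducerConfig dump. A hedged sketch of the extra settings, assuming a Confluent-style image where `CONNECT_*` environment variables map onto the worker config (exact variable names depend on the image):

```dockerfile
ENV CONNECT_PRODUCER_SECURITY_PROTOCOL="SASL_SSL"
ENV CONNECT_PRODUCER_SASL_MECHANISM="PLAIN"
ENV CONNECT_PRODUCER_SASL_JAAS_CONFIG="<same Event Hubs JAAS config as above>"
# Sink connectors and internal clients may need the consumer/admin equivalents too:
ENV CONNECT_CONSUMER_SECURITY_PROTOCOL="SASL_SSL"
ENV CONNECT_CONSUMER_SASL_MECHANISM="PLAIN"
ENV CONNECT_CONSUMER_SASL_JAAS_CONFIG="<same Event Hubs JAAS config as above>"
```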
null
[]
[ { "code": "", "text": "I had to share my recent MongoDB exam experience because it was a rollercoaster ride with Examinty, and not the fun kind…So, picture this: It’s exam day, and I’m all pumped up to prove my MongoDB prowess. I log into Examinty as instructed, thinking it’s gonna be a breeze. Well, hold on to your hats, because here’s what went down:My “Super-Fast Turtle” Wi-Fi: According to Examinty, my Wi-Fi was slower than a snail on a lazy Sunday. But, surprise! My Wi-Fi was doing just fine. I binge-watched Netflix, Attended online meeting, and played videogames on it the night before without a hitch.Waiting Game: I waited and waited…and waited some more. For about 1.5 hours, I twiddled my thumbs while Examinty tried to connect me with the exam proctor. Finally, they hit me with a “Sorry, we can’t make this happen because you’re being “disconnected”. Contact MongoDB.” after making me waste all this time.ID Photo Fumble: Oh, and let’s not forget the epic fail with the photo ID upload feature. I tried again and again to upload my ID, but Examinty just wouldn’t have it. Had to MacGyver my way through it.So, why am I venting here? Well. I want us to chat about these issues because I can’t be the only one (and ik im not after reading some posts here) who’s been through this Examinty circus. MongoDB and Examinty, if you’re listening, we’ve got to fix this. Our certification process deserves better.I’ve reached out to MongoDB to get my exam rescheduled and to let them know about these Examinty hiccups. Fingers crossed they take it seriously and make things smoother for all of us.In the meantime, Feel free to drop your own Examinty dramas below.", "username": "Business_email_2" }, { "code": "", "text": "Hello @Business_email_2 We sincerely apologize for your experience with our proctoring service and will most definitely bring it to Examity’s attention. It is important to us that our users have a smooth and positive testing experience. We take these criticisms seriously and you can be assured that they will be addressed. Meanwhile, I will respond to the email you sent to [email protected]. Thank you.", "username": "Heather_Davis" }, { "code": "", "text": "Thanks for your quick response and attention to this matter.", "username": "Business_email_2" } ]
Examinty is just not it
2023-10-09T02:57:08.897Z
Examinty is just not it
283
null
[ "queries", "python" ]
[ { "code": "", "text": "Is there any parser in pymongo that supports enables the user directly query with odata query.or Is there any python library to do so?", "username": "Ilamparithi_Karthikeyan" }, { "code": "", "text": "PyMongo does not support an OData interface. There may be a third party package that adds support for it but I have not encountered any yet.", "username": "Shane" }, { "code": "", "text": "If you’d like to query using HTTPS+JSON you can enable the Atlas data API:", "username": "Shane" } ]
Odata query to mongodb qury
2023-10-09T03:59:05.614Z
Odata query to mongodb qury
230
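For the OData thread above: if the HTTPS route is taken, the OData expression ultimately has to be translated into a MongoDB filter and sent to a Data API endpoint. A hedged sketch of the `find` action, shown in JavaScript for illustration (any HTTP client works; the base URL, App ID, and key come from the Data API settings and can vary by deployment region):

```javascript
// Inside an async function; <APP_ID> and the data source/database/collection
// names are placeholders, and the filter is whatever the OData expression maps to.
const res = await fetch(
  "https://data.mongodb-api.com/app/<APP_ID>/endpoint/data/v1/action/find",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.DATA_API_KEY
    },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "mydb",
      collection: "mycoll",
      filter: { status: "active" },
      limit: 100
    })
  }
);
const { documents } = await res.json();
```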
null
[ "dot-net", "app-services-user-auth" ]
[ { "code": "failed to lookup key for kid=**********", "text": "Hi there, I’m trying to get Sign in with Apple working on a .NET MAUI app, I have followed the instructions in the documentation about creating the JWT etc. I have been unable to get it to work at all, I am just constantly receiving this error:failed to lookup key for kid=**********The key is set up as a secret in my Realm settings. Honestly I don’t know what I’m doing wrong here, any advice would be appreciated!", "username": "varyamereon" }, { "code": "", "text": "Have you followed the docs to configure apple auth? And if so, how are you passing the token to the Realm SDK?", "username": "nirinchev" }, { "code": "var credentials = Credentials.Apple(**JWT**);\nvar user = await App.RealmApp.LogInAsync(credentials);\n", "text": "Hi @nirinchev, thanks for your reply. Yes I’ve followed the docs, and after executing the Ruby script I end up with a JWT. This is what I have stored as the client secret in my authentication configuration on the Realm site. It is also what I am passing into the method:The response from this is the AuthError.", "username": "varyamereon" }, { "code": "", "text": "Hm, sorry, I should have been more precise - I wanted to see the code for obtaining the JWT, notably whether you’re encoding it to utf-8. It could also be a good idea to just file a support ticket about this since the team will be able to inspect the app configuration and the server-side logs.", "username": "nirinchev" }, { "code": "", "text": "Hi @nirinchev, I don’t know exactly what you mean by ‘code’, I am following the instructions here including using the Ruby script. It is returning a JWT which I am using in my realm app as in the lines of code above. Not sure exactly what you mean about encoding it to UTF-8 either I’m afraid.", "username": "varyamereon" } ]
Sign in with Apple AuthError
2023-10-05T17:43:06.941Z
Sign in with Apple AuthError
316
null
[ "storage" ]
[ { "code": "", "text": "I want to persist the following parameters and modify them into mongod.conf. Currently, the configuration is not recognized.\ndb.adminCommand({setParameter: 1,wiredTigerEngineRuntimeConfig:“eviction=(threads_min=6,threads_max=12)”});vim mongod.conf\nsharding:\nclusterRole: shardsvr\nstorage:\ndbPath: /mongodb_sh1_27003/db\nwiredTigerEngineRuntimeConfig:“eviction=(threads_min=6,threads_max=12)”\nwiredTiger:\nengineConfig:\ncacheSizeGB: 1Error when starting mongod\n$ mongod -f mongod.conf\nUnrecognized option: storage.wiredTigerEngineRuntimeConfig\ntry ‘mongod --help’ for more information", "username": "xinbo_qiu" }, { "code": "", "text": "Hi @xinbo_qiu,You can’t set this parameter in your configuration file in this way:wiredTigerEngineRuntimeConfig:“eviction=(threads_min=6,threads_max=12)”Also, looking quickly, there are no ways to set it persistently.Regards", "username": "Fabio_Ramohitaj" }, { "code": "setParametersetParameter:\n wiredTigerEngineRuntimeConfig: eviction=(threads_min=6,threads_max=12)\n", "text": "User the setParameter option in the configuration file:", "username": "chris" }, { "code": "", "text": "You’re right as always!! @chris Best regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to set wiredTigerEngineRuntimeConfig eviction=(threads_min=6) to mongod.conf?
2023-10-09T02:38:50.927Z
How to set wiredTigerEngineRuntimeConfig eviction=(threads_min=6) to mongod.conf?
227
null
[]
[ { "code": "", "text": "My current Database size is of around ~4 GB and it is increasing day by day. I have a requirement to integrate it with BI service to give clear insights of the data to the stakeholders.Is it possible to integrate MongoDB directly with AWS QuickSight? There are options of importing CSVs, JSON etc, But as data growth is high it doesn’t look feasible and I am looking for real time solutions.Note: I am running MongoDB on AWS ec2 instances, not using Atlas currently.What I would like to know is:Is there a best way to connect to MongoDB cluster with AWS QuickSight with Realtime outputs? if not, What are the other best possible solutions?", "username": "viraj_thakrar" }, { "code": "", "text": "Hey Viraj, did you find the answer to your question?", "username": "Naser_Zandi" }, { "code": "", "text": "Hi @Naser_Zandi,Welcome to the community.Yes. I tried couple of options and I deployed it successfully.There are several third party connectors available in the market, You can use that. The way, I did was, I wrote a script to get necessary data from Mongo and storing it on RDS. You can try storing files on S3 too. And finally, you can plug that in as the data source in the Quicksight. You can also use AWS DMS to load the data for Quicksight dashboard.I hope this is helpful.Cheers!\nViraj", "username": "viraj_thakrar" }, { "code": "", "text": "Hi,\nI’m interested in your experience thus far. I have a similar situation (using an SaaS product in AWS GovCloud set on a mongo database). I’m interested to know how effective the solution in report generation and how the issue with non indexed, not relational data is overcome.", "username": "Robert_Staurowsky" }, { "code": "", "text": "Hello, I’m also interested in your experience on this topic, would you like please reach send me a private message on skype : b.hamichi\nThanks in advance\nBR", "username": "Boualem_HAMICHI" }, { "code": "", "text": "In this post, you will learn how to use Amazon Athena Federated Query to connect a MongoDB database to Amazon QuickSight in order to build dashboards and visualizations. Amazon Athena is a serverless interactive query service, based on Presto, that...", "username": "Nithin_Alex" } ]
Real Time: MongoDB data to AWS QuickSight
2020-04-03T12:52:52.593Z
Real Time: MongoDB data to AWS QuickSight
7,775
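One hedged sketch of the "export to S3, then point QuickSight/Athena at it" approach mentioned above. It assumes the AWS SDK for JavaScript v2 and the Node MongoDB driver; the URI, bucket and namespace names are placeholders.

```javascript
// Hypothetical export job: dump a collection to S3 as newline-delimited JSON
// so QuickSight / Athena can ingest it.
const { MongoClient } = require("mongodb");
const AWS = require("aws-sdk"); // assumption: AWS SDK v2 is installed

async function exportToS3() {
  const client = new MongoClient(process.env.MONGODB_URI);
  await client.connect();
  try {
    const docs = await client
      .db("reporting")                      // placeholder database
      .collection("orders")                 // placeholder collection
      .find({}, { projection: { _id: 0 } })
      .toArray();

    const body = docs.map((d) => JSON.stringify(d)).join("\n");

    await new AWS.S3()
      .putObject({
        Bucket: "my-analytics-bucket",      // placeholder bucket
        Key: `exports/orders-${Date.now()}.json`,
        Body: body,
        ContentType: "application/json",
      })
      .promise();
  } finally {
    await client.close();
  }
}
```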
null
[ "queries", "indexes" ]
[ { "code": "projectIdprojectId", "text": "Hi!I’ve been dealing with a problem that I can’t seem to get my head around. We’re running a mongodb database with just one collection that has ~5K documents. This collection, when exported to JSON is around 200MB in size.I’m running a query aimed towards this collection filtering by a text projectId field, this query should return all ~5K documents as we only have documents from one project at the moment.The issue is this query takes way too long and the time it takes can vary a lot from time to time. Originally this database was hosted in Atlas, but I’ve cloned it both in AWS and my dev environment with similar performance. Some times the load times are as slow as 20 minutes.I’ve also tried to create a text index for the projectId field with no luck whatsoever.Does anyone have any idea why this could be happening? We had this data inside an SQL database before and it seems that one is able to query the projects in a matter of milliseconds, so I’m sure we’re doing something wrong here, but I can’t seem to find what.Thanks in advance and sorry if this is something trivial, I’ve just started using mongodb and I’m fairly new to most of it’s concepts.", "username": "Oscar_Arranz" }, { "code": "executionStats", "text": "Hello @Oscar_Arranz ,Welcome to The MongoDB Community Forums! Can you please share more details for me to understand your use case better?Regards,\nTarun", "username": "Tarun_Gaur" } ]
Queries perform poorly when retrieving ~5K documents
2023-10-09T07:54:51.501Z
Queries perform poorly when retrieving ~5K documents
218
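While waiting for the details Tarun asked for, a common first check for this symptom is to confirm the equality filter is served by a plain (non-text) index. A rough mongosh sketch, with the field name taken from the thread and the collection name and value as placeholders:

```javascript
// Equality matches on projectId want a regular index, not a text index.
db.items.createIndex({ projectId: 1 }); // "items" is a placeholder collection name

// Then confirm the winning plan shows IXSCAN rather than COLLSCAN, and compare
// totalDocsExamined / totalKeysExamined against the number of documents returned.
db.items.find({ projectId: "some-project-id" }).explain("executionStats");
```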
null
[]
[ { "code": " \"_id\": \"63a2b0f87a810608e6ca6d95\",\n \"Templates\": [\n {\n \"HardwareVer\": \"minthein@Joseph\",\n \"SoftwareVer\": \"11111.0\",\n \"RevisionNum\": \"mtw\",\n \"EffectiveDate\": \"2022-12-26T08:29:58.470Z\",\n \"WorkTasks\": [\n \"63a70c631dbb68ffa7473be1\",\n \"63aa9a084c60349138c4d5c3\"\n ],\n \"HasTraveller\": true,\n \"_id\": \"63a70a691dbb68ffa7473ba8\"\n },\n {\n \"HardwareVer\": \"josephwin\",\n \"SoftwareVer\": \"11111.0\",\n \"RevisionNum\": \"win\",\n \"EffectiveDate\": \"2022-12-26T07:29:58.470Z\",\n \"WorkTasks\": [],\n \"HasTraveller\": false,\n \"_id\": \"63a70b741dbb68ffa7473bbc\"\n },\n {\n \"HardwareVer\": \"A333\",\n \"SoftwareVer\": \"333.0\",\n \"RevisionNum\": \"221227135521\",\n \"EffectiveDate\": \"2023-01-27T05:55:09.148Z\",\n \"WorkTasks\": [],\n \"HasTraveller\": false,\n \"_id\": \"63aa88c96e0a2601d545e52c\"\n }\n ],\n}\n\nI wish to get result at the follow\n\n{\n \"_id\": \"63a2b0f87a810608e6ca6d95\",\n \"Templates\": [\n {\n \"HardwareVer\": \"minthein@Joseph\",\n \"SoftwareVer\": \"11111.0\",\n \"RevisionNum\": \"mtw\",\n \"EffectiveDate\": \"2022-12-26T08:29:58.470Z\",\n \"WorkTasks\": [\n \"63a70c631dbb68ffa7473be1\",\n \"63aa9a084c60349138c4d5c3\"\n ],\n \"HasTraveller\": true,\n \"_id\": \"63a70a691dbb68ffa7473ba8\"\n },\n ],\n} ```\n\nPlease help me sir.I had start to learning MongoDB and Nodjs.", "text": "I want to retrieve data by date and time inside of array from EffectiveDate element.\nI saving EffectiveDate by Date.now on Mongodb not with ISO date format.\nI want to get result EffectiveDate is not greater than current date and then get the lasted date from list result. My collection data is at the following sample.", "username": "Min_Thein_Win" }, { "code": "", "text": "In your thread Join data collection MongoDB inside an array and two element result add array element, I wrotePlease read Formatting code and log snippets in posts and update your sample documents so that we can cut-n-paste into our system.In another thread of yours, Add value to new AddFields Array, I wrotePlease read Formatting code and log snippets in posts and update your sample documents so that we can cut-n-paste into our system.And finally, in Aggregate - return the whole array if query match one element in the array, I repliedPlease read Formatting code and log snippets in posts and update your sample documents so that we can cut-n-paste into our system.This is the third time I write you the above. Help us help you by providing your document in a usable form. Editing documents that are badly formatted is time consuming. It is easier to help others that have well formatted documents so we answer them faster. It is really really easy for you to supply documents that are easy to cut-n-paste into our systems.Good Luck!", "username": "steevej" }, { "code": "", "text": "I had edit code line format please help me your idea sir.", "username": "Min_Thein_Win" }, { "code": " MainAssyTemplate.findOne(\n {\n _id: req.body.ProductId,\n // Templates: {\n // $elemMatch: { EffectiveDate: { $lt: isodate(\"2023-01-27\") } },\n // },\n \"Templates.EffectiveDate\": { $lt: new Date(\"2023-01-27\") },\n },\n (err, mainAssyTemplate) => {\n if (err) {\n return res.status(500).json({\n message:\n \"Some error occured while retrieving Main Assbly Template [\" +\n err.reason +\n \"]\",\n });\n }\n res.send(mainAssyTemplate);\n }\n );", "text": "My coding is not working. What is wrong the query ? 
Please help and thanks", "username": "Min_Thein_Win" }, { "code": "", "text": "The code looks okay but does not match your sample data. It looks like the field EffectiveDate has a string type rather than a date type.", "username": "steevej" }, { "code": "const d = new Date(myDate)d.toISOString()", "text": "I agree. It seems an ISO date.In JS once you do const d = new Date(myDate) you can call the method d.toISOString().A Computer Science portal for geeks. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions.And then the query may do what is expected.", "username": "santimir" }, { "code": " _id: { type: String, require: true, unique: true },\n Templates: [\n {\n HardwareVer: { type: String, require: true },\n SoftwareVer: { type: String, require: true },\n RevisionNum: { type: String, require: true },\n EffectiveDate: { type: Date, default: Date.now() },\n WorkTasks: { type: Array, default: [], require: true },\n HasTraveller: { type: Boolean, require: true, default: false },\n },\n ],\n});", "text": "I had create schema for EffectiveDate using type:Date();", "username": "Min_Thein_Win" }, { "code": "", "text": "Sorry to jump in but what is the error you talk about?If it is just not finding a doc, are you sure there is a doc with that Id ? Otherwise it will indeed be empty.Maybe you can link an example in https://mongoplayground.net that reproduces the error…", "username": "santimir" }, { "code": "_id : req.body.ProductId\n", "text": "I missed that part of the query:it is good that you did.Please print the value of req.body.ProductId and share the document that you expect to see.Do not store your _id as String, a real ObjectId takes less space and is faster.It looks like you are using mongoose. You might have a schema with type:Date, but nothing stop anyone (outside your mongoose code), to create document with string dates. If your dates were not string there would be not double quotes around the value. Like your true and false of HasTraveller.Please provide followups in your other threads. That is how we keep the forum useful.Open Compass and share a screenshot of the result of Schema Analysis.", "username": "steevej" }, { "code": "", "text": "Dear @sarah_white,Please read Formatting code and log snippets in posts and make sure you format your next code accordingly.Yes, I have mentioned your next code because I have flagged your reply as spam. I am following you so I see all your posts. But good work in this case for putting the spam only after editing the post once. But I still got it.@+", "username": "steevej" } ]
Retrieve data by date and time inside of array element
2022-12-27T13:50:40.236Z
Retrieve data by date and time inside of array element
3,828
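A hedged sketch of the "latest EffectiveDate that is not in the future" selection discussed above. It only behaves correctly if EffectiveDate is stored as a real BSON Date (the conclusion of the thread), and $sortArray needs MongoDB 5.2+; the model name follows the thread, the id comes from the request.

```javascript
MainAssyTemplate.aggregate([
  { $match: { _id: req.body.ProductId } },
  {
    $project: {
      latestTemplate: {
        $last: {
          $sortArray: {
            // keep only templates whose EffectiveDate is not in the future...
            input: {
              $filter: {
                input: "$Templates",
                as: "t",
                cond: { $lte: ["$$t.EffectiveDate", "$$NOW"] },
              },
            },
            // ...then take the most recent of those
            sortBy: { EffectiveDate: 1 },
          },
        },
      },
    },
  },
]);
```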
null
[]
[ { "code": "{\nid: 22893472347102,\napple: 3,\ncount: 5\n}\n", "text": "Before I update the document, I want to first retrieve the value of the existing document, such as:\ndocument A:I want to update {apple: 5, count:3} to document A, if apple in document A == 3, else insert {apple: 2, count:5}.\nIs there any way to do it faster without doing query + insert(two action)?", "username": "WONG_TUNG_TUNG" }, { "code": "", "text": "Your request is not clear to me.You want to update but you writeinsert {apple: 2, count:5}.Do you want to update or insert?It is best to share sample income documents and sample output documents.", "username": "steevej" }, { "code": "", "text": "I want to update, but base on the existing document content to update, I don’t want to use .find then use .updateOne, is there any faster solution that I can base on existing document to update?", "username": "WONG_TUNG_TUNG" }, { "code": "", "text": "I don’t want to use .find then use .updateOneDoing a find() then an updateOne() is definitively the wrong way to do it.is there any faster solution that I can base on existing document to update?Yes there is. But with the scarce details of what you exactly want to do, the only thing we can do is to send you to the documentation on update operators:A much better answer could be supplied with:sample income documents and sample output documents", "username": "steevej" }, { "code": "{\nid: 22893472347102,\ntype: \"apple\",\ncount: 10,\nneedReduce: 2,\n}\nModel.updateOne({type: \"apple\"}, {count: 20})\nneedReduce{\nid: 22893472347102,\ntype: \"apple\",\ncount: 18,\nneedReduce: 2,\n}\n.find({type: apple}).updateOne()", "text": "Existing document A:Sample update:Sample document A after updated(count should reduce the existing needReduce value):How can I do it without .find({type: apple}), then .updateOne()?", "username": "WONG_TUNG_TUNG" }, { "code": "update = { count : 20 } ;\nModel.updateOne( \n { \"type\" : \"apple\" } ,\n [ { \"$set\" : {\n \"count\" : { \"$subtract\" : [ update.count , \"$needReduce\" ] }\n } } ]\n) ;\n", "text": "From the sample documents and update desired you need to use updateOne() using the update with aggregation:The following page provides examples of updates with aggregation pipelines.A $set, that uses $subtract such as the untested:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to apply the update after reading the existing document
2023-10-05T10:54:30.117Z
How to apply the update after reading the existing document
227
null
[ "production", "c-driver" ]
[ { "code": "[ { $code: ... } ][{$dbPointer: ...}]strerror_l", "text": "Announcing 1.24.3 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.Fixes:Fixes:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "This post is missing the driver releases tags so it does not show up in the list above.", "username": "Patrick_Callahan" }, { "code": "", "text": "Thank you. The post Category has been updated to include “Driver Releases”.", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C Driver 1.24.3 Released
2023-08-07T20:22:12.336Z
MongoDB C Driver 1.24.3 Released
799
null
[]
[ { "code": "\n{\n db: 'test1',\n collections: 19,\n views: 1,\n objects: Long(\"195348\"),\n avgObjSize: 2724.9829739746506,\n dataSize: Long(\"532319974\"),\n storageSize: Long(\"55431168\"),\n totalFreeStorageSize: Long(\"0\"),\n numExtents: Long(\"0\"),\n indexes: 25,\n indexSize: Long(\"962560\"),\n indexFreeStorageSize: Long(\"0\"),\n fileSize: Long(\"0\"),\n nsSizeMB: 0,\n ok: 1\n}\n{\n db: 'test2',\n collections: 1,\n views: 0,\n objects: Long(\"0\"),\n avgObjSize: 0,\n dataSize: Long(\"0\"),\n storageSize: Long(\"4096\"),\n totalFreeStorageSize: Long(\"0\"),\n numExtents: Long(\"0\"),\n indexes: 1,\n indexSize: Long(\"4096\"),\n indexFreeStorageSize: Long(\"0\"),\n fileSize: Long(\"0\"),\n nsSizeMB: 0,\n ok: 1\n}\n{\n db: 'test3',\n collections: 11,\n views: 0,\n objects: Long(\"4113\"),\n avgObjSize: 647.5229759299781,\n dataSize: Long(\"2663262\"),\n storageSize: Long(\"2387968\"),\n totalFreeStorageSize: Long(\"0\"),\n numExtents: Long(\"0\"),\n indexes: 22,\n indexSize: Long(\"1208320\"),\n indexFreeStorageSize: Long(\"0\"),\n fileSize: Long(\"0\"),\n nsSizeMB: 0,\n ok: 1\n}\n{\n db: 'test4',\n collections: 2,\n views: 0,\n objects: Long(\"13\"),\n avgObjSize: 6024.384615384615,\n dataSize: Long(\"78317\"),\n storageSize: Long(\"53248\"),\n totalFreeStorageSize: Long(\"0\"),\n numExtents: Long(\"0\"),\n indexes: 2,\n indexSize: Long(\"40960\"),\n indexFreeStorageSize: Long(\"0\"),\n fileSize: Long(\"0\"),\n nsSizeMB: 0,\n ok: 1\n}\n", "text": "Hi there,I am using a free cluster from mongodb altas. I have been using that for an year. Today I received an error when inserting document in the DB. But, I am able to retrieve the records from the DB. Please find the error message below.“MongoServerError: you are over your space quota, using 512 MB of 512 MB”,I deleted some of the collections and still getting the same error. Tried to run db.stats() for all the available DBs and the total size is 56 MB. Attached the response of the db.stats() queries below.", "username": "Shelif_M_A" }, { "code": "dataSizeindexSizedb.stats()dataSizeindexSize", "text": "Hi @Shelif_M_A,Welcome to the community “MongoServerError: you are over your space quota, using 512 MB of 512 MB”,The storage quotas for free and shared clusters are based on summing the dataSize and indexSize for all databases. You can refer to the documentation to learn - How does Atlas calculate storage limits for shared clusters (M0, M2, M5).Attached the response of the db.stats() queries below.Based on the shared db.stats() output, it appears that the combined total of dataSize and indexSize exceeds 512 MB. This is the reason you are encountering the error message.Please feel free to reach out in case of any further questions.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Unable to insert document - getting 8000, you are over your space quota
2023-10-05T11:27:01.528Z
Unable to insert document - getting 8000, you are over your space quota
269
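A small mongosh sketch of the accounting Kushagra describes (sum of dataSize + indexSize across every database). Treat it as an approximation; Atlas's own metering is authoritative.

```javascript
// Rough check of how much of the 512 MB free-tier quota is in use.
// Number(...) guards against Long values like the ones shown in the db.stats() output above.
let totalBytes = 0;
db.adminCommand({ listDatabases: 1 }).databases.forEach((d) => {
  const s = db.getSiblingDB(d.name).stats();
  totalBytes += Number(s.dataSize) + Number(s.indexSize);
});
print((totalBytes / 1024 / 1024).toFixed(1) + " MB of dataSize + indexSize");
```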
null
[ "aggregation" ]
[ { "code": "False", "text": "Hi all,\nI am new to mongodb using atlas search for my query. I want to return document based on a condition where if the field exists it should be false if the field does not exists it should return the document based on filters. Basically return the documents where field is False or where the field doesn’t exist at all. If using find we can use like this\nq[‘active’] = {‘$ne’: True}\nHow can we achieve same using atlas search", "username": "Shradha_Nambiar" }, { "code": "Falsecompoundexistsequals[\n {\n $search: {\n index: \"default\",\n \"compound\": {\n \"should\": [\n {\n \"equals\": {\n \"path\": \"field_name\",\n \"value\": false\n }\n },\n {\n \"compound\": {\n \"mustNot\": {\n \"exists\": {\n \"path\": \"field_name\"\n }\n }\n }\n }\n ]\n }\n }\n }\n])\n", "text": "Hello @Shradha_Nambiar,I am new to mongodb using atlas search for my query.\nBasically, return the documents where field is False or where the field doesn’t exist at all.In Atlas Search, you can use the compound operator along with exists and the equals operator to filter documents based on both the field’s non-existence and its value.With the help of the above operator, you can form a query that can return the resultant documents as per your conditions. Here is the example query for your reference:However, May I ask about your use case or any specific requirements you have? This will help us provide more tailored advice on whether Atlas Search or traditional queries are the better fit. If your use case doesn’t involve full-text search or ranking requirements, sticking with regular MongoDB queries might provide a more straightforward solution.Looking forward to your response.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Atlas Search if field does not exists
2023-10-04T08:02:25.417Z
Atlas Search if field does not exists
335
null
[ "java", "spring-data-odm" ]
[ { "code": "", "text": "I connected my spring boot app with mongo Atlas, everything works perfectly with postman. All the operations works. However, I can not see the DB created (nor the data) in cluster when I go on mongo Atlas. Has anyone got this issue?", "username": "Edwin_Kenfack" }, { "code": "", "text": "Hi @Edwin_Kenfack and welcome to MongoDB community forums!!We sincerely apologise for the inconvenience you’re experiencing.To assist you effectively, we kindly request more information to gain better insights into resolving the issue. However, I can not see the DB created (nor the data) in cluster when I go on mongo Atlas. If I understand correctly, you are unable to see the database or data when you are on MongoDB Atlas Dashboard (cloud.mongodb.com). If yes, could you share the screenshot of what you are seeing there?Regards\nAasawari", "username": "Aasawari" } ]
Spring boot connection with Mongo Atlas
2023-10-08T04:06:59.093Z
Spring boot connection with Mongo Atlas
282
null
[ "node-js", "serverless" ]
[ { "code": "", "text": "Hey,So I have a serverless instance and I recently got my billing. Now I have about 25mil RPU and I’m currently optimizing my queries.I was wondering, is it possible to see which of my API routes uses the most RPU? Right now, my best guess is because of my cron jobs that runs every hour and after doing some check using .explain() it really does scans my whole collections so I’m currently optimizing themFor example:\n63552 RPU - https://domain.com/userdata\n2324 RPU - https://domain.com/api-route-oneI also want to see how much RPU my other queries uses without really using .explain on each one of them since there’s a decent amount of themThank youedit: added some example", "username": "Peeps" }, { "code": "", "text": "Hi PeepsThank you for the question. There are currently three ways to estimate the documents scanned:We do not have a bulk explain() command. Please upvote this idea here so that we can prioritize this feature in a future release.Please let us know if you have any additional questions.Best,\nAnurag", "username": "Anurag_Kadasne" } ]
MongoDB analytics
2023-10-06T22:18:31.624Z
MongoDB analytics
280
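For the per-query estimate in Anurag's first bullet, a hedged Node sketch of pulling docs/keys examined from executionStats; the database, collection and filter mirror the /userdata example but are placeholders.

```javascript
// Rough per-route cost probe: run the route's query through explain and log how
// many index keys and documents it touches (both feed into RPU).
const { MongoClient } = require("mongodb");

async function probe(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const explain = await client
      .db("app")
      .collection("userdata")
      .find({ userId: "someUser" })        // the filter your /userdata route runs
      .explain("executionStats");

    const { totalKeysExamined, totalDocsExamined, nReturned } =
      explain.executionStats;
    console.log({ totalKeysExamined, totalDocsExamined, nReturned });
  } finally {
    await client.close();
  }
}
```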
null
[ "flutter" ]
[ { "code": "", "text": "Hello.For instance, say I have two models: playlists, and movies.Playlists would contain a list of playlists, with id and name fields. The movies would contain a list of all the users movies.Now I need another model that saves to a column in the database with a name of say “playlist_id” where id is the id of the playlist.I would want to use the same movies model to write rows into the individual playlists tables.When I want to get the contents of an individual playlist, id use a playlist id from the playlist table and then accessing the playlist_id table would give me all of the movie data contained in that table.How would I go about designing this?\nThank you.", "username": "NoTux_NoBux" }, { "code": "", "text": "What you are describing is a relational database.You can hack on it for a while in the hope of personally rediscovering the past 50 years of RDBMS (Relational Data Base Management Systems) or you can study relational databasing on the web or thru books.If you choose to learn from a book, read anything by C. J. Date", "username": "Jack_Woehr" }, { "code": "", "text": "Couldn’t remember what the name was for what I was looking for. You’ve got me on the right track.Cheers!", "username": "NoTux_NoBux" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Write model into different schema
2023-10-09T03:18:43.901Z
Write model into different schema
228
null
[ "atlas-cluster", "database-tools" ]
[ { "code": "./mongoimport --uri mongodb+srv://catalin_mongodb:<PASSWORD>@clusterhz.iug22.mongodb.net/<DATABASE> --collection <COLLECTION> --type <FILETYPE> --file <FILENAME>./mongoimport --uri mongodb+srv://catalin_mongodb:******@clusterhz.iug22.mongodb.net/test --collection purchases --type csv --file home/catalin/Downloads/purchases.txt --headerline", "text": "I use the mongoimport command, specifically the following:./mongoimport --uri mongodb+srv://catalin_mongodb:<PASSWORD>@clusterhz.iug22.mongodb.net/<DATABASE> --collection <COLLECTION> --type <FILETYPE> --file <FILENAME>and with my archive:\n./mongoimport --uri mongodb+srv://catalin_mongodb:******@clusterhz.iug22.mongodb.net/test --collection purchases --type csv --file home/catalin/Downloads/purchases.txt --headerlineAnd that gave me the next error:\n|2023-10-08T18:20:59.592+0200|error parsing command line options: error parsing uri: lookup _mongodb._tcp.clusterhz.iug22.mongodb.net on 127.0.0.53:53: server misbehaving|\n|2023-10-08T18:20:59.592+0200|try ‘mongoimport --help’ for more information|How can I solve this error?", "username": "Catalin_Costea" }, { "code": "", "text": "Its exactly the same problem as your other thread/topic.Same as I cannot connect to the \"Dedicated\" cluster from mongodb shell - #2 by chris", "username": "chris" } ]
I cannot import a file with "mongoimport" in a cluster
2023-10-08T17:17:04.157Z
I cannot import a file with “mongoimport” in a cluster
309
null
[ "mongodb-shell", "atlas-cluster" ]
[ { "code": "", "text": "Hi, I am trying to connect to the cloud cluster via mongodb shell, but I get the following error:maikol@ubuntu20:/usr/local/mongodb/mongodb-4.4/bin$ ./mongosh “mongodb+srv://clusterhz.iug22.mongodb.net/” --username maikol_mongodb\nEnter password: ******\nCurrent Mongosh Log ID:\t652299532e44d7a611ee6990\nConnecting to:\t\tmongodb+srv://@clusterhz.iug22.mongodb.net/?appName=mongosh+1.10.6\nError: querySrv ESERVFAIL _mongodb._tcp.clusterhz.iug22.mongodb.netI have the IP set correctly, and if I connect to the free cluster it connects. This clusterhz is a “Dedidacted” cluster, tier M50, with 2 Shards. I don’t know why it doesn’t works.Greetings.", "username": "Catalin_Costea" }, { "code": "11:16 $ dig +short srv _mongodb._tcp.clusterhz.iug22.mongodb.net; dig +short txt clusterhz.iug22.mongodb.net\n0 0 27016 clusterhz-shard-00-00.iug22.mongodb.net.\n0 0 27016 clusterhz-shard-01-02.iug22.mongodb.net.\n0 0 27016 clusterhz-shard-01-01.iug22.mongodb.net.\n0 0 27016 clusterhz-shard-01-00.iug22.mongodb.net.\n0 0 27016 clusterhz-shard-00-02.iug22.mongodb.net.\n0 0 27016 clusterhz-shard-00-01.iug22.mongodb.net.\n\"authSource=admin\"\n", "text": "Looks like the DNS server you are using is not resolving the cluster.If you select older versions in the ‘Connect’ dialog in Atlas you will get the legacy connection string which may work work better for you, or switch DNS servers.image779×818 62.2 KBThe Cluster resolves fine for me:", "username": "chris" }, { "code": "", "text": "Hello, thanks for your answer, but I don’t know if it worked, I get the following text.{“t”:{“$date”:“2023-10-08T16:28:50.727Z”},“s”:“W”, “c”:“CONTROL”, “id”:23321, “ctx”:“main”,“msg”:“Option: This name is deprecated. Please use the preferred name instead.”,“attr”:{“deprecatedName”:“ssl”,“preferredName”:“tls”}}\nMongoDB shell version v4.4.24\nconnecting to: mongodb://clusterhz-shard-00-00.iug22.mongodb.net:27016/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb\n{“t”:{“$date”:“2023-10-08T16:28:50.844Z”},“s”:“I”, “c”:“NETWORK”, “id”:5490002, “ctx”:“thread1”,“msg”:“Started a new thread for the timer service”}\nImplicit session: session { “id” : UUID(“09ba958d-c8db-4a84-a0c3-d280483f4a90”) }\nMongoDB server version: 6.0.10\nWARNING: shell and server versions do not match\nMongoDB Enterprise mongos>", "username": "Catalin_Costea" }, { "code": "ssl=truetls=true", "text": "Yes, quite right switch ssl=true to tls=true.But yes it worked,", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I cannot connect to the "Dedicated" cluster from mongodb shell
2023-10-08T14:55:04.903Z
I cannot connect to the “Dedicated” cluster from mongodb shell
283
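For reference, the "older versions" connection string chris mentions is simply the non-SRV form, which skips the SRV/TXT DNS lookups entirely. A hedged sketch built from the hosts in the dig output; the credentials and the exact host list are placeholders to adapt.

```javascript
// Non-SRV (legacy) connection string: list the mongos hosts explicitly so no
// SRV/TXT DNS resolution is needed. Username/password are placeholders.
const uri =
  "mongodb://maikol_mongodb:<password>@" +
  "clusterhz-shard-00-00.iug22.mongodb.net:27016," +
  "clusterhz-shard-00-01.iug22.mongodb.net:27016," +
  "clusterhz-shard-00-02.iug22.mongodb.net:27016" +
  "/?tls=true&authSource=admin";
// usable as: mongosh "<uri>"   or   new MongoClient(uri)
```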
null
[]
[ { "code": "exports = async function SubTest() {\n\n const ah = require(\"aurafortest-herofishing\");\n const { PubSub } = require('@google-cloud/pubsub');\n const pubsub = new PubSub();\n console.log(\"pubsub=\" + JSON.stringify(pubsub))\n const topicName = 'herofishing-json-topic';\n const topic = pubsub.topic(topicName);\n const subscriptionName = 'herofishing-subscription-atlasfunction';\n const subscription = topic.subscription(subscriptionName);\n const [messages] = await subscription.get({ maxResults: 1 }); // get error {\"message\":\"exec is not supported\",\"name\":\"Error\",\"code\":\"MODULE_NOT_FOUND\"}\n let jsonData = {}\n if (messages.length > 0) {\n jsonData = JSON.parse(messages[0].data.toString('utf8'));\n console.log(\"[SubTest] 取Json資料成功\")\n } else {\n let error = \"[SubTest] 取Json資料失敗\";\n console.log(error)\n return JSON.stringify(ah.ReplyData.NewReplyData(jsonData, error));\n }\n return JSON.stringify(ah.ReplyData.NewReplyData(jsonData, null));\n\n}\n", "text": "I use atlas function to sub google pubsub, but I got error.\n“exec is not supported”", "username": "Scoz_Auro" }, { "code": "", "text": "Or Atlas doesn’t support google pub/sub? I found similiar questions and got no answer.", "username": "Scoz_Auro" } ]
Atlas function pub/sub error "exec is not supported"
2023-10-07T16:35:03.375Z
Atlas function pub/sub error “exec is not supported”
295
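A hypothetical workaround sketch for the thread above: the @google-cloud/pubsub SDK relies on gRPC/child-process features the Functions runtime does not provide, but the Pub/Sub REST API can be called with context.http instead. The project id and the way the OAuth access token is obtained are assumptions, not part of the original function.

```javascript
exports = async function SubTestViaRest(accessToken) {
  const project = "my-gcp-project"; // assumption: your GCP project id
  const subscription = "herofishing-subscription-atlasfunction";

  // Pull at most one message via the Pub/Sub REST API.
  const resp = await context.http.post({
    url: `https://pubsub.googleapis.com/v1/projects/${project}/subscriptions/${subscription}:pull`,
    headers: {
      Authorization: [`Bearer ${accessToken}`],
      "Content-Type": ["application/json"],
    },
    body: { maxMessages: 1 },
    encodeBodyAsJSON: true,
  });

  const data = JSON.parse(resp.body.text());
  // Each message.data is base64-encoded; decode it before JSON.parse, and call the
  // acknowledge endpoint with the returned ackIds once processing succeeds.
  return data.receivedMessages || [];
};
```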
https://www.mongodb.com/…66df2fed4ab.jpeg
[ "lebanon-mug" ]
[ { "code": "", "text": "elie-devfest800×800 70.1 KBHello folks,Join me next Saturday at the Lebanese American University for #Devfest 2023, where we’ll navigate the transformative world of app development in this digital age.There, we’ll highlight how the fusion of app development and AI, led by innovations like MongoDB #Atlas with its Application-Driven Intelligence, is revolutionizing modern applications.We’ll explore distinct features from semantic searches deciphering user intent to real-time data analytics and unifying diverse data sources. This session promises an insightful journey into how MongoDB Atlas ensures businesses remain at the forefront of adaptive user experiences", "username": "eliehannouch" }, { "code": "", "text": "I am deeply humbled and honored to have been a part of #DEVFEST #Beirut 2023, organized by Google Developer Groups - GDG Coast Lebanon, hosted by the Lebanese American University. With an overwhelming attendance of more than 700 individuals, the energy and passion in the room were palpable. Being part of the panel discussion on “Community and Leadership in the Tech Industry” alongside experienced industry leaders was not only enlightening but also a testament to the growth and potential of our tech community. I’m filled with gratitude for the opportunity to share insights and learn from such esteemed professionals.Furthermore, it was a privilege to deliver a talk on how to use MongoDB to shape the next #generation of #intelligent #applications. I hope my session provided value and sparked curiosity among the attendees.image1554×1542 265 KBFor those who missed it or want to revisit, full recordings of both my session and the panel discussion, along with a detailed insights article, will be shared in the coming days.A special moment for me was being recognized by #LAU, represented by Dr. Nadine Abbas , for my contributions in nurturing and building the Lebanese tech community. This accolade is not just for me but for all of us who believe in the transformative power of #technology and #community.I’d like to extend my heartfelt thanks to all the #organizers, #volunteers, and everyone involved. Your hard work and dedication made this day truly memorable and impactful.Thank you, #DEVFEST #Beirut #2023, for the memories, insights, and opportunities. Until next time! ", "username": "eliehannouch" } ]
LEBANON DEVFEST: Application Driven Intelligence: Defining The Next Wave Of Moderns Apps
2023-10-04T17:36:21.015Z
LEBANON DEVFEST: Application Driven Intelligence: Defining The Next Wave Of Moderns Apps
416
null
[]
[ { "code": "", "text": "Since the cache_size and threads_min parameter values are set to a small value, consider increasing them and use the following method to set them, but it does not work.shard1:PRIMARY> db.adminCommand({“setParameter”: 1, “wiredTigerEngineRuntimeConfig”: “cache_size=5G”})\n{\n“was” : “”,\n“ok” : 1,\n“$gleStats” : {\n“lastOpTime” : Timestamp(0, 0),\n“electionId” : ObjectId(“xxxxxx”)\n}\n}\nshard1:PRIMARY> db.adminCommand({getParameter: 1, wiredTigerEngineRuntimeConfig: 1})\n{\n“wiredTigerEngineRuntimeConfig” : “”,\n“ok” : 1,\n“$gleStats” : {\n“lastOpTime” : Timestamp(0, 0),\n“electionId” : ObjectId(“xxxxxx”)\n}\n}", "username": "xinbo_qiu" }, { "code": "", "text": "mongodb version 3.4.10", "username": "xinbo_qiu" }, { "code": "db.serverStatus().wiredTiger.cache\n", "text": "Hi @xinbo_qiu,\nI premise that I have never set it via administrative command, so I will make some observations.db.adminCommand({“setParameter”: 1, “wiredTigerEngineRuntimeConfig”: “cache_size=5G”})MongoDB Server Parameters — MongoDB ManualFor example, instead in version 7.0 it is present:shard1:PRIMARY> db.adminCommand({“setParameter”: 1, “wiredTigerEngineRuntimeConfig”: “cache_size=5G”})\n{\n“was” : “”,\n“ok” : 1,\n“$gleStats” : {\n“lastOpTime” : Timestamp(0, 0),\n“electionId” : ObjectId(“xxxxxx”)\n}\n}From the following command you can find the value “maximum bytes configured” which should correspond to the value you are trying to set :I, personally would set it in the configuration file so that the change is persistent. From the documentation:Configuration File Options — MongoDB ManualI hope it is useful!Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "I find command in the documentation for the version 3.4:db.serverStatus().wiredTiger.cache display the “maximum bytes configured” set is successful\n“maximum bytes configured” : 5368709120,", "username": "xinbo_qiu" }, { "code": "", "text": "Hi @xinbo_qiu,I find command in the documentation for the version 3.4You’re right.db.serverStatus().wiredTiger.cache display the “maximum bytes configured” set is successful\n“maximum bytes configured” : 5368709120,Perfect!So, I think you can flag the solution.Regards", "username": "Fabio_Ramohitaj" } ]
Using db.adminCommand to set the wiredTigerEngineRuntimeConfig parameter does not take effect
2023-10-07T09:18:29.411Z
Using db.adminCommand to set the wiredTigerEngineRuntimeConfig parameter does not take effect
283
https://www.mongodb.com/…d_2_1024x518.png
[ "ops-manager" ]
[ { "code": "http://localhost:8080/api/public/v1.0/\nadmin/backup/daemon/config/\nlocalhost/%2Fdata%2Fbackup%2F\nsudo chmod 777 -R /data/*", "text": "Hello - I’m trying to do an evaluation of ops manager. I’m trying to configure the backup module, and have gotten to the Backup Initial Configuration.Screenshot 2023-10-03 at 3.48.54 PM1394×706 72.1 KBI have no idea what to select as a HEAD directory. I tried to use the example head directory for Linux platforms from this post: Update One Backup Daemon Configuration — MongoDB Ops Manager 6.0which is:It was not successful. I confirmed that the mongod user had access to all the elements in the /data/ folder with sudo chmod 777 -R /data/*I have verified that the backup daemon is running.\n[root@localhost ~]# /etc/init.d/mongodb-mms-backup-daemon restart\nStopping the Backup Daemon\nTrying to shutdown gracefully. [ OK ]\nStarting pre-flight checks\nSuccessfully finished pre-flight checksStart Backup Daemon… [ OK ]\nI’ve followed this instructions for my ops manager deployment: Install a Simple Test Ops Manager Installation — MongoDB Ops Manager 6.0. So, path: “/data/appdb/mongodb.log” and dbPath: “/data/appdb”I made another mongod for my tests as a single node backup like this:\n[root@localhost ~]# sudo -u mongod mongod --port 27018 --dbpath /data/backup --logpath /data/backup/mongodb.log --wiredTigerCacheSizeGB 1 --fork\nabout to fork child process, waiting until server is ready for connections.\nforked process: 194771child process started successfully, parent exitingI’m not sure what else to try. Advice would be welcome.", "username": "Brenna_Buuck" }, { "code": "sudo chmod 777 -R /data/*/data//datamongodb-mms", "text": "It was not successful. I confirmed that the mongod user had access to all the elements in the /data/ folder with sudo chmod 777 -R /data/*This is just setting the permissions on the directories and files below /data/ not on /data itself.Usually create a new directory for this, the owner should be the same of the user running the backup-daemon if installing via package manager that will be mongodb-mms.", "username": "chris" } ]
Evaluating Ops Manager - HEAD Directory
2023-10-04T05:32:07.779Z
Evaluating Ops Manager - HEAD Directory
277
null
[ "swift", "transactions" ]
[ { "code": "final class Foo: Object\n{\n @Persisted(primaryKey: true) var _id: UUID\n @Persisted var child: Bar?\n}\n\nfinal class Bar: Object\n{\n @Persisted(primaryKey: true) var _id: UUID\n @Persisted var name: String = \"\"\n @Persisted var parent: LinkingObjects<Foo> = LinkingObjects(fromType: Foo.self, property: \"child\")\n}\nFooBarchildFooBar.childBarparentObject", "text": "Using the Swift SDK, suppose I have these Objects:Two separate users download the cloud database with a Foo object. Then, each user goes offline. While offline, each user creates a separate Bar Object and sets it as the child of the Foo Object. (The Bar Object is added to the Realm and set as .child in a write transaction.)Next, both users connect to the Internet and sync begins. I understand that Realm will apply the changes in time-order, so whichever user’s Bar Object was created most recently will prevail. But my question is: what happens to the other, “orphaned” Bar object? Is it automatically deleted from the Realm, or does it still exist in the Realm with no parent? Do I need to worry about cleaning up such orphaned Objects myself?I have seen this page: https://www.mongodb.com/docs/atlas/app-services/sync/details/conflict-resolution/, which states “Last Value Wins” and that Sync will keep the latest value for a property. That’s straightforward for scalar values. But when the value is an Object subclass, are the “losing” values deleted from the Realm automatically?", "username": "Bryan_Jones" }, { "code": "FooBarBarEmbeddedObjectObject", "text": "Hi @Bryan_Jones!In this case what you’ve defined here is a “To-One” link between the parent class Foo and the child class Bar (See Docs Here). In the scenario you’ve described, unless both child documents have the same primary key, you will need to cleanup the existing object since you’re telling realm that you’d like to create a link between 2 objects, but both objects should exist in their tables independent of each other. If both child objects do have the same primary key, the one that “loses” conflict resolution will be replaced with the “winner”It sounds like what you actually want here is an embedded object rather than a link between objects. For embedded objects the relationship between the two objects is one-to-one and you get things like cascading deletes and cleanup of the “orphaned” embedded object Bar in the conflict resolution scenario that you’ve described here.So in this case you can get the behavior that you’re describing by removing the primary key from Bar and deriving from EmbeddedObject rather than Object", "username": "Sean_Brandenburg" }, { "code": "", "text": "Thanks @Sean_Brandenburg. Unfortunately, embeddedObjects come with restrictions that often make them a non-starter. 
They can’t hold more than one direct relationship and there are limitations on querying them.My example here is very simplified, but in real applications with more complex relationships, embeddedObjects just aren’t usable.It’s one area where Core Data vastly exceeds Realm: I can specify a delete rule for any relationship; I need not use a second-class citizen to get cascade deletes.", "username": "Bryan_Jones" }, { "code": "", "text": "Thinking more about this, it’s a shame there isn’t a way for me to tell Realm to cleanup “loser” Objects during a sync.Under the current design, it’s easy for Objects to “leak” during a sync unless I manually fetch all of them that don’t have a parent relationship and delete them myself.The mental disconnect is that Realm Sync is handling the “assign new Object to this property” process FOR me. If I handle that process manually, I obviously realize that I need to delete the old Object I’m replacing. But when sync updates this property to merge 15 different users’ changes, everything is opaque—I no longer think about the old Objects that need to be deleted because I didn’t do the property updating. So 14 orphaned Objects are just floating in the database.There should be a way to manage this for sync. To tell Realm, “Any Objects that don’t ‘win’ the conflict resolution should be deleted.” That’s subtly different than EmbeddedObject, which is deleted when the parent object is deleted.", "username": "Bryan_Jones" } ]
Sync Conflict Resolution: What Happens to "Orphan" Objects?
2023-10-04T23:02:58.285Z
Sync Conflict Resolution: What Happens to “Orphan” Objects?
331
null
[]
[ { "code": "", "text": "Dear friends,\nI have no idea how I got into this but the credentials for the admin user and the read/only user for a specific database are not recognized anymore Thankfully the readwrite to the production DB is working well.Self hosted on an EC2 instance with Ubuntu.It this wasn’t in production use I could systemctl stop mongod, then relaunch it with no authentication, fix my users, then again restart with auth.How could I aproach this with the very least noticeable downtime? Thanks a lot.", "username": "Robert_Alexander" }, { "code": "", "text": "It this wasn’t in production use I could systemctl stop mongod, then relaunch it with no authentication, fix my users, then again restart with auth.Yes, that is how we do it. Having a replicaSet allows you to do this with minimal downtime.", "username": "chris" }, { "code": "", "text": "EDIT: solved … see below … almost had an heart attack Oh my god I disabled the authentication, ran a javascript to create the same user with a new password and restarted with auth enabled but now I seem to only see an old version of the data … really worried. Help please.EDIT:\nI did the password changes without thinking enough on my Ubunto native host mongodb and restarted it with the new credentials. Then checked the data and started panicking as it was old stale data from mid September.Point is that since then I am not using the host MongoDB but rather a docker container mongodb which exports its 27017 port on the host.Then the host MongoDB started it “highjacked” that port without flinching so I connected to the host instance with stale data.One I understood I killed the host instance and reconnected with Compass to 27017 and this time to the container 27017 and all was good.Now will have to repeat the password reset on the containerized mongo.Thanks for the patience", "username": "Robert_Alexander" } ]
Minimize downtime on self hosted for a password emergency
2023-10-06T14:31:52.741Z
Minimize downtime on self hosted for a password emergency
252
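For completeness, the credential repair chris alludes to is a one-liner once you are connected to the member started without auth. A mongosh sketch; the user name and password are placeholders.

```javascript
// Run against the instance temporarily restarted without authentication.
db.getSiblingDB("admin").changeUserPassword("admin", "aNewStrongPassword");

// Or, if the user document itself is missing, recreate it:
db.getSiblingDB("admin").createUser({
  user: "admin",
  pwd: "aNewStrongPassword",
  roles: [{ role: "root", db: "admin" }],
});
```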
null
[ "sharding", "indexes" ]
[ { "code": "totalDocsExaminedtotalKeysExamined", "text": "Hi Team, I am evaluating query performances in Mongodb with different data models for one of our use cases; when I check the explain query, I can see different stages of my query, and I can see a lot of information about the each stage. But I could not find any documentation for the fields except totalDocsExamined and totalKeysExamined.\nwhat I am curious in 3rd stage of my query planner I am seeing “docsExamined” : 144, “alreadyHasObj” : 144. can I get any information or links about each field explanation in explain results.", "username": "Kiran_Sunkari" }, { "code": "", "text": "Hello @Kiran_Sunkari,Check out this documentation,This is the latest documentation, you can change the version from the top left dropdown, whatever you are using.\n", "username": "turivishal" }, { "code": "alreadyHasObj", "text": "Thanks for the documentation link. It contains a majority of information, but it still does not have information about “alreadyHasObj”. I am getting this value in one of the execution stages. My query has three stages. 3rd stage is a logical condition ( $and, $or). For the first and second stages contains alreadyHasObjas zero. But the third-stage is showing alreadyHasObj=144. Is it indicating some antipattern in the data mode.?", "username": "Kiran_Sunkari" }, { "code": "", "text": "Can you please share more details:", "username": "turivishal" } ]
What is `alreadyHasObj` in mongo query planner
2023-10-07T10:13:14.710Z
What is `alreadyHasObj` in mongo query planner
218
null
[]
[ { "code": "", "text": "Currently, I am using m10, The $function operator is supported but external libraries like moment.js are not supported in the aggregation pipeline.", "username": "Sakil_Hossain" }, { "code": "$function", "text": "Hell @Sakil_Hossain, Welcome to the MongoDB community forum,The $function does not support any external libraries!Why do you need this kind of operation in a query? Make sure you have read the Note in the documentation,Executing JavaScript inside an aggregation expression may decrease performance. Only use the $function operator if the provided pipeline operators cannot fulfill your application’s needs.Always try to use this kind of operation on your client-side or server-side instead of a database query.", "username": "turivishal" } ]
In $function operator external library like moment.js not supporting
2023-10-05T18:16:41.720Z
In $function operator external library like moment.js not supporting
202
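A hedged example of the "use pipeline operators instead" advice above: most of what moment.js is typically pulled in for has a native aggregation equivalent such as $dateToString or $dateAdd (5.0+). The collection and field names here are placeholders.

```javascript
db.orders.aggregate([
  {
    $project: {
      // e.g. moment(createdAt).format("YYYY-MM-DD")  ->  $dateToString
      day: { $dateToString: { format: "%Y-%m-%d", date: "$createdAt" } },
      // e.g. moment(createdAt).add(7, "days")        ->  $dateAdd (MongoDB 5.0+)
      dueAt: { $dateAdd: { startDate: "$createdAt", unit: "day", amount: 7 } },
    },
  },
]);
```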
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Current Mongosh Log ID: 65204f678f41e058cc30ca4f\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.0.1This is what I when trying mongosh. Log file shows below.featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \"featureCompatibilityVersion\", version: \"4.4\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected ‘5.0’ or ‘5.3’ or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}", "username": "John_Week" }, { "code": "", "text": "What are the versions you are upgrading from/to?Based on the messages I would assume this was a 5.0 installation that had an incomplete upgrade from 4.4 but you’d have to confirm that.", "username": "chris" } ]
Upgraded MongoDB and now it's broken
2023-10-06T18:27:01.313Z
Upgraded MongoDB and now it's broken
261
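As a hedged illustration of chris's point above: the usual way out is to run a binary that the data files' current FCV supports (here 5.0), then step featureCompatibilityVersion forward before each subsequent major upgrade. A mongosh sketch:

```javascript
// Check what the data files are currently pinned to.
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

// With a 5.0 binary running against data files whose FCV is "4.4":
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" });
// ...then upgrade binaries to 6.0, set FCV to "6.0", and so on,
// one major version at a time.
```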
null
[ "aggregation" ]
[ { "code": "model.aggregate([\n {\n $match: {\n parent: 0\n }\n },\n {\n $graphLookup: {\n from: appId + \"_\" + viewName + \"s\",\n startWith: \"$id\",\n connectFromField: \"id\",\n connectToField: \"parent\",\n depthField: \"level\",\n as: \"data\"\n }\n },\n {\n $unset: [\n \"data._id\",\n \"data.createdAt\",\n \"data.updatedAt\",\n \"data.updateBy\"\n ]\n },\n {\n $unwind: {\n path: \"$data\",\n preserveNullAndEmptyArrays: true\n }\n },\n {\n $sort: {\n \"data.level\": -1\n }\n },\n {\n $group: {\n _id: \"$id\",\n parent: {\n $first: \"$parent\"\n },\n value: {\n $first: \"$value\"\n },\n type: {\n $first: \"$type\"\n },\n data: {\n $push: \"$data\"\n }\n }\n },\n {\n $addFields: {\n data: {\n $reduce: {\n input: \"$data\",\n initialValue: {\n level: -1,\n presentData: [],\n prevData: []\n },\n in: {\n $let: {\n vars: {\n prev: {\n $cond: [\n {\n $eq: [\n \"$$value.level\",\n \"$$this.level\"\n ]\n },\n \"$$value.prevData\",\n \"$$value.presentData\"\n ]\n },\n current: {\n $cond: [\n {\n $eq: [\n \"$$value.level\",\n \"$$this.level\"\n ]\n },\n \"$$value.presentData\",\n []\n ]\n }\n },\n in: {\n level: \"$$this.level\",\n prevData: \"$$prev\",\n presentData: {\n $concatArrays: [\n \"$$current\",\n [\n {\n $mergeObjects: [\n \"$$this\",\n {\n data: {\n $filter: {\n input: \"$$prev\",\n as: \"e\",\n cond: {\n $eq: [\n \"$$e.parent\",\n \"$$this.id\"\n ]\n }\n }\n }\n }\n ]\n }\n ]\n ]\n }\n }\n }\n }\n }\n }\n }\n },\n {\n $addFields: {\n data: \"$data.presentData\"\n }\n }\n ]).allowDiskUse(true)", "text": "I am using mongo Atlas M10. I want to transform all document data to formatted tree data by using the aggregate framework. It is only working for a certain limit of documents.\nI am getting below error in a large number of documents.\n“MongoError: BSONObj size: 20726581 (0x13C4335) is invalid. Size must be between 0 and 16793600(16MB)”I already set allowDiskUse to true. It is still getting that error.May I have a solution for that error?below are my aggregate stages:", "username": "edenOo" }, { "code": "", "text": "Hi Eden,Looks like some stages of your pipeline are hitting the 16MB BSON limit . My understanding is that you need to make sure that the output of every stage in your pipeline is less than 16MB (in your example, one of your stages is blocked from outputting ~21MB).When I hit this problem for the first time I also felt like Mongo’s documentation could’ve done a better job at proposing possible solutions / examples of solutions (instead of just stating the limit).Xavier Robitaille\nFeather Finance", "username": "Xavier_Robitaille" }, { "code": "{ $project: { \"<field1>\": 0, \"<field2>\": 0, ... } } // Return all but the specified fields\n", "text": "For reference, one possible solution to consider is to add a $project stage early to exclude fields that are non-essential to your query, and which use up part of the 21MB.Exclude Fields with $project:", "username": "Xavier_Robitaille" }, { "code": "explainfields", "text": "add a $project stage early to exclude fields that are non-essentialThis is (usually) bad advice. You never need to do this, because the pipeline already analyzes which fields are needed and only requests those fields from the collection.You can see that by using explain - see fields section.Asya", "username": "Asya_Kamsky" }, { "code": "$graphLookup", "text": "@edenOo if you’re doing $graphLookup from a view, could you reduce the size of the view? 
I see you are unsetting several fields that come from the view, but excluding them upfront may limit the size of the entire tree enough to fit into 100MBs.Note that $graphLookup is fundamentally limited to 100MBs and cannot spill to disk. So if the expected tree structure is bigger than 100MBs then you’ll probably need to find a different solution to your problem. Maybe give us more details about what the data is and what exactly you are trying to do with it?Asya", "username": "Asya_Kamsky" }, { "code": "project: {\"activities\": 0}//-----------------------------------------------------------------------------------------------------\n// get user and all its activityBuckets(without actual activities otherwise would bust 16MB)\n//-----------------------------------------------------------------------------------------------------\ndb.users.aggregate( [\n { $match: { 'email': '[email protected]' } }, \n { $lookup: {\n from: \"activitybuckets\",\n let: { users_id: \"$_id\"},\n pipeline: [ \n { $project: {\"activities\": 0} },\n {\n $match: {\n $expr: { \n $and: [\n { $eq: [ '$$users_id', \"$user\" ] },\n }\n }\n }\n ],\n as: \"activities\"\n } },\n] );\n", "text": "You never need to do this, because the pipeline already analyzes which fields are needed and only requests those fields from the collection.@Asya_Kamsky thanks for stepping in. The reason why I stumbled on Eden’s post is that I had this problem myself, and I was looking for the best way to solve it.Let me describe my use case, our web app handles stock market transactions (aka account “activities”), and we use a Bucket Pattern, because many of our users have several 20k-50k transactions/activities in their account (i.e. several times the 16MB limit). Our use case is pretty much exactly the example described in these two articles by Justin LaBreck.I was getting BSON size limit error messages from the following query when querying users with many activityBuckets. I added the project: {\"activities\": 0} stage and it solved my problem. The query returns all of the user’s activityBuckets, but without the actual activity data (ie. only the activityBucket high level data).Would you have recommended a different solution?", "username": "Xavier_Robitaille" }, { "code": "activities$project$graphLookup$project$unset$graphLookup", "text": "The problem you describe is quite different - without the project in the inner pipeline you’re saying you want all of the document to be in the activities array and that would make it bigger than legal BSON size for single document. $project is needed when you have to tell the engine what fields you want/need. In the original answer you imply that it’s necessary to exclude fields not essential to your query which the engine will attempt to determine by itself based on which fields you are using in the pipeline and which you are returning to the client. So it’s important to specify correctly (at the end of the pipeline usually is the best place) which fields you want back. Sometimes in complex sub-pipelines where you need to specify that is less obvious.In the case of $graphLookup like the original question, there is a limitation that means there’s no way to use $project or $unset other than by creating a view to make the collection you’re doing $graphLookup in smaller.Hope this is more helpful, rather than more confusing Asya", "username": "Asya_Kamsky" }, { "code": "$project$matchlocalFieldforeignField", "text": "P.S. I would put $project after $match inside the sub-pipeline, by the way. 
I also would use the localField/foreignField syntax, as of 5.0.0 you can still add more stages (due to https://jira.mongodb.org/browse/SERVER-34927 being implemented).", "username": "Asya_Kamsky" }, { "code": "", "text": "@Asya_Kamsky thank you so much!It is much clearer now.", "username": "Xavier_Robitaille" }, { "code": "$graphLookup$project$unset$graphLookup$graphLookup", "text": "because the pipeline already analyzes which fields are needed and only requests those fields from the collection.In the case of $graphLookup like the original question, there is a limitation that means there’s no way to use $project or $unset other than by creating a view to make the collection you’re doing $graphLookup in smaller.I came to the same problem. Thanks for the explanation! My problem is solved. But I think it would be much nicer if we can apply some pipeline before $graphLookup, instead of creating a view.", "username": "Yun_Hao" }, { "code": "", "text": "Hello We have more 300M documents in one of our collection we have written an aggregation pipeline to separate the records which are in one year range. Pipeline is pretty simple juat have two stage", "username": "Venkata_Sai_Gopi" }, { "code": "", "text": "Without seeing the real pipeline that you are doing it is impossible for us to pin-point any issues you might get.", "username": "steevej" } ]
How to use aggregation for large collection?
2021-08-30T07:00:30.513Z
How to use aggregation for large collection?
9,246
null
[ "queries" ]
[ { "code": "", "text": "I am currently seeing the below issue in my M20 Atlas cluster, anyone experienced this before what how can i stop this{“t”:{\"$date\":“2022-06-13T03:19:27.485+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22942, “ctx”:“listener”,“msg”:“Connection refused because there are too many open connections”,“attr”:{“connectionCount”:3365}}\n{“t”:{\"$date\":“2022-06-13T03:19:27.533+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22942, “ctx”:“listener”,“msg”:“Connection refused because there are too many open connections”,“attr”:{“connectionCount”:3365}}\n{“t”:{\"$date\":“2022-06-13T03:19:27.533+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22942, “ctx”:“listener”,“msg”:“Connection refused because there are too many open connections”,“attr”:{“connectionCount”:3365}}\n{“t”:{\"$date\":“2022-06-13T03:19:27.621+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22942, “ctx”:“listener”,“msg”:“Connection refused because there are too many open connections”,“attr”:{“connectionCount”:3365}}\n{“t”:{\"$date\":“2022-06-13T03:19:27.670+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22942, “ctx”:“listener”,“msg”:“Connection refused because there are too many open connections”,“attr”:{“connectionCount”:3365}}", "username": "Maanda_Ambani" }, { "code": "", "text": "Hi @Maanda_Ambani and welcome in the MongoDB Community !M20 can only handle 3000 connections per node and apparently you have reached this limit. Upgrading to an M40 would upgrade the support to 6000 connections per nodes.But if you don’t expect that many connections, there may be something wrong with the way you are handling your connections (opening too many or not closing them properly).Usually I see this error when people use serverless functions and they don’t cache the MongoDB connection so each function calls are re-using the same cached connection pool. In this scenario, it’s a bad idea to create a new MongoClient for each serverless function calls.", "username": "MaBeuLux88" }, { "code": "", "text": "I found a solution by limiting the connections. This video explained why and how to solve it.", "username": "Mostafa_Mamun_Emon" } ]
Connection refused because there are too many open connections
2022-06-13T07:46:00.693Z
Connection refused because there are too many open connections
2,978
null
[ "react-native" ]
[ { "code": "", "text": "Hello,I am using realm-js v11 and realm/react 0.4.1.I have a schema model where one of my objects has a property that stores a list of items. As an example, we can say the parent object is a TodoList and it holds a list of tasks. I render the tasks by fetching the TodoList from the useObject hook and render TodoList.tasks.As the user adds more tasks, this list can become quite large – say around 100 items. As the list grows, deleting a single task gets slower and slower. After using the React profiling tools in Flipper, I see that when a task is deleted, the entire list of items is re-rendered since the parent object/list reference changes. This is the reason why the re-render becomes slow as items grow.I am wondering what is a solution for improving my re-renders when deleting from a list.EDIT: My renderItem component passed to the FlatList is memoized. Adding to the list does not trigger all items to re-render, but deleting does.", "username": "max_you" }, { "code": "", "text": "The description of the issue is clear but the cause is not clear. 100 items isn’t a lot of items and re-rendering 100 items should happen pretty much instantly. So deleting and item, while causing a re-render, should be almost imperceptible.There’s also a matter of how that data is being observed; are you using object or collection listeners? If so, what’s the implementation?I think some brief sample code that duplicates the issue would clarify the issue, and possible cause.", "username": "Jay" }, { "code": "", "text": "Hi Jay,You can see the behavior mentioned in this project. This project has a Store as the top level object and a list of Tasks under the Store. When you create a task, you will see that the memoized Tasks are not re-rendered, but deleting causes re-render on every Task.I would attach the React profiling json but the forum says I cannot upload json attachments as a new member, so I have provided screenshots of the profiling. You can also produce the same results if you profile on your machine.Screenshot on the left is of adding an item to the Store.tasks and screenshot on the right is of deleting an item from the Store.tasks:\nScreen Shot 2023-10-06 at 12.23.55 PM copy3904×2382 1.38 MBTo answer your question of how I’m observing the data, I am using the hooks from @realm/react so however it is setup there is how I am using it. I looked into the useObject hook and it appears to be using a collection listener.I guess I am just confused as to why deleting from the list seems to cause the parent object (Store in my code example) reference to change completely. At least that is what I think the problem is", "username": "max_you" } ]
How to properly delete from list without triggering re-render on all items
2023-10-05T19:08:26.533Z
How to properly delete from list without triggering re-render on all items
282
null
[ "python", "indexes" ]
[ { "code": "geolocation{\n \"type\": \"MultiPolygon\",\n \"coordinates\": [\n [\n [\n [\n 123.125,\n -63\n ],\n [\n 124.375,\n -63\n ],\n [\n 124.375,\n -62\n ],\n [\n 123.125,\n -62\n ],\n [\n 123.125,\n -61.5\n ],\n [\n 122.5,\n -61.5\n ],\n [\n 122.5,\n -60.5\n ],\n [\n 121.875,\n -60.5\n ],\n [\n 121.875,\n -60\n ],\n [\n 121.25,\n -60\n ],\n [\n 121.25,\n -59\n ],\n [\n 120.625,\n -59\n ],\n [\n 120.625,\n -58\n ],\n [\n 120,\n -58\n ],\n [\n 120,\n -53\n ],\n [\n 119.375,\n -53\n ],\n [\n 119.375,\n -52\n ],\n [\n 118.75,\n -52\n ],\n [\n 118.75,\n -51.5\n ],\n [\n 118.125,\n -51.5\n ],\n [\n 118.125,\n -50.5\n ],\n [\n 117.5,\n -50.5\n ],\n [\n 117.5,\n -48.5\n ],\n [\n 116.875,\n -48.5\n ],\n [\n 116.875,\n -46.5\n ],\n [\n 116.25,\n -46.5\n ],\n [\n 116.25,\n -49.5\n ],\n [\n 116.875,\n -49.5\n ],\n [\n 116.875,\n -52\n ],\n [\n 117.5,\n -52\n ],\n [\n 117.5,\n -56.5\n ],\n [\n 118.125,\n -56.5\n ],\n [\n 118.125,\n -59\n ],\n [\n 118.75,\n -59\n ],\n [\n 118.75,\n -60\n ],\n [\n 119.375,\n -60\n ],\n [\n 119.375,\n -60.5\n ],\n [\n 120,\n -60.5\n ],\n [\n 120,\n -61\n ],\n [\n 120.625,\n -61\n ],\n [\n 120.625,\n -61.5\n ],\n [\n 121.25,\n -61.5\n ],\n [\n 121.25,\n -62\n ],\n [\n 122.5,\n -62\n ],\n [\n 122.5,\n -62.5\n ],\n [\n 123.125,\n -62.5\n ],\n [\n 123.125,\n -63\n ]\n ]\n ],\n [\n [\n [\n 115.625,\n -46.5\n ],\n [\n 116.25,\n -46.5\n ],\n [\n 116.25,\n -37\n ],\n [\n 115.625,\n -37\n ],\n [\n 115.625,\n -36\n ],\n [\n 115,\n -36\n ],\n [\n 115,\n -35.5\n ],\n [\n 114.375,\n -35.5\n ],\n [\n 114.375,\n -35\n ],\n [\n 113.75,\n -35\n ],\n [\n 113.75,\n -34.5\n ],\n [\n 114.375,\n -34.5\n ],\n [\n 114.375,\n -34\n ],\n [\n 113.75,\n -34\n ],\n [\n 113.75,\n -33\n ],\n [\n 113.125,\n -33\n ],\n [\n 113.125,\n -32.5\n ],\n [\n 112.5,\n -32.5\n ],\n [\n 112.5,\n -32\n ],\n [\n 111.25,\n -32\n ],\n [\n 111.25,\n -31.5\n ],\n [\n 110.625,\n -31.5\n ],\n [\n 110.625,\n -31\n ],\n [\n 109.375,\n -31\n ],\n [\n 109.375,\n -30.5\n ],\n [\n 108.125,\n -30.5\n ],\n [\n 108.125,\n -30\n ],\n [\n 111.25,\n -30\n ],\n [\n 111.25,\n -29.5\n ],\n [\n 111.875,\n -29.5\n ],\n [\n 111.875,\n -28.5\n ],\n [\n 112.5,\n -28.5\n ],\n [\n 112.5,\n -28\n ],\n [\n 111.875,\n -28\n ],\n [\n 111.875,\n -27.5\n ],\n [\n 110,\n -27.5\n ],\n [\n 110,\n -27\n ],\n [\n 108.125,\n -27\n ],\n [\n 108.125,\n -26.5\n ],\n [\n 105.625,\n -26.5\n ],\n [\n 105.625,\n -27\n ],\n [\n 105,\n -27\n ],\n [\n 105,\n -27.5\n ],\n [\n 104.375,\n -27.5\n ],\n [\n 104.375,\n -28.5\n ],\n [\n 105.625,\n -28.5\n ],\n [\n 105.625,\n -29\n ],\n [\n 106.875,\n -29\n ],\n [\n 106.875,\n -29.5\n ],\n [\n 107.5,\n -29.5\n ],\n [\n 107.5,\n -30\n ],\n [\n 105.625,\n -30\n ],\n [\n 105.625,\n -30.5\n ],\n [\n 106.25,\n -30.5\n ],\n [\n 106.25,\n -31.5\n ],\n [\n 107.5,\n -31.5\n ],\n [\n 107.5,\n -32.5\n ],\n [\n 108.125,\n -32.5\n ],\n [\n 108.125,\n -33\n ],\n [\n 108.75,\n -33\n ],\n [\n 108.75,\n -34\n ],\n [\n 109.375,\n -34\n ],\n [\n 109.375,\n -34.5\n ],\n [\n 110,\n -34.5\n ],\n [\n 110,\n -35\n ],\n [\n 110.625,\n -35\n ],\n [\n 110.625,\n -36\n ],\n [\n 111.875,\n -36\n ],\n [\n 111.875,\n -36.5\n ],\n [\n 112.5,\n -36.5\n ],\n [\n 112.5,\n -37.5\n ],\n [\n 113.125,\n -37.5\n ],\n [\n 113.125,\n -38.5\n ],\n [\n 113.75,\n -38.5\n ],\n [\n 113.75,\n -40\n ],\n [\n 114.375,\n -40\n ],\n [\n 114.375,\n -41.5\n ],\n [\n 115,\n -41.5\n ],\n [\n 115,\n -43.5\n ],\n [\n 115.625,\n -43.5\n ],\n [\n 115.625,\n -46.5\n ]\n ]\n ]\n ]\n}\n> db.blobs.createIndex({geolocation:'2dsphere'})\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Index 
build failed: 821317aa-a32a-4c7a-bd71-857716cc7626: Collection argo.blobs ( 69cdf012-53f3-4e51-bbf0-62985ef04e41 ) :: caused by :: Can't extract geo keys: [long error document suppressed] Edges 15 and 35 cross. Edge locations in degrees: [-53.0000000, 119.3750000]-[-52.0000000, 119.3750000] and [-60.0000000, 119.3750000]-[-60.5000000, 119.3750000]\",\n\t\"code\" : 16755,\n\t\"codeName\" : \"Location16755\"\n{\n \"type\": \"MultiPolygon\",\n \"coordinates\": [\n [\n [\n [\n -146.25,\n -46\n ],\n [\n -143.75,\n -46\n ],\n [\n -143.75,\n -45.5\n ],\n [\n -142.5,\n -45.5\n ],\n [\n -142.5,\n -45\n ],\n [\n -141.25,\n -45\n ],\n [\n -141.25,\n -43\n ],\n [\n -136.875,\n -43\n ],\n [\n -136.875,\n -42.5\n ],\n [\n -134.375,\n -42.5\n ],\n [\n -134.375,\n -43\n ],\n [\n -133.75,\n -43\n ],\n [\n -133.75,\n -41\n ],\n [\n -135.625,\n -41\n ],\n [\n -135.625,\n -40.5\n ],\n [\n -136.875,\n -40.5\n ],\n [\n -136.875,\n -39.5\n ],\n [\n -137.5,\n -39.5\n ],\n [\n -137.5,\n -39\n ],\n [\n -138.75,\n -39\n ],\n [\n -138.75,\n -38.5\n ],\n [\n -141.25,\n -38.5\n ],\n [\n -141.25,\n -39\n ],\n [\n -141.875,\n -39\n ],\n [\n -141.875,\n -38.5\n ],\n [\n -142.5,\n -38.5\n ],\n [\n -142.5,\n -38\n ],\n [\n -143.125,\n -38\n ],\n [\n -143.125,\n -37.5\n ],\n [\n -144.375,\n -37.5\n ],\n [\n -144.375,\n -37\n ],\n [\n -145,\n -37\n ],\n [\n -145,\n -36.5\n ],\n [\n -145.625,\n -36.5\n ],\n [\n -145.625,\n -35.5\n ],\n [\n -146.25,\n -35.5\n ],\n [\n -146.25,\n -34.5\n ],\n [\n -146.875,\n -34.5\n ],\n [\n -146.875,\n -33.5\n ],\n [\n -147.5,\n -33.5\n ],\n [\n -147.5,\n -33\n ],\n [\n -148.125,\n -33\n ],\n [\n -148.125,\n -32.5\n ],\n [\n -148.75,\n -32.5\n ],\n [\n -148.75,\n -32\n ],\n [\n -149.375,\n -32\n ],\n [\n -149.375,\n -31.5\n ],\n [\n -150.625,\n -31.5\n ],\n [\n -150.625,\n -31\n ],\n [\n -151.875,\n -31\n ],\n [\n -151.875,\n -32\n ],\n [\n -152.5,\n -32\n ],\n [\n -152.5,\n -33.5\n ],\n [\n -151.875,\n -33.5\n ],\n [\n -151.875,\n -34\n ],\n [\n -151.25,\n -34\n ],\n [\n -151.25,\n -35\n ],\n [\n -150.625,\n -35\n ],\n [\n -150.625,\n -35.5\n ],\n [\n -150,\n -35.5\n ],\n [\n -150,\n -36.5\n ],\n [\n -149.375,\n -36.5\n ],\n [\n -149.375,\n -37.5\n ],\n [\n -148.75,\n -37.5\n ],\n [\n -148.75,\n -38.5\n ],\n [\n -148.125,\n -38.5\n ],\n [\n -148.125,\n -40\n ],\n [\n -147.5,\n -40\n ],\n [\n -147.5,\n -42\n ],\n [\n -146.875,\n -42\n ],\n [\n -146.875,\n -43.5\n ],\n [\n -146.25,\n -43.5\n ],\n [\n -146.25,\n -46\n ]\n ]\n ]\n ]\n}\n> db.blobs.createIndex({geolocation:'2dsphere'})\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Index build failed: 6e707a5e-59b0-4760-8269-97d9e07d4a54: Collection argo.blobs ( 69cdf012-53f3-4e51-bbf0-62985ef04e41 ) :: caused by :: Can't extract geo keys: [long error document suppressed] Edges 5 and 21 cross. Edge locations in degrees: [-45.0000000, -141.2500000]-[-43.0000000, -141.2500000] and [-38.5000000, -141.2500000]-[-39.0000000, -141.2500000]\",\n\t\"code\" : 16755,\n\t\"codeName\" : \"Location16755\"\n}\n", "text": "Hi team - I’m getting some surprising-to-me index failures in Mongo 5.0.4; facts:Example one geolocation:example one error:example two geometry:example two error:In both cases the offending line segments are colinear, but clearly separated by several degrees. Why does mongo not like these shapes? 
Thanks!", "username": "William_Mills" }, { "code": "", "text": "It looks like this is a known bug, currently stuck in MongoDB’s backlog: https://jira.mongodb.org/browse/SERVER-52928This is a pretty serious problem for us, I can’t fudge coordinates to make them artificially non-colinear in my use case to avoid this problem. We can close out here while that ticket remains open, but scientific applications need the above bug fixed when feasible.", "username": "William_Mills" } ]
Valid geojson won't index in 2dsphere
2023-10-04T20:04:41.491Z
Valid geojson won&rsquo;t index in 2dsphere
268
null
[ "aggregation", "node-js" ]
[ { "code": "mongodbmongodb-js/saslprepsaslprepsaslprepmongodb-js/saslprepsaslprepConnectionPoolCreatedEventConnectionPoolCreatedEventclient.options.credentials@clemclxmongodb", "text": "The MongoDB Node.js team is pleased to announce version 4.17.0 of the mongodb package!Until v6, the driver included the saslprep package as an optional dependency for SCRAM-SHA-256 authentication. saslprep breaks when bundled with webpack because it attempted to read a file relative to the package location and consequently the driver would throw errors when using SCRAM-SHA-256 if it were bundled.The driver now depends on mongodb-js/saslprep, a fork of saslprep that can be bundled with webpack because it includes the necessary saslprep data in memory upon loading. This will be installed by default but will only be used if SCRAM-SHA-256 authentication is used.In order to avoid mistakenly printing credentials the ConnectionPoolCreatedEvent will replace the credentials option with an empty object. The credentials are still accessble via MongoClient options: client.options.credentials.We invite you to try the mongodb library immediately, and report any issues to the NODE project.", "username": "neal" }, { "code": "", "text": "Thank you @neal , we are using mongodb 4.17.0\nimage1108×532 79.4 KBBut we are still seeing the logs are full of warning that saslprep not installed?(node:23) [MONGODB DRIVER] Warning: Warning: no saslprep library specified. Passwords will not be sanitized", "username": "Xin_Zhang" }, { "code": "", "text": "Update, I found 4.17.1 fixed the importing MongoDB Node.js Driver 4.17.1 & 5.8.1 Released - Product & Driver Announcements - MongoDB Developer Community Forums", "username": "Xin_Zhang" }, { "code": "", "text": "You beat me to it @Xin_Zhang, but as you noted 4.17.1 was released specifically to fix that issue ", "username": "alexbevi" } ]
MongoDB Node.js Driver 4.17.0 Released
2023-08-17T22:23:01.927Z
MongoDB Node.js Driver 4.17.0 Released
721
null
[ "queries", "dot-net", "mongoose-odm" ]
[ { "code": "", "text": "Hi Team, We used MongoDB 3.6 long back in our project. Now We are planning to move aws documentdb (SaaS) mongodb supported and I see there is support for 3.6 as well by aws but have question, Till what time AWS is going to support MongoDB version 3.6?", "username": "Pankaj_Tiwari1" }, { "code": "", "text": "Hey @Pankaj_Tiwari1,We used MongoDB 3.6 long back in our project. Now We are planning to move aws documentdb (SaaS) mongodb supportedDocumentDB uses the MongoDB 3.6 wire protocol, but there are a number of functional differences and the supported commands are a subset of those available in MongoDB 3.6.Till what time AWS is going to support MongoDB version 3.6?If you have questions or concerns related to AWS DocumentDB, I recommend asking on Stack Overflow or an AWS product community.In case you want to use a managed MongoDB service on AWS, MongoDB Atlas builds on the MongoDB Enterprise server and does not require compatibility workarounds.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi Kushagra, Thanks for response. will post over aws community link shared by you.", "username": "Pankaj_Tiwari1" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
AWS cluster for MongoDB 3.6 - how long is support available
2023-10-06T05:48:39.544Z
AWS cluster for MongoDB 3.6 - how long is support available
241
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "const mongoose = require(\"mongoose\");\n\nconst orderSchema = new mongoose.Schema(\n {\n user: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"User\",\n required: true,\n },\n addressId: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"UserAddress.address\",\n required: true,\n },\n totalAmount: {\n type: Number,\n required: true,\n },\n items: [\n {\n productId: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Product\",\n },\n payablePrice: {\n type: Number,\n required: true,\n },\n purchasedQty: {\n type: Number,\n required: true,\n },\n },\n ],\n paymentStatus: {\n type: String,\n enum: [\"Pending\", \"Completed\", \"Cancelled\", \"Refund\"],\n required: true,\n },\n paymentType: {\n type: String,\n enum: [\"CoD\", \"Card\", \"Wire\"],\n required: true,\n },\n orderStatus: [\n {\n type: {\n type: String,\n enum: [\"Ordered\", \"Packed\", \"Shipped\", \"Delivered\"],\n default: \"Ordered\",\n },\n date: {\n type: Date,\n },\n isCompleted: {\n type: Boolean,\n default: false,\n },\n },\n ],\n },\n { timestamps: true }\n);\n\nmodule.exports = mongoose.model(\"Order\", orderSchema);\nconst mongoose = require(\"mongoose\");\n\nconst addressSchema = new mongoose.Schema({\n name: {\n type: String,\n required: true,\n trim: true,\n min: 10,\n max: 60,\n },\n mobileNumber: {\n type: String,\n required: true,\n trim: true,\n },\n pinCode: {\n type: String,\n required: true,\n trim: true,\n },\n locality: {\n type: String,\n required: true,\n trim: true,\n min: 10,\n max: 100,\n },\n address: {\n type: String,\n required: true,\n trim: true,\n min: 10,\n max: 100,\n },\n cityDistrictTown: {\n type: String,\n required: true,\n trim: true,\n },\n state: {\n type: String,\n required: true,\n required: true,\n },\n landmark: {\n type: String,\n min: 10,\n max: 100,\n },\n alternatePhone: {\n type: String,\n },\n addressType: {\n type: String,\n required: true,\n enum: [\"home\", \"work\"],\n required: true,\n },\n});\n\nconst userAddressSchema = new mongoose.Schema(\n {\n user: {\n type: mongoose.Schema.Types.ObjectId,\n required: true,\n ref: \"User\",\n },\n address: [addressSchema],\n },\n { timestamps: true }\n);\n\nmongoose.model(\"Address\", addressSchema);\nmodule.exports = mongoose.model(\"UserAddress\", userAddressSchema);\nconst Order = require(\"../models/order\");\nconst Cart = require(\"../models/cart\");\nconst Address = require(\"../models/address\");\nconst Product = require(\"../models/product\");\n\nexports.getOrders = (req, res) => {\n Order.find({ user: req.user._id })\n .select(\"_id paymentStatus paymentType orderStatus items addressId\")\n .populate(\"items.productId\", \"_id name productImages\")\n .populate(\"addressId\")\n .exec((error, orders) => {\n if (error) {console.log(error) \n return res.status(400).json({ error });}\n if (orders) {\n res.status(200).json({ orders });\n }\n });\n \n};\n", "text": "Been pulling my hair out for hours now, just can’t figure out why the field refuses to populate. What I want to do is return the AddressId field populated with values instead of just an ID, but nothing I’ve tried works, none of the solutions I found do anything.If you need any other code from the project, I will update the question. 
Any help is highly appreciated.Order Model:Address Model:Code that runs the query:", "username": "Marin_Vilic" }, { "code": "ref:\"UserAddress.Address\"ref:\"UserAddress.address\"mongoose.model(\"Address\", addressSchema);address: [addressSchema]ref:\"Address\"mongoose.model(\"Address\", addressSchema);\"UserAddress\"", "text": "First, I know nothing about mongoose so what I suggest might be completely wrong.You register addressSchema as the Address model. Everywhere ref: is used, it looks like a model name, rather than the field name of another model.So I would try first to use ref:\"UserAddress.Address\" rather than ref:\"UserAddress.address\", that is the name you use inmongoose.model(\"Address\", addressSchema);rather than the name you use inaddress: [addressSchema]If that fails, I would try ref:\"Address\" because you domongoose.model(\"Address\", addressSchema);You might need to export it like you do for \"UserAddress\".", "username": "steevej" }, { "code": "", "text": "I also got the same error as you. How did you fix it?", "username": "Minh_Hi_u_Nguy_n" }, { "code": "module.exports = mongoose.model( \"user\" , userSchema );\ncustomer: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"User\"\n}\n", "text": "I got the exact same error,\nmy models were being called correctly everything was correct exceptuserModel.jsorderModel.jsspelling error in the reference. 1 hour wasted on a capital letter left out in the past ", "username": "Adam_Hannath" }, { "code": "mongoose.createConnectionyourConnection.model(modelName)refconst { productsDbConnection, usersDbConnection } = require(\"../db\") ; // connections imported from db file //\n\nconst UserSchema = new mongoose.Schema({\n // ...\n cart: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: productsDbConnection.model('women_collection') // Make sure you have registered the women_collection model in you Db//\n },\n ],\n});\n", "text": "Hi, my name is Mohsin Hassan Khan, If you’re encountering the “MissingSchemaError” while trying to reference a collection from another MongoDB database, follow these steps:Here’s an example:Thanks, Please let me know if this helps anyone.", "username": "khan_ali" } ]
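To illustrate the point made in the answers: ref must name a registered model, and populate targets a whole model rather than a sub-document path like "UserAddress.address". The sketch below flattens addresses into their own collection, which differs from the embedded-array model in the question, so treat it as one possible shape rather than the thread's final solution.

```js
const mongoose = require("mongoose");

// Register the address schema as its own model so it can be referenced.
const addressSchema = new mongoose.Schema({ name: String, city: String });
const Address = mongoose.model("Address", addressSchema);

const orderSchema = new mongoose.Schema({
  user: { type: mongoose.Schema.Types.ObjectId, ref: "User" },
  addressId: {
    type: mongoose.Schema.Types.ObjectId,
    ref: "Address", // must match the name passed to mongoose.model()
  },
});
const Order = mongoose.model("Order", orderSchema);

// populate() can now resolve addressId into the full Address document.
async function getOrders(userId) {
  return Order.find({ user: userId })
    .populate("addressId", "name city")
    .exec();
}
```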
MissingSchemaError: Schema hasn't been registered for model UserAddress.address
2022-05-22T14:57:14.086Z
MissingSchemaError: Schema hasn&rsquo;t been registered for model UserAddress.address
13,933
null
[ "replication" ]
[ { "code": "", "text": "Hi Team,We are hosting a mongoDb in onprem(Linux platform). We need to create a replica for the instance.\nPlease advise steps by steps.", "username": "Kiran_Joshy" }, { "code": "", "text": "Hello @Kiran_Joshy,To get started, I recommend utilizing the official documentation for spinning up the replica set:Additionally, for further guidance and insights, you may want to watch the M103 - Basic Cluster Administration video tutorial available on MongoDB University.These resources will provide you with comprehensive information and guidance to successfully set up your MongoDB Replica Set on a Linux environment. If you have any questions or encounter any issues along the way, feel free to reach out for assistance.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
How to create a cluster, i.e. a replica set, on-prem (Linux platform)
2023-10-06T04:46:22.850Z
How to create a cluster, i.e. a replica set, on-prem (Linux platform)
236
null
[ "storage" ]
[ { "code": "", "text": "Hi Team,We are in server migration process.\nDo we need to add manually any datafile (like in oracle database ) in mongodb or it is automatically adding upto our disk space in linux.\nPlease advise on it.", "username": "Kiran_Joshy" }, { "code": "", "text": "We are in server migration process.how are you doing the migration? where are you migrating to ?", "username": "Kobe_W" }, { "code": "", "text": "From cloud to noprem, it is linux platform.", "username": "Kiran_Joshy" } ]
Do we need to add any datafiles (like in an Oracle database) in MongoDB
2023-10-05T10:58:01.136Z
Do we need to add any datafiles (like in an Oracle database) in MongoDB
228
null
[ "security" ]
[ { "code": "", "text": "We are using the Atlas API to create users with custom roles.\nWe use the custom roles to restrict users to reading/writing specific collections within a database.\nWe do not allow the users to access all collections within the database and we do not allow them to access any other databases.We want to give our users the ability to list the collections in a specific database.\nIs there a way to use the API to grant the listCollections permission?", "username": "john_m" }, { "code": "customRole2searchdbcustomRole2testdbcurl --user '<PUBLICKEY>:<PRIVATEKEY>' --digest \\\n --header 'Content-Type: application/json' \\\n --include \\\n --request PATCH \"https://cloud.mongodb.com/api/atlas/v1.0/groups/<PROJECT_ID>/customDBRoles/roles/customRole2\" --data '\n {\n \"actions\" : [ {\n \"action\" : \"LIST_COLLECTIONS\",\n \"resources\" : [ {\n \"collection\" : \"\",\n \"db\" : \"testdb\"\n } ]\n } ]\n}'\n", "text": "Hi @john_m,We are using the Atlas API to create users with custom roles.\nWe want to give our users the ability to list the collections in a specific database.\nIs there a way to use the API to grant the listCollections permission?It sounds like you have already created the custom roles and users via the API. If you wish to update custom roles via the Atlas API to grant the listCollections permission, the Update a Custom Role documentation may help.As an example, I have an existing custom role named customRole2 with read access to the database named searchdb:\nimage777×42 2.19 KB\nUsing the example below API to update the custom role customRole2, I am able to update it with the listCollections permission on the database testdb :\nimage850×62 3.82 KB\nAdditionally, you can update existing database users via the API to assign them a custom role.If there are any concerns, you can always test this against a test custom role or database user initially to see if the API request gets you the desired result.I hope this helps.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": " {\n \"actions\" : [ {\n \"action\" : \"LIST_COLLECTIONS\",\n \"resources\" : [ {\n \"collection\" : \"\",\n \"db\" : \"testdb\"\n } ]\n } ]\n}\n{\"detail\":\"Received JSON is malformed.\",\"error\":400,\"errorCode\":\"MALFORMED_JSON\",\"parameters\":[],\"reason\":\"Bad Request\"}* Connection #0 to host cloud.mongodb.com left intact\n", "text": "Please share how to create customRole in MongoDB Atlas? I’m having issue facing this", "username": "David_Aw" } ]
Can Atlas API give custom roles the listCollections permission?
2021-05-04T15:42:12.680Z
Can Atlas API give custom roles the listCollections permission?
2,644
null
[ "migration" ]
[ { "code": "Initial Sync Complete!Extend Time", "text": "I am currently doing some test Live Migration Pulls getting some metrics and timings for our Prod migration.\nWhen the Initial Sync Complete! occurs I see the 120 hour cutover timer. I also see the Extend Time link, which can add another 24 hours. I clicked a few times and see it adding more time.What is the max number of extends times you can do? or max hours?120hrs/5 days should be more than enough, but I just want to make note of what the max is for Extending time, should we need to resolve anything during validation.Our data size that we are migrating is ~ 6TBthanks", "username": "Chad_Cannell" }, { "code": "", "text": "I don’t think we enforce the overall max yet", "username": "Alexander_Komyagin" } ]
Live Migration - Extend Time maximum?
2023-10-05T20:36:10.589Z
Live Migration - Extend Time maximum?
190
null
[ "dot-net" ]
[ { "code": "app = Realms.Sync.App.Create(appConfiguration);", "text": "I cannot get a Maui.net app to initialize the realm for IOS. it works fine on windows and android, but not IOS.\nfor troubleshooting purposes, I installed the .net “todo” example referenced in the Setup tutorial.\nIt is hanging up at the same location and giving the same error.\nI’m using Realm 11.5 (li even tried older versions)\ntarget platform is net7.0 for all platforms.\nI’ve tried deleting the bins, cleaning, rebuilding, etc…during debugging it seems the problem isSnippetapp = Realms.Sync.App.Create(appConfiguration);this line of code throws an exception(only on IOS)“The type initializer for ‘Realms.Sync.AppHandle’ threw an exception.”my output window shows the following…\n[0:] An error occurred: ‘realm-wrappers’. Callstack: ’ at Realms.SynchronizationContextScheduler.Initialize()\nat Realms.NativeCommon.Initialize()\nat Realms.Sync.AppHandle…cctor()’\n2023-10-02 14:17:04.275 Xamarin.PreBuilt.iOS[1766:1382615] Warning: observer object was not disposed manually with Dispose()…The is from the unmodified “todo” app (except the config file reflects app id)\nit works perfect for windows and android.", "username": "byron_D" }, { "code": "", "text": "We’ll need some info about your build environment to be able to try and replicate this:", "username": "nirinchev" }, { "code": "app = Realms.Sync.App.Create(appConfiguration);", "text": "I’m developing on windows (visual studios 2022 ver 17.6.5)\nthe mac connected is using Ventura 13.5.2\nXcode is ver 15.0\nIOS ver is 16.7 on Iphone Xr (i also tried on older phone with ver 15.7.9)\nI am using real device.\nThe app runs and deploys if I remark out the codeapp = Realms.Sync.App.Create(appConfiguration);", "username": "byron_D" }, { "code": "", "text": "Are you deploying a hot-restart enabled app? If so, this is not supported and tracked by Wrappers not found in hot restart-enabled build for MAUI · Issue #3137 · realm/realm-dotnet · GitHub. Unfortunately, there’s no workaround at the moment as it seems to be a limitation of the way Microsoft packages native assemblies inside the container app.", "username": "nirinchev" } ]
Error initializing Realm on iOS only (maui.net)
2023-10-02T21:24:03.842Z
Error initializing Realm on iOS only (maui.net)
349
null
[ "realm-web" ]
[ { "code": "Error: Request failed (POST https://ap-southeast-1.aws.realm.mongodb.com/api/client/v2.0/app/<APP_ID>/auth/providers/anon-user/login): TypeError: Cannot access member 'db' of undefined (status 400)\n at bundle.dom.es.js:2852:24\n at Generator.next (<anonymous>)\n at asyncGeneratorStep (asyncToGenerator.js:3:1)\n at _next (asyncToGenerator.js:22:1)\n at _ZoneDelegate.invoke (zone.js:375:26)\n at Object.onInvoke (core.mjs:24210:33)\n at _ZoneDelegate.invoke (zone.js:374:52)\n at Zone.run (zone.js:134:43)\n at zone.js:1278:36\n at _ZoneDelegate.invokeTask (zone.js:409:31)\n at resolvePromise (zone.js:1214:31)\n at resolvePromise (zone.js:1168:17)\n at zone.js:1281:17\n at _ZoneDelegate.invokeTask (zone.js:409:31)\n at core.mjs:23896:55\n at AsyncStackTaggingZoneSpec.onInvokeTask (core.mjs:23896:36)\n at _ZoneDelegate.invokeTask (zone.js:408:60)\n at Object.onInvokeTask (core.mjs:24197:33)\n at _ZoneDelegate.invokeTask (zone.js:408:60)\n at Zone.runTask (zone.js:178:47)\n", "text": "I am getting the following error when I try to login to realm app with anonymous credentials. This has been working without any issues before. But it seems like it doesn’t work anymore.Can someone please tell me what the problem is?", "username": "Kanchana_Senadheera" }, { "code": "context.services.get(\"MongoDB\")mongodb-atlas", "text": "@Kanchana_Senadheera You have context.services.get(\"MongoDB\") but there is no MongoDB service. It is mongodb-atlas is the service name", "username": "Ian_Ward" }, { "code": "", "text": "Hello Ian, thanks for the reply.However, I am not sure if I understood the problem correctly. Can you please elaborate?", "username": "Kanchana_Senadheera" } ]
Mongo Atlas Realm Anonymous Login - 400
2023-10-05T13:49:04.746Z
Mongo Atlas Realm Anonymous Login - 400
256
null
[ "data-recovery" ]
[ { "code": "", "text": "Hi all,I have a collection, where some documents were deleted by accident. Despite having daily backups, this deletion happened before the backups that are available (more than 10 days ago).\nSince the database does not change much or frequently, is it possible these deleted documents are still somewhere in the database files?\nIs there any way to check/find them?", "username": "Georgios_Petasis" }, { "code": "", "text": "My mongod version is:db version v4.2.8\ngit version: 43d25964249164d76d5e04dd6cf38f6111e21f5f\nOpenSSL version: OpenSSL 1.1.1l FIPS 24 Aug 2021\nallocator: tcmalloc\nmodules: none\nbuild environment:\ndistmod: rhel80\ndistarch: x86_64\ntarget_arch: x86_64", "username": "Georgios_Petasis" }, { "code": "mongod> db.getReplicationInfo()\n{\n logSizeMB: 8423,\n usedMB: 0.01,\n timeDiff: 360,\n timeDiffHours: 0.1,\n tFirst: 'Wed Sep 29 2021 14:35:23 GMT+0000 (Coordinated Universal Time)',\n tLast: 'Wed Sep 29 2021 14:41:23 GMT+0000 (Coordinated Universal Time)',\n now: 'Wed Sep 29 2021 14:41:30 GMT+0000 (Coordinated Universal Time)'\n}\ntest [direct: primary] test> db.coll.insertMany([{name: \"Max\"}, {name: \"Alex\"}, {name: \"Claire\"}])\n{\n acknowledged: true,\n insertedIds: {\n '0': ObjectId(\"61547bd83bbc8bc533a5c784\"),\n '1': ObjectId(\"61547bd83bbc8bc533a5c785\"),\n '2': ObjectId(\"61547bd83bbc8bc533a5c786\")\n }\n}\ntest [direct: primary] test> db.coll.deleteMany({})\n{ acknowledged: true, deletedCount: 3 }\ntest [direct: primary] test> use local\nswitched to db local\ntest [direct: primary] local> db.oplog.rs.find({op: 'i', ns: 'test.coll'}, {o:1})\n[\n { o: { _id: ObjectId(\"61547bd83bbc8bc533a5c784\"), name: 'Max' } },\n { o: { _id: ObjectId(\"61547bd83bbc8bc533a5c785\"), name: 'Alex' } },\n { o: { _id: ObjectId(\"61547bd83bbc8bc533a5c786\"), name: 'Claire' } }\n]\ntest [direct: primary] local> db.oplog.rs.aggregate([{$match: {op: 'i', ns: 'test.coll'}},{$replaceRoot: {newRoot: '$o'}}, { $merge: { into: {db: \"test\", coll: \"coll\"}, on: \"_id\", whenMatched: \"replace\", whenNotMatched: \"insert\" } }])\n\ntest [direct: primary] local> use test \nswitched to db test\ntest [direct: primary] test> db.coll.find()\n[\n { _id: ObjectId(\"61547bd83bbc8bc533a5c784\"), name: 'Max' },\n { _id: ObjectId(\"61547bd83bbc8bc533a5c785\"), name: 'Alex' },\n { _id: ObjectId(\"61547bd83bbc8bc533a5c786\"), name: 'Claire' }\n]\n", "text": "Hi @Georgios_Petasis and welcome in the MongoDB Community !If you have a standalone mongod, then no, it’s lost forever.\nIf you have a Replica Set (even a single node), then it means all the write operations are written to the Oplog.The Oplog is a system collection that has a limited size (capped collection) and overwrite the oldest entries as new ones arrive.You can retrieve information about your Oplog with the command:Depending how much write operations are performed on the cluster, the oplog time window can be large or small. 
It’s a good practice to have a confortable size.If you inserted these documents recently and if you have a large oplog windows, they are still in the oplog.See this little example:With an aggregation pipeline, I can even restore them into the original collection:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi Mazime,You solution is very nice i was tested my project in DEV servers it is worked successful.\nbut next i move to my production servers duting recover as you mention steps following\ni need you helpi am waiting you reply…Thanks,\nSrihari", "username": "hari_dba" }, { "code": "local", "text": "The solution I explained above is NOT something you want to use in a production environment on a regular basis. It must be considered as a last resort action when nothing else is suitable (for example a full restore of a daily backup).You can’t trust this solution to work each time because the oplog is a capped collection and old documents will disappear from it eventually.For sharded clusters, you’ll have to apply the same method “locally” on each shard because mongos can’t access the system local database. Each shard has its own oplog completely independent from the other shards.Again, to me, this is an extreme mesure that should never be used. When you remove a document in MongoDB, you should consider that it’s gone for good. If recovering old docs is part of your requirements, I would use another strategy like “soft deletes” (i.e. just set a boolean {deleted:true} and use it to filter with an index).Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "How to use “soft deletes” (i.e. just set a boolean {deleted:true} and use it to filter with an index). ?can you please brief explain with exampleThanks,\nSrihari", "username": "hari_dba" }, { "code": "deletedtrue", "text": "I believe what Max is talking about is instead of actually deleting the document(s), you would instead add a field to the document called deleted with a value of true. While this might work in some cases, it could lead to a collection growing to larger sizes. I could however be misunderstanding what he is suggesting.", "username": "Doug_Duncan" }, { "code": "{deleted:true}{deleted:true}{deletedAt: new Date()}deleted", "text": "Nope that’s it @Doug_Duncan !Replace delete operation with update $set {deleted:true}.\nAnd find operation should now include something like $exists deleted false to avoid including “soft” deleted documents unless you actually want to access these “deleted” docs. 
Then you can find and filter on {deleted:true}.But @Doug_Duncan is also correct that this can lead to collections infinitely growing in size and an additional “deleted” field in all the indexes to support the queries (so more RAM).Every now and then you will also want to actually delete the docs for real once they have been soft deleted for long enough.For this, I would suggest using a TTL index on another additional field {deletedAt: new Date()} which would be set when the deleted field is set and it would actually delete automatically for real this time the docs after X seconds.There is a trade-off for sure to consider.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi,Do you know if there is anyway to recover document deleted/purged by a TTL index ?\nI had done some testing using oplog, but I couldn’t find any resolution and I don’t think it can be done.\nCan you confirm ?Thanks !\nSally", "username": "Yook_20450" }, { "code": "", "text": "Hi @Yook_20450,Sorry, I’m just reading this now.\nWhen a document is deleted from MongoDB (by a TTL or not), it’s the same result in the oplog. I provided an example in this topic above to explain how a document could be “saved” using the oplog, but it would only work if the oplog is large enough so it still contains the entry that created this doc. If that’s the case, then it will also contain all the following updates that may have occurred to this doc.\nElse it’s lost if you don’t have a backup. Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Single document was deleted , we need recover that document restore into exited collection.\nWe do not have backupHow to do step by step explain but oplog collection it is placeddb.oplog.rs.find({“ns”:“empdb.emptbl”,“op”:“d”,“o”:{“_id” :ObjectId(“64ea1ce2b1084f3d73a33001”)}}).sort({$natural:1}).limit(10).pretty()\n{\n“op” : “d”,\n“ns” : “empdb.emptbl”,\n“ui” : UUID(“c3387486-31b7-4398-9688-9274fd585315”),\n“o” : {\n“_id” : ObjectId(“64ea1ce2b1084f3d73a33001”)\n},\n“ts” : Timestamp(1696357679, 5),\n“t” : NumberLong(3),\n“v” : NumberLong(2),\n“wall” : ISODate(“2023-10-03T18:27:59.868Z”)\n}", "username": "Srihari_Mamidala" }, { "code": "db.coll.insertOne({\n \"_id\" : ObjectId(\"64ea1ce2b1084f3d73a33001\"),\n \"ts\" : Timestamp(1696357679, 5),\n \"t\" : NumberLong(\"3\"),\n \"v\" : NumberLong(\"2\"),\n \"wall\" : ISODate(\"2023-10-03T18:27:59.868Z\")\n})\n", "text": "Hi @Srihari_Mamidala,If it’s just that one document, I would just re-insert manually:Else the pipeline I provided above will work just fine with the right filter.But again: DO NOT use this method to recover documents. It’s a last resort method.Also here you are just recovering the document when it was inserted. You are not recovering the updates.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
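A compact sketch of the soft-delete pattern described above, combining the deleted flag, the filtered reads, and a TTL index that purges soft-deleted documents for real after a grace period. The collection name, docId, and the 30-day window are arbitrary placeholders.

```js
// "Delete" by flagging instead of removing.
db.coll.updateOne(
  { _id: docId },
  { $set: { deleted: true, deletedAt: new Date() } }
);

// Normal reads exclude soft-deleted documents (support this with an index).
db.coll.createIndex({ deleted: 1 });
db.coll.find({ deleted: { $exists: false } });

// "Trash" view, e.g. for manual restore.
db.coll.find({ deleted: true });

// Restore is just unsetting the flags.
db.coll.updateOne({ _id: docId }, { $unset: { deleted: "", deletedAt: "" } });

// Physically remove documents 30 days after they were soft-deleted.
db.coll.createIndex({ deletedAt: 1 }, { expireAfterSeconds: 30 * 24 * 3600 });
```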
Recover deleted documents?
2021-09-29T10:52:25.210Z
Recover deleted documents?
12,275
null
[]
[ { "code": "", "text": "Hi everyone,\nI’ve started my certification journey at mongodb university with my personal email account instead of github account. Is there any chance to link github account to my existing account ? I don’t want to lose my course progress.Thanks.", "username": "Nicholas_Webster" }, { "code": "", "text": "Hi @Nicholas_Webster\nPlease reach out to [email protected] with your request. They will be happy to help solve this for you.\nThanks!", "username": "Heather_Davis" } ]
Link GitHub student pack account to university email and password account
2023-10-02T11:09:13.438Z
Link GitHub student pack account to university email and password account
306
null
[ "atlas-device-sync", "kotlin", "flexible-sync" ]
[ { "code": "realm.syncSession.pause()\n", "text": "Hello,I’m trying to optimize the number of changeset requests sent by my mobile app. I, of course, try to use as much write batching as possible and optimize app logic, but wondering if there is a way to tell Realm to batch changesets and set them upstream, say, no more often than every minute or maybe even triggered manually? This way I could rate limit requests for free users and enable real-time sync for paid users.I was able to achieve it by using:I paused sync and did a bunch of actions in my app. Then resumed and checked out Atlas logs, it clearly has batched everything into a single request. But this approach feels a little bit hacky. Can anyone help me out here, is this the only way, or maybe there is a better option?Thanks,\nAlex.", "username": "TheHiddenDuck" }, { "code": "", "text": "Hi @TheHiddenDuck,The approach you mentioned will indeed batch more changesets into an upload message, but I’m interested in what you’re trying to achieve? In general, keeping the sessions active will reduce load on your cluster by reducing the amount of “catch-up queries” the sync service needs to perform when a client connects after being offline for some time.Also note that if this is for the purposes of billing, uploads are billed per realm transaction, not per upload message. So batching will not reduce the number of billing requests.", "username": "Kiro_Morkos" }, { "code": "", "text": "Hello @Kiro_Morkos . Thank you for your reply. I am trying to optimize for billing. I have checked the documentation and it says:Sync Operations , such as when a sync client uploads a changeset, when App Services resolves a conflict in an uploaded changeset, or when App Services sends changesets to a connected sync client.so I assumed each uploaded changeset is a single billable request. Is that not so? So, for example, if I execute 10 write transactions while offline, and they get uploaded in a single changeset, will I be billed for one request or for 10?", "username": "TheHiddenDuck" }, { "code": "", "text": "A changeset is 1-1 with a realm transaction, so 10 write transactions will generate 10 changesets. How they are batched for upload is an implementation detail that will not impact billing.", "username": "Kiro_Morkos" }, { "code": "", "text": "I see. Thank you, this is helpful. I was confused by the fact entries in Atlas logs always looked like a single batched transaction for all entities I inserted/updated.So does this mean that the only way to optimize for cost here is to try to batch as many changes as possible into single transactions and not have 1 to 1 mapping between, say user switching a toggle in preferences and a Realm transaction?", "username": "TheHiddenDuck" }, { "code": "", "text": "That’s correct. Although you should keep in mind the best practices for realm transactions. A realm transaction is an atomic unit of work and should be kept small in order to avoid hurting performance or encountering issues with sync.", "username": "Kiro_Morkos" }, { "code": "", "text": "Hi, good chat.I think the real problem of SYNC cost is they charge by user connected time. If user is logged all day and no data is update, charge will occur.Formula: (# Active Users) * (Sync time (min / user)) * ($0.00000008 / min)Old thread Session.Stop() stop Billing?", "username": "Sergio_Carbonete1" }, { "code": "", "text": "Hm, that wasn’t a big concern of mine. 
In my app, I pause/resume sync when the app is backgrounded/foregrounded, so the numbers I got for this are kind of minuscule for my amount of users and their usage patterns, but requests can add up very fast.", "username": "TheHiddenDuck" } ]
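Since billing counts one changeset per Realm write transaction (per the explanation above), the main lever is grouping related mutations into a single write block. The thread concerns the Kotlin SDK, but the idea is the same in any SDK; this sketch uses the JavaScript SDK with hypothetical model and field names.

```js
// One write transaction -> one changeset uploaded, regardless of how many
// objects it touches.
realm.write(() => {
  // Assumes a Preferences object with this primary key already exists.
  const prefs = realm.objectForPrimaryKey("Preferences", userId);
  prefs.darkMode = true;
  prefs.notificationsEnabled = false;

  for (const task of realm.objects("Task").filtered("done == true")) {
    task.archived = true;
  }
});

// By contrast, wrapping each assignment in its own realm.write() call would
// produce one changeset (and one billable sync operation) per call.
```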
Introduce sync frequency threshold
2023-09-16T13:57:02.691Z
Introduce sync frequency threshold
624