image_url: string (length 113–131)
tags: sequence
discussion: list
title: string (length 8–254)
created_at: string (length 24)
fancy_title: string (length 8–396)
views: int64 (73–422k)
null
[]
[ { "code": "", "text": "Hi Team,we have 3 node mongodb cluster. Recently we had an issue where one of the mongodb node did not take connections causing intermittent issues to the application.\nReviewing the logs, we observed below errors.socket errors for 9001 for multiple app instances.\n2022-08-05T16:42:53.664+0000 [conn137308] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] serverconnection to other two nodes failed.\n2022-08-05T17:07:25.455+0000 [rsHealthPoll] couldn’t connect to prod-cirrusmongo-eu05: couldn’t connect to server prod-cirrusmongo-eu05-xxxxx27017 failed, connection attempt failed2022-08-05T17:07:25.455+0000 [rsHealthPoll] warning: Failed to connect to 10.0.3.21:27017, reason: errno:106 Transport endpoint is already connectedsyslog is showing below error.\n[LIVE] root@prod-cirrusmongo-eu06 [/var/log]$ zcat syslog.5.gz | grep mongodb\nAug 5 16:42:52 prod-cirrusmongo-eu06 mongodb_exporter[1760]: E0805 16:42:52.923749 1760 connection.go:48] Cannot connect to server using url mongodb://localhost:27017: no reachable serversThe issue is finally resolved after restarting mongod service on the instance and taking connections now.\nThe node was primary earlier after restart it became secondary.I would like to know the root cause of above errors to avoid the issue again.Please suggest,", "username": "Abhinav_Avanisa" }, { "code": "", "text": "I have the same issue. I have also stored the log. I get“Error sending response to client. Ending connection from remote”errmsg: “Broken pipe”“code”:9001,“codeName”:“SocketException”And I don’t know how to fix it.", "username": "Anton_Tonchev" } ]
SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server
2020-09-01T09:27:11.766Z
SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server
2,862
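A hedged diagnostic sketch for the SocketException/rsHealthPoll thread above: polling replSetGetStatus from PyMongo shows which replica set member is unreachable or unhealthy before a restart becomes necessary. The host names, credentials, and replica set name below are placeholders rather than values from the thread, and this only surfaces the symptom; it is not a root-cause fix.

    from pymongo import MongoClient
    from pymongo.errors import PyMongoError

    # Hypothetical hosts, credentials and replica set name -- not taken from the thread.
    client = MongoClient(
        "mongodb://admin:secret@node1:27017,node2:27017,node3:27017/"
        "?replicaSet=rs0&authSource=admin&serverSelectionTimeoutMS=5000"
    )

    try:
        status = client.admin.command("replSetGetStatus")
        for member in status["members"]:
            # stateStr is PRIMARY/SECONDARY/etc.; health is 1 when reachable, 0 when down
            print(member["name"], member["stateStr"], "health:", member["health"])
    except PyMongoError as exc:
        print("could not reach the replica set:", exc)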
null
[ "on-premises" ]
[ { "code": "", "text": "I have successfully installed mongodbcharts with Docker, the service is active but cannot access it through the browser http: // localhost or http://127.0.0.1", "username": "Pierre-Yves_Touche" }, { "code": "", "text": "Hi @Pierre-Yves_Touche,Can you confirm you followed:\nhttps://docs.mongodb.com/charts/current/installation/Additionally please provide output of the commands in our troubleshooting guide:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "PS D:\\chartsmongo> docker service logs r5bcggfrejnk\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ parsedArgs\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ installDir ('/mongodb-charts')\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ log\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ salt\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ productNameAndVersion ({ productName: 'MongoDB Charts Frontend', version: '1.9.1' })\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ gitHash (undefined)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ supportWidgetAndMetrics ('on')\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ tileServer (undefined)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ tileAttributionMessage (undefined)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ rawFeatureFlags (undefined)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ stitchMigrationsLog ({ completedStitchMigrations: [] })\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ featureFlags ({})\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ lastAppJson ({})\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ existingInstallation (false)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ tenantId ('b05dc81e-bb47-4583-a9e1-d0f413a4c330')\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ chartsMongoDBUri\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ tokens\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ encryptionKeyPath\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ stitchConfigTemplate\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ libMongoIsInPath (true)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ mongoDBReachable (true)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ stitchMigrationsExecuted ([ 'stitch-1332', 'stitch-1897', 'stitch-2041', 'migrateStitchProductFlag', 'stitch-2041-local', 'stitch-2046-local', 'stitch-2055', 'multiregion', 'dropStitchLogLogIndexStarted' ])\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ minimumVersionRequirement (true)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ stitchConfig\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ stitchConfigWritten (true)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ stitchChildProcess\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ indexesCreated (true)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ stitchServerRunning (true)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ stitchAdminCreated (false)\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✔ lastKnownVersion ('0.9.0')\nmongodb-charts_charts.1.szg9zu143ddy@docker-desktop | ✖ existingClientAppIds failure: An error occurred authenticating: invalid username/password", "text": "Yes i confirm this installation and service is enable\nI changed the local port to 8888 instead of 80 but nothing on url : http://localhost:8888r5bcggfrejnk mongodb-charts_charts 
replicated 1/1 Quay *:443->443/tcp, *:8888->80/tcp", "username": "Pierre-Yves_Touche" }, { "code": "mongodb-charts_keys", "text": "Hi @Pierre-Yves_Touche -The service won’t be accessible through a browser if it does not start successfully, which is what is happening here.The “existingClientAppIds failure: An error occurred authenticating: invalid username/password” error seen in your logs can happen if the metadata database contains a previous installation of Charts, but the mongodb-charts_keys volume does not contain the required credentials. If you’re attempting a fresh installation, the easiest thing is to delete all data in the database and remove the Docker volume and start again.Tom", "username": "tomhollander" }, { "code": "", "text": "No errors in logs but no access to localhost…", "username": "Pierre-Yves_Touche" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Charts installed on localhost Windows 10, but cannot access through browser
2020-07-23T07:28:43.798Z
MongoDB Charts installed on localhost Windows 10, but cannot access through browser
4,108
null
[ "graphql" ]
[ { "code": "linkingObjects", "text": "I used linkingObjects for the inverse relationship as described here.But un-intuitively, it seems relationship was not created accordingly for the JSON schema. So I was not able to query the related objects with GraphQL.What would be the recommended way to use inverse relationship with GraphQL?", "username": "Toshi" }, { "code": "", "text": "@Toshi I’m not sure how you went about creating the inverse relationship but it looks like you tried to create it locally in your RN application based on the link you provided.You should be able to confirm the inverse relationship was created in the “Relationships” tab on the “Rules” page to ensure it’s also available for your GraphQL Schema.Since I can’t see what your app looks like, you can also message me the link to your application so I can take a look.", "username": "Sumedha_Mehta1" }, { "code": "class Chat {\n public _id: ObjectId;\n public content: string;\n public thread: Thread;\n\n public static schema: Realm.ObjectSchema = {\n name: \"Chat\",\n primaryKey: \"_id\",\n properties: {\n _id: \"objectId\",\n content: \"string\",\n thread: \"Thread\",\n },\n };\n}\n\nclass Thread {\n public _id: ObjectId;\n public title: string;\n public chats: Realm.List<Chat>;\n\n public static schema: Realm.ObjectSchema = {\n name: \"Thread\",\n primaryKey: \"_id\",\n properties: {\n _id: \"objectId\",\n title: \"string\",\n chats: {\n type: 'linkingObjects',\n objectType: 'Chat',\n property: 'thread'\n }\n },\n };\n}\nrealm.writerealm.write(() => {\n const thread = realm.create(\"Thread\", {\n _id: new ObjectId(),\n title: \"Parent thread\",\n });\n\n const chat = realm.create(\"Chat\", {\n _id: new ObjectId(),\n content: \"Child chat\",\n thread: thread,\n });\n});\nquery{\n chats{\n content\n thread{\n title\n }\n }\n}\n{\n \"data\": {\n \"chats\": [\n {\n \"content\": \"Child chat\",\n \"thread\": {\n \"title\": \"Parent thread\"\n }\n }\n ]\n }\n}\n", "text": "Hi Sumedha, thanks! 
Please let me share the code then.So let’s say a Thread has many Chats, and a Chat belongs to one Thread.\nI defined a schema based on TypeScript Class, by reading this page.I enabled Sync, and executed realm.write.Then the chat->thread relationship was successfully generated like a charm!\nAnd So I was able to query GraphQL perfectly as expected.This responds successfully like:", "username": "Toshi" }, { "code": "thread.chatsconst threads = realm.objects(\"Thread\");\nthreads.map((thread) => {\n thread.chats.map((chat) =>{\n console.log(chat.content);\n });\n})\nquery{\n threads{\n title\n chats{\n content\n }\n }\n}\n\"message\": \"Cannot query field \\\"chats\\\" on type \\\"Thread\\\".\",\n", "text": "(*I split my reply into two, due to 1-photo-per-post restriction)However, the problem is thread->chats part.Using JavaScript (TypeScript), I was able to retrieve thread.chats intuitively:On the other hand, using GraphQL, I didn’t find relationship definition and thus can’t query relationship:\nWhat I want to query is like this:But it getsWhat do you think?", "username": "Toshi" }, { "code": "", "text": "Hi Toshi –So, the inverse relationships definitions in Realm are local to the database and do not get pushed to cloud, even though one-way relationships do.To work around this, you can use a Computed Property Custom Resolver to mimic the same functionality where the thread computes the chat that it belongs to and returns the chat object.", "username": "Sumedha_Mehta1" }, { "code": "exports = async (thread) => {\n const cluster = context.services.get(\"mongodb-atlas\");\n const chats = cluster.db(\"main\").collection(\"chats\").find({\n thread: thread._id\n }).toArray();\n return chats;\n};\n", "text": "Hi Sumedha, thank you. I think I succeeded to implement a custom resolver to achieve the inverse relationship. JFYI the code was like:Custom Resolver looks really nice and versatile. It is also great as it’s explicit to show what kind of queries will actually run behind. I thought if there is a guide about this topic in the Custom Resolver page or somewhere else in Realm GraphQL docs, that would be even better.I guess at some point I will need to implement another resolver like mutations, but for now, it’s great and I can proceed. Thank you!", "username": "Toshi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
linkingObjects for GraphQL?
2020-08-30T20:19:29.372Z
linkingObjects for GraphQL?
3,877
null
[ "compass" ]
[ { "code": "$ npm start\n...\n\n\n Thu, 27 Aug 2020 04:34:42 GMT electron-squirrel-startup processing squirrel command `F:\\compass`\n2020-08-27T04:34:42.315Z hadron-auto-update-manager auto updater ready and waiting. {\n version: '1.22.0-dev.0',\n feedURL: 'https://compass.mongodb.com/api/v2/update/compass/stable/windows/1.22.0-dev.0'\n}\n2020-08-27T04:34:42.397Z mongodb-compass:menu init()\nApp threw an error during load\nError: Module did not self-register.\n at process.func (electron/js2c/asar.js:138:31)\n at process.func [as dlopen] (electron/js2c/asar.js:138:31)\n at Object.Module._extensions..node (internal/modules/cjs/loader.js:828:18)\n at Object.func (electron/js2c/asar.js:138:31)\n at Object.func [as .node] (electron/js2c/asar.js:138:31)\n at Module.load (internal/modules/cjs/loader.js:645:32)\n at Function.Module._load (internal/modules/cjs/loader.js:560:12)\n at Module.require (internal/modules/cjs/loader.js:685:19)\n at require (internal/modules/cjs/helpers.js:16:16)\n at Object.<anonymous> (F:\\compass\\node_modules\\storage-mixin\\node_modules\\keytar\\lib\\keytar.js:1:14)\nhandling uncaughtException Error: Module did not self-register.\n at process.func (electron/js2c/asar.js:138:31)\n at process.func [as dlopen] (electron/js2c/asar.js:138:31)\n at Object.Module._extensions..node (internal/modules/cjs/loader.js:828:18)\n at Object.func (electron/js2c/asar.js:138:31)\n at Object.func [as .node] (electron/js2c/asar.js:138:31)\n at Module.load (internal/modules/cjs/loader.js:645:32)\n at Function.Module._load (internal/modules/cjs/loader.js:560:12)\n at Module.require (internal/modules/cjs/loader.js:685:19)\n at require (internal/modules/cjs/helpers.js:16:16)\n at Object.<anonymous> (F:\\compass\\node_modules\\storage-mixin\\node_modules\\keytar\\lib\\keytar.js:1:14)\nMongoDB Compass Dev has encountered an unexpected error: ${app.getName()} version ${app.getVersion()}\nStacktrace:\nError: Module did not self-register.\n at process.func (electron/js2c/asar.js:138:31)\n at process.func [as dlopen] (electron/js2c/asar.js:138:31)\n at Object.func (electron/js2c/asar.js:138:31)\n at Object.func [as .node] (electron/js2c/asar.js:138:31)\n at Object.<anonymous> (F:/compass/node_modules/storage-mixin/node_modules/keytar/lib/keytar.js:1:14)\nF:\\compass\\node_modules\\electron\\dist\\resources\\electron.asar\\browser\\api\\dialog.js:38\n throw new Error('dialog module can only be used after app is ready');\n ^\n\nError: dialog module can only be used after app is ready\n at checkAppInitialized (F:\\compass\\node_modules\\electron\\dist\\resources\\electron.asar\\browser\\api\\dialog.js:\n38:15)\n at messageBox (F:\\compass\\node_modules\\electron\\dist\\resources\\electron.asar\\browser\\api\\dialog.js:97:5)\n at Object.showMessageBox (F:\\compass\\node_modules\\electron\\dist\\resources\\electron.asar\\browser\\api\\dialog.j\ns:157:16)\n at Object.showMessageBox (F:\\compass\\node_modules\\electron\\dist\\resources\\electron.asar\\common\\api\\deprecate\n.js:144:32)\n at process.<anonymous> (F:\\compass\\src\\main\\index.js:40:27)\n at process.emit (events.js:205:15)\n at process._fatalException (internal/process/execution.js:146:25)\n", "text": "Hi everyone, I’m a bit of a Javascript newb trying to build Compass from source on Windows 10.I cloned the source code from github at GitHub - mongodb-js/compass: The GUI for MongoDB.I’ve installed node and npm on this machine.\n$ node -v\nv12.18.3\nnpm -v\n$ npm -v\n6.14.6I’ve also installed electron by running ‘npm install electron 
--save-dev’ and ‘npm install --save-dev hadron-build’.I’m trying to run ‘npm start’ from within the compass subdirectory in a cygwin terminal. Running from within the windows command terminal fails at the ‘rm -rf .compiled-sources’ step. Here’s the error:Is this just some odd behavior from trying to build under the cygwin terminal under Windows? If so, what is the preferred method for building Compass from scratch on Windows?Thank you all for your time", "username": "Justin_Hopkins" }, { "code": "rm -rf .compiled_sourcespackage.json\"prestart\": \"rd -r .compiled_sources\"rm -rf.compiled_sources\"prestart\": \"\"", "text": "Hi @Justin_Hopkins!If you’re building from source on Windows, you can try two things to get bypass that rm -rf .compiled_sources step:", "username": "irina" }, { "code": "", "text": "Hi Irina,Thank you for this suggestion. This did help me get past that step. After some further twiddling I realized I forgot to ran ‘npm install’ from inside the compass subdirectory. It is now compiled and running.Cheers", "username": "Justin_Hopkins" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trouble building Compass from source
2020-08-27T05:49:07.838Z
Trouble building Compass from source
2,708
https://www.mongodb.com/…f5198ea9ba0.jpeg
[ "mongodb-shell" ]
[ { "code": "", "text": "Hello, Everyone! I tried to install mongo shell. But I’m facing a problem here. image966×858 154 KB I have done following all of the steps here, but Catalina cannot verify the installer. What I’m supposed to do?Best regards,\nFahmi", "username": "Fahmi_Hidayat" }, { "code": "", "text": "Please check this link", "username": "Ramachandra_Tummala" } ]
Cannot Install Mongo Shell
2020-08-31T17:34:44.373Z
Cannot Install Mongo Shell
2,049
null
[ "node-js", "data-modeling" ]
[ { "code": "", "text": "How to save video and how to display the video using node js", "username": "Praveen_Gupta" }, { "code": "", "text": "Tell me more about what you are trying to accomplish. Where are your video files hosted? I actually wouldn’t recommend saving video files in a MongoDB database. Instead, I would recommend blob storage like S3, and you breaking up the file into chunks that can be streamed to your clients. There is a great walkthrough on how to do this here: Video Stream With Node.js and HTML5 | by Diogo Spínola | Better ProgrammingLet me know if you want me to clarify anything ", "username": "JoeKarlsson" } ]
How to save video and how to display the video using node js
2020-08-24T13:29:36.943Z
How to save video and how to display the video using node js
14,037
null
[]
[ { "code": "", "text": "Hi,\nI am developing SaaS around MongoDB for other applications to be developed on it. I have 2 questions:Thanks.", "username": "Sandeep_Kalra" }, { "code": "host:port", "text": "Welcome to the community @Sandeep_Kalra!if a single node of MongoDB is scaled up to say 3 or 5 replica-set, then do we have to restart the first node’s MongoDB or can i simply add replica-set info to it.If your single node was started as a replica set, you can update the existing replica set configuration without restarting the original instance. Drivers and applications using replica set connections will automatically discover configuration changes. See Add Members to a Replica Set.However, it is best practice to provide a seed list of 2 or more replica set members in your driver connection strings, so typically you would want to create a base replica set first (three members). The seed list of host:port pairs provided in a connection string is used by drivers/clients for initial discovery of the current replica set configuration. Upon successful connection to a member of the seed list, clients will retrieve a canonical list of replica set members to connect to, which may be different from the original seed list. If your seed list only includes a single member of your replica set, your application would be unable to connect if that member is unavailable.I want to put hooks on internal events such as one of the replica-set node went down, or, leader went down, and re-election happened (and new leader is selected etc). Is that possible? If yes, how? Reference to some examples will do.There are no built-in hooks for triggering external actions on election events, since failover is handled by internal logic.However, you can set up alerts & monitoring using operational tooling like MongoDB Cloud Manager (cloud hosted monitoring, backup, and automation) or MongoDB Ops Manager (on-prem monitoring, backup, and automation). Both of these products include alerting and APIs which you can integrate with your operational processes.You could also consider MongoDB Atlas for a managed global cloud data service so you can focus on developing your SaaS application.If you are interested in exploring operational approaches for the SaaS platform you are building, I suggest contacting the MongoDB sales team. We have worked with many companies building SaaS platforms and there are operational solutions ranging from fully self-hosted to cloud managed.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks a lot. It clarified a lot. I am not using enterprise software as my focus is purely on FOSS at this point. I have one follow up question here:\nLet’s say in a 5 node setup, and quorum of 3 [0=leader, 1-thru-4=replica] the leader was configured with users and roles (admin). and through connection I know who is the leader. At some point the leader went down, and say replica-2 is now a new leader and HA still holds true. Will it transfer the user and roles along with user-data or do I need to recreate the users and roles again?", "username": "Sandeep_Kalra" }, { "code": "localstartup_logoplog.rsadmin.system.usersadmin.system.rolesprimarysecondary", "text": "Let’s say in a 5 node setup, and quorum of 3 [0=leader, 1-thru-4=replica] the leader was configured with users and roles (admin). and through connection I know who is the leader. At some point the leader went down, and say replica-2 is now a new leader and HA still holds true. 
Will it transfer the user and roles along with user-data or do I need to recreate the users and roles again?Hi @Sandeep_Kalra,All data written through the current primary (including users & roles) is replicated to secondaries, with the exception of the local database which has instance-specific data like the local startup_log and oplog.rs collections.User and role information is stored as documents in System Collections (admin.system.users and admin.system.roles) which are reserved for internal use.If you are new to administering MongoDB deployments, I would strongly recommend taking some of the free online courses in the MongoDB University DBA Track. The latest session of M103 (Basic Cluster Administration) just started this week and provides a good introduction to admin concepts and tasks. All curriculum (chapters & exercises) is available when you enrol, and you have two months to complete a course after enrolment.Note: the preferred terminology for replica set roles is primary and secondary.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This was helpful. Thanks a lot.", "username": "Sandeep_Kalra" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SaaS Operations
2020-08-24T21:31:21.390Z
SaaS Operations
1,962
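A small PyMongo sketch of the seed-list advice from the replies above. The host names, database, and collection are made up for illustration; the point is that the connection string lists several replica set members, and the driver discovers the rest of the topology (including a newly elected primary) on its own.

    from pymongo import MongoClient

    # Seed list of several replica set members; hosts are placeholders.
    # The driver uses the seed list only for initial discovery and then
    # connects to whichever members are currently primary/secondary.
    client = MongoClient(
        "mongodb://node0.example.com:27017,node1.example.com:27017,node2.example.com:27017/"
        "?replicaSet=rs0"
    )

    # Writes go to the current primary; after an election the driver
    # re-discovers the new primary automatically on later operations.
    client.appdb.events.insert_one({"type": "signup", "tenant": "acme"})
    print(client.primary)   # (host, port) of the member currently acting as primary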
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 3.6.20-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 3.6.19. The next stable release 3.6.20 will be a recommended upgrade for all 3.6 users.\nFixed in this release:3.6 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 3.6.20-rc1 is released
2020-08-31T16:29:23.640Z
MongoDB 3.6.20-rc1 is released
1,973
null
[ "python" ]
[ { "code": "db.createCollection(\"collect\", {\n validator: {\n $jsonSchema: {\n bsonType: \"object\",\n additionalProperties: true,\n required: [\"component\", \"path\"],\n properties: {\n component: {\n bsonType: \"string\"\n },\n path: {\n bsonType: \"string\",\n description: \"Set to default value\"\n }\n }\n }\n)\n", "text": "Hello everyone,Is there any example to create $jsonschema validator in pymongo library(in python)?\nEx:i require to implement in pymongo instead of js.Best Regards,\nM Jagadeesh", "username": "Manepalli_Jagadeesh" }, { "code": "from pymongo import MongoClient\n\n\ndef create_collection(coll_name):\n client = MongoClient('mongodb://localhost:27017/')\n db = client.test\n result = db.create_collection(coll_name, validator={\n '$jsonSchema': {\n 'bsonType': 'object',\n 'additionalProperties': True,\n 'required': ['component', 'path'],\n 'properties': {\n 'component': {\n 'bsonType': 'string'\n },\n 'path': {\n 'bsonType': 'string',\n 'description': 'Set to default value'\n }\n }\n }\n })\n\n print(result)\n\n\nif __name__ == '__main__':\n create_collection('my_coll')\n", "text": "Here is a code sample using pymongo to create a new collection with a jsonschema:As you can see in this screenshot, I got the correct jsonschema in my MDB server:image862×732 14.1 KB", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you for example in pymongo!! @MaBeuLux88.", "username": "Manepalli_Jagadeesh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Defining data schema using pymongo
2020-08-31T11:43:14.029Z
Defining data schema using pymongo
9,207
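As a hedged complement to the thread above: if the collection already exists, the same $jsonSchema can be attached or updated with the collMod command instead of create_collection. The collection name and validationLevel below are illustrative choices, not requirements.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/")
    db = client.test

    # collMod attaches or updates the $jsonSchema validator on an existing collection.
    db.command("collMod", "my_coll", validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["component", "path"],
            "properties": {
                "component": {"bsonType": "string"},
                "path": {"bsonType": "string", "description": "Set to default value"},
            },
        }
    }, validationLevel="moderate")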
null
[]
[ { "code": "", "text": "Hello,\nI have collection with amount (type: number) as a field. I want 2 decimals to be placed after number in aggregation pipeline. Is there any way to do that in aggregation or any other work around.Eg: 25.toFixed(2) => ‘25.00’ in javascript.", "username": "sudeep_gujju" }, { "code": "{ $round : [ <number>, <place> ] }\n{ $trunc : [ <number>, <place> ] }\n", "text": "Hello @sudeep_gujjuyou can use $roundor $trunc depending on your needsCheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Both of them are not working.Before: totalCredit= 5000Pipeline:\n[{$set: {\ntotalCredit: {$round: [’$totalCredit’, 2]}\n}}]After: totalCredit= 5000Expected: totalCredit= 5000.00", "username": "sudeep_gujju" }, { "code": "// Select the database to use.\nuse('testFormat');\n\n// The drop() command destroys all data from a collection.\ndb.format.drop();\n\n// Insert a few documents into the format collection.\ndb.format.insertMany([\n{ '_id' : 1, 'totalCredit' : 5000 },\n{ '_id' : 2, 'totalCredit' : 5000.005 }\n]);\n\n// Run an aggregation\nconst aggregation = [\n { $project: { roundedValue: { $round: [ \"$totalCredit\", 2 ] } } }\n];\ndb.format.aggregate(aggregation);\n[\n {\n _id: 1,\n roundedValue: 5000\n },\n {\n _id: 2,\n roundedValue: 5000.01\n }\n]", "text": "Hello @sudeep_gujjuBoth are working, but you do not want to do formatting in mongodb queries. MQL is not a programming language.\nIf you really want to do formatting in the query you could convert to a string and use $cond with regex.\nI am not adding an example since I really can not suggest use formatting in a query. Trivial things such as formatting are better handled in client code. Part of the NoSQL philosophy in general is to get rid of the API bloat and just focus on the tasks at hand…This code show how it works, you did found this by your tests.Result:", "username": "michael_hoeller" } ]
MongoDB Aggregation .toFixed()
2020-08-29T19:56:17.825Z
MongoDB Aggregation .toFixed()
4,503
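A minimal sketch of the "format in the client" advice from the reply above, assuming Python/PyMongo on the client side: $round keeps the value numeric inside the pipeline, and the two-decimal string is produced by ordinary client-side formatting. The collection and field names mirror the example in the thread.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/")
    coll = client.test.format

    # $round keeps totalCredit numeric (5000 stays 5000, 5000.005 becomes 5000.01);
    # producing the string "5000.00" is left to the client, as suggested above.
    pipeline = [{"$set": {"totalCredit": {"$round": ["$totalCredit", 2]}}}]

    for doc in coll.aggregate(pipeline):
        print(f"{doc['totalCredit']:.2f}")   # e.g. 5000.00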
null
[]
[ { "code": "", "text": "Why Position:TYPE is not showing as one of the valuetypes in schema view?\nwhy only coodinates it showing as a value type?", "username": "Sakshi_Saxena" }, { "code": "", "text": "Hi @Sakshi_Saxena,Thanks for surfacing this. Let me sync with the Compass team. I will get back to you shortly.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "I think type:“Point” is basically what makes it Geospatial data. I work on geospatial data. There are various type of geospatial data types like point, linestring, polygon. This defines the shape of the geospatial data. The type point define what shape would be shown on the map. As you can see the data is displayed in the form of points on the map.I think compass work this way. further @Shubham_Ranjan can verify this.", "username": "Rabia_Naz_Khan" }, { "code": "", "text": "Hi @Rabia_Naz_Khan,Yes, that’s correct.I had made a post earlier on a similar topic here. Not every form of data is directly supported.Also check this resource : Analyze Your Data Schema~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
M001:MongoDB Documents: Geospatial Data
2020-06-22T10:01:32.012Z
M001:MongoDB Documents: Geospatial Data
1,401
null
[]
[ { "code": "", "text": "Is it a good practice to make keys as string or is it obligatory?", "username": "Rabia_Naz_Khan" }, { "code": "", "text": "Hi @Rabia_Naz_Khan,Is it a good practice to make keys as string or is it obligatory?What other data types you have in mind for keys ?", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Is it necessary to make keys as String?
2020-08-30T16:08:09.005Z
Is it necessary to make keys as String?
1,376
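To make the point above concrete, here is a hedged PyMongo sketch: BSON field names are strings, so the driver refuses to encode a document whose key is not a string. The connection details and collection name are placeholders.

    from pymongo import MongoClient
    from bson.errors import InvalidDocument

    coll = MongoClient("mongodb://localhost:27017/").test.demo

    coll.insert_one({"count": 1})        # string keys: fine

    try:
        coll.insert_one({1: "one"})      # non-string key
    except InvalidDocument as exc:
        # BSON field names must be strings, so the driver refuses to encode this.
        print("rejected:", exc)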
null
[]
[ { "code": "doc = col.find_one({\"key\": \"value\"})\nprint(doc.key) # => \"value\"\n", "text": "I wander if there are some features in pymongo which enable attribute-like access in the query result.For example, I want:It seems that current API does not support it (which returns python dictionary, instead), or I’ve not carefully read the documentation.\nThanks for all your generous helps!", "username": "sn_w" }, { "code": "nullprint(doc.key) # => \"value\"", "text": "Hello @sn_w,The findOne method returns a document or null. In PyMongo it is the same as specified at find_one.To get the value of a field you have to the print(doc.key) # => \"value\". But, you can use projection in the query to retrieve limited fields from the document.There are ORM like tools which are listed at Tools - PyMongo - this might be something you are looking for.", "username": "Prasad_Saya" }, { "code": "doc[key]", "text": "To print a value in python the syntax is different from JavaScript. You must say doc[key].", "username": "Joe_Drumgoole" } ]
Return python object rather than dictionary for `find` method
2020-08-31T07:34:15.759Z
Return python object rather than dictionary for `find` method
6,479
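A hedged sketch of one workaround for the thread above: since find_one returns a plain dict, a small wrapper class (or a full ODM such as MongoEngine, as the reply suggests) can expose keys as attributes. AttrDoc below is a hypothetical helper written for illustration; it is not part of PyMongo.

    from pymongo import MongoClient

    class AttrDoc(dict):
        """Hypothetical thin wrapper: read dict keys as attributes."""
        def __getattr__(self, name):
            try:
                return self[name]
            except KeyError as exc:
                raise AttributeError(name) from exc

    col = MongoClient("mongodb://localhost:27017/").test.things
    col.insert_one({"key": "value"})

    raw = col.find_one({"key": "value"})
    doc = AttrDoc(raw)
    print(doc["key"])   # "value" -- normal dictionary access
    print(doc.key)      # "value" -- attribute-style access via the wrapper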
null
[ "atlas-triggers" ]
[ { "code": "", "text": "I’d like to delete anonymous users when they log out, since I only use them for accessing public static data for users that are not logged in to my app. Can I do that using triggers? I can’t find a “logout” trigger.Thanks!", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hi @Jean-Baptiste_Beau,Currently there is no trigger based on a logout event.I think you have 2 options :Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,I guess the scheduled trigger could work. How can I get all the users from a function?Thanks,\nJB", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hi @Jean-Baptiste_Beau,I will sketch something for you in the upcoming days…Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "anon-user<APP-ID>exports = async function() {\n\n // Get Atlas Parameters and application id\n const AtlasPrivateKey = context.values.get(\"AtlasPrivateKey\");\n const AtlasPublicKey = context.values.get(\"AtlasPublicKey\");\n const AtlasGroupId = context.values.get(\"AtlasGroupId\");\n const appId = '<APP-ID>';\n \n \n // Authenticate to Realm API\n const respone_cloud_auth = await context.http.post({\n url : \"https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login\",\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"]},\n body : {\"username\": AtlasPublicKey, \"apiKey\": AtlasPrivateKey},\n encodeBodyAsJSON: true\n \n });\n \n const cloud_auth_body = JSON.parse(respone_cloud_auth.body.text());\n \n // Get the internal appId\n const respone_realm_apps = await context.http.get({\n url : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps`,\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`]\n }\n \n });\n \n const realm_apps = JSON.parse(respone_realm_apps.body.text());\n \n \n var internalAppId = \"\";\n \n realm_apps.map(function(app){ \n if (app.client_app_id == appId)\n {\n internalAppId = app._id;\n }\n });\n \n \n // Get all realm users \n const respone_realm_users = await context.http.get({\n url : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps/${internalAppId}/users`,\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`]\n }\n \n });\n \n \n const realm_users = JSON.parse(respone_realm_users.body.text());\n \n \n // Filter only anon-users \n var usersToDelete = [];\n \n realm_users.map(function(user){ \n if (user.identities[0].provider_type == \"anon-user\")\n {\n usersToDelete.push(user._id);\n }\n });\n console.log(JSON.stringify(usersToDelete));\n \n \n // Delete the users on the list\n usersToDelete.map(function(id){ \n const respone_realm_users_delete = context.http.delete({\n url : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps/${internalAppId}/users/${id}`,\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`]\n }\n \n });\n });\n \n};\n", "text": "Hi @Jean-Baptiste_Beau,Please see the way I implemented the Realm anon-user deletes.PrerequistesYou will need to create 3 secrets in your application to authenticate with realm APII’ve created the following trigger running each 5min:\nAnonDeleteTrigger3012×1470 226 KB\nAnd added the following function to it 
which runs the entire logic, remember to replace <APP-ID> with your application id :This will log all the user ids that were deleted.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Delete anonymous users upon log out: trigger?
2020-08-23T15:46:24.307Z
Delete anonymous users upon log out: trigger?
5,489
null
[ "compass" ]
[ { "code": "", "text": "Dear developers, please update Compass for OS Catalina, it’s not working for it. Thanks in advance", "username": "Gabi_Kamilova" }, { "code": "", "text": " Hi @Gabi_Kamilova and welcome to the forums.I am running on macOS Catalina and can run the most recent build of Compass without any issue. What issues are you having?\nScreenshot_2020-04-02 18.02.42_b3Med4973×863 260 KB\n", "username": "Doug_Duncan" }, { "code": "", "text": "@Gabi_Kamilova Compass works on Catalina but because the application is not notarized (we are working on it) MacOS complains. You can get around that warning by going to Applications, right clicking on MongoDB Compass, and selecting Open from the contextual menu. You will still get the warning but you’ll have an additional button to open it anyways.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Hi, @Massimiliano_Marcon, I am not getting the option of open anyways on my laptop.", "username": "Ayushi_Dalmia" }, { "code": "", "text": "@Ayushi_Dalmia go to finder then Applications then right click. you should be fine. cheers", "username": "Samuel_Adranyi" }, { "code": "", "text": "@Samuel_Adranyi @Massimiliano_Marcon I am having the same problem, and the fix you stated does not work for me. When I right-click Compass in the Applications folder and press Open in the dropdown menu, the same error box pops up as if I opened it using the desktop icon; it does not give me an “Open Anyways” button (see screenshot; I would show you my dropdown menu screenshot as well but it’s only letting me add one image. I assure you, I did press “Open” from the dropdown menu).Screen Shot 2020-08-28 at 5.46.29 PM1678×586 186 KBThe other fix I have seen is as described at the bottom of Apple Support Page HT202491 (Safely open apps on your Mac). However, when I go to Security & Privacy on my Mac the “was blocked” blurb does not appear for me, and there is no “Open Anyways” button.Any help appreciated!", "username": "Sophia_C_Freaney" }, { "code": "", "text": "In your security settings, do you have “Allow apps downloaded from App Store and identified developers”?\nimage780×685 73.5 KB", "username": "Massimiliano_Marcon" } ]
Update Compass for new MacOS
2020-04-02T21:35:17.911Z
Update Compass for new MacOS
4,759
null
[]
[ { "code": "ValueError: [TypeError(\"'ObjectId' object is not iterable\"), TypeError('vars() argument must have __dict__ attribute')]\nuser = users.find_one({\n '_id': ObjectId(time_data.user_id)\n})\n", "text": "Hello, Whenever I’m trying to run the find_one query I’m getting this error:Here is the query that I’m running:What I’m doing wrong here?", "username": "aby669" }, { "code": "time_data.user_id_id", "text": "Hello @aby669, welcome to the community.Please post sample values of the variable time_data.user_id and the field _id of the collection’s document. They need to be of same data type for the query to run without errors.", "username": "Prasad_Saya" }, { "code": "_idtime_data.user_id", "text": "@Prasad_Saya thanks for replying,Here is the document:I want to search this using _id which is automatically generated. the time_data.user_id is in the string formate.", "username": "aby669" }, { "code": "ObjectId", "text": "Hi @aby669, hope you had a good weekend.Here is an example of such usage; querying a collection by ObjectId using as string value of it: Querying By ObjectId", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Value error on running find_one query
2020-08-29T08:57:21.933Z
Value error on running find_one query
6,268
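A minimal sketch along the lines of the "Querying By ObjectId" link in the reply above: validate the incoming string and convert it with ObjectId before building the filter. The user_id value, database, and collection names are placeholders.

    from bson import ObjectId
    from pymongo import MongoClient

    users = MongoClient("mongodb://localhost:27017/").test.users

    user_id = "5f49448078708a1b2c3d4e5f"   # hypothetical 24-character hex string

    if ObjectId.is_valid(user_id):
        user = users.find_one({"_id": ObjectId(user_id)})
        print(user)
    else:
        # Anything that is not 12 bytes / 24 hex characters would make ObjectId()
        # raise bson.errors.InvalidId, so it is checked before building the filter.
        print("not a valid ObjectId string:", user_id)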
null
[ "dot-net" ]
[ { "code": "public class Employee \n{\n [BsonRepresentation(BsonType.ObjectId)]\n public string Id { get; set; }\n}\nBsonClassMap.RegisterClassMap<Employee>(cm => \n{\n cm.AutoMap();\n cm.IdMemberMap.SetSerializer(new StringSerializer(BsonType.ObjectId));\n});\n", "text": "https://mongodb.github.io/mongo-csharp-driver/2.10/reference/bson/mapping/\nI follow the docs before, I found it’s works like thisbut it doesn’t work like this", "username": "funk_xie" }, { "code": "ObjectId", "text": "Hi @funk_xie,With AutoMap() you don’t need to register ObjectId, it should already be registered by default.\nCould you elaborate with examples on what are you trying to do ? For example, it would be helpful to provide:Regards,\nWan.", "username": "wan" }, { "code": " var myClass = new MyClass{ Name = \"testClass\" };", "text": "Hi wan,\nThanks for your reply.\nI expect that the property Id will convert to string type when reading data from the database and will convert the string type back to an ObjectId type when writing data to the database.\nThe operation is insert a new data into database, for example, this is my class named MyClass\npublic class MyClass\n{\npublic string Id { get; set; }\npublic string Name { get; set; }\n}\nthis is the entity\n var myClass = new MyClass{ Name = \"testClass\" };\nwhen i use attribute [BsonRepresentation(BsonType.ObjectId)] it will create a new objectId automatic but when i use lambda expressions to register it won’t create new id, it will insert a null value. Must i create a new objectId value manual?", "username": "funk_xie" }, { "code": "ObjectId public class MyClass \n {\n [BsonRepresentation(BsonType.ObjectId)]\n public string Id { get; set; }\n public string Name {get; set;}\n }\nIdvar document = new MyClass { Name = \"Foo\" };\ncollection.InsertOne(document);\nIdObjectIdvar document2 = new MyClass {Id=\"5f488a68d1314f2be9ea5b7d\", Name=\"Bar\"}; \ncollection.InsertOne(document2);\nIdvar result = collection.Find(new BsonDocument()).FirstOrDefault();\nConsole.WriteLine(result.Id);\n// Outputs string \"5f488a68d1314f2be9ea5b7d\" \n", "text": "Hi @funk_xie,I expect that the property Id will convert to string type when reading data from the database and will convert the string type back to an ObjectId type when writing data to the database.You don’t need to register the BSON representation for ObjectId. For example, you could have the following class mapping:When you perform an insert as below. The value of Id will be automatically filled with an ObjectId.You could also specify your own Id (with a valid ObjectId hex string format) as below. The string value will be serialised into ObjectId in the database.When you read the value of the Id field, that would automatically be mapped back to string, for example:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thank U very much! ", "username": "funk_xie" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problem mapping classes with .NET driver 2.11.0
2020-08-19T03:48:15.084Z
Problem mapping classes with .NET driver 2.11.0
5,244
null
[]
[ { "code": "", "text": "Hi All,I am a new MongoDB Charts user. I have worked with Tableau before and am trying to replicate an actions functionality in Charts.Tableau has a functionality where I can give a hyperlink string in a visualization and then I can give an action through one can navigate to that site.Is there any way to replicate the same? Any leads would be appreciated.Thanks!", "username": "Sachchit_Chaudhary" }, { "code": "", "text": "Hi @Sachchit_Chaudhary -Unfortunately this functionality does not exist in Charts today. However we are working on a feature which will make this possible for embedded charts. Essentially we will raise events when a user interacts with a chart, which can be handled by the host application - navigating to a URL is one example of what the app could do in response. Will this meet your needs, or are you looking for something similar on the dashboard view?Tom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Hyperlink Action Functionality
2020-08-30T20:19:20.101Z
Hyperlink Action Functionality
2,267
null
[ "stitch" ]
[ { "code": "", "text": "I received the following error when connecting to Realm using pymongo. I use the server API key method to connect to the Realm. e.g. mongodb://_:[API-KEY]@realm.mongodb.com:27020/?authMechanism=PLAIN&authSource=%24external&ssl=true&appName=[myapp id]:mongodb-atlas:api-keyIt was working fine until 19th Wed. Not sure any upgrade was done to realm caused this.pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed: realm.mongodb.com:27020: [Errno 0] Error, Timeout: 30s, Topology Description: <TopologyDescription id: 5f4278f71d115b964f46808f, topology_type: Single, servers: [<ServerDescription (‘realm.mongodb.com’, 27020) server_type: Unknown, rtt: None, error=AutoReconnect(‘SSL handshake failed: realm.mongodb.com:27020: [Errno 0] Error’)>]", "username": "Roshan_Prabashana" }, { "code": "", "text": "Hey @Roshan_Prabashana, did you solve this issue? we are experiencing the same since two days ago.", "username": "Mauricio_Giraldo" }, { "code": "", "text": "No . We decided to move away from Realm . Mongodb support person suggested to go for to paid support for answering this as they don’t have Realm support. Even I can’t get it worked from my desktop. As we can’t rely on this sort of service we decided not to use for any production until we decide go for paid support version( we have taken this service for evaluation purpose) . I am pretty sure we did not do any changes from our end. Suddenly it stop working.", "username": "Roshan_Prabashana" }, { "code": "", "text": "Hi @Mauricio_Giraldo and @Roshan_Prabashana,We are actively working on investigating and fixing the issue. We apologise for any inconvenience you have experienced, we will update you with any updates we have as soon as possible.We appreciate your patience as we work through this issue.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Mauricio_Giraldo and @Roshan_Prabashana,The Realm application should be able to receive your connections now as we fixed the backend issue.Please let us know if that works for you. Again apologise for this inconvenience.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny, It’s working now. Thank you for the follow up on this. ", "username": "Mauricio_Giraldo" } ]
Connecting error with pymongo
2020-08-23T14:37:02.194Z
Connecting error with pymongo
6,934
null
[]
[ { "code": "", "text": "Hi,We are planning to deploy MongoDB Community Edition 4.0 on Azure Linux VM, is MongoDB support AES256 for database backup and Data-at-Rest?What Data Encryption features (Data-at-rest and Data-at-transit) available in Community edition 4.0?Dinesh", "username": "D_P" }, { "code": "", "text": "Hi @Dinesh_Gopinathan,The 4.0 Community edition support SSL/TLS data encryption.The encrypted backups and WiredTiger encryption at rest are available in the enterprise version of the product as well as other security features.The enterprise version require a licences and I can contact you with the relevant sales representative if you want to.MongoDB 4.2 introduced Field Level Encryption in the driver layer.See our full security check list:If you intend to run your instances in Azure I will strongly recommend using mongodb Atlas which have encrypted storage volumes and backups ,plus all the advantages of a managed service with best security measures (SSL, boc peering, auditing etc.)Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
AES256 support in Community Edition
2020-08-28T20:05:41.306Z
AES256 support in Community Edition
1,595
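To illustrate the TLS-in-transit point above from the client side, here is a hedged PyMongo sketch. It assumes the Community 4.0 mongod has already been configured with a server certificate and TLS required (server-side configuration not shown); the host name and CA file path are placeholders, and the tls/tlsCAFile options shown require a reasonably recent PyMongo.

    from pymongo import MongoClient

    # Assumes mongod was started with TLS enabled and a server certificate.
    # Host name and file path are placeholders.
    client = MongoClient(
        "your-azure-vm.example.com",
        27017,
        tls=True,
        tlsCAFile="/etc/ssl/mongodb-ca.pem",
    )

    print(client.admin.command("ping"))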
null
[]
[ { "code": "mongodb://mongos0.example.com:27017,mongos1.example.com:27017,mongos2.example.com:27017\n", "text": "Hey there,\nI couldn’t find information in the documentation on what happens if I specify multiple mongos instances within the connection string, e.G.:Will all instances of my application connect to the mongos0.example.com instance, or will they randomly pick one, or how is the actual mongos instance being selected?I am using mongoose which uses the latest [email protected] node driver.", "username": "Jascha_Brinkmann" }, { "code": "mongosmongosmongoslocalThresholdMSmongos", "text": "Hi @Jascha_Brinkmann,If there is more than one mongos instance in the connection seed list, the driver determines which mongos is the “closest” (i.e. the member with the lowest average network round-trip-time) and calculates the latency window by adding the average round-trip-time of this “closest” mongos instance and the localThresholdMS . The driver will load balance randomly across the mongos instances that fall within the latency window.See more details here:Best\nPavel", "username": "Pavel_Duchovny" } ]
Mongos selection
2020-08-29T23:23:16.696Z
Mongos selection
1,700
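A hedged PyMongo illustration of the behaviour described above: list several mongos instances in the seed list and optionally tune localThresholdMS, the latency window the driver uses when spreading operations across the "closest" mongos instances. The host names and the 15 ms value are placeholders.

    from pymongo import MongoClient

    # Three mongos instances in the seed list (hosts are placeholders). The driver
    # measures round-trip time to each and spreads operations randomly across the
    # mongos whose RTT falls within localThresholdMS of the fastest one.
    client = MongoClient(
        "mongodb://mongos0.example.com:27017,"
        "mongos1.example.com:27017,"
        "mongos2.example.com:27017/"
        "?localThresholdMS=15"
    )

    print(client.nodes)   # the set of (host, port) pairs the client is aware of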
null
[]
[ { "code": " \"$each\": .document(try BSONEncoder().encode(kittenFriends))\nstruct Kitten: Content { \n var _id: BSONObjectID? \n let name: String \n let color: String \n let favoriteFood: CatFood \n var friends: [Kitten]? \n}\n\napp.patch(\"kittens\", \":name\", \"friends\") { req -> EventLoopFuture<Response> in\n \n let nameFilter = try getNameFilter(from: req)\n\n let kittenFriends = try req.content.decode([Kitten].self)\n \n let updateDocument: BSONDocument = [\n \"$push\": [\n \"friends\": [\n \"$each\": .document(try BSONEncoder().encode(kittenFriends))\n ]\n ]\n ]\n\n return collection.updateOne(filter: nameFilter, update: updateDocument)\n .hop(to: req.eventLoop)\n .flatMapErrorThrowing { error in\n throw Abort(.internalServerError, reason: \"Failed to update kitten: \\(error)\")\n }\n .unwrap(or: Abort(.internalServerError, reason: \"Unexpectedly nil response from database\"))\n .flatMapThrowing { result in\n guard result.matchedCount == 1 else {\n throw Abort(.notFound, reason: \"No kitten with matching name\")\n }\n return Response(status: .ok)\n }\n }", "text": "Hey allI have an optional array field within a document where values can be added to it over time and for some reason I can’t seem to get it to work. As an example, I’ve expanded on code from the ComplexVaporExample so hopefully this makes some sense and isn’t complete gibberish. Please let me know otherwise.I’ve added a friends property to Kitten , with a route where I want to be able to append single and / or multiple values at a time from an array.I am struggling to work how to create the required BSON to make it work for the specific line below. I have tried using the BSON array modifier but that creates an array within an array.Full code below. Unable to get correct syntax for the updateDocument", "username": "Piers_Ebdon" }, { "code": "let kittenFriends : BSON = [.string(\"Garfield\")]\nlet updateDocument: BSONDocument = [\n \"$push\": [\n \"friends\": [\n \"$each\":kittenFriends\n ]\n ]\n ]\n\nlet updateResult = try collection.updateOne(filter: [\"Name\":\"Felix\"], update: updateDocument)\nBsonDocument", "text": "Hi @Piers_Ebdon and welcome to the forums,I am struggling to work how to create the required BSON to make it work for the specific line below. I have tried using the BSON array modifier but that creates an array within an array.The syntax of $each with $push operator requires it to be an array and not a document. For example:I have a limited knowledge on Vapor, but please see MongoSwift BSON Library documentation to find out more about how to work with BsonDocument.Regards,\nWan.", "username": "wan" }, { "code": "let encodedKittens: [BSON] = try BSONEncoder().encode(kittenFriends).map { .document($0) }\n\nlet updateDocument: BSONDocument = [\n \"$push\": [\n \"friends\": [\n \"$each\": .array(encodedKittens)\n ]\n ]\n ]\n", "text": "To handle encoding the kittens correctly into the document, I think what you need is something like:", "username": "kmahar" }, { "code": "", "text": "Thanks @kmahar ! that worked ", "username": "Piers_Ebdon" } ]
Using $push + $each in Swift Driver
2020-08-27T19:47:19.020Z
Using $push + $each in Swift Driver
2,263
null
[ "atlas-search" ]
[ { "code": "", "text": "Hi,\nMy team is looking into Atlas and is impressed with the full text search capabilities. I see there is a limit on the number of search indexes on M0,M3 and M5 clusters but the documentation doesn’t provide information on higher cluster tiers. Any ideas where I can get more information about the index support per cluster tier before we move our data in Atlas cluster?", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi @Supriya_Bansal,With dedicated clusters there are no specific limitations to the search indexes rather than regular bounds that are enforced by your instance type ( storage,Ram,CPU) .Having said that I would recommend going into the following link and review all recommendationshttps://docs.atlas.mongodb.com/reference/atlas-search/performance/#index-size-and-configurationBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Perfect! Thank you!!", "username": "Supriya_Bansal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas full text search index limit
2020-08-28T14:21:28.679Z
Atlas full text search index limit
3,935
null
[]
[ { "code": "", "text": "Hey everyone, I’ve hit a wall with figuring out NSCache in conjunction with Realm. Basically, I have a function for opening up the Realm and storing results from some Plant objects, and that works great. Additionally, I was able to set up another model for caching Image URLs, and I’m able to pass through a URL string from the Plant object retrieved from Realm just fine.The problem is, I can’t seem to figure out how to cache ALL the data from the Realm Object once it’s called - right now, everything’s based on EnvironmentObject/Observable Object, so every single time the view is accessed, the Realm function shoots off and it takes a few seconds to propagate even 5 documents from Realm. I’d like to avoid that by caching objects altogether.Would this require shifting data from the ObservableObject Realm Object into CoreData? I’d like to avoid that and stick with NSCache if possible, but would love any advice on the matter!", "username": "Aabesh_De" }, { "code": "", "text": "NSCache is probably not the right solution. More importantly, where are the images actually being stored?", "username": "Jay" }, { "code": "", "text": "What would you recommend for caching the actual object content? NSCache is recommended commonly for images, as it’s thread-safe and removes items from cache when memory is needed by other apps - the images are called via a URL link (hosted by another site), and then stored locally on the device.", "username": "Aabesh_De" }, { "code": "", "text": "I was more asking what process you’re using to work with the images. e.g. you have a Realm Object that a url property? Or you’re using NSData to actually store the data in Realm itself? If they are stored locally, do you need to cache them?", "username": "Jay" }, { "code": "", "text": "Ah so it’s a URL property within the Realm Object that’s called through a URLSession. Once the URL is accessed, I pass it through a URLImageModel that then loads the image and caches the image itself locally. Check out the pastebin for the model here: Pastiebin.com 5f49448078708I feel like I have to do something similar with the above model and key in the other properties/data from the Realm Object and cache it similarly?", "username": "Aabesh_De" }, { "code": "", "text": "That’s going to be hard to say without understanding the entire use case.However, Realm is an offline-first database so there would not be any reason to cache the Realm objects since they are locally stored anyway. Additionally, the Realm objects are lazily loaded so the memory impact of a Results object is minimal - again, no cache’ing would be needed.", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Caching Data in Swift from a Realm Object?
2020-08-26T23:10:38.618Z
Caching Data in Swift from a Realm Object?
3,399
null
[]
[ { "code": "", "text": "Hey I am a newbie to MongoDB and planning to port from SQL for my existing iOS app.I have a question about how to structure the main database which stems from my lack of understanding of how to handle multiple users.BACKGROUND\nMy app is a patient database with a bunch of relational tables but I think I know how to structure into the document model of MongoDB Realm. Currently my app does not sync to the cloud and is a local SQL database on the device itself. I have many users but their data is visible only to them on their device.DATABASE SETUP?\nMy question relates to how to set up the database in mongoDB? Is it one big database on a server and each user logs in and then I filter patients related to that user on their device? Or, does each user have their own copy of the database of only their patients and each of these databases for each user is backed up to a server?if it is one big database how do we keep any local device data limited to just that user’s data (ie I don’t want to sync some giant database back and forth between all the users who really are only ever accessing their own segment of the database).Having asked this, I guess it would be cool to have it all on one big DB because I could look at usage trends and other analytics for users as a whole. I don’t believe the app will ever share patients with other users but it might not be a bad thing to leave it that way just in case.Any help on this would be greatly appreciated.", "username": "Gregory_Moore" }, { "code": "owner_idowner_id", "text": "Having asked this, I guess it would be cool to have it all on one big DB because I could look at usage trends and other analytics for users as a whole. I don’t believe the app will ever share patients with other users but it might not be a bad thing to leave it that way just in case.Any help on this would be greatly appreciated.Hello @Gregory_Moore.I think your demands are exactly solved by the per user partition in MongoDB Realm, quoting from the docs:If your application stores data privately for each individual user, you could create an owner_id field containing a specific user ID for every document. Selecting the owner_id field as your partition key would partition your database into realms corresponding to individual users.More information is found here.I hope that helps your usecase.", "username": "Christian_Huck" } ]
Newbie Question on how to set up multi user data model for iOS
2020-08-27T19:45:45.629Z
Newbie Question on how to set up multi user data model for iOS
1,955
null
[]
[ { "code": "", "text": "Starting yesterday, I’ve been getting a failed status when deploying to my Github:Failed: failed to import app: error validating Service: pti: only [uri, clusterId, clusterName, clusterType, clusterSize, clusterUpdated, dbUsername, dbPassword, lastUpdated, wireProtocolEnabled, regionName, groupName, orgName, readPreference, readPreferenceTagSets, sync] are allowed config optionsI’ve sent this to support and no one can help me.", "username": "Solutegrate_Support" }, { "code": "", "text": "Hi @Solutegrate_Support,There was a known issue which was fixed a few hours back.Can you retry the deploy?Best\nPavel", "username": "Pavel_Duchovny" } ]
Realm Github deploy: error validating Service
2020-08-27T19:46:02.304Z
Realm Github deploy: error validating Service
1,866
https://www.mongodb.com/…7842d0686a6f.gif
[ "swift" ]
[ { "code": "", "text": "\nEver thought about using Swift with MongoDB? Jump on with @nraboy and me tomorrow! We’ll go live at 1 pm EDT tomorrow, August 28th and I’ll demonstrate the use of the MongoDB Swift Driver and provide a walkthrough of Xcode and this awesome development environment.Twitch - August 28th @ 1pm EDTI’ll be placing some resources and links back here in the Drivers & ODMs section. In the meantime, who’s using Swift on the Server-side? Anybody?", "username": "Michael_Lynn" }, { "code": "csvimport-swift --file=/Users/mlynn/Desktop/example.csv --database=students --collection=people", "text": "Hey Folks - thanks for tuning in. If you want to check out the source code for the demo today, visit GitHub - mrlynn/csvimport-swift: Import CSV File using MongoDB Swift Driver - let me know your thoughts on adding argument processing to this project. I’d like to accept arguments like this:csvimport-swift --file=/Users/mlynn/Desktop/example.csv --database=students --collection=peopleWould you use a third party library - or - do you use native swift libraries for argument parsing?", "username": "Michael_Lynn" } ]
Hey folks! Join me tomorrow for a Twitch Swift Driver Livestream Event!
2020-08-27T16:28:35.281Z
Hey folks! Join me tomorrow for a Twitch Swift Driver Livestream Event!
2,082
https://www.mongodb.com/…318ac394c53e.png
[]
[ { "code": "", "text": "I am using Ubuntu 16.04 on virtualbox. I’ve installed MongoDB server 4.0 and have just enabled authentication, created admin user with localhost exception.I am importing a dataset with mongoimport. The command I’ve used and the output it gave is this:strange_mongoimort_behavior991×455 118 KBExpected output is obvious, it shouldn’t import as it requires authentication (I forgot to supply username and password) and it’s not imported so that’s fine.But one thing to notice here are the logs printed on the command line!It says 58.9% completed, 100.0% completed and finally “imported 18000 documents”.Is it correct message even though it’s been failed to import?Just wanted to share as I came across this instance.", "username": "viraj_thakrar" }, { "code": "", "text": "Agree\nTested it on Windows\nNoted same observationWithout access control enabled the command works and import is successful\nWith access control enabled it fails though log shows imported docs2020-08-28T07:44:21.062+0530 num failures: 474\n2020-08-28T07:44:21.062+0530 error inserting documents: command insert requires authentication\n2020-08-28T07:44:21.062+0530 imported 50000 documents", "username": "Ramachandra_Tummala" } ]
Mongoimport: Sharing output as it doesn't quite match with what it says
2020-08-27T10:15:28.078Z
Mongoimport: Sharing output as it doesn’t quite match with what it says
2,467
null
[]
[ { "code": "", "text": "Testing if the path was set correctly.\n/Users/wgrizzle/mongodb/mongodb-macos-x86_64-enterprise-4.4.0/bin\nAfter the restart of terminal, I executed the below but received the message.\nLast login: Fri Aug 28 07:20:49 on ttys000wgrizzle@MAGNETO ~ % mongo --nodbzsh: command not found: mongo", "username": "Wayne_Grizzle_Grizzl" }, { "code": "", "text": "Is mongodb path updated correctly?\nCan you run /Users/wgrizzle/mongodb/mongodb-macos-x86_64-enterprise-4.4.0/bin/mongo --nodbecho $PATH should show your mongodb/bin", "username": "Ramachandra_Tummala" } ]
Mongodb Shell on OSX
2020-08-28T12:49:52.265Z
Mongodb Shell on OSX
1,372
null
[ "aggregation" ]
[ { "code": "{\n _id: ObjectId(\"5f3dd02a1b4d50831f1334d2\")\n location: \"SEA\"\n color: 'red'\n}\n{\n _id: ObjectId(\"5f3dd02a1b4d50831f1336c6\")\n location: \"SEA\"\n color: 'black'\n}\n{\n _id: ObjectId(\"5f3dd02a1b4d50831f133413\")\n location: \"WA\"\n color: 'red'\n}\n{ \"group_id\" : \"SEA\", \n \"other_related_fields\" : [ \n { \"_id\" : ObjectId(\"5f3dd02a1b4d50831f1334d2\"), \"color\" : \"red\" }, \n { \"_id\" : ObjectId(\"5f3dd02a1b4d50831f1336c6\"), \"color\" : \"black\" }, \n ]\n},\n{ \"group_id\" : \"WA\", \n \"other_related_fields\" : [ \n { \"_id\" : ObjectId(\"5f3dd02a1b4d50831f133413\"), \"color\" : \"red\" }, \n ]\n}\ndb.pen.aggregate([\n{ \"$unwind\" : { \"path\" : \"$location\", \"preserveNullAndEmptyArrays\" : true } },\n{ \"$group\" : { \"_id\" : { \"_1\" : \"$location\" } } },\n{ \"$lookup\":\n { \"from\" : \"pen\",\n \"let\" : { \"v1\" : \"$_id._1\" },\n \"pipeline\" : [\n { \"$match\" : { \"$expr\" : { \"$eq\" : [ \"$$v1\", \"$location\" ] } } },\n { \"$project\" : { \"color\" : 1 } },\n { \"$limit\" : 2 }\n ],\n \"as\" : \"other_field\"\n }\n},\n{$project: { \"_id\": 0, \"group_id\": \"$_id._1\", \"other_field\": \"$other_field\" }}\n])\nQuery 2\ndb.pen.aggregate([\n{ \"$unwind\" : { \"path\" : \"$location\", \"preserveNullAndEmptyArrays\" : true } },\n{ \"$group\" : { \"_id\" : { \"_1\" : \"$location\" } } },\n{ \"$lookup\":\n { \"from\" : \"pen\",\n \"let\" : { \"v1\" : \"$_id._1\" },\n \"pipeline\" : [\n { \"$match\" : { \"$expr\" : { \"$in\" : [ \"$$v1\", \"$location\" ] } } },\n { \"$project\" : { \"color\" : 1 } }\n ],\n \"as\" : \"other_field\"\n }\n},\n{$project: { \"_id\": 0, \"group_id\": \"$_id._1\", \"other_field\": \"$other_field\" }}\n])\n_id D1: {location:[SEA, WA]}, {color:black},\n D2: {location: SEA}, {color:red}\n D1: {location:SEA}, {color:black}\n D1: {location:WA}, {color:black}\n D2: {loctation:SEA}, {color: red}\nQuery3\ndb.pen.aggregate([\n{ \"$unwind\" : { \"path\" : \"$location\", \"preserveNullAndEmptyArrays\" : true } },\n{ \"$group\" : { \"_id\" : { \"_1\" : \"$location\" } } },\n{ \"$lookup\":\n { \"from\" : \"pen\",\n \"let\" : { \"v1\" : \"$_id._1\" },\n \"pipeline\" : [\n { \"$unwind\" : { \"path\" : \"$location\", \"preserveNullAndEmptyArrays\" : true } },\n { \"$match\" : { \"$expr\" : { \"$eq\" : [ \"$$v1\", \"$location_sca\" ] } } },\n { \"$project\" : { \"color\" : 1 } }\n ],\n \"as\" : \"other_field\"\n }\n},\n{$project: { \"_id\": 0, \"group_id\": \"$_id._1\", \"other_field\": \"$other_field\" }}\n])\n", "text": "The thing I am trying to do, is to $group documents into different groups,\nand in the meantime, get some other fields of each document.\nFor example, group by “location” (after $unwind so that each group key is\nscalar instead of array). And then I want to also get the “color” of each\ndocument.The example result will be as follows:The query I used is as follows:What it does is to do the group-by first, which generates a separate\ndocument for each group. And then do a $lookup on the same collection as\nto join with those temporary group-by documents by matching the group_id.\nTo match the group_id, I have to use the $expr.This works perfectly for specific data set. For example, if the value of\n“location” is scalar-only. However, the performance degraded heavily when\nthings get more complicated.1.2) Another way is to $unwind the “location” in the $lookup, and use the\n$expr + $eq which works simiarly to the scalar type. 
(Query 3)However, this brings the problem that I cannot sort and limit the\nresults based on the ‘other_field’. This is because after $unwind,\nthe array field will be split into serveral individual documents.\nAs a result I’ll have duplicate _id’s (or whatever unique key it is).\nFor example, if I have 2 documents:After $unwind, I’ll get:If I $sort on “color” and $limit:2 in the $lookup subpipeline, I’ll\nget 2 D1’s instead of D1 and D2.Thanks for reading my questions. To summarize:", "username": "Peng_Huang" }, { "code": "", "text": "I understand the question is a bit too long, but it is really important to me.\n@slava would you please kindly share some thoughts on this ^^?", "username": "Peng_Huang" }, { "code": "db.test1.insertMany([\n {\n _id: 101,\n location: ['SEA', 'WA'],\n color: 'red',\n },\n {\n _id: 102,\n location: ['SEA'],\n color: 'blue',\n },\n {\n _id: 103,\n location: 'WA', // scalar\n color: 'green',\n },\n]);\n[\n {\n \"_id\" : \"SEA\",\n \"docs\" : [\n { \"_id\" : 101, \"color\" : \"red\" },\n { \"_id\" : 102, \"color\" : \"blue\" }\n ]\n },\n {\n \"_id\" : \"WA\",\n \"docs\" : [\n { \"_id\" : 101, \"color\" : \"red\" },\n { \"_id\" : 103, \"color\" : \"green\" }\n ]\n }\n]\ndb.test1.aggregate([\n {\n $unwind: '$location',\n },\n {\n $group: {\n _id: '$location',\n docs: {\n $push: {\n // mention here only the props,\n // that you need from each document\n _id: '$_id',\n color: '$color',\n },\n },\n },\n },\n]).pretty();\n", "text": "Hello, @Peng_Huang! Welcome to the community!It seems, that you’ve over-complicated the things a bit \nLet me show you how it can be done in a more simpler way.So, we have a sample dataset (with arrays and scalars):And if we need to get the result, similar to this:We can use this simple aggregation pipeline:Also, if you know, that your field will contain one or more elements, better to make it an array initially (or migrate it later) - this way it will be easier to write aggregations (no need to use conditionals for field type, for example) and perform update operations.", "username": "slava" }, { "code": "", "text": "Hi Slava,Thanks for replying! And yes, using the $push is perfect for some scenarios. The reason I did not use $push is that I need to sort and limit on the other fields (see the point 1.2). In our example, if I sort on the “color” and limit to 2 using $push, I need to get all the documents, and then do the sort and limit myself.\nSo it would be good if there is something I leverage in MongoDB. That’s why I get to the $lookup stage.", "username": "Peng_Huang" }, { "code": "", "text": "@Peng_Huang, it would be much easier for me to understand your problem, if you:", "username": "slava" } ]
Problem with $expr on multikey indexes
2020-08-24T23:31:24.254Z
Problem with $expr on multikey indexes
3,922
null
[]
[ { "code": "", "text": "I want to do a group project for school interfacing with MongoDB. Will I be able to do this for free with the Github student discount?", "username": "Benjamin_Loshe" }, { "code": "", "text": "Hi @Benjamin_Loshe,Atlas offer a free tier with unlimited life time.This tier can be scaled to a paid tier at any desired time.MongoDB Realm serverless platform also have a generous free tier limits.Additionally if you are a github developer student you can claim credits to use for ur tasksMongoDB Student PackBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Free for school project
2020-08-27T20:24:38.055Z
Free for school project
1,629
null
[]
[ { "code": "", "text": "What is the correct way to add a secondary member to an existing shard replica set? I don’t see any documentation about this. Does the shard have to be removed and re-added to the cluster? Would I be able to clone the primary shard and stand up a secondary using that image?", "username": "Firass_Almiski" }, { "code": "dbPath", "text": "Hi @Firass_Almiski,What is the correct way to add a secondary member to an existing shard replica set? I don’t see any documentation about this.The procedures for adding and removing members from shard replica sets are the same as working with a standalone replica set. Changes in shard replica set configuration or availability are automatically discovered by clients/drivers as part of MongoDB’s distributed database design.For a guide, see: Add Members to a Replica Set. This tutorial is for the latest production release series of the MongoDB server (currently version 4.4), but versions for older releases can be selected in the navigation on the left column of that docs page. In general I would make sure you are referring to documentation matching your server version, as there may be differences in procedures or available options.Does the shard have to be removed and re-added to the cluster?Definitely not. The process of removing a shard will migrate all data from the shard being removed to existing shards, so you would only want to use that approach when decommissioning a shard.Would I be able to clone the primary shard and stand up a secondary using that image?Yes, you can use a valid file copy backup of an existing replica set member as the starting dbPath for a new member. This approach is included in the tutorial I linked above.Regards,\nStennie", "username": "Stennie_X" } ]
Add secondary member to a shard rs?
2020-08-27T19:48:12.325Z
Add secondary member to a shard rs?
2,642
null
[ "indexes" ]
[ { "code": "", "text": "What is the difference between Sort that doesn’t not use Index and Sort that uses Index?Sort that doesn’t use Index occur in memory and Sort that uses Index occur in disk, so isn’t it good to have a Sort that doesn’t use index in terms of speed?I would also like to know the comparison of results when used with other aggregates other than a only Sort.", "username": "Kim_Hakseon" }, { "code": "allowDiskUsefind", "text": "Sort that doesn’t use Index occur in memory and Sort that uses Index occur in disk, so isn’t it good to have a Sort that doesn’t use index in terms of speed?Hi @Kim_Hakseon,A sort that is supported by an index only has to fetch matching results. In the context of an aggregation pipeline, this also allows a sort stage to be non-blocking: results can be passed to other stages for processing because documents are already returned in sorted order using the index.If there isn’t an index to support a sort operation, an in-memory sort is required. In this case the server has to allocate a buffer in memory to temporarily store the search results and sort them in-place. This adds memory and computational overhead to a query and is a blocking stage for aggregation: further processing cannot happen until the results have been sorted.There is a limit for in-memory sorts to mitigate potential resource usage for a large number of concurrent in-memory sorts (100MB in MongoDB 4.4 or 32MB in earlier server versions).The aggregation framework has an allowDiskUse option which is also available for find queries in MongoDB 4.4+. This option allows data for a large in-memory sort to be written to temporary files (if necessary) rather than having the operation fail because of the in-memory sort limit.However, the optimal approach is avoiding in-memory sorts where possible.As a clarification for your description of “index on disk”, note that actively used indexes and documents are loaded into the working set in the WiredTiger internal cache which is 50% of (RAM-1GB) by default. Indexes are persisted to disk, but loaded into memory when used or updated.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Um… Is that a manual explanation?What I was curious about was this.I did this commanddb.collecion.find( { field0 : { $gt : 10000 } } ).sort( { field1 : 1 } )I’ve used this in three cases.i) no index\nii) { field1 : 1}\niii) {field1 : 1, field0 : 1 }The results of each action were like this.i) executionTimeMillis : 4541,\nsort - sort_key_generator - collscanii) executionTimeMillis : 35878\nfetch - ixscaniii) executionTimeMillis : 38614\nfetch - ixscanYou can see no index took longer.My question was this.How does Index affect the Sort to show this result?", "username": "Kim_Hakseon" }, { "code": "explain(true)", "text": "Um… Is that a manual explanation?Hi @Kim_Hakseon,My response was not copied from the documentation, if that is what you are asking. I did create it “manually” (in the English sense of putting words together myself ;-)) and included some links to relevant topics in the MongoDB documentation.My question was this.How does Index affect the Sort to show this result?I provided a general description of the processing differences to address your original question, which did not include any mention of specific queries. 
Hopefully that is helpful for forming a mental model of what happens with an in-memory sort.Comparing specific benchmark outcomes is a different discussion and needs more details about your testing methodology and results.Please provide more details for your testing scenario:How many times did you run your queries? Running a query will load required indexes and matching result documents into the working set in the WiredTiger cache. Subsequent queries will be faster because the cache has been primed by the first query.What is the output of explain(true) for each of those queries? The detailed Explain Output has essential information about the processing work involved if you are trying to understand performance.I recommend watching @Christopher_Harris’ talk from the recent MongoDB .live conference for a practical guide to understanding and tuning query performance: Tips and Tricks for Query Performance: Let Us .explain() Them.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "It must have been a bit strange to say that I asked the question using a translator, but thank you for answering me.I will study more and ask you more detailed questions later.Thank you.", "username": "Kim_Hakseon" }, { "code": "", "text": "It must have been a bit strange to say that I asked the question using a translator, but thank you for answering me.Hi @Kim_Hakseon,Not a problem. What language are you translating from? Are you using software translation?Our interaction in the forum is generally in English but perhaps someone in the community can provide clarity on specific aspects that may be less clear with software translation.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I am translating Korean using Papago.Ok, thank you~", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
About Sort and Index
2020-08-26T06:58:25.184Z
About Sort and Index
4,449
null
[ "app-services-user-auth", "react-js", "stitch" ]
[ { "code": "", "text": "I have used Email/Password Authentication Providers and once the user logs in for the first time a form appears for the user to enter custom data, all that worked fine but Stitch does not dynamically update the user’s custom data and only fetches a new copy of the data whenever the user refreshes their access token. This makes it really inconvenient to display the custom data after the user enters it, the data is accessible only after 30 mins if the user is logged in, I am using React.js for the UI so is there a way around this? to make the data appear in the UI right after the user enters the data for at least 30 minutes?!", "username": "Ahlam_bey" }, { "code": "", "text": "Interested in the same since I also use React, btw are you using CRA, Gatsby or Nextjs what worked best for you?", "username": "Ivan_Jeremic" }, { "code": "", "text": "Hi Folks – The current best practice is to refresh the token periodically or if you make a frontend change where you believe custom user data will be effected. Automatically updating the token when user data is changed is a good improvement and I’ve added it to feedback.mongodb.com.", "username": "Drew_DiPalma" }, { "code": "Realm.open(...)Realm.User.refreshCustomData()", "text": "A note on thisIf you preface all calls to Realm.open(...) with calls to Realm.User.refreshCustomData() you should be able to avoid this issue.\n(I’ve not fully tested this - but I assume this is what this function is for)B", "username": "Benjamin_Storrier" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Way around stale Custom User Data
2020-06-01T13:05:45.381Z
Way around stale Custom User Data
3,743
null
[]
[ { "code": "", "text": "HiDo we have an option for enabling the Draft for Github deployment of Realm APP? What I meant here. When the code is pushed to a Github branch. it should automatically show review and deploy option on Realm deployment page.Seems like automatic deployment does not give the option to review the code and deploy, In other words, if automatic deployment is not selected, can I assume there is no way to deploy the code in Github to any Realmapp ( any way of manual deployment from Github)", "username": "Roshan_Prabashana" }, { "code": "", "text": "From my understanding, Github deployment is meant for ‘Automatic’. In other words, if you are deploying an update to Github and you have this activated in Realm, then it’s “assumed” that you have already reviewed your code before deploying, which makes sense.", "username": "Solutegrate_Support" } ]
Enabling draft option for github deployment
2020-07-14T08:42:14.717Z
Enabling draft option for github deployment
1,313
null
[]
[ { "code": "jest-mongodbmongodb-memory-server", "text": "Hi MongoDB community,I noticed that the MongoDB CE download center contains linux_x86_64 (generic Linux) builds for v4.0 and before, but not for v4.2 and v4.4, as these only have distro-specific builds available. Where did the generic Linux build go for those new versions?To explain why I need a generic Linux version, I’m developing an application in TypeScript NodeJS and I want to use the jest-mongodb package, which uses mongodb-memory-server under the hood. This needs to download a MongoDB binary, for which it tries to recognise the distro I’m running. However, since I’m running Manjaro Linux (based on Arch, I know there is no official build for Arch Linux) it fails to download a distro-specific version, thus it falls back to the generic linux_x86_64 version. Since we’re running v4.2 in production, I would like to be able to run my tests against MongoDB v4.2, but this requires a generic linux_x86_64 variant of v4.2 (or above), so that the tests will run successfully on all of our development machines as well as the CI.", "username": "BvOBart" }, { "code": "", "text": "Hi @BvOBart,Thanks for your question. We discussed the rationale behind the removal of Legacy Generic Linux x64 Tar packages in this blog post, which I encourage you to read.You may also be interested in watching and voting for https://jira.mongodb.org/browse/SERVER-39459.Best,\nKelsey", "username": "Kelsey_Schubert" }, { "code": "", "text": "Hi Kelsey_Schubert,Thank you for your reply, the blog post certainly clarifies why the generic Linux tar packages were removed.Regarding the JIRA issue that you’re referring to, I’d love to vote on it, but it seems I need an account on the MongoDB JIRA in order to do so, which I don’t have. Would you be able to cast a vote for me, please?Additionally, it may be interesting for this issue to note the existence of the following AUR packages that already provide unofficial means of installing MongoDB on Arch Linux / Manjaro / derivatives:Perhaps these can serve as inspiration for an official implementation of the Arch Linux build.", "username": "BvOBart" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Where did the generic linux_x86_64 build of MongoDB v4.2 and higher go?
2020-08-26T20:44:09.626Z
Where did the generic linux_x86_64 build of MongoDB v4.2 and higher go?
2,473
null
[ "aggregation", "compass" ]
[ { "code": "", "text": "HiI would like to know how to perform an union (collection below another collection) in aggregation framework.–\nThank you in advance\nEzequias", "username": "Ezequias_Rocha" }, { "code": "$mergeObjects$setUnion", "text": "Hello @Ezequias_Rochacould you please provide a brief example what you want to archive? There are options to build something like SQL union statement. But generally the ‘need’ of an union, raises a question about your data model.In case you want to combine multiple documents into a single document you can use:\n$mergeObjectsIf you want to combine two or more arrays into one array containing the elements that appear in any input array you can use\n$setUnionMichael", "username": "michael_hoeller" }, { "code": "", "text": "I actually want to merge two collections (or subsets of collections) in a single document. Maybe I must use the $merge but how to I pass the documents to the aggregation pipeline?Thank you.", "username": "Ezequias_Rocha" }, { "code": "", "text": "Hi @Ezequias_Rocha,There’s also the $unionWith aggregation stage currently scheduled for MongoDB version 4.4 that may meet your needs.As a quick word of caution, version 4.4 hasn’t been finalized yet so there are no guarantees it will make it into the release.Thanks,Justin", "username": "Justin" }, { "code": "", "text": "I had a similar situation and this stackoverflow helped", "username": "Natac13" }, { "code": "", "text": "Hello @Ezequias_Rochathe $unionWith aggregation stage which @Justin mentioned looks very promising and will simplify the code a lot. There is a release candidate 4…4.0.rc0 available you may want to try that out (I will next week).I once had this issue too, but could fix it by adopting the needs to my data model, that is not always possible. Checking stackoverflow I found this anwers (which comes close to the one @Natac13 already posted.)\ngrafik606×1130 77.7 KB\nHope that helps\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hi @michael_hoellerCould you tell me why this aggregation keywords (stages) doesn’t appears at MongoDB Compass?How could I apply this aggregation in Compass? Could you tell me how?Regards\nEzequias Rocha", "username": "Ezequias_Rocha" }, { "code": "", "text": "Hi @Ezequias_Rocha!The $unionWith stage is available in Compass editions 1.22.0 and up, all of which are in some form of beta.They are available to download, just keep the beta state in mind!", "username": "yo_adrienne" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I perform a union of two distinct collections in the aggregation framework
2020-05-26T14:02:31.600Z
How do I perform a union of two distinct collections in the aggregation framework
5,621
null
[]
[ { "code": "{\n _id: 1,\n name: 'avengers',\n leader_id: 'L1'\n},\n{\n _id: 2,\n name: 'justice league',\n leader_id: 'L2'\n},\n{\n _id: 3,\n name: 'suicide squad',\n leader_id: 'L3'\n}\n{\n _id: 'L1',\n name: 'ironman',\n organization: 'MCU'\n},\n{\n _id: 'L2',\n name: 'superman',\n organization: 'DC'\n},\n{\n _id: 'L3',\n name: 'harley quinn',\n organization: 'DC'\n}\n", "text": "I am trying to make a pagination and use the countDocuments() method to return the total number of documents that would only match to all teams who’s leader is under the organization of DC.Teams CollectionLeaders CollectionMy question is, is it possible to perform $lookup aggregation to mongoDB’s countDocuments() to match the documents from 2 collections?", "username": "Jayvee_Mendoza" }, { "code": "$lookup", "text": "Hi @Jayvee_Mendoza, there is no specific method to count documents in a collection which have matching documents (EDIT ADD: in another collection). The aggregation’s $lookup operation is a way to go with. Otherwise, you may have to write two individual queries.", "username": "Prasad_Saya" } ]
MongoDB countDocuments() with lookup
2020-08-26T20:41:58.182Z
MongoDB countDocuments() with lookup
2,761
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.4.1-rc2 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.0. The next stable release 4.4.1 will be a recommended upgrade for all 4.4 users.\nFixed in this release:4.4 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.4.1-rc2 is released
2020-08-27T16:04:24.803Z
MongoDB 4.4.1-rc2 is released
2,062
null
[ "kafka-connector" ]
[ { "code": "curl -X GET http://schema-registry:8081/subjects/mongo.test.pageviews-value/versions/1/schema curl -X POST http://localhost:8083/connectors -H \"Content-Type: application/json\" -d '{\n\t \"name\": \"sr-connector-test\",\n\t \"config\": {\n\t\t\"key.converter\":\"io.confluent.connect.avro.AvroConverter\",\n\t\t\"key.converter.schema.registry.url\":\"http://schema-registry:8081\",\n\t\t\"value.converter\":\"io.confluent.connect.avro.AvroConverter\",\n\t\t\"value.converter.schema.registry.url\":\"http://schema-registry:8081\",\n\t\t\"connector.class\":\"com.mongodb.kafka.connect.MongoSourceConnector\",\n\t\t\"key.converter.schemas.enable\":\"true\",\n\t\t\"value.converter.schemas.enable\":\"true\",\n\t\t\"connection.uri\":\"mongodb://uri:[email protected]:27017,mongo-shard-00-01.mongodb.net:27017,mongo-shard-00-02.mongodb.net:27017/db.Collection?ssl=true&authSource=admin&replicaSet=mongo--shard-0\",\n\t\t\"database\":\"databa-Name\",\n\t\t\"collection\":\"Collection\",\n\t\t\"topic.prefix\": \"sr_topic\",\n\t\t\"publish.full.document.one\": \"true\"\n\t}}'\n{\"name\":\"sr-connector-test\",\"connector\":{\"state\":\"RUNNING\",\"worker_id\":\"connect:8083\"},\"tasks\":[{\"id\":0,\"state\":\"RUNNING\",\"worker_id\":\"connect:8083\"}],\"type\":\"source\"}\"{\"_id\": {\"_data\": {\"$binary\": \"gl5O1i8AARmVwXaNDYq7CWduUosquBA==\", \"$type\": \"00\"}}, \"operationType\": \"update\", \"ns\": {\"db\": \"name\", \"coll\": \"tx\"}, \"documentKey\": {\"_id\": {\"$oid\": \"5e4de1f01b8406\"}}, \"updateDescription\": {\"updatedFields\": {\"updated\": {\"$date\": 15824943562}}, \"removedFields\": []}}\"", "text": "Hi,I’m using MongoSourceConnector, to connect a Kafka I’m getting the message on the topic without a problem, but when I wanna try to do a schema-registry from this I’m getting this:{“schema”:{“type”:“string”,“optional”:false}On the schema registry:\ncurl -X GET http://schema-registry:8081/subjects/mongo.test.pageviews-value/versions/1/schema\n“string”Looks if the change stream returns a JSON string, any way to change this to return a JSON?Driver connector:Connector Status:{\"name\":\"sr-connector-test\",\"connector\":{\"state\":\"RUNNING\",\"worker_id\":\"connect:8083\"},\"tasks\":[{\"id\":0,\"state\":\"RUNNING\",\"worker_id\":\"connect:8083\"}],\"type\":\"source\"}Topic returned:\n//Always return the msg with double quotes \" \"\n\"{\"_id\": {\"_data\": {\"$binary\": \"gl5O1i8AARmVwXaNDYq7CWduUosquBA==\", \"$type\": \"00\"}}, \"operationType\": \"update\", \"ns\": {\"db\": \"name\", \"coll\": \"tx\"}, \"documentKey\": {\"_id\": {\"$oid\": \"5e4de1f01b8406\"}}, \"updateDescription\": {\"updatedFields\": {\"updated\": {\"$date\": 15824943562}}, \"removedFields\": []}}\"", "username": "Ivan_Dario_Trebilcoc" }, { "code": "", "text": "Thanks @Ross_Lawley for you answer, was very helpful.", "username": "Ivan_Dario_Trebilcoc" }, { "code": "", "text": "Hi @Ross_Lawley,I’m currently having similar problems, however I can’t find the issue you refer to.Do you have any updates regarding this?Thanks,\nMiguel", "username": "Miguel_Azevedo" }, { "code": "", "text": "My Apologies, please check out KAFKA-124 instead.Ross", "username": "Ross_Lawley" }, { "code": "", "text": "Thank you very much @Ross_Lawley, it seems to be exactly what we need to achieve our goal.Do you have any idea or roadmap on when this would be put into a release? 
(To be accessed via Confluent Hub for example)Miguel", "username": "Miguel_Azevedo" }, { "code": "", "text": "I am creating a demo that leverages this enhancement in addition to the other stuff that will be in 1.3. Here is a link to the demo, https://github.com/RWaltersMA/kafka1.3. The JAR file in the github is just a snapshot build and still in development so don’t use in production. If you have any questions LMK, also if it fits your needs or have suggestions we’d love to know too.", "username": "Robert_Walters" } ]
Mongo-Kafka source connector change stream return string?
2020-02-26T19:09:11.145Z
Mongo-Kafka source connector change stream return string?
4,239
null
[ "sharding" ]
[ { "code": "", "text": "mongos> sh.isBalancerRunning()\ntruemongos> sh.getBalancerState()\nfalseI have stopped the balancer but it seems to be processing a very long-standing migration. How can I see the status of this migration? How can i force-stop all migrations safely?Also what is the impact of trying to index a populated, sharded, collection while the balancer is enabled/running? Is this safe? I ask because I attempted to build indexes on a few sharded very large sharded collections in the cluster but noticed one of the shard nodes did not completely build all the indexes when I ssh’d into it and checked. This node in particular kept repeatedly trying to build a fulltext index over the span of a few days, with this message in it’s mongod.log“Restarting index build”My sharded cluster is running 4.4 community edition on AWS graviton r6g (ARM) servers", "username": "Firass_Almiski" }, { "code": "", "text": "Follow up – any advice?", "username": "Firass_Almiski" } ]
Debugging balancer? How to safely stop all in-prog migrations?
2020-08-16T20:33:54.291Z
Debugging balancer? How to safely stop all in-prog migrations?
1,679
null
[]
[ { "code": "UPDATE teams SET active = 0 WHERE id = 3 AND leader_id IN (SELECT id FROM users WHERE organization = 'Accounting Department'){\n id: 1,\n name: \"white horse\",\n leader_id: \"L1\",\n active: 1\n},\n{\n id: 2,\n name: \"green hornets\",\n leader_id: \"L2\",\n active: 1\n},\n{\n id: 3,\n name: \"pink flaminggo\",\n leader_id: \"L3\",\n active: 1\n}\n{\n id: \"L1\",\n name: \"John Doe\",\n organization: \"Software Development\",\n active: 1\n},\n{\n id: \"L2\",\n name: \"Peter Piper\",\n organization: \"Software Development\",\n active: 1\n},\n{\n id: \"L3\",\n name: \"Mary Lamb\",\n organization: \"Accounting Department\",\n active: 1\n}\n", "text": "I started working with mongodb a month ago and still having problems working with complex queries but anyway, can anyone help me rewrite this SQL update query to mongoDB query? UPDATE teams SET active = 0 WHERE id = 3 AND leader_id IN (SELECT id FROM users WHERE organization = 'Accounting Department')Below is my sample data.Teams Collection:Users Collection:", "username": "Jayvee_Mendoza" }, { "code": "// First, collect all the ids of users \n// from 'Accounting Department' \ndb.users.aggregate([\n {\n $match: {\n organization: 'Accounting Department',\n },\n },\n {\n $group: {\n _id: null,\n usersIds: {\n $push: '$_id',\n },\n },\n },\n]);\n// Then, query and update your document \n// with known filter params\ndb.teams.updateOne({\n id: 3,\n leader_id: {\n $in: usersIds, // use value from previous operation\n },\n}, {\n active: 0,\n});\n", "text": "Hello, @Jayvee_Mendoza!Currently, MongoDB does not support sub-queries in update operations.\nSo, to perform your update, you need to know the result of sub-query upfront.Example solution:", "username": "slava" }, { "code": "", "text": "Hi @slava,Thank you for your reply. I was stuck with my task trying to look for answers but now its clear to me that its not yet possible. I appreciate you giving me ideas with your example solution.", "username": "Jayvee_Mendoza" } ]
Mongodb update with subquery
2020-08-26T05:04:02.313Z
Mongodb update with subquery
3,553
null
[]
[ { "code": "", "text": "Hi there,\nI’m trying to configure a scheduled job to pause and resume an Atlas cluster using mongocli. The task is pretty straightforward, however i can’t find an official mongocli docker image.\nDoes anybody know if i can rely in any existing image or should i create one myself?\nMany thanks.", "username": "Guido_Giosa" }, { "code": "", "text": "Hi Guido,There is no docker image just for a mongocli.You will need to create your own. You can also consider using Atlas triggers to pause and resume clusters:Learn how to automate your MongoDB Atlas cluster with scheduled triggers.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Official mongocli docker image
2020-08-24T23:38:59.263Z
Official mongocli docker image
2,522
null
[]
[ { "code": "", "text": "Hello,I am building a Realm app and I am wondering if there is a way to grant read-only permissions to anonymously authenticated users, but read & insert permissions to email/password users without any additional data on the document in question?", "username": "Harry_Merzin" }, { "code": "", "text": "Yes - you can add this logic to your ‘Apply When’ when configuring Roles. One way to do it is:Add a Anon Role and add the following ‘Apply When’ expression (since anon users don’t have an email associated with them).{\n“%%user.data.email”: { “%exists”: false }\n}Grant Read only permissions to this RoleAdd an Email Password Role and add the following apply when expression{\n“%%user.data.email”: { “%exists”: true }\n}Grant Read and Write permissions to this RoleRoles/Apply When Reference\nUser Object Reference", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Permissions for anonymous users
2020-08-24T21:34:13.858Z
Permissions for anonymous users
3,977
null
[]
[ { "code": "var config = Realm.Configuration(schemaVersion: 83)\nguard let fileUrl = config.fileURL else {\n throw AppError.invalidState(\"fileURL of realm config is nil\")\n}\nconfig.fileURL = fileUrl.deletingLastPathComponent().appendingPathComponent(\"content.realm\")\nconfig.readOnly = false\nlet realm = try Realm(configuration: config)\nclass Grade: Object {\n @objc dynamic var id=\"\"\n @objc dynamic var sno=0\n @objc dynamic var name=\"\"\n @objc dynamic var version=0\n @objc dynamic var timestamp = Date()\n let linkedPrimarySkills = List<String>()\n override static func primaryKey() -> String? {\n return \"id\"\n }\n \n override static func indexedProperties() -> [String] {\n return [\"sno\", \"name\"]\n }\n}\nlet grade3 = realm.object(ofType: Grade.self, forPrimaryKey: \"003\")\n", "text": "I’m integrating the MongoRealm into existing iOS App. In this App - I have a Local Writable Realm file with some objects that I don’t wish to participate in Realm Sync as this contains a lot of catalog data.So I have a used a specific file name based Realm Configuration to initialise the Realm object.With the above - I’m able to see the file created in the documents folder.\nI’m also able to create a new object in that Realm using the following definitionAnd when I query the above realm using the realm.objects(Grade.self) - It returns the object I wrote as expected.But if I use the find by primary key method to pick a specific object -it returns NIL.Since the above class is NOT part of a Synced Realm - I don’t have a partition key. Also unlike the collections in Synced Realm - I’m using a simple primary key called id. All this code is from Classic Realm usage as a local data store.So I’m puzzled how this basic functionality is broken ? Is it not possible to have a local (Non-sync) realm co-exist with a sync realm.Thanks\nRam", "username": "Ram_Sundaram" }, { "code": "let grade3 = realm.object(ofType: Grade.self, forPrimaryKey: \"003\")let filtered = grades.filter(NSPredicate(format: \"id == %@\", \"003\"))", "text": "While trying to further collect details on the above - I also did predicate queriesWhile let grade3 = realm.object(ofType: Grade.self, forPrimaryKey: \"003\") is returning NIL - I’m able to do use the filtered query let filtered = grades.filter(NSPredicate(format: \"id == %@\", \"003\")) to find the record .So looks like a bug to me. It will be great if someone can help confirm?Best regards\nRam", "username": "Ram_Sundaram" }, { "code": "", "text": "Have you committed the transaction before querying on the primary key? If I use createOrUpdateModified to add the object to Realm and then get all objects of that type before committing, I get one object. However, if I query for the object by its primary key, nil is returned.", "username": "Nina_Friend" }, { "code": "", "text": "Yes I’m not writing & immediately trying to read it. Write is committed. I’ve also closed the app & double checked by opening the Realm file in realm studio. Object is definitely there. 
But when I relaunch the app & query based on Primary key - its nil", "username": "Ram_Sundaram" }, { "code": "", "text": "@Ram_Sundaram This looks a correct to me, can you open an issue here with a repro case?Realm is a mobile database: a replacement for Core Data & SQLite - GitHub - realm/realm-swift: Realm is a mobile database: a replacement for Core Data & SQLite", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_Ward Yes I had done so yesterday even before I posted here.Realm.object returns NIL object when queried by primary key in Realm 10.0 beta2 · Issue #6672 · realm/realm-swift · GitHub - sorry missed referring to that issue over here.thanks\nRam", "username": "Ram_Sundaram" }, { "code": "", "text": "Since the above class is NOT part of a Synced Realm - I don’t have a partition key.Just a quick follow up on this topic.Like the OP, I also read this from the documentationRealm Sync ignores any collections that lack a Realm Schema as well as any documents that lack a valid value for the partition key.to mean that if the object does not have a partition key, it is ignored.That is NOT what it means.That statement is referring to documents on the server side only. e.g. If you add a realm object to your app, with the intention for example, of using it only locally, Realm will attempt to sync it anyway and when it doesn’t have a _partitionKey, the console will throw errors, your app will stop working and nothing will sync. You’ll then have to wipe the client to correct the issue.Per support,if you want to use Objects in your project that do not sync, you need to manually specify the objectTypes under ConfigurationAt this moment however, what that process is or how to make to make it work is a mystery. The documentation doesn’t specify that process or when to modify the objectTypes property - possibly before the first sync?When I figure it out, I will post some sample code. Please feel free to contribute if you know.", "username": "Jay" } ]
Using Local NON sync Realm file
2020-07-26T13:08:55.924Z
Using Local NON sync Realm file
2,365
null
[ "connecting" ]
[ { "code": "", "text": "Hi,I was trying to connect my IBM Cloud Project to my MongoDB cluster but wasn’t able to do so. But when I tried to connect to host “cluster0-shard-00-00-jxeqq.mongodb.net” , which is created for the mongodb university(training purpose), I was able to connect. Basically any IBM Cloud isn’t able to connect to host that I created but my IBM Cloud instance was able to connect to the one that is setup by you guys. I was wondering the issue could be the different ways the host are set up(configured) but could not pinpoint what exactly it is that is making the difference?", "username": "Harpreet_Kaur" }, { "code": "localhostmongo", "text": "Welcome to the community @Harpreet_Kaur!If you have created a new self-hosted MongoDB installation, it will only be listening to localhost by default. If your IBM Cloud application is running on a different instance from the one where you have installed MongoDB, you will need to set up appropriate firewall and security access.Please review the MongoDB Security Checklist for your version of MongoDB server. In particular, you will want to configure access control, network encryption (TLS/SSL), and firewall access to limit exposure to your MongoDB instance.If you are still having difficulty, please provide more information including:I recommend using the mongo shell to test connectivity to your MongoDB deployment before trying to set up a driver connection.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks Stennie for your response. Lets ignore the IBM for a minute. I have not been able to connect to one of the hosts within the replica set using the mongo shell. Although I am able to connect to the entire replica set using the mongo+srv record using the shell. Also I have been able to connect to individual host using the same credentials through compass. I feel if I am able to resolve this then I will be able to fix the IBM thing.Sincerely,\nHarpreet", "username": "Harpreet_Kaur" }, { "code": "", "text": "If you are able to connect by Compass to single node sameway you can connect by shell\nWhat command you used for Compass?\nI can connect to individual nodes of Class cluster(jxeqq) using shellMongoDB Enterprise Cluster0-shard-0:PRIMARY> exit\nMongoDB Enterprise Cluster0-shard-0:SECONDARY> exit", "username": "Ramachandra_Tummala" } ]
Inability to connect IBM Cloud to Mongodb host
2020-08-21T00:07:24.927Z
Inability to connect IBM Cloud to Mongodb host
1,919
null
[ "indexes" ]
[ { "code": "{\n \"mobile\": {\n \"sparse\": true,\n \"unique\": true\n }\n}\n{\n \"mobile\": [\"sparse\", \"unique\"]\n}\n{\n \"mobile\": \"unique\"\n}\n", "text": "Can anyone indicate the right way to create an index for a field that is both sparse and unique, in Mongo Cloud, using the web interface?Basically I want it so when a value is provided for the field it is unique, but it may be null and null does not count towards duplicates.What I have tried:Also this should have worked as basic.No errors, but no index on mobile field either. I had one previously, but removed it since it was only set to unique.The only possibility that could maybe be causing a problem (which it should not), is a text index “name_text_mobile_text”, given I just had a mobile only index before removing it for rebuilding?I did try reading the documentation reference from the dialogue, but it doesn’t give anything useful contextually for the fields in that dialogue.", "username": "Andre-John_Mas" }, { "code": "{\nmobile : 1\n}\n{\n \"sparse\": true,\n \"unique\": true\n }\n", "text": "Hi @Andre-John_Mas,You need to use the Data Explorer and specify the field with its index order (ASC 1, DESC -1)And in the option section you need to specify both sparse and unique as true:More info here: Manage Indexes in Data Explorer — MongoDB Cloud ManagerBest regards,\nPavel", "username": "Pavel_Duchovny" } ]
Create new sparse, unique index in Mongo Cloud
2020-08-25T20:53:41.460Z
Create new sparse, unique index in Mongo Cloud
2,058
null
[ "security" ]
[ { "code": "security:\nclusterAuthMode: x509\nnet:\ntls:\nmode: requireTLS\ncertificateKeyFile: <path to its TLS / SSL certificate and key file>\nCAFile:\nclusterFile:\nbindIp: localhost, <hostname (s) | ip address (es)>\n", "text": "in my studio environment I have a replicaset with mongodb 4.2.8 community edition. When I configure RS members to work with internal authentication, I must necessarily define 2 certificates: CAFile and certificateKeyFile. It is not possible to insert only clusterFile, in fact if it is not present\ncertificateKeyFile I get this error when mongod starts:“Failed global initialization: BadValue: need tlsCertificateKeyFile or certificateSelector when TLS is enabled”.clusterFile is optional and if it is not present, mongodb uses certicateKeyFile.On the certificateKeyFile certificate, mongod verifies the subject; on the clusterFile certificate instead, there is no verification on the subject and I can put any certificate signed by the CA, even of other members, it can be the same for all members or not, and everything works correctly\n(I created a certificate with a CN equal to the name of the replica set, and I distributed it on the members and it works).I therefore miss the meaning of the clusterFile certificate.In the Mongodb documentation, its meaning is described like this:But if no check is carried out on the subject of the certificate, what verification is carried out on membership authentication? What certification obligation does clusterFile take me to?thanks!!", "username": "Walter_Fortunato" }, { "code": "mongod", "text": "Hi @Walter_Fortunato,Thr clusterFile is a property to use a different certificate explicitly for internal communication of the members. The certificateKeyFile is expected to be presented as a server certificate to the application client.If clusterFile is not provided the mongod also uses certificateKeyFile certificate as a client to other members communication. Therefore, in this case, its usage needs a client and server roles.Let me know if you have any questionsPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,Thank you for your answer.My doubt lies in the fact that it is not possible to start a mongod instance using only the clusterFile, as it is also necessary to specify certificateKeyFile.Also no check is done on the ClusterFile certificate: for example I could use any valid certificate that contains anything in CN. Why produce two different certificates if what really matters is certificateKeyFile?Furthermore, I believe it is correct to expect the ClusterFile certificate to uniquely identify a node as a member of that cluster, perhaps using the cluster name as the CN, as well as certificateKeyFile uniquely identifies the client node.Thank you\nRegards", "username": "Walter_Fortunato" }, { "code": "", "text": "Good morning Walter_FortunatoI am trying to configure my replica set cluster to change the internal authentication of the cluster members. Currently I have keyfile authentication established, which is the simplest, but I need to know how to configure that internal authentication using self-signed certificates. I have followed the documentation and got nothing.I have created a thread in the mongodb community in case you want to look at it where I explain what happens to me.Best regards.", "username": "Eduardo_HM" } ]
Enabling Internal X.509 Authentication: clusterFile
2020-08-20T11:45:02.015Z
Enabling Internal X.509 Authentication: clusterFile
2,856
null
[ "python" ]
[ { "code": "firstCollection = {{\"company\":\"apple\"},{\"company\":\"tesla\"},{\"company\":\"google\"}}\nsecondCollection = {{\"number\":\"1\"},{\"number\":\"2\"},{\"number\":\"3\"}}\nimport flask\nimport pymongo\nfrom flask import abort, Flask, render_template\nfrom flask_pymongo import PyMongo\n\n\napp = Flask(__name__)\napp.config[\"MONGO_URI\"] = \"mongodb://localhost:27017/mrstock\" \nmongo = PyMongo(app)\n\[email protected](\"/\")\ndef test():\n\n coll1 = mongo.db.firstCollection\n coll2 = mongo.db.secondCollection\n\n coll1Datas = coll1.find({})\n coll2Datas = coll2.find({})\n\n return render_template(\"test.html\",coll1Datas=coll1Datas,coll2Datas=coll2Datas)\n\n\nif __name__ == \"__main__\" : \n app.run(debug=True) \n \n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>Document</title>\n</head>\n<body>\n\n\n {% for coll1 in coll1Datas %}\n {{coll1.company}}\n {% for coll2 in coll2Datas %}\n {{coll2.number}}\n {% endfor %}\n {% endfor %}\n</body>\n</html>\napple 123 tesla google\napple 123 tesla 123 google 123\n", "text": "I’m having a trouble with using for loop in flask jinja2. This is what I did.mongodb collectionrun.pytest.htmlthe result is looking like thisthis is for loop in for loop so, I was expecting likewhy the second loop inside the first loop only working one time? I did several test, and I think after for loop, jinja2 do not have data information anymore. How can I use data ‘for loop’ again and again which send from flask? Thank you for your support!", "username": "JJ_Lee" }, { "code": "findlistlistcoll1Datas = list(coll1.find({}))\ncoll2Datas = list(coll2.find({}))\n", "text": "Hi @JJ_Lee! Great question - this is a problem encountered by many Python developers!A PyMongo find call returns an iterable - which is something that can be looped over with a for loop. This has the advantage that the data downloads in chunks as you as you loop through the results - which means you don’t need to store the whole result in memory! This does mean that you can only loop through it once, as you’ve discovered.The solution is to add all the results to a list - which may use more memory, because all the results are now loaded into memory at one time - but it means you can loop through the list multiple times. Fortunately, this is quite straightforward in Python - you wrap the iterable with list, like this:I hope this helps! If you want to learn more about loops and iterables in Python, you should check out this talk by my friend Trey Hunner.", "username": "Mark_Smith" }, { "code": "", "text": "Oh My~! really really thank you!! I couldn’t find solution and had almost two days spent for this problem. Really thank you!! Have a great day Sincerely,\nfrom Korea.", "username": "JJ_Lee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I reload the for loop in flask jinja2?
2020-08-26T01:06:12.656Z
How can I reload the for loop in flask jinja2?
3,952
null
[]
[ { "code": "", "text": "Hi folks,So, since there is no Terraform module for MongoDB Cloud Manager.I have created this PoC of MongoDB Cloud Manager API using node.jsIt creates a project automatically and replica set with some users already in it and enables monitoring too.One question for MongoDB, why Digest Authentication, jeez is hard to find a library that supports it!Please note: Is just PoC, not expect a super awesome code. You can fork it and make it super awesome.", "username": "Mario_Pereira1" }, { "code": "", "text": "Hi @Mario_Pereira1,Thanks for sharing this I will make sure someone from our automation teams will check it.Have you tried our new mongo-cli for cluster management on Cloud Managerhttps://docs.mongodb.com/mongocli/master/Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,I didn’t check cli because, all my infrastructure is already in code (terraform and ansible inside Gitlab ci/cd, except mongodb cloud manager).CLI is nice when you have 1 project, is not very nice when you have 40 mongodb cloud manager projects .", "username": "Mario_Pereira1" } ]
MongoDB Cloud Manager Automation
2020-08-25T23:51:05.278Z
MongoDB Cloud Manager Automation
1,508
null
[]
[ { "code": "", "text": "I need details about pricing on managed mongo db instance on azure. I have gone through the Azure pricing sheet and have some questions. We need the estimated resources that will be required to store at least a million records and sustain certain number of requests per minute. Is the pricing based per database/collection/throughput ?. We also want to know what services would be included in the managed instance.We have tried reaching out the support team to help us out in choosing the correct tier of Dedicated+Managed clusters but no response in 4 hours. I have seen the pricing sheet and want to know which tier would be suitable for us.", "username": "Vishal_Mishra" }, { "code": "", "text": "Hi @Vishal_Mishra,The Atlas tier is based on the managed instance type similar to VM tiering. The instance types differ per CPU, Storage, RAM and amount of connection limits/network capabilities.\nCluster price can also differ by regions and by amount of nodes you run.The following docs should help you startedhttps://docs.atlas.mongodb.com/sizing-tier-selection/I suggest you keep this effort with support to better fit it for your needsPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Also, would it be possible to talk to a representative regarding the best options to purchase the required tier; our need is somewhat urgent", "username": "Vishal_Mishra" }, { "code": "", "text": "Hi @Pavel_Duchovny ,\nThanks for the response. There are a few things that are still unclear. Say I have a requirement to store 50 million records with 100K incoming requests per minute, what tier should I choose for Azure ? Should the M60 tier suffice in this case or should I opt for M80 tier ? I understand the cluster and storage auto scaling part, but I want to start with a price estimation as I saw on pricing sheet.Thanks\nVishal", "username": "Vishal_Mishra" }, { "code": "", "text": "Hi @Vishal_Mishra,The needed size is a subject of load testing for your secific use case. The main points is storage should be sufficient to store the 50M documents and Memory big enough to keep your “hot” working set. However, for one document type it can be 50GB but for another it can be 500GB or 1TB so its hard to guess.One of Atlas advantages is you can start your storage and compute tier at one place and transparently, if application considerations are respected, scale your cluster to fit the growth requirements as you go.We have a great serious of blogs for schema design and performance best practices which I recommend:Best practices for delivering performance at scale with MongoDB. Learn about the best way to evaluate performance with benchmarks.A summary of all the patterns we've looked at in this seriesSince sizing your production is a complex task we offer consultancy packages for this.Best regards,\nPavel", "username": "Pavel_Duchovny" } ]
How to get estimation for my org's requirements
2020-08-26T01:20:54.682Z
How to get estimation for my org&rsquo;s requirements
1,928
null
[ "server" ]
[ { "code": "", "text": "The files in the dbPath directory must correspond to the configured storage engine. Mongod will not start if dbPath contains data files created by a storage engine other than the one specified by --storageEngine.What’s mean this sentences…", "username": "Kim_Hakseon" }, { "code": "mongoddbpath--dbpath", "text": "Hello @Kim_Hakseon,By default, MongoDB’s storage engine is WiredTiger. But, MongoDB server can be configured with other storage engines. The way the files are created and stored is different for different storage engines. In the MongoDB server versions prior to v4.2, there was an option to use MMAPv1 storage engine (see Storage Engines v4.0).The mongod can be started by specifying the –storageEngine option. In case you have started the server using a specific storage engine, and then try to restart with another storage engine, the server will not start - when the dbpath points to the same directory as before (the --dbpath option is used to specify the directory where the database files are stored - your collections, indexes, definitions, data, etc.).NOTE: Starting in version 4.2, MongoDB removed the deprecated MMAPv1 storage engine.", "username": "Prasad_Saya" }, { "code": "", "text": "Ah-ha!So, the meaning of that sentence is, “You have to use the storage engine that you first used.”, is that ?", "username": "Kim_Hakseon" }, { "code": "", "text": "Yes. What is the version of MongoDB you are working with?", "username": "Prasad_Saya" }, { "code": "", "text": "I am 4.2 version.And 4.4 version studying.", "username": "Kim_Hakseon" }, { "code": "", "text": "Generally, using a configuration file to start the mongod is a good practice - so that you don’t have to type the options on the command-line each time when starting the server. While typing the options there can be a typo or can miss an option - there can be many options for a fully configured production system.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you, Thank you~It really helped me a lot of help.", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
About MongoDB dbPath in Manual
2020-08-26T02:02:50.788Z
About MongoDB dbPath in Manual
1,232
null
[]
[ { "code": "_idNumberLongdb.Collection.find()db.Collection.find()", "text": "Hello there!So I’ve been changing my _id from “string-ints” to actual NumberLong values…This way, I’ve been trying to reIndex the collection which contains over 900k documents so they are ordered by descending number, 1 being the first and Int64 the last…However, when I check out db.Collection.find() with mongo, it prints them out all randomly, for example: 3, 5, 1, 4, 2Is it just the db.Collection.find() that makes them look like it or are they actually all random?\nIf they are all random, is there a way to fix that?Thanks", "username": "Sylmat_gaming" }, { "code": "db.Collection.find()sort()sort()db.Collection.find().sort({_id:1})\n_id_id", "text": "Welcome to the community @Sylmat_gaming!Is it just the db.Collection.find() that makes them look like it or are they actually all random?If you haven’t explicitly specified the order of results using sort() criteria, the default order is undefined outside of the special case of capped collections. This is called natural order.For more details see: How does MongoDB sort records when no sort order is specified?.If they are all random, is there a way to fix that?Include sort() criteria:For more tips on sorting efficiently, see: Use Indexes to Sort Query Results. This example of sorting on _id without any filter criteria will be able to use the _id index, which is a required index automatically created for MongoDB collections.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "So does it mean that in the database they are all sorted by descending order?", "username": "Sylmat_gaming" }, { "code": "sort()", "text": "Hi @Sylmat_gaming,The natural order of documents at the storage layer is generally not defined except for the special case of capped collections (which has associated usage restrictions, like disallowing direct document deletion).If you want results in an expected order, you need to provide explicit sort() criteria.Otherwise results will be returned in the most efficient path (i.e. “as they are found”) and the implementation is not required to provide any strict ordering. You can see this empirically in your own results: the natural order is not deterministic.does it mean that in the database they are all sorted by descending orderIt means that there is no expected result ordering unless you explicitly request one. The value of the primary key does not determine the physical ordering of documents in storage.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Fair enough, thanks ", "username": "Sylmat_gaming" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Indexes not ordered correctly
2020-08-26T01:06:16.652Z
Indexes not ordered correctly
3,253
null
[ "replication" ]
[ { "code": "", "text": "I have 3 node P-S-S.In Primary, I did “rs.stepDown(20)”Then, replica set become S-P-S for 20 sec.Then, replica set will be P-S-S.However, it was not.What is “rs.stepDown”'s first option meaningIf what I know is right, why not come back?please help me, and thank you", "username": "Kim_Hakseon" }, { "code": "stepDownstepDownstepDown(20)", "text": "Hi @Kim_Hakseon!In a replica set, the servers will communicate with each other and decide on a machine to become primary. Any of the machines in the cluster can become primary.Whichever machine is primary will stay the primary unless something happens which means it can’t be primary any more, such as a reboot, or a call to stepDown. At that point another machine in the cluster (if possible) will be elected the new primary.At this point, the same rules apply - the new machine will stay primary until something happens to stop it.The first option to stepDown is the amount of seconds during which the primary you are stepping down is not allowed to be primary, but that doesn’t mean it will automatically become primary again after 20 seconds.When you’ve called stepDown(20) you’ve forbidden the primary you’re connected to from becoming primary again for 20 seconds, but unless a new election happens after that time, the machine will stay as a secondary indefinitely.Here’s the documentation for this call, in case you need more details.", "username": "Mark_Smith" }, { "code": "", "text": "Oh, I was mistaken.Thank you. ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
rs.stepDown() Question
2020-08-25T06:23:55.463Z
rs.stepDown() Question
1,836
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "# .NET Driver Version 2.11.1 Release Notes\n\nThis is a patch release that fixes a couple of bugs reported since 2.11.0 was released.\n\nAn online version of these release notes is available at:\n\nhttps://github.com/mongodb/mongo-csharp-driver/blob/master/Release%20Notes/Release%20Notes%20v2.11.1.md\n\nThe list of JIRA tickets resolved in this release is available at:\n\nhttps://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.1%20ORDER%20BY%20key%20ASC\n\nDocumentation on the .NET driver can be found at:\n\nhttps://mongodb.github.io/mongo-csharp-driver/\n\n## Upgrading\n\nIf you are writing a WinForms application you will want to upgrade to this version of the driver (see CSHARP-3182).\n\n", "text": "This is a patch release that fixes a couple of bugs reported since 2.11.0 was released.An online version of these release notes is available at:The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.1%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:If you are writing a WinForms application you will want to upgrade to this version of the driver (see CSHARP-3182).There are no known backwards breaking changes in this release.", "username": "Robert_Stam" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C#/.NET Driver 2.11.1 Released
2020-08-25T23:21:05.621Z
MongoDB C#/.NET Driver 2.11.1 Released
2,257
null
[ "student-developer-pack" ]
[ { "code": "", "text": "hello i am using github student account… how do i take the certification exam for free and get a certificate", "username": "Collins_Jimu" }, { "code": "", "text": "Welcome to the community @Collins_Jimu!Quoting from some earlier discussion on Free Certification for Student :Once you’ve completed one of our Learning Paths , you’ll receive 100% discount to the exam. So it’s either for the DBA or for the Developer certification exam.Please go to your dashboard at https://www.mongodb.com/students and follow the instructions underneath ‘Free Certification’.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is the exam free for github students
2020-08-25T22:37:34.162Z
Is the exam free for github students
6,979
null
[]
[ { "code": "", "text": "Hi,I’m currently researching read and write operations on various NoSQL datastores in my master theses. I would like to know specifically which kind of caches are involved during a read and write of documents with mongodb. I’ve found an older presentation from percona (slide no. 9 - https://www.percona.com/sites/default/files/presentations/Monitoring-MongoDBs-Engines-in-the-Wild.pdf ) which is describing what I’m searching for. But of course I would like to have a more reliable source for my research. My main question is if there is any technical paper, documentation or something similar that could help me?Thank you!\nSebastian", "username": "Sebastian1" }, { "code": "", "text": "That’s a very good question. For example, I like the descriptions about Cassandra’s write-path and read-path in WritePathForUsers - CASSANDRA2 - Apache Software Foundation\n, ReadPathForUsers - CASSANDRA2 - Apache Software Foundation\n. So I’d like to see more reliable descriptions about mongo’s equivalents.", "username": "Lewis_Chan" }, { "code": "", "text": "That’s funny - I’m comparing mongodb to cassandra and I really love the datastax guide (nice graphics, which makes it even better understandable): How Cassandra reads and writes dataIt would be great if some mongodb/wiredtiger dev could help me out! ", "username": "Sebastian1" }, { "code": "", "text": "Hi Sebastian,WiredTiger uses memory as a cache for all the data on the disk and the data in memory forms the current working set, overall it is similar to any key-value system will look like. It uses a least-recently-used algorithm to continuously moves data to disk that is currently not being accessed out of the memory to free up enough space to read data that are requested by the user but currently reside on the disk back into memory.Consider two options while reading data from WiredTiger.Cache management is a huge portion of WiredTiger. Can you be more specific on what you are looking for. ?Thanks,\nRavi", "username": "Ravi_Giri" }, { "code": "", "text": "The cache contains the btree index and requested data, Yes ? If it does, why bother to search the index ? Just return the data in cache.Thanks you for describing the read path. How about the write path ?", "username": "Lewis_Chan" }, { "code": "", "text": "Hi Ravi,first of all - thank you very much! This made it much more clear how a read works. To be a bit more specific about my research issue I would like to refer to my newsgroup post: https://groups.google.com/g/wiredtiger-users/c/lyS1HoGVErU/m/5XZSYEW5BgAJ Maybe you can clarify the question I’ve brought up there In addition to @Lewis_Chan question: My guess is that the data is organized (in cache / on-disk) as a b-tree? Is the “_id”-index seperate from the data in another b±tree?", "username": "Sebastian1" }, { "code": "", "text": "Is somebody maybe explain to explain why I do see way less “pages read into cache” then “pages requested from cache”. How is WiredTiger able to request pages that are not in the cache yet? 
Or does the request just means an request which doesn’t need to return a page as result (like a non fulfilled request)?", "username": "Sebastian1" }, { "code": "Pages read into the cache from disk,Pages requested by the workload from the cachepage requested from the cachepages read into the cache", "text": "Hi Sebastian,Sorry for the delay in responding.Pages read into the cache is actually Pages read into the cache from disk, and Pages requested from the cache is actually Pages requested by the workload from the cache.Suppose you/workload are trying to read a key/value pair, we will search the btree to find the particular page and that page is considered as page requested from the cache. If that page is already in the cache then no need to do anything, but if the page is not in cache then we go read it from disk and that becomes pages read into the cache.\nHence, pages read into the cache are lesser and those are the ones we are reading from the disk and putting into the cache. But pages requested from cache is basically all the read you/workload is doing.Imagine a two-layer approach:That is the reason more pages are requested from the cache (in your simulation example), most of the btree that are being used for IO fits into the cache, But once you have the btree in the cache then all the reads can be done without going to the disk.The requests always return the page but if that page is already in the cache then it need not be read from the disk to cache.I hope it answers your question.Thanks,\nRavi", "username": "Ravi_Giri" }, { "code": "Pages read into the cache from diskPages requested by the workload from the cache", "text": "Hi Ravi,you’re my hero - thank’s for that perfect explanation. This makes totally sense! Just one more question: Does the Pages read into the cache from disk and Pages requested by the workload from the cache both talking about internal pages (size of 32kb) or leaf pages (size of 4kb)?Thank you ", "username": "Sebastian1" }, { "code": "", "text": "Thank you, Sebastian.Internal pages carry only keys. The leaf pages store both keys and values. WiredTiger traverses internal pages to find the leaf page.Pages requested by the workload from the cache is mostly key/value pair. Pages read into the cache from disk are both internal and leaf pages.Attached the screenshot for reference.\nThanks,\nRavi", "username": "Ravi_Giri" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
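For reference, the two counters discussed in this thread can be read directly from serverStatus(); a minimal mongo shell sketch follows. The metric names are taken from serverStatus output and may differ slightly between server versions:

var cache = db.serverStatus().wiredTiger.cache;
print("pages requested from the cache: " + cache["pages requested from the cache"]);
print("pages read into cache:          " + cache["pages read into cache"]);
print("bytes currently in the cache:   " + cache["bytes currently in the cache"]);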
MongoDB Read/Write I/O Path
2020-08-10T23:35:41.539Z
MongoDB Read/Write I/O Path
4,743
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi,I am creating a model for a user entity. I work with a small team of developers and we are exchanging a lot of discussions while using the flat vs nested approach.The premise is that the User Schema has a bunch of fields such as name, email ID, phone number, address, etc, which I would prefer mostly to be a flat model ( Except for maybe fields like address ).We have a few more fields, Like 10 to 15 fields, and my senior is advising that we nest the fields according to some category: Such as profile_information OR contact_information or other groupings.I would like you guys to help me in trying to understand what the drawbacks are to either of the approach. I personally don’t prefer a nested model here because I don’t see a need to group fields. My senior mentions that the reason for grouping is so that it’s easier to access the data and improves readiibility which I don’t agree with.Also, is there any impact on the performance to nesting fields? I don’t foresee the document to grow too large.Thanks for the help!", "username": "Subrit_X" }, { "code": "profile :\n{\n name : \"steevej\" ,\n first : \"Steeve\" ,\n last : \"Juneau\"\n}\ncontact :\n{\n name : \"lukes\" ,\n first : \"Luke\" ,\n last : \"Skywalker\"\n}\n\nvs\n\nprofile_name : \"steevej\" ,\nproflle_first : \"Steeve\" ,\nprofile_last : \"Juneau\" ,\ncontact_name : \"lukes\" ,\ncontact_first : \"Luke\" ,\ncontact_last : \"Skywalker\"\n", "text": "I like nested objects. Mainly because you may $project an object in a single statement rather than projecting individual fields one by one. The benefit of a single $project is that when you add fields in the object you do not have to modify your pipelines to get the new fields. You schema is more flexible. For example, it is easier to go from 1 address object to multiple address objects. It is also easier to write and test code that only modify an object.As for readability, I think the following speak by itself.But I am a senior too. So I might be biased by my past experience just like your seniors.", "username": "steevej" }, { "code": "", "text": "Thanks @steevej. I have to agree that the data looks better structured but I didn’t think about code modifying only a part of the data!", "username": "Subrit_X" } ]
Flat vs Nested structure
2020-08-17T13:06:04.188Z
Flat vs Nested structure
7,393
https://www.mongodb.com/…b495596a2b4.jpeg
[]
[ { "code": "mongo --nodb'mongo' is not recognized as an internal or external command, operable program or batch file.", "text": "Hi all !\nI followed all the steps of Chapter 0 and when i run the command\nmongo --nodb\ni get this error\n'mongo' is not recognized as an internal or external command, operable program or batch file.Below is the screenshot of my environment variables\nenvVariables621×583 53.3 KBand the folder C:\\Program Files\\MongoDB\\Server\\4.4\\bin\nfolderpath841×645 75 KB\nStep 14 of the chapter 0 didn’t define the name of the variable so i named it mongo .Please tell me where i went wrong , thanks !", "username": "SALMAN_KHAN" }, { "code": "", "text": "This is related to Mongodb university course\nYou can post it inDiscussions for database administrators who are installing, configuring, securing, monitoring, upgrading, and scaling MongoDB deployments.You have to update your path variable\nAdd your_mongodb/bin to existing path", "username": "Ramachandra_Tummala" }, { "code": "", "text": "thanks for the reply !i changed the variable name from mongo to _mongodb/bin . it didn’t work .i’ll post it in university forum now .", "username": "SALMAN_KHAN" }, { "code": "", "text": "", "username": "Stennie_X" } ]
M001 : Chapter 0 , cmd error "'mongo' is not recognized as an internal or external command"
2020-08-24T22:13:23.872Z
M001 : Chapter 0 , cmd error &ldquo;&lsquo;mongo&rsquo; is not recognized as an internal or external command&rdquo;
5,023
null
[]
[ { "code": "", "text": "Version 3.0\nLinux Redhat 7Intermittently facing issue from application and observed that application thread is waiting on socket read", "username": "santhosh_K" }, { "code": "", "text": "MongoDB 3.0 is well out of support(EoL February 2018). You should look at upgrading to 3.6 at a minimum.You’re going to want to correlate this socket read with the query issued and look for optimisations in the query or add/update indexes to support the query.", "username": "chris" }, { "code": "", "text": "There is no long running query, written on to logs. Its very abrupt and once in evey 100k. But even that is critical and can create issues in our environmentWanted to know exactly, what it means when it says taking time on read from socket", "username": "santhosh_K" }, { "code": "", "text": "The best advice I can offer before even considering anything else is upgrade MongoDB to a supported version. This may be an already fixed issue.Don’t neglect the client drivers either.", "username": "chris" }, { "code": "", "text": "I take your point, upgrade has to happen. But for my understanding, what it means by waiting on socket? Application written on Java connects to mongoDB, via 3.0 java mongo driver and while doing a read, it waits for 5 seconds and then comes out. It is driver problem? Connection framework problem or Database problem? Wanted to understand theory… If someone can throw light", "username": "santhosh_K" } ]
Read from socket is taking time
2020-08-15T13:31:57.256Z
Read from socket is taking time
1,510
null
[]
[ { "code": "", "text": "Hello guys.We use MongoDB Atlas Cluster (3 nodes).Cluster 0, 2 occurs log like “I REPL [replication-227] Restarting oplog query due to error: NetworkInterfaceExceededTimeLimit: error in fetcher batch callback :: caused by :: timed out. Last fetched optime (with hash): { ts: Timestamp(1598332638, 1), t: 276 }[2017998953744055918]. Restarts remaining: 1”.Cluster 1 transited Secondary from Primary and Cluster 0 transited Primary from Secondary.After 30 seconds, Cluster 0 occurs log “stepping down from primary, because a new term has begun: 278” and Cluster 0 transited Secondary from Primary.What is it?\nWe had service down time because it.Please let me know.Thank you", "username": "Youseok_Nam" }, { "code": "", "text": "Hi Youseok,It sounds like your MongoDB Atlas cluster experienced a failure or automatic maintenance-based election: It’s important to consider using retryable reads and writes within your application to minimize the disruption of MongoDB Atlas’s auto-failover.Please use the lower-right chat bubble in the MongoDB Atlas UI to ask for help: we have a support team there ready to help.-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replica Down with error
2020-08-25T12:20:59.265Z
Replica Down with error
3,046
null
[ "aggregation" ]
[ { "code": "db.alert_notification.aggregate([{\n $match: {\n created_date: {\n $gte: ISODate(\"2020-02-24T00:00:00.000Z\"),\n $lt: ISODate(\"2020-08-25T00:00:00.000Z\")\n }\n }\n },{\n $project: {\n resolved_date: 1,\n device_date_time_stamp: 1,\n dateDifference: {$divide : [ { $subtract: [\"$resolved_date\" ,\"$device_date_time_stamp\"] }, 1000 * 60 * 60] }\n }\n }\n ,{\n $group: {\n _id: null,\n \n dateDifference1: { \n $sum: \"$dateDifference\"\n }\n }\n }\n \n]);\n{\n\t\"message\" : \"cant $subtract adate from a string\",\n\t\"ok\" : 0,\n\t\"code\" : 16556,\n\t\"codeName\" : \"Location16556\",\n\t\"name\" : \"MongoError\"\n}\n", "text": "//ERROR", "username": "97vaqasazeem_N_A" }, { "code": "$device_date_time_stampdb.alert_notification.aggregate([\n {\n $project: {\n typeA: {\n $type: '$resolved_date',\n },\n typeB: {\n $type: '$device_date_time_stamp',\n },\n },\n },\n]);\n/* ... */\n{ \n $subtract: [\n {\n toDate: '$resolved_date',\n }, \n {\n toDate: '$device_date_time_stamp',\n },\n ] \n}\n/* ... */\n", "text": "Hello, @97vaqasazeem_N_A!It looks like your $device_date_time_stamp property contains data of type string, but you’re trying to use it as a date type.You can debug your data-types with an aggregation like this:You can try to convert your data types on the fly, using $toDate pipeline operator:Though, it is always better to have a consistent data types across documents - this way you will be able to write your aggregations more easily, and they will perform better ", "username": "slava" } ]
Error while grouping the data to calculate the average
2020-08-25T12:20:32.589Z
Error while grouping the data to calculate the average
1,801
null
[ "react-js" ]
[ { "code": "", "text": "I have a VERY csmple application. For reasons relates to reactJS rencer cycles I need to store the page name when the user clicks a tile in the Navbar. I have a local MongoDB installation on my Mac ( Catalina ) and I need to know how to specify the database in a reactJS module (require? import? etc) and then how to (in reactJS):a) count the records in the db (there should only be 0 or 1)\nb) add a text field to the first record is the count is 0\nv) replace the first record is the count is 1I do not use Express - there is no html except in the divs exported from each component.There is a great deal of information on complex usages, but I can find nothing on these simple operations.", "username": "Stephen_Jones" }, { "code": "", "text": "Hi @Stephen_Jones welcome to the community.I’m not sure if there’s a React to MongoDB interface, seeing as MongoDB is a server-side technology and React is a client-side technology. Typically in a MERN stack (MongoDB Express React Node), there is a REST API layer using Node that lets the client communicate with the database using something like AJAX.Since MongoDB by itself does not provide a native REST interface, people typically uses Express+Node, but this can be replaced with many alternatives, such as Django (Python), or Spring (Java).See The Modern Application Stack – Part 1: Introducing The MEAN Stack for some examples.Alternatively depending on your goals & use case, you may be able to use MongoDB Realm. See Introduction to MongoDB Realm for Web Developers for more details.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Simple CRUD operations in ReactJS for local database
2020-08-24T04:01:36.993Z
Simple CRUD operations in ReactJS for local database
4,244
null
[]
[ { "code": "", "text": "Hello,Dose mongo has some way can like this query?", "username": "Zheng_Ficoto" }, { "code": "", "text": "HI @Zheng_Ficoto,Please provide more information on your topic:Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "WHERE (id1, id2) IN ((1, 2), (3, 4))", "username": "Zheng_Ficoto" }, { "code": "id1id2mongodb.mydata.find(\n { $or: [\n { id1:{ $in: [1,2] } },\n { id2:{ $in: [3,4] } }\n ]}\n)\n", "text": "Hi,Please see the SQL to MongoDB Mapping Chart for a general guide to equivalent statements. As mentioned earlier, more information would be helpful to understand what you are trying to achieve.I believe you are looking for results matching id1 (value of 1 or 2) or id2 (value of 3 or 4), so the equivalent query in the mongo shell would be:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi, I do not mean that,I want to query such as thisdb.mydata.find(\n    { $or: [\n        { id1:1,id2:2},\n        { id1:3,id2:4}\n    ]}\n)but I need some other way to query,because $or sometime make query cannt using the right index,then query is too slowly.So,Is having other way to optimizeo some query like this?Except using hint(),thanks.", "username": "Zheng_Ficoto" }, { "code": "explain(true)", "text": "I need some other way to query,because $or sometime make query cannt using the right index,then query is too slowly.So,Is having other wayHi @Zheng_Ficoto,It sounds like your question may be about query performance rather than constructing a query.Please provide more information to help understand your issue:Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I hope can use tuple to query,is mongo support using tuple to query?", "username": "Zheng_Ficoto" }, { "code": "", "text": "I hope this query only use “order_num_1_lesson_id_1”,but it need using hint(),and I hope it can use tuple query like thisWHERE (order_num, lesson_id) IN (\n(“5f3e17dc0b586f0001a2e519”, ObjectId(‘5e9014599976c5000161f330’)),\n(“5f3e17dc0b586f0001a2e519”, ObjectId(“5e9014351f6cfc00018e3103”)),\n(“5f3e17dc0b586f0001a2e519”, ObjectId(“5e9013f31f6cfc00018e30ee”)))", "username": "Zheng_Ficoto" } ]
About find $in query
2020-08-21T11:19:25.527Z
About find $in query
2,338
null
[]
[ { "code": "", "text": "with journaling enabled, if multiple write operations on the same document occur with in the CommitInterval , only the last received write operation is persisted and writes before that are ignored/missed.According to docs, every write operation will have a record with unique identifier in the journal. So even if there are write operations on the same document, all the writes should be persisted without fail. But in my case only last received write operation is only performed.", "username": "Perumalla_Giridhar" }, { "code": "", "text": "Hi @Perumalla_Giridhar, welcome to the community.By commit interval, do you mean the setting storage.journal.commitIntervalMs?If yes, then the journal in question is WiredTiger’s internal journaling mechanism (write ahead log), which I don’t believe is accessible from MongoDB. WiredTiger uses this journal mechanism to guard against acknowledged write loss in case the server was killed unexpectedly.How did you determine that some writes are being ignored by WiredTiger? Could you elaborate on the testing method?Best regards,\nKevin", "username": "kevinadi" }, { "code": " {w: 1, j: true}j:truej:true{w: 1}", "text": "Hi Kevin,\nYes, I meant storage.journal.commitIntervalMs I saw from my application logs that if multiple patch requests are sent at almost same time and on same document, I get status ok (200) from mongodb but only last sent patch request will take effect on the document.\nFor patch requests sent with some time gap (little more than 100 milli seconds), all patch requests will take effect.\nMy write concern is having default value (i.e., {w : 1}).\nLater I tried changing my write concern to {w: 1, j: true}. From then all patch requests, even when sent at same time, on same document, took effect.\nMy question here is, how come j:true solved the problem?\nj:true is only related to acknowledgement. It confirms on-disk entry of patch requests which is only useful if mongodb crashes.\nWhy are in-memory records in case of default {w: 1} acknowledgement setting is causing issue even when there is no mongodb restart or failure?\nIs there any problem with my understanding? Please correct me if I’m wrong anywhere.", "username": "Perumalla_Giridhar" }, { "code": "", "text": "Hi @Perumalla_Giridhar,Sorry I’m a bit confused. Are you using MongoDB via a driver, a REST interface, or some other method? I ask because MongoDB doesn’t reply in HTTP status code, so OK 200 doesn’t really exist in MongoDB lingo.Could you post the actual command you send to MongoDB (or the code you use to interface with MongoDB), and why you think it’s not the expected outcome? Please also post your MongoDB version and your driver version if applicable.From what I understand, you’re looking for read-your-own-writes capability. Is this correct? If yes, then you might benefit from causal consistency. See read your own writes and Causal Consistency and Read and Write Concerns for more detailed explanations.Best regards,\nKevin", "username": "kevinadi" }, { "code": "{v0: 'itsanswer'} {v1: 'itsanswer'}{answers: {v0: 'itsanswer', v1: 'itsanswer'}}{answers: {v0: 'itsanswer'}}{answers: {v1: 'itsanswer'}}{w: 1, j: true}j: true4.2.7", "text": "Hi KevinLet me explain you the whole story.I have multiple versions of an application running which are clients of mongodb, say app.v0 and app.v1\nthese apps communicate to mongodb through a python eve interface. 
where the application send http request to the interface and get response 200 if successful.If a patch request is made to interface then that interface will send the necessary update command to mongodb. To know more about details of interface, please refer python eve . the default write concern which python eve uses is {w: 1}Now app.v0 and app.v1 process documents one after other. most of the cases they process same document at same time. The processing time is also almost same most of the cases. after processing they update their answer by adding a new field in the document. app.v0 add {v0: 'itsanswer'} to the document and app.v1 will add {v1: 'itsanswer'} field to the document. if both the update request take effect then my final state of document will have the {answers: {v0: 'itsanswer', v1: 'itsanswer'}}since app.v0 and app.v1 send patch request at same time, and with default write concern {w: 1} only the last send patch request is been successful. In this case my document will have either {answers: {v0: 'itsanswer'}} or {answers: {v1: 'itsanswer'}}. In app.v0 and app.v1 , v0 is the production version. so if production answer is missed i get errors in later stage of pipeline.Later when i changed the write concern settings of python eve (interface) to use {w: 1, j: true} the problem was solved surprisingly.Now can you understand my question in the previous comment? according to my understanding j: true would only help in case of mongodb crash or restart. Is that correct?my mongodb version is 4.2.7 . I have followed the exact procedure as mentioned in the official documentation during its installation.", "username": "Perumalla_Giridhar" }, { "code": "v0v1replaceOnej: truev0v1v0v1", "text": "Hi @Perumalla_GiridharI believe what you’re seeing is a race condition between the two eve apps. I don’t think it has anything to do with MongoDB or eve at all.What I think happened is:In this scenario, it is a race between v0 and v1. The winner basically has their version of the document. This is why you see only field v0 sometimes, or only field v1 some other times.However, setting j: true has the effect of throttling both apps, meaning:However, the throttle effect cannot be guaranteed to happen since it was not explicitly designed, so I think occasionally you’ll still see either v0 or v1 like in the non-throttled case.What I don’t know is whether eve is designed to work like this. That is, multiple eve instances interfacing with a single database instance. In a simplified term, I’m not sure if multiple eve processes are “thread-safe”.If eve was never designed with these scenario in mind, then you would have to implement an external signalling process so that multiple instances of eve can work together. Alternatively, you might be able to raise an issue in eve github repository.Of course, this is all assuming that you have parallel eve interfaces running at the same time. If, however, you have a single eve instance interfacing with a single MongoDB instance, and are calling eve’s endpoint from two different REST clients, then we need to dig deeper into what could possibly cause eve to have this race.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,I am using single interface instance (python eve running in a separate container ). all the clients make their patch requests to this container only. But i am running python eve endpoint under uwsgi in that container. 
Do you think race condition can occur because of uwsgi ?", "username": "Perumalla_Giridhar" }, { "code": "", "text": "Hi @Perumalla_GiridharUnfortunately I don’t know enough about eve nor uwsgi to provide you with an answer. What I do know is that the condition was unlikely to be caused by MongoDB, and the symptoms thus far seem to show a race condition.I think you can start the investigation by using a change stream on that collection (note that this feature requires a replica set). Using change stream, you can monitor all the modifications in that collection, and verify what MongoDB was doing to that collection.Another avenue worth pursuing is to elevate the log level or elevate the profiling level. By default, MongoDB records queries taking more than 100ms. You can set this number to e.g. -1ms to ensure that it captures all queries into the log, and trace what happened from the server side.Note that elevated log levels or profiling level would have performance drawbacks, so please be cautious with regard to the workload, and turn them back to their original settings once the investigation is done.Best regards,\nKevin", "username": "kevinadi" } ]
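A hedged illustration of the point made above: when each writer owns a different field, targeted $set updates let both concurrent writes survive, whereas whole-document replacement is last-write-wins. The names mirror the thread and docId is a placeholder:

var docId = "doc-1";   // placeholder _id

// app.v0
db.results.updateOne({ _id: docId }, { $set: { "answers.v0": "itsanswer" } })
// app.v1
db.results.updateOne({ _id: docId }, { $set: { "answers.v1": "itsanswer" } })

// Each updateOne is atomic on the matched document, so the final state holds both answers.v0 and answers.v1.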
Multiple writes on same document within journal commit interval are not updated in the document
2020-08-17T07:34:43.645Z
Multiple writes on same document within journal commit interval are not updated in the document
2,438
https://www.mongodb.com/…5c3526a4556.jpeg
[ "weekly-update" ]
[ { "code": "", "text": "Today we launched the MongoDB $weeklyDigest on Dev.to, a summary of the latest and greatest MongoDB content created and curated over the previous week. Check out the first edition here and let us know if you have any feedback:🎶 back again... 🎶 👋 Hi everyone! Welcome to the FIRST edition of MongoDB $weeklyUpdate, a...", "username": "ado" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB $WeeklyUpdate on Dev.to
2020-08-24T22:33:40.231Z
MongoDB $WeeklyUpdate on Dev.to
3,104
null
[]
[ { "code": "", "text": "Hello, I am very new to MongoDB. google this mongoexport syntax but couldn’t get it work on powershell. Not sure what I did wrong. thanks in advance. My server is windows 2016 , powershell 5.1.\nmongoexport --collection=<collection_name> --db=<db_name> --out=D:\\out.json --query= ‘{“createdAt”: {\"$lt\": {\"$date\": new Date(1597261765739)}}}’I got this error “too many positional arguments: [{ createdAt: {: {: new Date(1597261765)}}}]”.\nI also tried\n‘{“createdAt”: {\"$lt\": {\"$date\": “2020-02-28T00:00:00.000Z”}}}’ and\n“{“createdAt”:{”$gte\": new Date(1597261765739}}}\"… they all end up with same errorThe createdAt has value like this in the collection\n“createdAt” : ISODate(“2020-04-13T23:01:01.560+0000”)Not sure what I did wrong here.", "username": "bernie_zhang" }, { "code": "", "text": "Not sure but ot looks like you have an extra space after –query= and before the query itself.", "username": "steevej" }, { "code": " + CategoryInfo : NotSpecified: (2020-08-19T13:3...d to: localhost:String) [], RemoteException\n + FullyQualifiedErrorId : NativeCommandError\n\n", "text": "Thanks for the reply. I made some progress. look like previous error is due to the character of the quote. error from copy/paste. After modify it, it executed but 0 records returned. I tried:\n–query=\"{‘createdAt’: {’$lt’: {’$date’: ‘2020-08-13T23:01:01.560+0000’}}}\"\n–query=\"{‘createdAt’: {’$lt’: {’$date’: ‘2020-07-28T00:00:00.000Z’}}}\"\n–query=\"{‘createdAt’: {’$lt’: {’$date’: new Date(1597261765739)}}}\"\nmsg:2020-08-19T13:35:17.694-0600\texported 0 recordsI am sure there are some records before July.", "username": "bernie_zhang" }, { "code": "--query=\"{'createdAt': {'$lt': {'$date': '2020-08-13T23:01:01.560+0000'}}}\"\n--query='{\"createdAt\": {\"$lt\": {\"$date\": \"2020-08-13T23:01:01.560+0000\"}}}'\n", "text": "Quotes are indeed an issue from time to time. In particular when cut-n-pasting from a web page since html or UTF introduced fancy quotes, see Common HTML entities used for typography - W3C Wiki.Try with (I tried to make sure that it is ' and \")or (you usually what the single quote outside to make sure the shell does not do any magic)", "username": "steevej" }, { "code": "", "text": "thanks Steve. Looks like the first option–query=\"{‘createdAt’: {’$lt’: {’$date’: ‘2020-08-13T23:01:01.560+0000’}}}\"I got “exported 0 records”.\nbut got problem with second one–query=’{“createdAt”: {\"$lt\": {\"$date\": “2020-08-13T23:01:01.560+0000”}}}’return with error.\\mongoexport : 2020-08-24T15:27:10.330-0600\terror validating settings: query ‘[123 99 114 101 97 116 101 100 65 116 58 32 123 36 103 116 58 32 123 36 100 97 116 101 58 50\n48 50 48 45 48 56 45 49 56 84 48 48 58 48 48 58 48 48 46 53 54 48 43 48 48 48 48 32 125 125 125]’ is not valid JSON: invalid character ‘-’ after object key:value pair…I will play around to see what went wrong…", "username": "bernie_zhang" }, { "code": "", "text": "ok, finally. I made some progress. It is the problem with my Powershell code. I tested the mongoexport with command prompt with --query=\"{‘createdAt’: {’$lt’: {’$date’: ‘2020-08-13T23:01:01.560+0000’}}}\", the collection was successful exported.I will work on the powershell script. thank you for your help.", "username": "bernie_zhang" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoexport syntax on powershell
2020-08-18T20:59:42.505Z
Mongoexport syntax on powershell
5,348
null
[ "motor-driver" ]
[ { "code": " /usr/local/lib/python3.8/site-packages/motor/metaprogramming.py:316: DeprecationWarning: \"@coroutine\" decorator is deprecated since Python 3.8, use \"async def\" instead\n coro = framework.coroutine(attr)\n\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\n", "text": "Python 3.8 support was released in October, so I’m looking into upgrading to it.I upgraded my system and found some DeprecationWarnings in python 3.8.This deprecation is documented here: Coroutines and Tasks — Python 3.11.2 documentation", "username": "Tom_Wright" }, { "code": "", "text": "Welcome to the community @Tom_Wright!Support for Python 3.8 was added for the Motor 2.1 release last December (per MOTOR-458).What version of Motor are you using?Regards,\nStennie", "username": "Stennie_X" }, { "code": "coroutine\" decorator is deprecated since Python 3.8", "text": "coroutine\" decorator is deprecated since Python 3.8This is a known issue that we are planning to address soon. See https://jira.mongodb.org/browse/MOTOR-484.", "username": "Shane" }, { "code": "", "text": "Hi Tom. I understand that this can be a bit confusing - allow me to explain. The current version of Motor supports Python runtimes all the way back to 2.7 (using Tornado for async). Consequently, there are several places in the code where we use the old coroutine syntax. In the next release, we will drop support for older Python runtimes and also switch over to exclusively using async/await (see https://jira.mongodb.org/browse/MOTOR-373)", "username": "Prashant_Mital" }, { "code": "", "text": "Thanks for your quick reply.Hmm, looks like your jira was the place to look for this! I posted this message here after visiting your github page which stated that:\"Issues with, questions about, or feedback for Motor should be sent to the MongoDB Community Forums.For confirmed issues or feature requests, open a case in Jira in the “MOTOR” project.\"which lead me to believe (incorrectly) that all issues were discussed here prior to the creation of a ticket.It looks like you are on it though!Thanks,\nTom", "username": "Tom_Wright" }, { "code": "", "text": "Hmm, looks like your jira was the place to look for this!Hi Tom,The general suggestion is correct: discussion in our community forums will help clarify issues and we can provide advice if there is a more targeted destination for a feature request or bug. Our community forums reach the broadest audience and include members of the MongoDB engineering and product teams, so you are likely to get a faster response here than adding an issue for the development team to triage. Discussion here also benefits others in the community who can learn and share their experience.Our JIRA issue tracker is focused on development tasks, so is worth searching for context on issues that have been reported and possibly addressed in (or planned for) a release. Before diving into JIRA, I would check the driver release notes / changelog to see if there are newer releases available than your current version. The changelogs in driver documentation will usually link to more information in JIRA.There is also a MongoDB Feedback Engine site for product and feature suggestions. These suggestions are generally focused on use cases rather than very specific bugs or implementation tasks.Note: for future driver questions, it would be helpful to include your specific driver version for context and faster response. 
I know that Motor added support for Python 3.8 but didn’t look into the warning because you hadn’t mentioned a version yet (upgrading to the latest release is the most likely suggestion to start with).Regards,\nStennie", "username": "Stennie_X" }, { "code": "DeprecationWarning: \"@coroutine\" decorator is deprecated since", "text": "DeprecationWarning: \"@coroutine\" decorator is deprecated sinceThis issue has been fixed in Motor version 2.2. See the release announcement here: Motor 2.2.0 Released", "username": "Shane" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Supporting python 3.8 in motor
2020-06-18T08:53:01.360Z
Supporting python 3.8 in motor
4,486
null
[]
[ { "code": "COLLSCAN$neextra$neextradb.coll.explain(true).find({\"extra.tag\": {$ne: \"dummy-tag\"}})\nexplain()\"executionStats\" : {\n \"executionSuccess\" : true, \n \"nReturned\" : 10133998.0, \n \"executionTimeMillis\" : 5018.0, \n \"totalKeysExamined\" : 0.0, \n \"totalDocsExamined\" : 10184077.0, \n \"executionStages\" : {\n \"stage\" : \"COLLSCAN\", \n \"filter\" : {\n \"extra.tag\" : {\n \"$not\" : {\n \"$eq\" : \"dummy-tag\"\n }\n }\n }, \n \"nReturned\" : 10133998.0, \n \"executionTimeMillisEstimate\" : 394.0, \n \"works\" : 10184079.0, \n \"advanced\" : 10133998.0, \n \"needTime\" : 50080.0, \n \"needYield\" : 0.0, \n \"saveState\" : 10184.0, \n \"restoreState\" : 10184.0, \n \"isEOF\" : 1.0, \n \"direction\" : \"forward\", \n \"docsExamined\" : 10184077.0\n }, \n \"allPlansExecution\" : [\n\n ]\n}, \ndb.coll.createIndex({\"extra.$**\": 1})\n\"indexSizes\" : {\n \"_id_\" : 102354944.0, \n ...\n ...\n ...\n \"extra.$**_1\" : 110243840.0\n }, \n{ \n \"_id\" : ObjectId(\"5c582f5577612608f3e6a333\"), \n \"email\" : \"\", \n \"createdAt\" : ISODate(), \n \"name\" : \"\" , \n \"firstname\" : \"\", \n \"lastname\" : \"\", \n \"birthDate\" : ISODate(),\n \"gender\" : \"\", \n \"phone\" : \"\", \n \"city\" : \"\", \n \"country\" : \"\",\n \"company\" : \"\", \n \"labels\" : [\n \"dummy-label\"\n ], \n \"index\" : 0.0, \n \"state\" : \"ACTIVE\", \n \"extra\" : {\n \"tag\" : \"dummy-tag\", \n \"note\" : \"dummy note\"\n }\n}\n", "text": "I just found out that wildcard index on mongodb 4.2, doing a COLLSCAN for $ne query. So I was wondering, whether I did something wrong, or it was currently not being supported. And here I was looking a solution to use indexing for my ever growing (unstructured) extra field while using $ne operation. Because my extra field will store many kind of key-value string data.This is my query,And here’s is the explain() result,This is how I create my wildcard indexIndexes on my collectionSample of document, due to the nature of our data, I omit some of the valuesPlease let me know If I’m not clear enough with my question. Thank you", "username": "nanangarsyad" }, { "code": "$netotalDocsExamined\" : 10184077.0\nnReturned\" : 10133998.0\n", "text": "Hi @nanangarsyad,The $ne is not considered a selective operator and generally cannot utilize an index properly regardless of wildcard indexing capabilities.What is the reason you use an exclusive search for tags? 
What are you trying to achieve?It looks like the amount of returned data is a very big portion of total documents in this collection so COLLSCAN makes more sense anyhow.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "extra.tagextra.tagdummy-tagtotalDocsExamined\" : 10184077.0\nnReturned\" : 10133998.0\nextra.tag{ \n \"_id\" : \"not-dummy-tag\", \n \"count\" : 1.0\n}\n{ \n \"_id\" : \"dummy-tag\", \n \"count\" : 10184076.0\n}\n\nCOLLSCAN\"executionStats\" : {\n \"executionSuccess\" : true, \n \"nReturned\" : 1.0, \n \"executionTimeMillis\" : 4964.0, \n \"totalKeysExamined\" : 0.0, \n \"totalDocsExamined\" : 10184077.0, \n \"executionStages\" : {\n \"stage\" : \"COLLSCAN\", \n \"filter\" : {\n \"extra.tag\" : {\n \"$not\" : {\n \"$eq\" : \"dummy-tag\"\n }\n }\n }, \n \"nReturned\" : 1.0, \n \"executionTimeMillisEstimate\" : 460.0, \n \"works\" : 10184079.0, \n \"advanced\" : 1.0, \n \"needTime\" : 10184077.0, \n \"needYield\" : 0.0, \n \"saveState\" : 10184.0, \n \"restoreState\" : 10184.0, \n \"isEOF\" : 1.0, \n \"direction\" : \"forward\", \n \"docsExamined\" : 10184077.0\n }, \n \"allPlansExecution\" : [\n\n ]\n},\n$ne$neextraextra2extradb.coll.createIndex({\"extra2.field1\": 1})\ndb.coll.explain(true).find({\"extra2.field1\": {$ne: \"dummy-field1\"}})\nexplain()\"executionStats\" : {\n \"executionSuccess\" : true, \n \"nReturned\" : 1.0, \n \"executionTimeMillis\" : 0.0, \n \"totalKeysExamined\" : 2.0, \n \"totalDocsExamined\" : 1.0, \n \"executionStages\" : {\n \"stage\" : \"FETCH\", \n \"nReturned\" : 1.0, \n \"executionTimeMillisEstimate\" : 0.0, \n \"works\" : 3.0, \n \"advanced\" : 1.0, \n \"needTime\" : 1.0, \n \"needYield\" : 0.0, \n \"saveState\" : 0.0, \n \"restoreState\" : 0.0, \n \"isEOF\" : 1.0, \n \"docsExamined\" : 1.0, \n \"alreadyHasObj\" : 0.0, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"nReturned\" : 1.0, \n \"executionTimeMillisEstimate\" : 0.0, \n \"works\" : 3.0, \n \"advanced\" : 1.0, \n \"needTime\" : 1.0, \n \"needYield\" : 0.0, \n \"saveState\" : 0.0, \n \"restoreState\" : 0.0, \n \"isEOF\" : 1.0, \n \"keyPattern\" : {\n \"extra2.field1\" : 1.0\n }, \n \"indexName\" : \"extra2.field1_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"extra2.field1\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"extra2.field1\" : [\n \"[MinKey, \\\"dummy-field1\\\")\", \n \"(\\\"dummy-field1\\\", MaxKey]\"\n ]\n }, \n \"keysExamined\" : 2.0, \n \"seeks\" : 2.0, \n \"dupsTested\" : 0.0, \n \"dupsDropped\" : 0.0\n }\n }, \n \"allPlansExecution\" : [\n\n ]\n}, \n", "text": "Hi @Pavel_Duchovny,Thanks for replaying.\nWell, as for my reason using exclusive search for extra.tag. I was trying to find all extra.tag that isn’t dummy-tag, and then count the total them.In the example above with the result,I was using data with high variety of value for field extra.tag. To simplify my problem, I also tried to decrease the variety of my data into a data with this kind of distribution.I don’t know why, but it also gave me the same COLLSCAN query plan like this.Is it the expected behavior of $ne operation for wildcard index?,\nif it so, is there any other way to solve the $ne query for my ever growing key-value field (my extra field).As a side note. 
I also tried experimenting normal index (not wildcard-index) for a different field named extra2 but structured exactly like field extra, and the result was like what I expected, that is, the query planner using index to find the result.\nHere’e the detail,the explain() result,Best regards.", "username": "nanangarsyad" }, { "code": "", "text": "Hi @nanangarsyad,Well the wild card index does not support document or array inequality and not direct values:Is tag field is an array?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "extra.tagextra: {\n tag: \"dummy-tag\",\n ...\n ...\n}\n\n", "text": "Hi, @Pavel_DuchovnyUnfortunately, extra.tag just field with single string value.\neg.Thank you,\nNanang", "username": "nanangarsyad" } ]
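Building on the extra2 experiment quoted above, a hedged workaround is a targeted single-field index on the negated path, which gives the planner the split bounds that the wildcard index cannot provide for $ne. Whether it actually beats a COLLSCAN still depends on selectivity, as noted earlier in the thread:

db.coll.createIndex({ "extra.tag": 1 })

db.coll.find({ "extra.tag": { $ne: "dummy-tag" } })
  .hint({ "extra.tag": 1 })              // optional; shown only to force the comparison
  .explain("executionStats")
// Expected bounds, as in the extra2 example: [MinKey, "dummy-tag") and ("dummy-tag", MaxKey]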
Mongodb wildcard index not being used for $ne query
2020-08-22T20:35:53.032Z
Mongodb wildcard index not being used for $ne query
3,538
null
[ "replication" ]
[ { "code": "", "text": "Hi !I have a set up which includes three machines, m1, m2 and m3. These machines talks to each other using hostnames host-m1, host-m2 and host-m3. The IP to host mapping is retained in the /etc/hosts of each machine. For various reasons, I cannot employ a DNS server here.So, the /etc/hosts of each machine looks something like this,================== 10.46.51.1 =================\n10.46.51.3 host-m3\n10.46.51.2 host-m2\n10.46.51.1 host-m1\n================== 10.46.51.2 =================\n10.46.51.3 host-m3\n10.46.51.2 host-m2\n10.46.51.1 host-m1\n================== 10.46.51.3 =================\n10.46.51.3 host-m3\n10.46.51.2 host-m2\n10.46.51.1 host-m1Now, for various reasons, the hostnames sometime have to be changed. For example, this,10.46.51.3 host-m3\n10.46.51.2 host-m2\n10.46.51.1 host-m1might need to be changed to,\n10.46.51.3 host-m3\n10.46.51.2 host-m1\n10.46.51.1 host-m2\non all the machines.While this change is in progress, the mappings on all machines might not be the same. Meaning, on m1 “host-m3” maps to m3, while on m2 “host-m3” might map to m1While this change is happening, I will sometime do a replica set init. When I do this, I observe the following,Can someone help me understand what is mongo’s recommended approach while init-ing a replica set in a scenario where the host to IP mappings on the all the members might not be in sync.Additional details:The Actual hostname to IP mappings on the machines where,\n================== 10.46.51.5 =================\n10.46.51.5 cvm-5\n10.46.51.2 cvm-1\n10.46.51.1 cvm-2\n================== 10.46.51.1 =================\n10.46.51.5 cvm-2\n10.46.51.2 cvm-5\n10.46.51.1 cvm-1\n================== 10.46.51.2 =================\n10.46.51.5 cvm-1\n10.46.51.2 cvm-2\n10.46.51.1 cvm-5Mongodb Version 2.4.6 was running on centos 7", "username": "Mohammad_Ghazanfar" }, { "code": "", "text": "Hi @Mohammad_GhazanfarIf I understand the question correctly, you are changing the name to IP address mapping while the MongoDB process is running, and the replica set behaves strangely. Is this correct?If yes, then unfortunately there’s not much the server can do to fix itself since the situation is not under its control. A replica set node tries to connect to other nodes in the replica set, but if it asks the OS for the address for a certain node, but the address given to it is wrong, there is nothing it can do about it.I would recommend to shutdown MongoDB while these IP remapping are being done, and restart them once all the correct IP mappings are in place. Having a set of very confused replica set is generally not a good thing, especially if you’re doing writes while this is going on.Mongodb Version 2.4.6 was running on centos 7Please note that version 2.4.6 is seriously outdated now. The 2.4 series was released in March 2013 (7 years ago) and was out of support in March 2016 (4 years ago). Please consider using a supported version (see Support Policy for a list of supported versions).Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks @kevinadi for your response.This is correct: “changing the name to IP address mapping while the MongoDB process is running, and the replica set behaves strangely”Mongodb Version 2.4.6 was running on centos 7Oh man ! Sorry about that, I seem to have mistyped. I meant 4.2.6I will go ahead with your recommendation to restart mongo when the remapping happens.\nThanks again for your response.", "username": "Mohammad_Ghazanfar" } ]
Replica set behaviour when members can't communicate properly
2020-08-19T14:28:06.417Z
Replica set behaviour when members can&rsquo;t communicate properly
1,933
null
[ "aggregation" ]
[ { "code": "", "text": "Is it possible to do a $lookup aggregation between two databases in Mongodb ?I try this documentation, but the results is empty array", "username": "RindangRamadhan" }, { "code": "", "text": "It should work as documented. The empty result is probably cause by something else being wrong, like wrong field names, wrong values, wrong operators. It would hel if you share your pipeline.", "username": "steevej" }, { "code": "db.getSiblingDB(\"mai_inventory_service\").view_product_hpp.aggregate([\n {\n $lookup: {\n localField: '_id',\n from: 'view_main_dealer_package_products',\n foreignField: 'product_id',\n as: 'package_product',\n }\n }\n])\n", "text": "This is my pipeline", "username": "RindangRamadhan" }, { "code": "$lookup$lookupfromfrom", "text": "Welcome to the community @RindangRamadhan!As at MongoDB 4.4, $lookup only supports looking up from collections in the same database. Per the $lookup documentation for the from field:Specifies the collection in the same database to perform the join with. The from collection cannot be sharded.There is a relevant feature request you can upvote & watch: SERVER-34935: Support cross-database lookup.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$lookup across 2 databases
2020-08-23T20:54:51.775Z
$lookup across 2 databases
8,669
null
[]
[ { "code": "replaceOne", "text": "We’re using the replaceOne operation on the collection which uses the “filter” clause to check that it’s about to update a document with its certain field having some specific value.All is good but since the documents in the collection are updated by multiple parties working in parallel, I’d like to know whether it is guaranteed that between the time the filter has found a document to be replaced — that is, the filter has matched, — and that very document gets replaced, it is impossible for that document to be replaced by a concurrently running operation. I failed to find any statement on this in the MongoDB docs.We’re using MongoDB 3.6 (please don’t ask why) so using transactions is out of the question.I have already asked this question on SO but it collected no constructive responses so far, so I’m asking here.", "username": "kostix" }, { "code": "replaceOneupdateOnefindAndModify", "text": "Hi @kostix welcome to the community.replaceOne and similar commands like updateOne, findAndModify, etc. will only perform the operation if the document satisfies the filter criteria. They are a single atomic operation.However, replaceOne() replaces the first matching document in the collection. That is, if you have multiple documents satisfying the filter criteria, it will replace the first one it sees, so if the filter criteria doesn’t identify a single unique document, the result could be unpredictable.If you need to update/replace a single document, it’s typically recommended to use findAndModify since you’ll have more control if the filter critera can potentially match multiple documents (see Comparisons with the update Method).Best regards,\nKevin", "username": "kevinadi" }, { "code": "_id_id_id_id=ABCversion=1version=2version=3replaceOneversionversion=3version=2version", "text": "Hi, Kevin!Thanks for the response!I would like to solicit a bit more expanded definition of atomicity there — if possible. 
\nMaybe an example could help.Each document in my collection has a unique identifier (the _id field is naturally used to store it) and an integer field which may be though of as a version (or revision) of a datum identified by a particular _id.\nThe piece of software making use of that collection periodically receives new revisions of particular documents (from the outside) and has to update them in the collection.\nNo matter which exactly MongoDB operation we intend to use for that, the logic for the replacement has to be this: “find a document with such and such _id and with the version less than what we are about to use as a replacement”.\nThat is, if the collection has a document with a version greater or equal to than what we have, do nothing, otherwise — perform replacement.So far so good, but now let’s introduce more “updaters”: now more than a single client may receive an updated document and will attempt to replace its existing version in the collection.\nWhat I’m afraid of — in this setting — is a following situation:What I’m asking for is whether it’s possible that in the described case the replacement performed by the second updater (wanting to replace with version=3) might happen in between the query performed by the first updater (wanting to replace with version=2) allows it to proceed and it actually stores its document?This way, there is a possibility of updating the document to a lower-than-should-have-been version irrespective of the check performed by the operation’s query.That is what bothers me: the atomicity of the query and the update — as a sequence of operations.(Sorry for the wall of text but I have tried hard to explain this problem.)", "username": "kostix" }, { "code": "_idversion", "text": "Hi @kostixThat is a very detailed scenario. I would note that even though MongoDB pre-4.0 doesn’t have multiple document transaction, MongoDB post-3.2 are using WiredTiger as the default storage engine, which is a modern key-value store that notably supports transactions. Internally, MongoDB with WiredTiger has been using transactions to perform all data manipulation work since MongoDB 3.0.In fact, the scenario you described could be a bit more complicated if both the _id and the version fields are indexed (which it should, by the way ). Without leveraging WiredTiger’s transaction capabilities, there could be a moment where the document was updated but the index was not, leading to inconsistencies in the database. To ensure that the database is consistent at all times, MongoDB uses WiredTiger’s transaction extensively. The end result is, even with multiple threads/clients, there is not a moment where the database would be inconsistent from the point of view of any client. Hopefully this answers your question regarding atomicity and consistency.Having said that, in practice it is very difficult for two or more clients to try to update a single document at precisely the same time, unless the schema design forces the clients to bottleneck on a single document. This would also lower your throughput severely since you would have less concurrency. Are you expecting multiple clients to hit one document at exactly the same time once your app scales?Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replace operation: is its write stage atomic with regard to its filter stage?
2020-08-16T12:15:13.758Z
Replace operation: is its write stage atomic with regard to its filter stage?
3,121
https://www.mongodb.com/…a_2_807x1024.png
[ "php" ]
[ { "code": "$result = $collection->aggregate([\n[ '$match' => ['Assunto' => ['$nin' => [null]]] ],\n[ '$sortByCount'=> '$Assunto' ],\n[ '$sort' => ['Count' => -1] ]\n]);\n//Convert $result, a MongoDB\\Driver\\Cursor to array()\n$objects = json_decode(json_encode($result->toArray(),true));\n//Convert stdClass to Array\n$array=json_decode(json_encode($objects),true);\n$result = $collection->aggregate([\n[ '$match' => ['Assunto' => ['$nin' => [null]]] ],\n[ '$sortByCount'=> '$Assunto' ],\n[ '$sort' => ['Count' => -1] ]\n]);\n\n//Convert $result, multiple MongoDB\\Driver\\Cursor objects into stdClass Objects\n$objects = json_decode(json_encode($result->toArray(),true));\n\n//Convert stdClass Objects into Array()\n$array=json_decode(json_encode($objects),true);\n\nreturn $array;\n", "text": "Some documents on the DB have the field ‘Assunto’, I wanted to count how many times different values for ‘Assunto’ occur (ignoring when the field does not exist, so I did this query:The query works properly, my issue is with the return from aggregate. From the documentation it returns either “A MongoDB\\Driver\\Cursor or ArrayIterator object”.Also from the documentation typeMap : “Optional. The type map to apply to cursors, which determines how BSON documents are converted to PHP values. Defaults to the collection’s type map”.I read solutions on Stack Overflow on altering the collection’s typeMap to convert the cursor to an array but I couldn’t get them to work for my case, from my understanding I have multiple MongoDB\\Driver\\Cursor and it was returning only the first one of them.The next solution from Stack Overflow was to encode the cursor to JSON then decode it to an array. Like this:The problem is that this produces an stdClass Object just like this:\nScreenshot (206)987×1252 44.2 KB\nSo, to convert this stdClass to an array I need to do the same code yet again: (sorry for blurry image, happens after resizing)\nScreenshot (207)1012×1243 39 KB\nThis produces the expected output. But doing all this process seems like a waste. What would be the proper way to convert the returned values from aggregate into and array in my case? The entire code snippet in case it helps:", "username": "Joao_Victor_Daijo" }, { "code": "$array = json_decode(json_encode($result->toArray(),true), true);\n", "text": "That was my mistake. I should add true as the second parameter in the json_decode funcion like this:This converts the BSON documents to an array instead of an std object in a single line.", "username": "Joao_Victor_Daijo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Proper way to get a PHP array() from aggregate() or convert a MongoDB\Driver\Cursor to array()
2020-08-20T21:55:29.405Z
Proper way to get a PHP array() from aggregate() or convert a MongoDB\Driver\Cursor to array()
11,979
null
[]
[ { "code": "{\n \"title\": \"user\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"adress\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"city\": {\n \"bsonType\": \"string\"\n },\n \"zip_code\": {\n \"bsonType\": \"string\"\n }\n }\n },\n \"firstname\": {\n \"bsonType\": \"string\"\n },\n \"lastname\": {\n \"bsonType\": \"string\"\n },\n \"prize_id\": {\n \"bsonType\": \"objectId\"\n },\n \"prizes\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"is_favorite\": {\n \"bsonType\": \"bool\"\n },\n \"prize_id\": {\n \"bsonType\": \"objectId\"\n }\n }\n }\n }\n }\n}\n", "text": "Hi,I want to generate a relations many to many between user and prizes with array but can’t figure it out, here the schema:{\n“title”: “prize”,\n“properties”: {\n“_id”: {\n“bsonType”: “objectId”\n},\n“title”: {\n“bsonType”: “string”\n}\n}\n}Found nothing on the documentation, thank you for your help.", "username": "Nabil_Ben" }, { "code": "", "text": "Hello @Nabil_BenGenerally data modelling is a broad topic to discuss, this is due to many factors that may affect the design decision. One major factor is the application requirements, knowing how the application is going to interact with the database. With MongoDB flexible schema characteristic, developers can focus on the application design and let the database design conform for the benefit of the application. See also :A summary of all the patterns we've looked at in this seriesYou may also can checkout:A very good start is a the MongoDB University Class: M320 Data Modeling . There is an entire chapter on M:N relations and the various options to design them.Regards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Thank you, it’s not quite what i’m looking for but i found a way by creating an array of ObjectID in a join table.It’s the only way i found to make a many to many relationships with attributes depending on this relation.\nI’m generating a graphQL API from this.", "username": "Nabil_Ben" }, { "code": "", "text": "Hello @Nabil_Bensolutions depend on the specific use case and how you access the data. In case your use cases are “user driven” you may want to embed the prizes. This would be sense full when this is a point in time information, e.g. an order is shipped to a certain address. In case the user moves the address in the order will stay the same. Embedding even makes sense when you have one side of your many sides which is only rarely changing . You gain performance but you have to pay with updating duplication, this again can be done in an off peak time, in case you can accept some hours of stale data.\nAn other pattern could be the subset pattern, here you keep the latest x records as duplicates, e.g. last ordered items, you’d get the last lets say 10 very quick since they are embedded, if you want to see more than you need to do further reading in a second collection. The cost is again on maintaining the duplication. Again this can be done in off peak time or just with a longer lasting update. You get the idea? (you do not need to keep the data duplicated, you also can spread them in the subset, embedded and the majority in a further collection. If you do this you need to keep this in mind when you access the data on the second collection)\nJust step on step back, analyze the workload and access queries and than go for a pattern. $lookup as a subset of an “join” should be the very last resort.Regards\nMichael", "username": "michael_hoeller" } ]
Many to many Realm
2020-08-17T20:32:07.181Z
Many to many Realm
1,392
null
[ "dot-net" ]
[ { "code": " var fields = Builders<ResItem>.Projection.Include(p => p.MongoId)\n .Include(p => p.Title)\n .Include(p => p.Category)\n .Include(p => p.UsedStates)\n .Include(p => p.PublishDate)\n .Include(p => p.CreateDate)\n ;\n \n var results = TArticleContent.Find(condition).Project<ResItem>(fields).Skip((pageIndex - 1) * pageSize).Limit(pageSize);\n", "text": "this code is Eq Select :\nvar condition = Builders.Filter.Eq(p => p.Author, category);Now i need the code like\n(x => x.Keywords.Equals(category)like\nvar items = TArticleContent.PageList(x => x.Keywords.Equals(category), a => a.Desc(b => b.PublishDate), pageIndex, pageSize);Thanks very much…", "username": "AtlantisDe" }, { "code": " var fields = Builders<ResItem>.Projection.Include(p => p.MongoId)\n .Include(p => p.Title)\n .Include(p => p.Category)\n .Include(p => p.UsedStates)\n .Include(p => p.PublishDate)\n .Include(p => p.CreateDate)\n ;\n\nvar items = TArticleContent.Find(x => x.Keywords.Equals(category)).Project(fields).Skip((pageIndex - 1) * pageSize).Limit(pageSize).ToList();\nvar Count = TArticleContent.CountDocuments(x => x.Keywords.Equals(category));\n", "text": "I Haved do it This is solution:", "username": "AtlantisDe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I make FilterDefinitionBuilder like x => x.Keywords.Equals(category)
2020-08-23T12:57:59.157Z
How can I make FilterDefinitionBuilder like x => x.Keywords.Equals(category)
1,871
null
[ "transactions", "atlas-functions" ]
[ { "code": "", "text": "How to use transactions in the Moongodb Realm app function. If I want to insert 2 documents into 2 collections. If one of them failed the transaction in the session should be rollback. I am talking about the Realm system app function.", "username": "Roshan_Prabashana" }, { "code": "", "text": "Hi @Roshan_Prabashana,As far as I know Realm does not support transactions for now.What you can potentially do is upload a mongodb node js driver as a dependency and have a code to execute a transaction through the driver.Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "async function main(){\n \n const client = context.services.get(\"mongodb-atlas\");\n const session = client.startSession();\n \n const coll1 = client.db('TestDb').collection('items');\n const coll2 = client.db('TestDb').collection('items2');\n \n\n // const session = client.startSession();\n\n const transactionOptions = {\n readPreference: 'primary',\n readConcern: { level: 'local' },\n writeConcern: { w: 'majority' }\n };\n \n try {\n await session.withTransaction(async () => {\n // Important:: You must pass the session to the operations\n await coll1.insertOne({ name: \"abc2\" }, { session });\n \n await coll2.insertMany([{ name: \"abc2.1\" },{ name: \"abc1.2\" }], { session })\n .then(result => console.log(`Successfully inserted item with _id: ${result.insertedId}`));\n .catch(err => console.error(`Failed to insert item: ${err}`));\n }, transactionOptions);\n } catch (e) {\n console.error(e);\n await session.abortTransaction();\n }\n finally {\n await session.endSession();\n \n }\n}\n", "text": "The following worked for me", "username": "Roshan_Prabashana" }, { "code": "", "text": "Hi @Roshan_Prabashana,That’s a new capability, awesome.https://docs.mongodb.com/realm/mongodb/transactions/Hope this works to your satisfaction.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Transaction in system function of Mongodb Realm app
2020-08-01T12:16:05.499Z
Transaction in system function of Mongodb Realm app
3,088
https://www.mongodb.com/…0_2_1024x178.png
[]
[ { "code": "", "text": "Because we are short for disk space , I droped one conllection that is not useful. But the disk space remains the same after I droped a conllection.I went to the data directory find one metadata file is the biggest file (317G), and I can’t find which conllection is using this file, I checked every conllection’s uri, found no conllection has this uri, but this metadata file still exist in data dirctory and occupy a lot of disk space.Anyone know why??微信图片_202007131549031734×303 21.4 KB", "username": "Rachel_Qiu" }, { "code": "db.adminCommand({listDatabases:1}).databases.forEach(\n function(database){\n db = db.getSiblingDB(database.name); \n db.getCollectionNames().forEach(\n function(collection){\n print([database.name, collection].join('.'),'\\t',db.getCollection(collection).stats().wiredTiger.uri)\n }\n )\n }\n) \n", "text": "Welcome to the forum @Rachel_QiuDid you check every collection in every database? MongoDB Compass is handy here to sort the databases and collections by storagesize.", "username": "chris" }, { "code": "", "text": "yes, I did check and I ran the function you give me, that biggest metadata file is not on the list.", "username": "Rachel_Qiu" }, { "code": "", "text": "MongoDB / WiredTiger will keep the allocated disk space reserved because it assumes that you will re-use the space with new data at some point. I think if you want it to actually release the used space, you would have to run a compact operation. You can find the documentation here:", "username": "frankshimizu" }, { "code": "", "text": "@frankshimizu @Rachel_Qiu had dropped the collection not just delete documents. This usually result in the collection file being removed.Also as @Rachel_Qiu points out none of the collection uri points to the file in question. So which collection would need this compacting?@Rachel_Qiu What mongod version is running ?", "username": "chris" }, { "code": "", "text": "dropped the collection not just delete documents. This usually result in the collection file being removed.\nThanks for clarifying that and sorry for getting that wrong.", "username": "frankshimizu" }, { "code": "", "text": "there isn’t enough space for compact operation, I think compact operation need at least one time more than the existing data", "username": "Rachel_Qiu" }, { "code": "", "text": "I already solved this problem by restart mongo, the secondary node had reclaimed the disk space and that big metadata file disappeared. and I renamed the metadata file on the primary node, it didn’t affect the service so I move it to other machine.thank you for asking although I don’t know why", "username": "Rachel_Qiu" }, { "code": "", "text": "Hi @Rachel_QiuCould you provide the details mentioned by @chris, namely:Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "", "username": "Rachel_Qiu" }, { "code": "", "text": "Hi @Rachel_Qiuversion: 3.4.6That is quite an old version of MongoDB (released way back in July 2017). This version is affected by SERVER-31101 which was fixed in MongoDB 3.4.11 and newer. Note that MongoDB 3.4 series are not supported anymore since January 2020.I would encourage you to upgrade to a supported versions of MongoDB.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "@kevinadithanks Kevin\nI will consider that if upgrade to a new version doesn’t need data migration", "username": "Rachel_Qiu" } ]
Mongo doesn't reclaim disk space after drop collection and can't find collection based on wiredTiger metadata uri
2020-07-13T10:01:02.418Z
Mongo doesn't reclaim disk space after drop collection and can't find collection based on wiredTiger metadata uri
26,631
null
[ "connecting", "security" ]
[ { "code": "", "text": "Hi,\nWe do have environment in mongoDB 4.2 ,redhad 7 . As per the user request as a admin we have created the database as “xybc” and provided user id as “admin” and password “******” . Prior to handover the user id and password to user we want to test whether those credentials are working or not . could you please suggest what command we can give to check the same. Let me know if you need any further information .Thanks and Regards\nBibhu", "username": "Bibhusisa_Pattanaik" }, { "code": "", "text": "Is it standalone or a replicaset?\nHow was mongod started(access control enabled?)\nPlease check mongo documentation for various optionsTry this:mongo --host localhost -u user -p password --authenticationDatabase admin testormongo --host “hostname:port” -u user -p password --authenticationDatabase admin testreplace above commands with your values", "username": "Ramachandra_Tummala" }, { "code": "[root@****** ~]# mongo --host ddaa205a -u admin -p Logintoday@$2020 --authenticationDatabase admin bibhu\nMongoDB shell version v3.6.5\nconnecting to: mongodb://ddaa205a:27017/bibhu\nMongoDB server version: 3.6.5\n2020-08-22T16:22:02.057+0200 E QUERY [thread1] Error: Authentication failed. :\nDB.prototype._authOrThrow@src/mongo/shell/db.js:1608:20\n@(auth):6:1\n@(auth):1:2\nexception: login failed\n\n[root@******* ~]# mongo --host 'ddaa205a:27017' -u admin -p Logintoday@$2020 --authenticationDatabase admin bibhu\nMongoDB shell version v3.6.5\nconnecting to: mongodb://ddaa205a:27017/bibhu\nMongoDB server version: 3.6.5\n2020-08-22T16:13:47.773+0200 E QUERY [thread1] Error: Authentication failed. :\nDB.prototype._authOrThrow@src/mongo/shell/db.js:1608:20\n@(auth):6:1\n@(auth):1:2\nexception: login failed", "text": "BibhuThank you so much Ram,\nReally it will be very helpful for me . It is a standalone database.I am totally new to this environment that is why i am not sure how mongd is started . could you please suggest how can i check how the mongod is started . I have tried both the commands but it is saying authentication failed but , I am sure that i have given the same password. Could you please suggest how can i get to know about my password of the database “bibhu” from admin database,could you please provide me exact command or process and next thing how can i change the password for the database “bibhu” so that will try to connect using the same.", "username": "Bibhusisa_Pattanaik" }, { "code": "use admin\ndb.system.users.find({},{_id:1,roles:1})\n{ \"_id\" : \"test.one\", \"roles\" : [ { \"role\" : \"readWrite\", \"db\" : \"test\" } ] }\n{ \"_id\" : \"admin.two\", \"roles\" : [ { \"role\" : \"readWrite\", \"db\" : \"two\" } ] }\n\n", "text": "Depending on where you create the user/role @Ramachandra_Tummala suggestion may not work.Do you have the steps you performed to create the database and user?If you want to find where you created the user you can look them up in the system.users in the admin database:User one was created ‘in’ the test database. 
User two was created in the admin database.", "username": "chris" }, { "code": "[root@****** bin]# cat /tmp/db-create.js\n// switch to database\ndb = db.getSiblingDB('Bibhu');\n\n// firstly we have to clean up potential remainders\ndb.dropAllRoles();\ndb.dropAllUsers();\ndb.dropDatabase();\n\n// create admin user\ndb.createUser({\n user: 'admin',\n pwd: 'Loginto@020',\n roles: [{ role: 'dbOwner', db:'Bibhu'}]\n});\n\n// create collection\ndb.createCollection('adHocCollection')\n\n// insert documents\ndb.runCommand(\n {\n insert: \"adHocCollection\",\n documents: [\n { ingredient: \"potatoes\", origin: \"Switzerland\" },\n { ingredient: \"beef\", origin: \"Argentina\" },\n { ingredient: \"mozzarella\", origin: \"Italy\" }\n ],\n ordered: true,\n writeConcern: { w: \"majority\", wtimeout: 5000 }\n }\n)\n[root@******* bin]# mongo\nMongoDB shell version v3.6.5\nconnecting to: mongodb://127.0.0.1:27017\n2020-08-23T07:22:12.802+0200 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused\n2020-08-23T07:22:12.802+0200 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:251:13\n@(connect):1:6\nexception: connect failed\n[root@******* ~]# systemctl status mongod-inst00.service\n● mongod-inst00.service - High-performance, schema-free document-oriented database\n Loaded: loaded (/usr/lib/systemd/system/mongod-inst00.service; enabled; vendor preset: disabled)\n Active: active (running) since Tue 2020-08-18 04:45:40 CEST; 5 days ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 2222 (mongod)\n CGroup: /system.slice/mongod-inst00.service\n └─2222 /usr/bin/mongod -f /etc/mongod-inst00.conf\n\nWarning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.\n", "text": "Hi Chris,\nThank you so much for your response , even i am trying to check the admin database to get further information about the password which we set earlier because what ever password we have set it up that is not working . below script we ran to create the database.I have created the Database as “Bibhu” but when i am trying to connect using same password which i setup that is not working. so trying to connect Mongo shell so that will connect the admin database but i am getting below error while trying to connect Mongo shellI have tried to check whether Mongod is working fine or not but its working fine as mentioned the status below.Could you please suggest , as i am new to the current environment so facing problem in each step. Sorry to ask all the details.", "username": "Bibhusisa_Pattanaik" }, { "code": "", "text": "You have shown code of db/user creation but where are the outputs\nDid that script run successfully\nHow did you connect before?\nwhen you run just mongo without any options it tries to connect to mongod running on default port 27017\nSo is your mongod up and running on port 27017?\nWhat is the port mentioned in your config file?\nps -ef|grep mongodcouldn’t connect to server 127.0.0.1:27017 means mongod is not up on port 27017\nOnce mongod is up with authentication enabled you need to create superuser first on admin db\nThen that user can create other users by authenticating on admin db", "username": "Ramachandra_Tummala" }, { "code": "[root@******* ~]# ps -ef | grep -i mongod\nmongod 2222 1 1 Aug18 ? 01:15:48 /usr/bin/mongod -f /etc/mongod-inst00.conf\nroot 4106 38552 0 09:27 ? 
00:00:00 sshd: melinmongodbadm04 [priv]\nmelinmo+ 4108 4106 0 09:27 ? 00:00:00 sshd: melinmongodbadm04@pts/0\nroot 4431 4177 0 09:28 pts/0 00:00:00 grep --color=auto -i mongod\n\n[root@******** ~]# mongo --host 'ddaa205a:2222' -u admin -p Logintoday@$2020 --authenticationDatabase admin bibhu\nMongoDB shell version v3.6.5\nconnecting to: mongodb://ddaa205a:2222/bibhu\n2020-08-23T09:34:13.372+0200 W NETWORK [thread1] Failed to connect to 10.183.128.52:2222, in(checking socket for error after poll), reason: Connection refused\n2020-08-23T09:34:13.373+0200 E QUERY [thread1] Error: couldn't connect to server ddaa205a:2222, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:251:13\n@(connect):1:6\nexception: connect failed\n\n[root@******** ~]# mongo --host 'ddaa205a:2222' -u admin -p Logintoday$@2020 --authenticationDatabase admin bibhu\nMongoDB shell version v3.6.5\nconnecting to: mongodb://ddaa205a:2222/bibhu\n2020-08-23T09:36:54.561+0200 W NETWORK [thread1] Failed to connect to 10.183.128.52:2222, in(checking socket for error after poll), reason: Connection refused\n2020-08-23T09:36:54.561+0200 E QUERY [thread1] Error: couldn't connect to server ddaa205a:2222, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:251:13\n@(connect):1:6\nexception: connect failed", "text": "ps -ef|grep mongodThank you so much Ram,\nI have found the mongod port as 2222 and i have given the same as well. but now getting network error as mentioned below. Could you please suggest.", "username": "Bibhusisa_Pattanaik" }, { "code": "[root@******* ~]# ps -ef | grep -i mongod\nmongod 2222 1 1 Aug18 ? 01:15:48 /usr/bin/mongod -f /etc/mongod-inst00.conf\nroot 4106 38552 0 09:27 ? 00:00:00 sshd: melinmongodbadm04 [priv]\nmelinmo+ 4108 4106 0 09:27 ? 00:00:00 sshd: melinmongodbadm04@pts/0\nroot 4431 4177 0 09:28 pts/0 00:00:00 grep --color=auto -i mongod\n\n[root@******* ~]# systemctl status mongod-inst00.service\n● mongod-inst00.service - High-performance, schema-free document-oriented database\nLoaded: loaded (/usr/lib/systemd/system/mongod-inst00.service; enabled; vendor preset: disabled)\nActive: active (running) since Tue 2020-08-18 04:45:40 CEST; 5 days ago\nDocs: https://docs.mongodb.org/manual\nMain PID: 2222 (mongod)\nCGroup: /system.slice/mongod-inst00.service\n└─2222 /usr/bin/mongod -f /etc/mongod-inst00.conf\n\nWarning: Journal has been rotated since unit was started. 
Log output is incomplete or unavailable.\n\n[root@******** ~]# mongo --host ‘ddaa205a:2222’ -u admin -p Logintoday@$2020 --authenticationDatabase admin bibhu\nMongoDB shell version v3.6.5\nconnecting to: mongodb://ddaa205a:2222/bibhu\n2020-08-23T09:34:13.372+0200 W NETWORK [thread1] Failed to connect to 10.183.128.52:2222, in(checking socket for error after poll), reason: Connection refused\n2020-08-23T09:34:13.373+0200 E QUERY [thread1] Error: couldn’t connect to server ddaa205a:2222, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:251:13\n@(connect):1:6\nexception: connect failed\n\n[root@******** ~]# mongo --host ‘ddaa205a:2222’ -u admin -p Logintoday$@2020 --authenticationDatabase admin bibhu\nMongoDB shell version v3.6.5\nconnecting to: mongodb://ddaa205a:2222/bibhu\n2020-08-23T09:36:54.561+0200 W NETWORK [thread1] Failed to connect to 10.183.128.52:2222, in(checking socket for error after poll), reason: Connection refused\n2020-08-23T09:36:54.561+0200 E QUERY [thread1] Error: couldn’t connect to server ddaa205a:2222, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:251:13\n@(connect):1:6\nexception: connect failed\n", "text": "[root@******* ~]# systemctl status mongod-inst00.service\n● mongod-inst00.service - High-performance, schema-free document-oriented database\nLoaded: loaded (/usr/lib/systemd/system/mongod-inst00.service; enabled; vendor preset: disabled)\nActive: active (running) since Tue 2020-08-18 04:45:40 CEST; 5 days ago\nDocs: https://docs.mongodb.org/manual\nMain PID: 2222 (mongod)\nCGroup: /system.slice/mongod-inst00.service\n└─2222 /usr/bin/mongod -f /etc/mongod-inst00.confWarning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.Thank you so much Ram,I have found the mongod port as 2222 and i have given the same as well. but now getting network error as mentioned below. Could you please suggest.", "username": "Bibhusisa_Pattanaik" }, { "code": "", "text": "2222 /usr/bin/mongod -f /etc/mongod-inst00.conf2222 is not port.It is PID of your mongod process.So using it as port in your command is not correct\nPurpose of asking ps output is to see what all mongods running on your boxPlease check you config file on what port it is running\ncat /etc/mongod-inst00.conf and search for port\nCheck you mongod.log.It may show more details", "username": "Ramachandra_Tummala" }, { "code": "Bibhu--authenticationDatabaseBibhuBibhuuse Bibhu\nshow users\n", "text": "[root@****** bin]# cat /tmp/db-create.js\n// switch to database\ndb = db.getSiblingDB(‘Bibhu’);// firstly we have to clean up potential remainders\ndb.dropAllRoles();\ndb.dropAllUsers();\ndb.dropDatabase();// create admin user\ndb.createUser({\nuser: ‘admin’,\npwd: ‘Loginto@020’,\nroles: [{ role: ‘dbOwner’, db:‘Bibhu’}]\n});You created the user ‘in’ the Bibhu database. So when connecting with mongo just specifying the database without the --authenticationDatabase or specifying that option as Bibhu.Running the command from my earlier post should show the user if it were created. Alternatively show users while in the Bibhu database will likewise show the user.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to connect using user id and password
2020-08-22T01:38:06.771Z
How to connect using user id and password
109,954
null
[ "aggregation", "indexes" ]
[ { "code": " \"pipeline\": [\n {\n \"$group\": {\n \"uniqueValues\": {\n \"$addToSet\": \"$project\"\n },\n \"_id\": null\n }\n }\n ]\n \"pipeline\": [\n {\n \"$sort\": {\n \"project\": 1\n }\n },\n {\n \"$group\": {\n \"uniqueValues\": {\n \"$addToSet\": \"$project\"\n },\n \"_id\": null\n }\n }\n ]\n", "text": "HiI have wildcard index all over the collection.\nI’m trying to get a list of unique valuesI tried to useBut it was super slow then I read that in order to use the field index I should add the sort stage before, so I tried this one:But still it not using the index (“planSummary”: [ { “COLLSCAN”: {} } ]) and now I got “errMsg”: “Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.”How can I get the unique values and use the wildcard index?Thank you", "username": "Mordechai_Ben_Zechar" }, { "code": "“pipeline”: [\n{\n“$sort”: {\n“project”: 1\n}\n},\n{\n“$group”: {\n“_id”: \"$project\"\n}\n}]\n", "text": "Hi @Mordechai_Ben_Zechar,Your grouping does not use a specific _id therefore will result in a collection scan.The correct way is to group the relevant field for distinct:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "db.mycol.aggregate([ { \"$sort\": { \"project\": 1 } }, { \"$group\": { \"_id\": \"$project\" } }])\n2020-08-23T14:36:53.282+0300 E QUERY [js] Error: command failed: {\n \"operationTime\" : Timestamp(1598182604, 27),\n \"ok\" : 0,\n \"errmsg\" : \"Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.\",\n \"code\" : 16819,\n \"codeName\" : \"Location16819\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1598182604, 27),\n \"signature\" : {\n \"hash\" : BinData(0,\"hZjsV/H8hEr6LK1l0RtfZzT7jB0=\"),\n \"keyId\" : NumberLong(\"6811153382187728897\")\n }\n }\n} \n", "text": "Hi @Pavel_Duchovny and thank you for your answer but it still not working and not using the wildcard index,: aggregate failed :", "username": "Mordechai_Ben_Zechar" }, { "code": "allowDiskUse: true", "text": "Hi @Mordechai_Ben_Zechar,You need to allowDiskUse: true if any of the stages cross 100MB of execution memory.Please note that a full index scan is usually less effecient than collection scan , collection scans are optimised by design for initial sync of replica sets and can be very perfomant.Best\nPavel", "username": "Pavel_Duchovny" } ]
Get field unique values with wildcard index
2020-08-23T07:54:07.846Z
Get field unique values with wildcard index
2,606
null
[ "java" ]
[ { "code": "Command failed with error 251 (NoSuchTransaction): 'Transaction 3 has been aborted.'return Single.fromPublisher(collection.find(session, Filters.eq(\"item\", item)))\n .map(Optional::of)\n .onErrorReturnItem(Optional.empty())\n .observeOn(RxHelper.scheduler(context));\n", "text": "I have the following code, which should try to find a document in a collection. If that item does not exist, the flow should continue without failing or aborting the transaction. When no matching item is found in the collection, the code below currently emits Command failed with error 251 (NoSuchTransaction): 'Transaction 3 has been aborted.' and I haven’t been able to find a way to continue the transaction instead of aborting it.How do I use the reactive driver to search for an item that does not exist, without failing?", "username": "st-h" }, { "code": "", "text": "I can make it work, if I do not pass the session when I look up the element that may not exist. Not sure why that is the case, though.", "username": "st-h" } ]
Reactive Java Driver: How to search for an item that does not exist without aborting the transaction
2020-08-23T11:35:40.603Z
Reactive Java Driver: How to search for an item that does not exist without aborting the transaction
2,669
null
[ "dot-net" ]
[ { "code": "public class Warehouse\n{\n\tpublic string Id { get; private set; }\n\tpublic string Name { get; private set; }\n\tpublic LocationAddress? Address { get; private set; }\n\tpublic WarehouseShelfSetting? ShelfSettings { get; private set; }\n\tpublic WarehouseTrolleySettings? TrolleySettings { get; private set; }\n \n\tpublic Warehouse(string name)\n\t{\n\t\tthis.Name = name;\n\t\tthis.Id = ObjectId.GenerateNewId().ToString();\n\t}\n \n\tpublic Warehouse(string name, LocationAddress? address)\n\t{\n\t\tthis.Name = name;\n\t\tthis.Address = address;\n\t\tthis.Id = ObjectId.GenerateNewId().ToString();\n\t}\n \n\tpublic Warehouse(string id, string name, LocationAddress? address)\n\t{\n\t\tthis.Name = name;\n\t\tthis.Address = address;\n\t\tthis.Id = id;\n\t}\n \n\tpublic class WarehouseTrolleySettings\n\t{\n\t\tpublic LabellingStrategy SlotLabelling { get; set; }\n\t\tpublic int NumberOfSlots { get; set; }\n\t}\n \n\tpublic class WarehouseShelfSetting\n\t{\n\t\tpublic LabellingStrategy ShelfLabelling { get; set; }\n\t\tpublic LabellingStrategy SlotLabelling { get; set; }\n\t}\n \n\tpublic enum LabellingStrategy\n\t{\n\t\tAlphabetic,\n\t\tNumeric\n\t}\n}\n{\n \"_id\" : ObjectId(\"5c534452d3224cc69bdcb6ac\"),\n \"Name\" : \"Centrallagret\",\n \"Address\" : {\n \"StreetAddress\" : \"Storgatan 1\",\n \"StreetAddress2\" : null,\n \"PostalCode\" : \"123 45\",\n \"City\" : \"Stockholm\",\n \"CountryCode\" : \"se\"\n }\n}\nSystem.FormatException: An error occurred while deserializing the Address property of class Zwiftly.Items.Warehouses.Warehouse: No matching creator found.\n ---> MongoDB.Bson.BsonSerializationException: No matching creator found.\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.ChooseBestCreator(Dictionary`2 values)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.CreateInstanceUsingCreator(Dictionary`2 values)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeClass(BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.Serializers.SerializerBase`1.MongoDB.Bson.Serialization.IBsonSerializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize(IBsonSerializer serializer, BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n --- End of inner exception stack trace ---\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeClass(BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize[TValue](IBsonSerializer`1 serializer, BsonDeserializationContext context)\n at MongoDB.Driver.Core.Operations.CursorBatchDeserializationHelper.DeserializeBatch[TDocument](RawBsonArray batch, IBsonSerializer`1 documentSerializer, MessageEncoderSettings messageEncoderSettings)\n at MongoDB.Driver.Core.Operations.FindCommandOperation`1.CreateCursorBatch(BsonDocument commandResult)\n at MongoDB.Driver.Core.Operations.FindCommandOperation`1.CreateCursor(IChannelSourceHandle 
channelSource, BsonDocument commandResult)\n at MongoDB.Driver.Core.Operations.FindCommandOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.FirstOrDefaultAsync[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n", "text": "I created this issue in Jira: https://jira.mongodb.org/projects/CSHARP/issues/CSHARP-3175?But from past experience I know that it might take time before someone looks at that issue, so I’m posting in here as well:This is my Warehouse class:This is the document in the database:This code has been the same for very long time, but today I upgraded the MongoDB.Driver package, and now I get this exception:I backed version by version and found that this regression was introduced in version 2.10.2.", "username": "John_Knoop" }, { "code": "[BsonDefaultValue]", "text": "Welcome to the community forums @John_Knoop!from past experience I know that it might take time before someone looks at that issue, so I’m posting in here as well:MongoDB’s JIRA is the relevant channel to report a bug or regression (or search for relevant behaviour changes) in an officially supported driver, however it may take some time for the driver team to triage new issues depending on other development priorities. I do see that you have reported other issues that have been triaged into the backlog without a public comment yet – you can always ask for an update by commenting on the issue.This code has been the same for very long time, but today I upgraded the MongoDB.Driver package, and now I get this exceptionThe issue you reported is an intentional change introduced in the 2.10.2 driver via CSHARP-2889: BsonClassMap.LookupClassMap supports private constructors inconsistently.For a more detailed explanation, please see @Robert_Stam’s comment on CSHARP-3108.Paraphrasing Robert’s comment:The exception is being thrown because the document does not have a value for an expected property and therefore we don’t have all the needed values to pass to the constructor.You can tell the driver to use a default value for missing fields by using the [BsonDefaultValue] annotationRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_XThanks for your answer! But are you sure this is the same problem?The comment you quote says “The exception is being thrown because the document does not have a value for an expected property and therefore we don’t have all the needed values to pass to the constructor.”, but thats not really the case here. 
My persisted document contains all the values needed to call any of the constructors.Or does it actually look at the properties of the class, and not on the signature of the constructors?", "username": "John_Knoop" }, { "code": "", "text": "@Stennie_X my mistake, this actually turned out to be the same issue.Do you by any change know if it’s possible to use BsonDefaultValue(null) via a convention? So that I don’t have to sprinkle that attribute all over my domain model?", "username": "John_Knoop" } ]
Regression in v2.10.2 .NET driver
2020-08-04T23:25:53.499Z
Regression in v2.10.2 .NET driver
3,518
null
[ "dot-net" ]
[ { "code": " {\n \"_id\": {\n \"$oid\": \"5f376d7ca1787ac4d407c891\"\n },\n \"Date\": {\n \"$date\": \"2020-07-11T21:00:00.000Z\"\n },\n \"DayShift\": {\n \"Employees\": []\n },\n \"EveningShift\": {\n \"Employees\": []\n },\n \"NightShift\": {\n \"Employees\": []\n }\n}\npublic async Task UpdateDayShift(string workDayId, List<string> empIds)\n {\n // get all employees\n var employees = await _employees.AsQueryable<Employee>().ToListAsync();\n // get current instance of the workday from the database\n var newWorkDay = await _workDays.FindAsync<WorkDay>(d => d.Id == workDayId).Result.FirstAsync();\n // clear content of dayshift\n newWorkDay.DayShift.Employees.Clear();\n // add only the selected employees into the shift \n foreach (var e in employees)\n {\n if (empIds.Contains(e.Id))\n {\n newWorkDay.DayShift.Employees.Add(e);\n }\n }\n // set filter and update params\n var filter = Builders<WorkDay>.Filter.Eq(\"_id\", newWorkDay.Id);\n var update = Builders<WorkDay>.Update.Set(f => f.DayShift, newWorkDay.DayShift);\n\n //update shift\n await _workDays.FindOneAndUpdateAsync(filter, update);\n }\n", "text": "Hey there good people!I’ve noticed an issue with driver while trying to update a segment of my document in the database.\nI have the following markup for my entities - A WorkDay Object that holds a datetime, 3 Shift Types - each holding a ListI’m calling an API to make an update and pass the _id of the workday and a list of employee Ids that I want present at that particular shift on that day.I know for a fact the parameters I give are valid through debugging, and that there’s an established connection.\nMy API also returns a 200 code, but no updates actually occur on the database.Has anyone encountered this issue?Kind regards,", "username": "Dani_Rashba" }, { "code": "var filter = Builders<WorkDay>.Filter.Eq(\"_id\", newWorkDay.Id);\n_id_idvar workDayId = ObjectId.Parse(\"5f3f2a80040283351015ee93\"); \n_id", "text": "Hi @Dani_Rashba, and welcome to the forums!Without a provided class mapping, I’d guess that the issue here is because your update filter does not match any documents. This is likely because the below filter:Only trying to match where the value of field _id is equivalent to string of id. The value of _id based on your example document, is in ObjectId format. In order to convert string to an ObjectId you could utilise ObjectId.Parse() method. For example:As an side note to your question and depending on the application use case, you could alter the data modelling for optimisation.Currently you’re duplicating employee records, in the employee collection and in the workdays collection. Depending on the use case, this may not be ideal if there is an update to the employee contact number in the employee collection that is no propagated properly in the workday collection. You could try just saving the _id value of employees in workday collection, and use $lookup to view both information together. Removing this duplicate of information, will also saves your application from fetching the entire employee collection just to perform a single update.Another consideration that is depending on the use case is to insert a new workday entry instead of updating an existing one. The benefit here is to have a historical records of shifts. 
For example, whether employee X was on duty last week during the day, etc.See also Building With patterns: A Summary for various design pattern examples.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi Wan,Thank you very much, I’ve already resolved the issue after figuring out I was updating a null document.\nThe issue was exactly as you said, the _id was in fact - wrong.As for optimization - I definitely agree with you, right now I’m playing with .NET’s blazor wasm and trying to get an MVP for my team at work, there’s a lot to optimize on the app since right now I have an API call for each shift update while usually, a work week is scheduled for the whole week (7 days X 3 Shifts) of API update calls. I need to learn how to batch update with mongo let alone find a good way to to send all data at once.Warm regards,\nDaniel.", "username": "Dani_Rashba" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
.NET Driver | Driver Update calls not registering in the database
2020-08-18T00:29:16.274Z
.NET Driver | Driver Update calls not registering in the database
2,184
null
[ "dot-net" ]
[ { "code": "public class WebConfig\n {\n \n [MongoDB.Bson.Serialization.Attributes.BsonElement(\"_id\")]\n public MongoDB.Bson.ObjectId MongoId { get; set; }\n}\nMongoDB.Bson.Serialization.BsonSerializer.Deserialize<T>(Newtonsoft.Json.JsonConvert.SerializeObject(JObject));\n", "text": "how to deserialization The object “MongoDB.Bson.ObjectId”https://github.com/JamesNK/Newtonsoft.Json/issues/2378I use [Newtonsoft.Json]Newtonsoft.Json.JsonSerializationException:\n“Error converting value “5f3facb172225d2400eed3f9” to type ‘MongoDB.Bson.ObjectId’. Path ‘MongoId’.”pls help meI have try itBut i failed…Thanks", "username": "AtlantisDe" }, { "code": "", "text": "This worked wonderfully for me. I found a completed version of a class with the same name here", "username": "AtlantisDe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to deserialize "MongoDB.Bson.ObjectId"
2020-08-21T12:22:43.157Z
How to deserialize "MongoDB.Bson.ObjectId"
14,828
null
[]
[ { "code": "db.rfmExcerpts.insertMany([\n\t{\n\t\tisSample: false,\n\t\ttags: [],\n\t\towner: \"d\",\n\t\ttitle: \"Rating agencies slow to react\",\n\t\tpages: 26,\n\t\tbody: \"The ratings agencies should have been the first one to detect problems in the housing market. \",\n\t\tuse: \"In his book 'The Undercover Economist', (Harford, J. 2007) Joe Harford points out that etc, etc.\",\n\t\tnotes: \"Need to find other parallels\",\n\t\tsource: {}\n\t}\n]}\n", "text": "Hi,\nThis is my first post and I’m relatively new to Mongo.\nI have three collections - Sources, Tags, and Excerpts.\nA source is anything like a book, url, paper, etc.\nAn excerpt is a extract or quotation taken from a source.\nA tag is simply a way of categorising Excerpts by area of interest.\nWhen I create a new source I want to reference the Source it came from, and add Tags from the Tags collection. This is what I have so far:How do I populate the ‘tags’ field with the IDs of some specific tags I want to attach to this excerpt. for example ‘economics’ and ‘banking’ which already exist in the Tags collection, and similarly, how do I fetch the ID of the source ‘The Undercover Economist’ which I know exists in the Source table.Any help greatly appreciated, thank you.", "username": "Daniel_McLoughlin" }, { "code": "{\n\t\tisSample: false,\n\t\ttags: [ tag1, tag2],\n\t\towner: \"d\",\n\t\ttitle: \"Rating agencies slow to react\",\n\t\tpages: 26,\n\t\tbody: \"The ratings agencies should have been the first one to detect problems in the housing market. \",\n\t\tuse: \"In his book 'The Undercover Economist', (Harford, J. 2007) Joe Harford points out that etc, etc.\",\n\t\tnotes: \"Need to find other parallels\",\n\t\tsource: [{ name : xxxx , source : \"www.example.com\"}],\nexcerpts : [ \"...\",\"...\"]\n\t}\n", "text": "Hi @Daniel_McLoughlin,I think what you need is to query the ids from the collection and use $addToSet update to push them to the arrays.Having said that, it feels like an anti pattern for MongoDB where you can store all this information in the same document which makes query a book/article directly and also index tags which allows you to search based on them:Embedded arrays and objects make your objects map to application needs.Let me know if you have any further questionsPavel", "username": "Pavel_Duchovny" } ]
Referencing other collections using _id
2020-08-21T11:19:22.014Z
Referencing other collections using _id
1,374
null
[ "aggregation" ]
[ { "code": "", "text": "Hi - I need some opinion for my requirement, I have simple case of creating posts, and related comments and likes. For posts I want to build really fast news feed for the users for a particular community. I have following options.I can add all the comments and likes into a single post document. But I fear that might limit based on the max document size? or I should go for 3 collections, one each for post, comments and likes.Should I use MongoDB atlas search pipeline to aggregate results at runtime or I should prepare these counts for total likes and total comments through some other process/mechanism? My fear is the query speed on such aggregates for each post can be concerning, since it might redo aggregation every-time it runs from the scratch.Should I build a separate process to merge aggregate result like this:\nhttps://docs.mongodb.com/manual/core/materialized-views/Can you tell me best ways to accomplish this?", "username": "Tariq_Jawed" }, { "code": "", "text": "Hi @Tariq_Jawed,Considering the described use case and your consideration I would suggest having 2-3 collections:Let me know if you have any further questionspavel", "username": "Pavel_Duchovny" }, { "code": "exports = async function() {\n const mongodb = context.services.get(\"Cluster0\");\n const collection = mongodb.db(\"test\").collection(\"PostActivity\");\n \n // getting last run datetime\n var lastDate = await context.functions.execute(\"getPostAggregateHistory\");\n var currDate = new Date();\n\n\n console.log(`executing aggregatePost with LastRunDateTime: ${lastDate.LastRunDateTime}.`);\n \n return collection.aggregate( [\n { \n $match: { \n DateTime: { \n $gte: lastDate.LastRunDateTime \n } \n } \n },\n { \n $group: { \n _id: \"$PostId\",\n likesCount : { $sum: \"$Like\" },\n commentsCount : { $sum: \"$Comment\"} \n } \n },\n { \n $merge: { \n into: \"PostAggregate\", \n whenMatched: \"replace\"\n } \n }\n ] ).toArray().then(result => {\n console.log(`Successfully ran post-aggregate pipeline`)\n\n // update datetime for next run\n context.functions.execute(\"updatePostAggregateHistory\", currDate)\n return result\n }).catch(err => console.error(`Failed to run post-aggregate pipeline: ${err}`))\n};\n", "text": "Thank you very much for your reply, we have create a POST collection for post contents, then we have an PostActivity collection which holds both comments and likes. I am trying to create an aggregate view to total comments and likes for the post; by grouping and then doing merge on it. See the example belowMongoDB on-demand materialized viewwe are following similar logic to create the aggregation, but the problem we are having is that first time it aggregates fine, but when we try to run it periodically using scheduler, then as shown above in the document, it should only take new records and update the old one, the above example shows exactly that, it add up the new record sum in the aggregate view rather than just replacing it with new sum for existing grouping. For some reason, when I do this kind of grouping on PostId, this method doesn’t add to the result rather replaces it completely with only newly found group sum. But when I use the data wise grouping as shown in the example above, then it works as expected. 
Here is my function which is doing aggregation:", "username": "Tariq_Jawed" }, { "code": "", "text": "just one correction, date wise grouping aggregates the results on subsequent incremental runs fine as document shows,MongoDB on-demand materialized viewbut PostId based grouping replaces the existing record with only the new sum, it doesn’t add up to existing group (if they exist). I also tried merge instead of replace in merge step. The result is same.", "username": "Tariq_Jawed" }, { "code": "on", "text": "Hi @Tariq_Jawed,The $merge command will update the document with the calculated number during the current run. It doesn’t know how to add a number to a merge document. Also you do not have an on clause and a whenNotMatched why?This is the reason I am not in favor of this design when you seperate the information and run all over to calculate a consistent view.Why won’t you use a trigger on post and poatActivity catching updates/inserts and doing a targeted $inc to the counts. Then you can keep counts in the post collection document as they clearly do not need to be in this other collection which is a complete antipattern to MongoDB.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "exports = async function() {\n const mongodb = context.services.get(\"Cluster0\");\n const collection = mongodb.db(\"test\").collection(\"PostActivity\");\n \n var postIds = await context.functions.execute(\"GetPostIdsByGreaterThenCurrentDate\");\n var currDate = new Date();\n \n return collection.aggregate( [\n { \n $match: { \n PostId : { $in : postIds } \n } \n },\n { \n $group: { \n _id: \"$PostId\",\n likesCount : { $sum: \"$Like\" },\n commentsCount : { $sum: \"$Comment\"} \n } \n },\n { \n $merge: { \n into: \"PostAggregate\", \n whenMatched: \"replace\"\n } \n }\n ] ).toArray().then(result => {\n console.log(`Successfully ran post-aggregate pipeline`)\n context.functions.execute(\"updatePostAggregateHistory\", currDate)\n return result\n }).catch(err => console.error(`Failed to run post-aggregate pipeline: ${err}`))\n};\n", "text": "Yes I understand your solution and point; the only thing I am concerned is for each like or comment, you would have to apply trigger, and then update the post collection. Which seems to be 3 operations for each activity type and operation type like delete, update and create like and comments.I am thinking here maybe to schedule a job similar to above, but run it only for distinct post-ids which are changed after the lastRunTime of aggregate job.I think both approaches have their cost, which one do you think would be best for large traffic site?Regards,", "username": "Tariq_Jawed" }, { "code": "", "text": "Hi @Tariq_Jawed,I am in favour of my first solution Where you will add a comment id array in the post document and maintain the counters there.Evrytime a new comment or a like is added you add it in the comment collection and run another update to $inc the counters in the post. You can execute both in a bulk set.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "this makes sense, will it work for parallel requests trying to update same post? 
since many comments or likes could happen at the same time for popular posts, I am afraid it could lead to locking/consistency issues on one document?", "username": "Tariq_Jawed" }, { "code": "", "text": "Hi @Tariq_Jawed,I believe that since $inc is a quick command and the lock is based on a post document I don’t expect the locking be impacting.How many people will create comments and likes during the same 50-100 ms for a single post I expect that a low numberOf course index the filter expression and use unique indexes for fields like postId…Best\nPavel", "username": "Pavel_Duchovny" } ]
Aggregation and Lookup
2020-08-13T22:05:43.482Z
Aggregation and Lookup
4,233
https://www.mongodb.com/…03_2_1024x64.png
[]
[ { "code": "", "text": "Hi, Recently My mongodb showing this error on my system.It is showing me this errormongoer1120×70 13 KBI tried following things alreadyMy node js code is fine as others not facing any issue with the code.Previously it was working fine suddenly it starting throwing this error.Please help me on this, as this error really stalls my development.My system details:Ubuntu 16.04(“Xenial”)", "username": "Avinash_B" }, { "code": "localhost:27017mongodb.version()mongo", "text": "Welcome to the community @Avinash_B!Based on the error message it looks like your application is trying to connect to a local MongoDB server that is currently not running.Downgrade to a lower version like 3.6.17, 4.0.0 etc.I would definitely work out the issue with your current MongoDB installation rather than changing server versions (which would require following the relevant upgrade or downgrade steps).What specific versions of MongoDB server, Node.js driver, and Node.js are you using?Mongo shell allows me to connec to DB Easily, even query on collections, GUI tools like ROBO3T also able to make connection without an error.To be clear: you are able to connect to localhost:27017 from the same environment as your application using the mongo shell and other admin tools?What does db.version() return in the mongo shell?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X,Thanks for reply\nYes, i am able to connect with mongo on mongo shell and following versions, i am using on my system.MongoDB verison\ndb version v4.4.0Node Version\nv8.14.0Following are the packages of mongo i am using, from package.json\n“mongo-heartbeat”: “~1.0.0”,\n“mongodb”: “^3.5.2”,\n“mongodb-core”: “^2.1.17”,\n“mongoose”: “^5.0.9”,", "username": "Avinash_B" }, { "code": "", "text": "Anyone here who can help me with this problem?Here are my mongodb logs logs1298×263 39.1 KBThanks", "username": "Avinash_B" } ]
Unable to connect to mongodb
2020-08-21T05:30:16.573Z
Unable to connect to mongodb
1,952
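When the shell and Robo3T connect but the application does not, a bare-bones connectivity check with the same Node.js driver version the app uses helps isolate whether the problem sits in the driver/mongoose stack or in the server. A rough sketch (the URI and timeout are assumptions; adjust them to the real deployment):

const { MongoClient } = require('mongodb');

async function ping() {
  const client = new MongoClient('mongodb://127.0.0.1:27017', {
    useUnifiedTopology: true,
    serverSelectionTimeoutMS: 5000   // fail fast with a clearer error message
  });
  try {
    await client.connect();
    console.log(await client.db('admin').command({ ping: 1 }));
  } catch (err) {
    console.error('connection test failed:', err.message);
  } finally {
    await client.close();
  }
}

ping();

If this snippet connects while the application still fails, the age of the installed mongoose/mongodb-core packages relative to the 4.4 server is the next thing worth checking.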
null
[]
[ { "code": "", "text": "My web API has been serving binary data (BSON) to a native client. This approach seems efficient since the web API receives the raw BSON from MongoDB, and forwards it to the native client while the data remains in binary form along the way. Now we’re adding a web client. While I found the GitHub - mongodb/js-bson: BSON Parser for node and browser library, I’ve read that BSON is less efficient than JSON in the browser. So why should I use js-bson rather than have my web API serve JSON instead of BSON?", "username": "Adrian_Pinter" }, { "code": "", "text": "The data in a MongoDB database is stored as BSON data types. The driver (e.g., Pyhton language, NodeJS platform, etc., drivers) converts the different data types used in the applications to BSON types and vice-versa.The driver software lets the application use the BSON data as its JSON representation - see MongoDB Extended JSON (v2).", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Prasad_Saya, how are you? That’s interesting that you consider js-bson as a driver – I hadn’t thought about it that way. Please continue your train of thought and answer my question. That is, why should I bother with js-bson when BSON is less efficient than JSON in the browser?", "username": "Adrian_Pinter" }, { "code": "", "text": "Hello.I dont think BSON is meant to be used with a browser. And, I dont think js-bson as a driver.You can tell about a scenario (sample data / field, perhaps) about what is it you are trying to do with the help an example, using this BSON / JSON / JS-BSON.", "username": "Prasad_Saya" } ]
Why use js-bson instead of serving JSON?
2020-08-21T04:13:43.648Z
Why use js-bson instead of serving JSON?
1,660
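One concrete way to apply the Extended JSON suggestion from the thread above: the bson package exposes an EJSON helper, so a web API can return canonical Extended JSON text (which a browser consumes as ordinary JSON) while types such as ObjectId and Date stay round-trippable. A sketch, assuming Node.js on the server and the same bson package bundled for the browser:

const { EJSON, ObjectId } = require('bson');

// Server side: turn a driver result into Extended JSON text.
const doc = { _id: new ObjectId(), createdAt: new Date(), total: 42 };
const body = EJSON.stringify(doc, { relaxed: false });   // canonical mode keeps type information

// Browser side: parse back into typed values instead of plain strings.
const parsed = EJSON.parse(body, { relaxed: false });
console.log(parsed._id instanceof ObjectId);   // true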
null
[ "data-modeling" ]
[ { "code": "", "text": "Where should one focus when building a referral system (let’s say 15 level referral). How to make schema more effective yet simple to understand?", "username": "Ankit_Joshi" }, { "code": "", "text": "Hi @Ankit_Joshi,I am not sure I understand what data will the system store?Do you have users which will have a referral list? Do you need a link as deep as 15 levels? Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny ,Suppose I am building a referral system. My referee used my referral code so he is my level one referee. Other user used my level 1 referee’s referral code, so he’ll be my level 2 referee and so on. I need to link as deep as 50 levels. What should be the most efficient way to do this.", "username": "Ankit_Joshi" }, { "code": "{ userid : ... ,\n level1 : {\n referedUsers : [ userid1, userid2 ...],\n level2 : {\n referedUsers : [ userid3, userid4 ...],\n ....\n { level50 : ... }\n}\n}\n{ userid : ... ,\n level1 : [ userid1, userid2 ...],\nlevel2 : [ userid3, userid4 ...],\n...\nlevel50 : ...\n}\n{ userid : ... ,\n levelRefferals : [ {level : 1 , referenceListId : ...} ,{ level: 2 , referenceListId : ...} ...],\n{\n_id : ...,\nreferrals : [ userid1, userid2, ... ]\n}\n", "text": "Hi @Ankit_Joshi,Well in this case I would try three designs:Referral Lists collection:The one to choose is based on your application way of searching and presenting and updating data.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n referrer: userId,\n referee: [\n { level: 1, refereeList: [ userId1, userId2, .... ] },\n { level: 2, refereeList: [ userId7. userId8, .... ]},\n ....\n ]\n", "text": "Hi @Pavel_Duchovny ,I am going with this structure:Will this be any good?Best\nAnkit", "username": "Ankit_Joshi" }, { "code": "", "text": "Hi @Pavel_Duchovny ,Regarding the Field based model you described. Do I have to create static level upto 50 or can it be dynamic?Best\nAnkit", "username": "Ankit_Joshi" }, { "code": "", "text": "Hi @Ankit_Joshi,Regarding the Field based model you described. Do I have to create static level upto 50 or can it be dynamic?In MongoDB the schema is flexible so you can add fields dynamically as user profile grows.The problem I have with lots of potentially large arrays as I expect in the chosen model. Considering that a user might end up with this design in an array of thousands of elements and nested elements which is bad for performance and scalability.Additionally, you can endup reaching the 16mb document size limit this way.In my model the actual large arrays are stored in seperate documents and the levels only maintain the pointer id.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Suggestions on level based referral schema structure
2020-08-19T20:33:06.433Z
Suggestions on level based referral schema structure
3,891
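A rough mongosh sketch of the "referral lists" variant suggested above, where the referrer document keeps only small { level, referenceListId } pointers and the potentially huge arrays live in their own documents. The collection names (users, referralLists) are illustrative, not from the thread:

// Add a new level-2 referee to the list document, creating the list on first use.
const list = db.referralLists.findOneAndUpdate(
  { referrerId: "u12", level: 2 },
  { $addToSet: { referrals: "u14" } },            // the large array lives here
  { upsert: true, returnNewDocument: true }
);

// Register the pointer on the referrer only if that level is not linked yet.
db.users.updateOne(
  { userId: "u12", "levelReferrals.level": { $ne: 2 } },
  { $push: { levelReferrals: { level: 2, referenceListId: list._id } } }
);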
null
[ "scala" ]
[ { "code": "2020-08-19 04:08:50.628 | ERROR | | 9818284 ms| c.m.i.c.t.a.AsynchronousTlsChannelGroup | error in operation\njava.lang.IllegalStateException: inPlain buffer insufficient despite having capacity of 32768\n at com.mongodb.internal.connection.tlschannel.impl.BufferHolder.enlarge(BufferHolder.java:101)\n at com.mongodb.internal.connection.tlschannel.impl.TlsChannelImpl.unwrapLoop(TlsChannelImpl.java:309)\n at com.mongodb.internal.connection.tlschannel.impl.TlsChannelImpl.readAndUnwrap(TlsChannelImpl.java:612)\n at com.mongodb.internal.connection.tlschannel.impl.TlsChannelImpl.read(TlsChannelImpl.java:235)\n at com.mongodb.internal.connection.tlschannel.ClientTlsChannel.read(ClientTlsChannel.java:168)\n at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup.readHandlingTasks(AsynchronousTlsChannelGroup.java:594)\n at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup.doRead(AsynchronousTlsChannelGroup.java:559)\n at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup.lambda$processRead$5(AsynchronousTlsChannelGroup.java:471)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n at java.lang.Thread.run(Thread.java:748)\n", "text": "We are using the scala driver version 4.1.0 with TLS enabled. Our application runs smoothly for several hours before randomly erroring with the following log:this error is thrown at mongo-java-driver/AsynchronousTlsChannelGroup.java at 7cc4be219a15b08c69dfba8d02c3e014f808b90d · mongodb/mongo-java-driver · GitHuband logged by mongo-java-driver/AsynchronousTlsChannelGroup.java at 7cc4be219a15b08c69dfba8d02c3e014f808b90d · mongodb/mongo-java-driver · GitHubThe problem is, after this error is thrown, the Future never completes and the thread hangs indefinitely. No exception is thrown for us to recover from, and our application grinds to a halt.The error occurs at random, we have not been able to reproduce it on demand.We thought this was related to Fix AsynchronousTlsChannelGroup closing by rozza · Pull Request #385 · rozza/mongo-java-driver · GitHub / https://jira.mongodb.org/browse/JAVA-3730 but upgrading the driver to 4.1.0 did not fix the problem. It sounds related because the pending read operation appears to hang indefinitely, and if it is related to unexpected network / socket issues then it would be difficult to reproduce.", "username": "Ryan_Foltz" }, { "code": "", "text": "HI @Ryan_Foltz,Thanks for posting, this is something we’d need to look into to understand more about what is happening here. Could you post a ticket to the JAVA jira project.Looks like there is a bug in the tlschannel library.Ross", "username": "Ross_Lawley" }, { "code": "", "text": "Thanks, I’ll open the ticket in a minute.For the record, we worked around this by configuring a 10 second socket timeout. We still see the error occasionally, but it no longer brings everything to a halt.", "username": "Ryan_Foltz" } ]
TLS inPlain buffer overflow error hangs indefinitely in scala/java driver 4.1.0
2020-08-19T20:36:04.420Z
TLS inPlain buffer overflow error hangs indefinitely in scala/java driver 4.1.0
3,726
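The workaround mentioned in the last post (a 10 second socket timeout) can be set through the standard connection string options rather than driver-specific builder code; the host and credentials below are placeholders:

mongodb://appUser:appPass@example-host:27017/?tls=true&socketTimeoutMS=10000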
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.0.20 is out and is ready for production deployment. This release contains only fixes since 4.0.19, and is a recommended upgrade for all 4.0 users.\nFixed in this release:4.0 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.0.20 is released
2020-08-21T18:23:12.474Z
MongoDB 4.0.20 is released
2,194
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.9 is out and is ready for production deployment. This release contains only fixes since 4.2.8, and is a recommended upgrade for all 4.2 users.\nFixed in this release:4.2 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.9 is released
2020-08-21T18:20:06.791Z
MongoDB 4.2.9 is released
1,832
null
[ "connecting", "rust" ]
[ { "code": "with_uri_strthread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: ServerSelectionError { message: \"Server selection timeout: No available servers. Topology: { Type: Unknown, Servers: [ { Address: <server_address>:27017, Type: Unknown, Error: tls handshake eof }, ] }\" } }', src/lib.rs:715:18\nmongo", "text": "I’m trying to update my app to v1.0.0 from using v0.3.12 and I want to use the sync API for now (just because It’s easier and is what I’ve already built for). I’m trying to use Client with_uri_str so that I can just pass it a mongo connection string and be on my way, but I’m getting this panic when I do so:My server apparently requires tls, but It’s unclear to me how to enable it with sync API. I shouldn’t need any certs or anything since my connection string works fine in the mongo cli. How does one go about connecting using TLS this way? Can I somehow pass client options while using a URL? Also wondering about parse, but that’s async, so I can’t use it AFAIK. Any help is appreciated!", "username": "wilnil" }, { "code": "truetls=false", "text": "Hi Will,When you provide a URI to configure the Client, tls is automatically set to true. My theory is that your server isn’t configured with TLS support, which would explain why your TLS handshake is failing. To fix this you can add the param tls=false to your URI, although it would probably be better to enable TLS on your server for security instead.Here’s the documentation on MongoDB URIsHope this helps!", "username": "Mark_Smith" }, { "code": "eofthread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: ServerSelectionError { message: \"Server selection timeout: No available servers. Topology: { Type: Unknown, Servers: [ { Address: <server_address>:27017, Type: Unknown, Error: unexpected end of file }, ] }\" } }', src/lib.rs:716:18", "text": "Oh, okay, cool!So I did that, but now it’s giving me a different, and slightly more vague eof-related error.thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: ServerSelectionError { message: \"Server selection timeout: No available servers. Topology: { Type: Unknown, Servers: [ { Address: <server_address>:27017, Type: Unknown, Error: unexpected end of file }, ] }\" } }', src/lib.rs:716:18Thoughts?", "username": "wilnil" }, { "code": "ssl=truetls=trueMongoDB shell version v4.2.6\nconnecting to: mongodb://<ip_address>:27017/<db_name>?compressors=disabled&gssapiServiceName=mongodb&tls=false\n2020-08-16T16:06:40.982+0000 I NETWORK [js] DBClientConnection failed to receive message from <ip_address>:27017 - HostUnreachable: Connection closed by peer\n2020-08-16T16:06:40.983+0000 E QUERY [js] Error: network error while attempting to run command 'isMaster' on host '<ip_address>:27017' :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-08-16T16:06:40.986+0000 F - [main] exception: connect failed\n2020-08-16T16:06:40.986+0000 E - [main] exiting with code 1\n", "text": "Okay, so I did some experiments with the mongo CLI tool, and found that I had to have ssl=true or tls=true in order to connect. Otherwise, it’d give me an error that looked something like this:Not sure what’s wrong now. I should just be able to connect with that URL, just like I can with the CLI, right?", "username": "wilnil" }, { "code": "tls=truetls=true", "text": "I’m confused! Didn’t you say you could connect with tls=true in the URI? 
Or was that only in the CLI, and not from the Rust driver?When you try to connect with the Rust driver, with tls=true, what’s the error that you get now?", "username": "Mark_Smith" }, { "code": "mongodb://<db>:<password>@<ip_address>/<db>?ssl=true\nthread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: ServerSelectionError { message: \"Server selection timeout: No available servers. Topology: { Type: Unknown, Servers: [ { Address: <ip_address>:27017, Type: Unknown, Error: tls handshake eof }, ] }\" } }', src/lib.rs:716:18\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\n let coll = mongo_client\n .database(\"<db>\")\n .collection(&collection);\n let mut namespace_table = Vec::new(); // The vec of namespace information we're gonna send back.\n\n // Find the document and receive a cursor\n dbg!(&coll);\n let cursor = coll.find(None, None).unwrap();\npanic", "text": "Apologies, it seems I need tls (or SSL, which I guess is an alias?) to be enabled to connect. So the connection string I think I need is this:but that gives me thisWhen I do this:And I believe it’s panicing on that unwrap.The source code can be found here, if you want to take a look.Thanks again!", "username": "wilnil" }, { "code": "", "text": "Um, would this possibly be happening on Ubuntu?", "username": "Jack_Woehr" }, { "code": "", "text": "Why, yes. 18.04 Server, I believe. Why? Is this a known bug? ", "username": "wilnil" }, { "code": "sudo apt install ca-cacert", "text": "I’ve found weirdness with the certs. You might try sudo apt install ca-cacert", "username": "Jack_Woehr" }, { "code": "thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: ServerSelectionError { message: \"Server selection timeout: No available servers. Topology: { Type: Unknown, Servers: [ { Address: <ip_address>:27017, Type: Unknown, Error: tls handshake eof }, ] }\" } }', src/lib.rs:716:18\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\n", "text": "No dice. Perhaps I should try running this on my Fedora box, just for the lolz.Weather Update: That did not work. Same error:", "username": "wilnil" }, { "code": "export SSL_CERT_FILE=$(python3 -c \"import certifi; print(certifi.where())\")mongomongosh", "text": "What I actually did that made things work is this:\nexport SSL_CERT_FILE=$(python3 -c \"import certifi; print(certifi.where())\")\nand then all my Mongo stuff started working, mongo and mongosh and PHP and Python.\nI dunno, something is flakey with the certs, not sure what the problem is, but I’ve got it working, and it wasn’t working before ", "username": "Jack_Woehr" }, { "code": "", "text": "No dice, unfortunately. Same error.", "username": "wilnil" }, { "code": "", "text": "Weird. I’m out of stupid ideas Good luck with that!", "username": "Jack_Woehr" }, { "code": "RUST_BACKTRACE=1", "text": "@wilnil Could you run this again with RUST_BACKTRACE=1 and paste the stack trace?", "username": "Mark_Smith" }, { "code": "", "text": "@wilnil It’s worth noting that the mongodb crate has been rewritten since 0.3.12. It would be worth looking at the 1.1 release and maybe porting your code. I don’t think it would take much more effort than a few search&replaces, and you’ll be getting a fully supported up-to-date driver.", "username": "Mark_Smith" }, { "code": "thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: ServerSelectionError { message: \"Server selection timeout: No available servers. 
Topology: { Type: Unknown, Servers: [ { Address: <ip_address>:27017, Type: Unknown, Error: tls handshake eof }, ] }\" } }', src/lib.rs:716:18\nstack backtrace:\n 0: backtrace::backtrace::libunwind::trace\n at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/libunwind.rs:86\n 1: backtrace::backtrace::trace_unsynchronized\n at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/mod.rs:66\n 2: std::sys_common::backtrace::_print_fmt\n at src/libstd/sys_common/backtrace.rs:78\n 3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt\n at src/libstd/sys_common/backtrace.rs:59\n 4: core::fmt::write\n at src/libcore/fmt/mod.rs:1076\n 5: std::io::Write::write_fmt\n at src/libstd/io/mod.rs:1537\n 6: std::sys_common::backtrace::_print\n at src/libstd/sys_common/backtrace.rs:62\n 7: std::sys_common::backtrace::print\n at src/libstd/sys_common/backtrace.rs:49\n 8: std::panicking::default_hook::{{closure}}\n at src/libstd/panicking.rs:198\n 9: std::panicking::default_hook\n at src/libstd/panicking.rs:218\n 10: std::panicking::rust_panic_with_hook\n at src/libstd/panicking.rs:486\n 11: rust_begin_unwind\n at src/libstd/panicking.rs:388\n 12: core::panicking::panic_fmt\n at src/libcore/panicking.rs:101\n 13: core::option::expect_none_failed\n at src/libcore/option.rs:1272\n 14: core::result::Result<T,E>::unwrap\n at /rustc/d3fb005a39e62501b8b0b356166e515ae24e2e54/src/libcore/result.rs:1005\n 15: shelflife::get_db\n at src/lib.rs:716\n 16: shelflife::view_db\n at src/lib.rs:757\n 17: shelflife::main\n at src/bin/main.rs:233\n 18: std::rt::lang_start::{{closure}}\n at /rustc/d3fb005a39e62501b8b0b356166e515ae24e2e54/src/libstd/rt.rs:67\n 19: std::rt::lang_start_internal::{{closure}}\n at src/libstd/rt.rs:52\n 20: std::panicking::try::do_call\n at src/libstd/panicking.rs:297\n 21: std::panicking::try\n at src/libstd/panicking.rs:274\n 22: std::panic::catch_unwind\n at src/libstd/panic.rs:394\n 23: std::rt::lang_start_internal\n at src/libstd/rt.rs:51\n 24: std::rt::lang_start\n at /rustc/d3fb005a39e62501b8b0b356166e515ae24e2e54/src/libstd/rt.rs:67\n 25: main\n 26: __libc_start_main\n 27: _start\nnote: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.\nlet coll = mongo_client\n .database(\"shelflife\")\n .collection(&collection);\nlet mut namespace_table = Vec::new(); // The vec of namespace information we're gonna send back.\n\n// Find the document and receive a cursor\ndbg!(&coll);\nlet cursor = coll.find(None, None).unwrap();\n", "text": "Here you go:And as to your second point, I thought it would work the way it is, but at this point, yeah. I’ll have to go through it and port my logic.It’s this block here that it’s failing on. Is there a “new” way to do that?I see that there is a sync api example. Do you think it’ll work if I copy it? I’ll try it out after work today and report back. Thanks for the help! ", "username": "wilnil" }, { "code": "", "text": "A post was split to a new topic: Rust driver 2.0.0 problems when ssl=true and tls=true", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Rust driver v1.0.0 tls handshake eof
2020-08-12T04:53:45.285Z
Rust driver v1.0.0 tls handshake eof
6,726
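One avenue the thread leaves open: a "tls handshake eof" that only the driver hits is frequently a certificate-trust problem. If the server presents a certificate signed by a private or self-signed CA, pointing the client at the CA bundle through the standard tlsCAFile URI option is worth trying; the path, host and credentials here are placeholders, and whether this resolves that particular setup is an assumption:

mongodb://appUser:appPass@example-host:27017/mydb?tls=true&tlsCAFile=/etc/ssl/certs/my-ca.pem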
null
[ "aggregation" ]
[ { "code": "customerIdnullcustomerId$ifNull", "text": "Hi,\nOne thing came in mind that if we have a collection and in that collection i have multiple documents and there is a field named customerId for example which is null and i want to add something in customerId field. i.e./1/\n{\n“userId” : “u12”\n“customerId” : null\n“province” : “KPK”\n}\n/2/\n{\n“userId” : “u13”\n“customerId” : null\n“province” : “KPK”\n}\n/3/\n{\n“userId” : “u14”\n“customerId” : null\n“province” : “PUNJAB”\n}And i want to replace all the null with a value, for that i can use $ifNull operator but it will work for online one document at a time(i think) . I want to know that is there is a possibility in MmongoDB to add data in incremental form i.e./1/\n{\n“userId” : “u12”\n“customerId” : “c101”\n“province” : “KPK”\n}\n/2/\n{\n“userId” : “u13”\n“customerId” : “c102”\n“province” : “KPK”\n}\n/3/\n{\n“userId” : “u14”\n“customerId” : “c103”\n“province” : “PUNJAB”\n}", "username": "Nabeel_Raza" }, { "code": "“c101\"null", "text": "“customerId” : “c101\"Yes, you can add an id value like “c101\" replacing the existing null value. It can be done using an aggregation query (or an update with an aggregation). I am assuming the values are in serial like “c101”, “c102”, “c103”, etc.", "username": "Prasad_Saya" }, { "code": "", "text": "i want to replace all the null with a value,on the other hand you may leave the customerId completely out until it is not set and add it as soon as it is available. This can be done with $exist and/or update or an aggregation query", "username": "michael_hoeller" }, { "code": "customerID", "text": "I got you, you are saying that i can replace all the nulls with a value but here throughout the value will be same. what i want is that the customerID should be in incremental form.", "username": "Nabeel_Raza" }, { "code": "", "text": "It involves generating an array with the range of numbers (e.g., 101, 102, … 999, etc.). Then use this array of numbers and an array of the documents that needs to be updated with the customer id. Iterate the array and merge the document with the corresponding customer number (the number can be concatenated with a prefix “c” to form the id like “c101”). This will get the updated documents in an array (which can be unwound, etc. ).", "username": "Prasad_Saya" } ]
Add incremental id on documents in a collection
2020-08-20T05:24:31.868Z
Add incremental id on documents in a collection
1,979
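A small mongosh sketch of the incremental assignment described in the last reply, using an in-memory counter and a bulk operation; the collection name and the starting number 101 are assumptions:

let next = 101;
const bulk = db.customers.initializeOrderedBulkOp();

// Walk the documents that still have a null customerId in a stable order
// and queue one targeted update per document.
db.customers.find({ customerId: null }).sort({ userId: 1 }).forEach(doc => {
  bulk.find({ _id: doc._id }).updateOne({ $set: { customerId: "c" + next++ } });
});

bulk.execute();

If the job may run more than once, persisting the last assigned number (for example in a small counters collection) avoids handing out duplicate ids.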
null
[]
[ { "code": "Mutation: {\n register: async (_, { data }) => {\n\n // Start a mongo session & transaction\n const session = await mongoose.startSession()\n session.startTransaction()\n\n try {\n\n // Hash password and create user\n const hashedPassword = await bcrypt.hash(data.password, 12)\n const user = await User.create(\n [{ ...data, password: hashedPassword }],\n { session }\n )\n \n // Create array of items to add to history entries\n const historyEntries = [\n {\n type: 'userAction',\n user: user.id,\n object: 'profile',\n instance: user.id,\n action: 'set',\n property: 'username',\n details: user.username\n },\n {\n type: 'userAction',\n user: user.id,\n object: 'profile',\n instance: user.id,\n action: 'set',\n property: 'password'\n }\n ]\n\n // If the user included an email, add it to historyEntries array\n if (user.email) {\n historyEntries.push({\n type: 'userAction',\n user: user.id,\n object: 'profile',\n instance: user.id,\n action: 'set',\n property: 'email',\n details: user.email\n })\n }\n\n // If the user included a mobile, add it to historyEntries array\n if (user.mobile) {\n historyEntries.push({\n type: 'userAction',\n user: user.id,\n object: 'profile',\n instance: '1',\n action: 'set',\n property: 'mobile',\n details: user.mobile\n })\n }\n\n // Create a history entry for each item in historyEntries\n await HistoryEntry.create(historyEntries, { session })\n\n // commit the changes if everything was successful\n await session.commitTransaction()\n return {\n ok: true,\n user\n }\n } catch (err) {\n // if anything fails above, rollback the changes in the transaction\n await session.abortTransaction()\n return formatErrors(err)\n } finally {\n // end the session\n session.endSession()\n }\n }\n}\n", "text": "I am trying to create an audit trail using Apollo Server and Mongoose. When a user initially registers and I create a document for them in my users collection, I can successfully add a document to my history collection for each piece of data they provided (username, password, email, etc) . And for each one, I am able to include the id for the document that was created for the user (the document in the user collection).However, when I add a transaction in (see below), the userId for the user document comes back as undefined. 
I am assuming that the id for a document does not get created until the entire transaction has been completed?Any ideas?", "username": "Gregg_Squire" }, { "code": "Mutation: {\n register: async (_, { data }) => {\n\n // Start a mongo session & transaction\n const session = await mongoose.startSession()\n session.startTransaction()\n\n try {\n\n // Hash password and create user\n const hashedPassword = await bcrypt.hash(data.password, 12)\n const user = await User.create(\n [{ ...data, password: hashedPassword }],\n { session }\n )\n \n // Create array of items to add to history entries\n const historyEntries = [\n {\n type: 'userAction',\n user: user.id,\n object: 'profile',\n instance: user.id,\n action: 'set',\n property: 'username',\n details: user.username\n },\n {\n type: 'userAction',\n user: user.id,\n object: 'profile',\n instance: user.id,\n action: 'set',\n property: 'password'\n }\n ]\n\n // If the user included an email, add it to historyEntries array\n if (user.email) {\n historyEntries.push({\n type: 'userAction',\n user: user.id,\n object: 'profile',\n instance: user.id,\n action: 'set',\n property: 'email',\n details: user.email\n })\n }\n\n // If the user included a mobile, add it to historyEntries array\n if (user.mobile) {\n historyEntries.push({\n type: 'userAction',\n user: user.id,\n object: 'profile',\n instance: '1',\n action: 'set',\n property: 'mobile',\n details: user.mobile\n })\n }\n\n // Create a history entry for each item in historyEntries\n await HistoryEntry.create(historyEntries, { session })\n\n // commit the changes if everything was successful\n await session.commitTransaction()\n return {\n ok: true,\n user\n }\n } catch (err) {\n // if anything fails above, rollback the changes in the transaction\n await session.abortTransaction()\n return formatErrors(err)\n } finally {\n // end the session\n session.endSession()\n }\n }\n}\n", "text": "It has been 5 days, and I still haven’t received a reply in mongoDB’s system. So I went to Stack Overflow, and I got my question answered within a few hours.", "username": "Gregg_Squire" }, { "code": "", "text": "It has been 5 days, and I still haven’t received a reply in mongoDB’s system. So I went to Stack Overflow, and I got my question answered within a few hours.Hi @Gregg_Squire ,Thanks for sharing the answer to your question.Similar to the Stack Exchange network, answers in the MongoDB Developer Community Forums are provided by community members without any specific SLA or guarantee of response. You may have to be extra patient for responses during weekend and holiday periods, or for questions requiring more specific expertise.Stack Overflow reaches a much larger community of developers who may be able to help with general programming questions, while the MongoDB community has more specific experience and expertise for questions relating to MongoDB (including MongoDB engineering and product team members). If you do end up posting on multiple sites, it is always helpful to include a link for context.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Retrieve an id from Create operation during a transaction
2020-08-12T20:05:13.429Z
Retrieve an id from Create operation during a transaction
4,574
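For readers who hit the same symptom: Mongoose's Model.create() resolves to an array when it is given an array, which is required in order to pass { session }, so user.id comes back undefined because user is an array, not because _id generation waits for the commit (Mongoose assigns _id client-side when the document is built). A minimal adjustment, reusing the names from the code above:

// Model.create() returns an array when passed an array, so destructure the result.
const [user] = await User.create(
  [{ ...data, password: hashedPassword }],
  { session }
);

console.log(user.id);   // defined immediately, even before commitTransaction()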
null
[ "field-encryption" ]
[ { "code": "", "text": "Can I feature request Hashicorp Vault as a KMS for Mongo Client Side Encryption?Is there a board somewhere I can +1 this or see when the ETA would be?Thanks", "username": "L_B" }, { "code": "", "text": "Hi L_B,You can do +1 here - CSFLE - Integration with more KMS providers like Hashicorp Vault – MongoDB Feedback EngineAlso, just wanted to check if you could find a way around till this feature is available?Thanks,\nAnu", "username": "Anu_Madan" } ]
Hashicorp Vault as KMS for Client Side Encryption?
2020-06-30T03:16:23.045Z
Hashicorp Vault as KMS for Client Side Encryption?
2,463
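Until a first-class Vault integration ships, one workaround pattern is to store the 96-byte CSFLE master key in Vault and hand it to the driver's existing local KMS provider at startup. The sketch below assumes Node.js, the node-fetch package, a KV v2 secret path, and environment variables for the Vault token and MongoDB URI; all of these names and paths are placeholders:

const fetch = require('node-fetch');
const { MongoClient } = require('mongodb');

async function buildEncryptedClient() {
  // Assumed: the base64-encoded 96-byte master key was written to Vault beforehand.
  const res = await fetch('https://vault.example.com/v1/secret/data/csfle-master-key', {
    headers: { 'X-Vault-Token': process.env.VAULT_TOKEN }
  });
  const keyBase64 = (await res.json()).data.data.key;   // KV v2 nests the payload under data.data

  return new MongoClient(process.env.MONGODB_URI, {
    useUnifiedTopology: true,
    autoEncryption: {
      keyVaultNamespace: 'encryption.__keyVault',
      kmsProviders: { local: { key: Buffer.from(keyBase64, 'base64') } }
    }
  });
}

Automatic encryption additionally needs the mongodb-client-encryption package and an Atlas or Enterprise deployment; explicit encryption through ClientEncryption can use the same kmsProviders block.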
null
[ "golang" ]
[ { "code": "", "text": "why not use count?why using aggregation group?", "username": "Zheng_Ficoto" }, { "code": "count()count()countDocuments()estimatedDocumentCount()count", "text": "Hi @Zheng_Ficoto, and welcome to the forums!why not use count?why using aggregation group?Actually this applies to all MongoDB drivers, not only MongoDB Go driver.MongoDB drivers compatible with the v4.0+ features deprecate their respective cursor and collection count() APIs in favour of two new APIs:The reasoning behind the deprecation is because the count() method, when used without a query predicate returns results based on the collection’s metadata which results in an approximate count. On a sharded cluster, the resulting approximation count may not correctly filter out orphaned documents. In addition, after an unclean server shutdown the resulting approximation count may also be incorrect.The new API names were chosen to make it clear how they behave and exactly what they do.The countDocuments() helper does not use the collection metadata, but counts the documents that match the provided query filter using an aggregation pipeline. While the estimatedDocumentCount() helper returns an estimate of the count of documents in the collection using collection metadata (wraps the count command), rather than counting the documents or consulting an index. See also Specifications: CRUD Count API Details for more information.Back to MongoDB Go driver, you have two choices to use depending on your use case: Collection.CountDocuments() and Collection.EstimatedDocumentCount().Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
About golang mongo driver CountDocuments
2020-08-19T03:48:01.933Z
About golang mongo driver CountDocuments
6,205
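The distinction described above is easy to see from the shell, where the same two helpers exist: countDocuments() counts the documents that actually match a filter (an aggregation under the hood), while estimatedDocumentCount() reads collection metadata. A quick mongosh illustration with a hypothetical orders collection:

// Counts real documents matching the filter.
db.orders.countDocuments({ status: "shipped" })

// Roughly the pipeline countDocuments() builds internally:
db.orders.aggregate([
  { $match: { status: "shipped" } },
  { $group: { _id: null, n: { $sum: 1 } } }
])

// Fast, metadata-based estimate; takes no filter.
db.orders.estimatedDocumentCount()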
null
[]
[ { "code": "", "text": "Which file systems does atlas use in order to achieve snapshot backups?I was under the impression that btfrs is used, which has this feature, but then the responses on my post about btfrs suggest it shouldn’t be used.", "username": "Dushyant_Bangal" }, { "code": "", "text": "Hi @Dushyant_Bangal,Atlas uses Cloud Provider capabilities to snapshot the storage. Each cloud might have its own mechanics, however. all of them are reliable and proven to work.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny, can you provide info about which filesystem is used underneath?", "username": "Dushyant_Bangal" }, { "code": "", "text": "Hi @Dushyant_Bangal,Each cloud provider have its own disk types (EBS, SSD, Premium storage).They are mounted as xfs file systems according to our best practices.Best\nPavel", "username": "Pavel_Duchovny" } ]
Just Curious: How does atlas achieve snapshot backup?
2020-08-19T10:33:45.607Z
Just Curious: How does atlas achieve snapshot backup?
1,548
null
[]
[ { "code": "", "text": "Hi,\nI am unable to connect compass to mongo shell.\nKindly help me to be out of this.Thanking You\nDivya", "username": "Divya_Suman" }, { "code": "", "text": "It should be\nshow collections\nshow dbs", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanx Ramachandra_37567 it worked for me.", "username": "Divya_Suman" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Unable to Connect Compass to Mongo shell
2020-08-20T14:15:47.753Z
Unable to Connect Compass to Mongo shell
1,240
https://www.mongodb.com/…c_2_1024x588.png
[]
[ { "code": "", "text": "MongoDB charts are suddenly not showing any data (after many months of working fine) as seen in this image:\n\nimage2794×1606 167 KB\nAlso, not working for embedding in webpage: \nimage1486×1000 35.2 KB\nThe request shows the following error message:\n{“errorCode”:-1,“simple”:“Error loading data for this chart (error code: -1).”,“verbose”:“Error loading data for this chart (error code: -1). Unknown Error. See https://dochub.mongodb.org/core/charts-embedding-error-codes for details.”}The error code is “unkown error”, so not much I can guess about what’s going on.Is something wrong with mongoCharts?", "username": "Nicolas_Oteiza" }, { "code": "", "text": "Hi @Nicolas_Oteiza, sorry to see you’re having problems. There is an issue with Atlas hostname resolution impacting a small number of clusters. We are working to resolve this for everybody, but in the meantime there is a workaround that should fix this. If you go to Atlas and add a temporary IP whitelist entry (for any IP address), that should fix the issue immediately. Please give that a go and let me know if it works.Tom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Charts not rendering charts
2020-08-21T00:07:11.119Z
MongoDB Charts not rendering charts
2,928
null
[ "graphql" ]
[ { "code": "location: {\n type: \"Point\",\n coordinates: [-73.856077, 40.848447]\n}\ndb.places.find(\n {\n location:\n { $near :\n {\n $geometry: { type: \"Point\", coordinates: [ -73.9667, 40.78 ] },\n $minDistance: 1000,\n $maxDistance: 5000\n }\n }\n }\n)\n", "text": "Hi,I like to find a way to use graphql geospatial query.in mongo:Please let me how to do in new GraphQLThank youRadovan", "username": "Radovan_Stas" }, { "code": "", "text": "Hi @Radovan_Stas,I think the best way to do it is by building a custom resolver function which will receive all input and run the $near query and return the results.Best\nPavel", "username": "Pavel_Duchovny" } ]
GeoJSON $near in GraphQL query
2020-08-20T02:07:25.535Z
GeoJSON $near in GraphQL query
2,643
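A rough sketch of the custom-resolver approach suggested in the reply above, written as an Atlas (Realm) function that a GraphQL custom resolver could invoke. The service, database, and collection names are placeholders, the input shape is an assumption, and a 2dsphere index on location is required for $near:

exports = async function(input) {
  // input is expected to carry { longitude, latitude, minDistance, maxDistance }
  const places = context.services.get("mongodb-atlas").db("mydb").collection("places");

  return places.find({
    location: {
      $near: {
        $geometry: { type: "Point", coordinates: [input.longitude, input.latitude] },
        $minDistance: input.minDistance,
        $maxDistance: input.maxDistance
      }
    }
  }).toArray();
};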