Columns: image_url (string, 113-131 chars, or null), tags (sequence), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k)
null
[ "server", "installation" ]
[ { "code": "", "text": "I am unable to install mongodb-org 4.2 for Ubuntu Focal. It appears the server repo is missing: https://repo.mongodb.org/apt/ubuntu/dists/focal/mongodb-org/4.2/multiverse/binary-amd64/", "username": "Justin_B" }, { "code": "", "text": "No, it is not supported on this platform. MongoDB 4.2 Community Edition supports the following 64-bit Ubuntu LTS (long-term support) releases on x86_64 architecture: 18.04 LTS (“Bionic”) and 16.04 LTS (“Xenial”). MongoDB only supports the 64-bit versions of these platforms. MongoDB 4.2 Community Edition on Ubuntu also supports the ARM64 and s390x architectures on select platforms.", "username": "chris" }, { "code": "", "text": "Welcome to the MongoDB community @Justin_B! As @chris noted, Ubuntu Focal (20.04) isn’t a supported platform for official MongoDB 4.2 packages. Ubuntu Long Term Support (LTS) releases like Focal are generally supported starting from the next major MongoDB server release following the LTS release, which in this case would be MongoDB 4.4 (July 2020). I’d recommend installing MongoDB 4.4 instead. For information on what has changed, please see the Release Notes for MongoDB 4.4. Regards,\nStennie", "username": "Stennie_X" } ]
MongoDB-org 4.2 repo for Ubuntu Focal missing
2021-01-21T20:20:08.088Z
MongoDB-org 4.2 repo for Ubuntu Focal missing
3,923
null
[ "security", "configuration" ]
[ { "code": "", "text": "We have a sharded cluster that we want to enable authentication for. However, we would like to phase it in by creating an account and configuring it for a couple app servers running nodejs/mongoose. Once we are confident in the configs, then we want to deploy to all the app servers than enable authentication enforcement on the sharded cluster… is this possible? Any tips on how to enable authentication on a running mongodb cluster would be appreciated. Especially if the goal is zero downtime.", "username": "AmitG" }, { "code": "security.transitionToAuth", "text": "Hi @AmitG,You can use the security.transitionToAuth configuration parameter (MongoDB 3.4+) to perform a rolling upgrade to enable authentication. There’s a tutorial in the documentation: Update Sharded Cluster to Keyfile Authentication (No Downtime).The link I’ve provided is to the latest production version of MongoDB (currently 4.4), but if you are using an older version of MongoDB server please select the matching MongoDB manual version from the navigation near the top left of this tutorial page.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you for this Stennie. Just to clarify, does this configuration only apply for internal connections between MongoD and MongoS instances? Or will this allow us to transitionToAuth between a nodejs client and MongoS instance as well?", "username": "AmitG" }, { "code": "security.transitionToAuthmongodmongossecurity.transitionToAuthtransitionToAuthtransitionToAuthmlaunchlaunching: \"mongod\" on port 27018\nlaunching: \"mongod\" on port 27019\nlaunching: \"mongod\" on port 27020\nlaunching: config server on port 27021\nreplica set 'configRepl' initialized.\nreplica set 'shard01' initialized.\nlaunching: mongos on port 27017\nadding shards. can take up to 30 seconds...\nsent signal Signals.SIGTERM to 5 processes.\nlaunching: config server on port 27021\nlaunching: \"mongod\" on port 27018\nlaunching: \"mongod\" on port 27019\nlaunching: \"mongod\" on port 27020\nlaunching: mongos on port 27017\nUsername \"user\", password \"password\"\n", "text": "Just to clarify, does this configuration only apply for internal connections between MongoD and MongoS instances?HI @AmitG,There are two key aspects of auth security:The security.transitionToAuth setting makes Authentication optional and does not enforce Authorization (so all clients have full access):A mongod or mongos running with security.transitionToAuth does not enforce user access controls. Users may connect to your deployment without any access control checks and perform read, write, and administrative operations.The implication of transitionToAuth for client/driver connections is that:You can use this setting to transition your Node clients to using authentication, with the caveat that access control will not be enforced until you finish setting up your deployment and remove transitionToAuth.I recommend testing out this admin scenario in a development/staging environment. I find mlaunch (an open source community tool) extremely convenient for standing up local test deployments.For example, to create a single shard test cluster with access control and an initial user:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This is a perfect explanation. Also, thank you for the pointer to mlaunch!", "username": "AmitG" } ]
Incrementally enabling authentication
2021-01-19T20:05:12.702Z
Incrementally enabling authentication
2,205
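While a cluster runs with security.transitionToAuth, clients may connect with or without credentials, which is what makes the phased rollout described in the thread above possible. A minimal sketch of the client-side half of that transition follows, in PyMongo rather than the thread’s Node.js/Mongoose stack; the host name, user name and password are placeholders, not values from the thread.

# Sketch: moving application clients onto authenticated connections while the
# cluster still runs with security.transitionToAuth. Host, user and password
# below are placeholder values, not from the thread.
from pymongo import MongoClient

# 1. Before the transition the app connects without credentials.
legacy_client = MongoClient("mongodb://mongos1.example.net:27017")
legacy_client.appdb.command("ping")

# 2. While auth is still optional, create the application user once.
admin_client = MongoClient("mongodb://mongos1.example.net:27017")
admin_client.admin.command(
    "createUser", "appUser",
    pwd="appPassword",
    roles=[{"role": "readWrite", "db": "appdb"}],
)

# 3. Redeploy the app with credentials; this keeps working both during the
#    transition and after access control is fully enforced.
auth_client = MongoClient(
    "mongodb://appUser:appPassword@mongos1.example.net:27017/?authSource=admin"
)
auth_client.appdb.command("ping")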
null
[ "atlas-search", "graphql", "stitch" ]
[ { "code": "", "text": "I am looking at MongoDB Stitch to expose my data through GraphQL. I went through the initial getting started and see the 2 queries. I want to use Atlas Search, which requires aggregation. How do I do this from my GraphQL endpoint?", "username": "Fred_Kufner" }, { "code": "", "text": "Hi Fred – In order to do this, you’ll want to create a custom resolver and corresponding function which will use the $search aggregation stage. One caveat for this – we currently only allow $search in functions set to run as System, so you’ll want to do any permission checking in the function logic. That being said, we plan on supporting $search alongside our Rules soon.", "username": "Drew_DiPalma" }, { "code": "", "text": "Thanks for the response, Drew. That is what I suspected. I am following this example to create a custom resolver. I get an error on saving, as shown in the attached screenshot (3360×2100).", "username": "Fred_Kufner" }, { "code": "", "text": "Hi Fred – There is currently a bug in that form (fix pending release) causing bsonType to fail validation; you should be able to use ‘type’ instead of ‘bsonType’.", "username": "Drew_DiPalma" }, { "code": "", "text": "Thanks Drew. Changing to “type” worked. I have a basic custom resolver talking to a function that I can hit through GraphQL now. Next let me see if I can get aggregate working. (I am using the sample data sample_mflix.)", "username": "Fred_Kufner" }, { "code": "", "text": "Watched your session on MongoDB live. Nice job. I was having trouble returning a collection (in response to an aggregation with $search and autocomplete) from my function that was called by the custom resolver. I got an error in GraphiQL. I was missing cursor.toArray(). All good now. I have been using the sample_mflix database and movies collection. I will now implement it on my own data.", "username": "Fred_Kufner" }, { "code": "", "text": "Got it working.", "username": "Fred_Kufner" }, { "code": "", "text": "Edit: I did manage to get this working today. It was a classic PEBKAC. That said, the question about the System context vs User context is still one that I would love an update on. Unfortunately, unlike Fred, I have not had as much luck getting this to work. I’ve got a custom resolver defined and pointing to my function. I believe that the function is not being called, as I’m not seeing any log statement for the function, though that’s just a guess. The function itself works as expected when testing it in the function editor. I’m able to pass in a string and it returns me the autocomplete results. These results should match the custom payload type that I defined. (Not sure if Realm would throw an exception if they didn’t.) I’ve tried all of the permutations that I could imagine the custom resolver wanting. Nothing seems to come back out to the resolver. Is there any reason an async function would not be able to be used with a custom resolver? Additionally, I’m having to run this function as System rather than my authenticated user to not have the aggregate step throw an exception (there’s no message, either, just a blank exception). I’m aware that back in June, System was the only way to call $search, but from the docs, it says that it works in the User context but will run as a System user. So I don’t really know what is happening here. Any and all advice would be gratefully appreciated.", "username": "Justin_Jarae" } ]
Stitch, GraphQL, and Atlas Search
2020-06-08T21:31:34.262Z
Stitch, GraphQL, and Atlas Search
4,577
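The custom resolver in the thread above is a Realm (JavaScript) function, but the $search stage it wraps is an ordinary aggregation stage. Below is a minimal sketch of the same autocomplete query in PyMongo, assuming the sample_mflix.movies collection mentioned in the thread, an Atlas Search index named "default" and an autocomplete mapping on the title field (all assumptions); note that the cursor has to be materialised into a list, which is the same point as the missing cursor.toArray() above.

# Sketch of the $search + autocomplete aggregation discussed above, in PyMongo.
# The index name "default" and the "title" autocomplete mapping are assumptions;
# adjust them to your Atlas Search index definition.
from pymongo import MongoClient

client = MongoClient("<your Atlas connection string>")
movies = client.sample_mflix.movies

pipeline = [
    {"$search": {
        "index": "default",
        "autocomplete": {"query": "god", "path": "title"},
    }},
    {"$limit": 10},
    {"$project": {"_id": 0, "title": 1}},
]

# Like cursor.toArray() in a Realm function, the cursor must be materialised
# before the results can be returned from a resolver.
results = list(movies.aggregate(pipeline))
print(results)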
https://www.mongodb.com/…8aa12cd9c9df.png
[ "data-modeling", "ruby", "mongoid-odm" ]
[ { "code": "", "text": "Many applications, e.g. todo lists, need to maintain a user-defined order of items. Suppose I have the following collections/documents: users → { _id, name }\ntodos → { user_id, text, position }. I’d like to have todos look something like: { user_id: abc…, position: 0, text: “Do laundry” } { user_id: abc…, position: 1, text: “Clean room” } { user_id: abc…, position: 2, text: “Brush teeth” } { user_id: def…, position: 0, text: “Walk dog” } { user_id: def…, position: 1, text: “Buy dog toy” }. For each user ID, the positions should always be 0, 1, 2… N with no duplicates and no gaps. Moreover, the users should be able to re-order their list, e.g. something like this UI, and maintain integrity of the list. The problem is further explored here: User-defined Order in SQL. I’ve implemented this in my Ruby application using the Mongoid Orderable gem (GitHub - mongoid/mongoid_orderable: Acts as list mongoid implementation). However, when processing many re-order actions with concurrent requests, my lists always get out of sync (position duplicates or gaps occur, e.g. positions become 0, 0, 1, 1, 1, 3, 4, 6, 6, 8… etc.). I am experimenting with transactions, with mixed results. I’d like to hear if there is a recommended/canonical way to do this in MongoDB?", "username": "Johnny_Shields" }, { "code": "{\nuser_id,\nlist_id,\ntiles : [ {\"task\" : \"Do laundry\" }, { \"task\" : \"Clean room\" }....]\n}\ndb.todos.update({user_id : ..., list_id : ...}, {$set : {tiles : newArray}})\ndb.todos.findAndModify({user_id : ..., list_id : ..., lock : false}, {$set : { lock : true}})\n\ndb.todos.update({user_id : ..., list_id : ...}, {$set : {tiles : newArray, lock : false}})\n", "text": "Hi @Johnny_Shields, Welcome to the MongoDB community! Have you considered just holding this data in an array on the document for the specific user/page you are showing in the application? When you want to change a position, manipulate the array on the application side and update it as a whole in MongoDB:\nThis operation is guaranteed to be atomic and there is no need for transactions. If you wish to lock a record for update, you can fetch the records you show with a field you use as a lock:\nThanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "For my use case I would not be able to use an Array embedded in a parent document. My app is not actually a “Todo” list; my documents have 100s of fields and I can have 1,000+ in one list. (I agree that an Array would work for simple use cases.) How would you do this assuming it is a requirement for each document to have a “position” field?", "username": "Johnny_Shields" }, { "code": "", "text": "@Johnny_Shields, If the findAndModify mechanism does not work for you even when updating multiple documents, consider indeed using transactions. The findAndModify approach works in a way that each document, when selected for the specific list, is only available to this process for update. Many applications do this by putting an edit (pen) button in front of lists, so the database marks the documents and no one else can edit the list at the same time. You can have a parent document for each list to record who is editing it at any time. Thanks\nPavel", "username": "Pavel_Duchovny" } ]
How to implement an orderable list
2021-01-21T14:02:21.056Z
How to implement an orderable list
7,515
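One possible answer to the open question above (keeping a per-user position field gap-free under concurrent re-orders) is to wrap each move in a multi-document transaction. The sketch below uses PyMongo rather than Mongoid; it assumes a replica set, the todos collection shape from the question, and a hypothetical move_todo helper that is not part of the Mongoid Orderable gem.

# Sketch: move one todo to a new position atomically, keeping positions 0..N
# gap-free per user. Requires a replica set (transactions, MongoDB 4.0+).
# Collection and field names follow the example documents in the question.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
todos = client.appdb.todos

def move_todo(user_id, todo_id, new_pos):
    with client.start_session() as session:
        with session.start_transaction():
            doc = todos.find_one({"_id": todo_id, "user_id": user_id},
                                 session=session)
            old_pos = doc["position"]
            if new_pos == old_pos:
                return
            if new_pos < old_pos:
                # Item moves up: shift the items in between down by one.
                todos.update_many(
                    {"user_id": user_id,
                     "position": {"$gte": new_pos, "$lt": old_pos}},
                    {"$inc": {"position": 1}}, session=session)
            else:
                # Item moves down: shift the items in between up by one.
                todos.update_many(
                    {"user_id": user_id,
                     "position": {"$gt": old_pos, "$lte": new_pos}},
                    {"$inc": {"position": -1}}, session=session)
            todos.update_one({"_id": todo_id},
                             {"$set": {"position": new_pos}}, session=session)

Conflicting moves then abort with a transient transaction error and can be retried, rather than interleaving partial position updates, which is what produces the duplicate/gap patterns described in the question.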
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello, I’m creating an application which allows users to define a schema for their documents and then create documents that will be validated against the schema. Document schemas can define “relations” between each other, which means that when I query the documents I need to get related documents as well. Schemas also allow setting certain document properties as ‘sortable’, which means I’d need to create indexes for documents that have that schema. All the validation is done on the application layer. I am however facing a dilemma of how to “model” my storage layer. The approach I’m taking at the moment is inserting all documents into one collection no matter what schema they belong to, which means that I’ll have a bunch of partial indexes that only index the documents where specific fields exist. I also have a relations collection with documents that have ‘from’ and ‘to’ ids (something akin to join tables). It works well but I keep second-guessing this approach. Based on what I see in the wild, the most common approach is to have a collection per schema. I’m not against it, but it certainly makes for a more complex solution, as I’d need to create more collections for the “join” tables and track more things overall. My question is: am I missing something crucial in the single-collection approach that might render it a bad one?", "username": "Alexey_Golev" }, { "code": "", "text": "Hello @Alexey_Golev! First of all, great question and thank you for being a part of the MongoDB Community! With regard to your question, is it okay to have documents of different shapes within the same collection? The short answer is YES! We actually have a pattern named after this exact use case; it’s called the Polymorphic Pattern. You’re right that lots of people split up their data into separate collections, but IMHO it’s because they are used to doing it the SQL way, not the MongoDB way. tl;dr - You’re on the right track and I support your schema decision! ", "username": "JoeKarlsson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Collection modelling ("multi-model" collections)
2021-01-20T19:30:22.673Z
Collection modelling (“multi-model” collections)
2,897
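Since Alexey's single-collection design leans on partial indexes for the per-schema 'sortable' fields, a small sketch of that combination may help; it uses PyMongo with made-up schema and field names (invoice/total, contact/email), which are illustrations rather than anything from the thread.

# Sketch of the polymorphic (single-collection) approach discussed above.
# Schema and field names are made up for illustration.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
docs = client.appdb.documents

# Documents of different user-defined schemas live side by side.
docs.insert_many([
    {"schema": "invoice", "number": "INV-001", "total": 120.50},
    {"schema": "contact", "name": "Ada", "email": "ada@example.com"},
])

# A partial index only covers documents where the sortable field exists,
# so each per-schema sortable property gets its own small index.
docs.create_index(
    [("total", ASCENDING)],
    name="sortable_total",
    partialFilterExpression={"total": {"$exists": True}},
)

# Range queries on the field can use the partial index.
for d in docs.find({"total": {"$gte": 100}}).sort("total", ASCENDING):
    print(d)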
null
[ "replication" ]
[ { "code": "2021-01-11T05:21:39.321+0700 I REPL [replexec-5747] Member GCGPOCHDB01:27017 is now in state PRIMARY\n2021-01-11T05:21:39.322+0700 I REPL [replexec-5746] Member GCGPOCHDB05:27017 is now in state ARBITER\n2021-01-11T05:21:39.322+0700 I REPL [replexec-5738] Member GCGPOCHDB02:27017 is now in state SECONDARY\n2021-01-11T05:21:39.377+0700 I NETWORK [listener] connection accepted from 10.36.8.100:55886 #911 (5 connections now open)\n2021-01-11T05:21:39.377+0700 I NETWORK [conn911] received client metadata from 10.36.8.100:55886 conn911: { driver: { name: \"NetworkInterfaceASIO-Replication\", version: \"3.6.12\" }, os: { type: \"Windows\", name: \"Microsoft Windows Server 2016\", architecture: \"x86_64\", version: \"10.0 (build 14393)\" } }\n2021-01-11T05:21:39.381+0700 I ACCESS [conn911] Successfully authenticated as principal __system on local\n2021-01-11T05:21:42.077+0700 I REPL [repl writer worker 11] applied op: CRUD { ts: Timestamp(1610316861, 2), t: 268, h: 2846058962956094355, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\" }, wall: new Date(1610316861593), o: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"e4860f96-7217-4391-860f-967217c391e7\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1600545361347\", id: \"e4860f96-7217-4391-860f-967217c391e7\", last_heartbeat_at: \"1610316861599\" }, createdAt: new Date(1610316861599), updatedAt: new Date(1610316861599), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 440480ms\n2021-01-11T05:22:07.487+0700 I REPL [repl writer worker 4] applied op: CRUD { ts: Timestamp(1610316955, 1), t: 268, h: -5055395288595994290, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\" }, wall: new Date(1610316955125), o: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"5524ee80-b206-44fb-a4ee-80b206e4fb0d\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1606707904084\", id: \"5524ee80-b206-44fb-a4ee-80b206e4fb0d\", last_heartbeat_at: \"1610316955131\" }, createdAt: new Date(1610316955131), updatedAt: new Date(1610316955131), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 10055ms\n2021-01-11T05:22:07.487+0700 I REPL [repl writer worker 13] applied op: CRUD { ts: Timestamp(1610317028, 1), t: 268, h: -2867339436281527501, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\" }, wall: new Date(1610317028120), o: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"e4860f96-7217-4391-860f-967217c391e7\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1600545361347\", id: \"e4860f96-7217-4391-860f-967217c391e7\", last_heartbeat_at: \"1610317028071\" }, createdAt: new Date(1610317028071), updatedAt: new Date(1610317028071), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 
10055ms\n2021-01-11T05:22:09.798+0700 I NETWORK [PeriodicTaskRunner] Socket hangup detected, no longer connected (idle 466 secs, remote host 10.36.8.87:27017)\n2021-01-11T05:22:09.798+0700 I NETWORK [PeriodicTaskRunner] Socket hangup detected, no longer connected (idle 466 secs, remote host 10.36.8.87:27017)\n2021-01-11T05:22:09.798+0700 I NETWORK [PeriodicTaskRunner] Socket hangup detected, no longer connected (idle 466 secs, remote host 10.36.8.88:27017)\n2021-01-11T05:23:12.997+0700 I REPL [repl writer worker 4] applied op: CRUD { ts: Timestamp(1610317060, 1), t: 268, h: 3482908582761058341, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\" }, wall: new Date(1610317060377), o: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"5524ee80-b206-44fb-a4ee-80b206e4fb0d\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1606707904084\", id: \"5524ee80-b206-44fb-a4ee-80b206e4fb0d\", last_heartbeat_at: \"1610317060381\" }, createdAt: new Date(1610317060381), updatedAt: new Date(1610317060381), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 65510ms\n2021-01-11T05:23:12.997+0700 I REPL [repl writer worker 13] applied op: CRUD { ts: Timestamp(1610317219, 1), t: 268, h: 7577762830670030243, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\" }, wall: new Date(1610317219410), o: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"e4860f96-7217-4391-860f-967217c391e7\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1600545361347\", id: \"e4860f96-7217-4391-860f-967217c391e7\", last_heartbeat_at: \"1610317219400\" }, createdAt: new Date(1610317219400), updatedAt: new Date(1610317219400), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 65510ms\n2021-01-11T05:23:15.274+0700 I REPL [repl writer worker 4] applied op: CRUD { ts: Timestamp(1610317125, 1), t: 268, h: 7209638758034574527, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\" }, wall: new Date(1610317125544), o: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"5524ee80-b206-44fb-a4ee-80b206e4fb0d\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1606707904084\", id: \"5524ee80-b206-44fb-a4ee-80b206e4fb0d\", last_heartbeat_at: \"1610317125548\" }, createdAt: new Date(1610317125548), updatedAt: new Date(1610317125548), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 2277ms\n2021-01-11T05:23:15.274+0700 I REPL [repl writer worker 13] applied op: CRUD { ts: Timestamp(1610317289, 1), t: 268, h: -2517501561845510061, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\" }, wall: new Date(1610317289662), o: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\", type: \"GATEWAY_STARTED\", payload: 
\"{\"id\":\"e4860f96-7217-4391-860f-967217c391e7\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1600545361347\", id: \"e4860f96-7217-4391-860f-967217c391e7\", last_heartbeat_at: \"1610317289668\" }, createdAt: new Date(1610317289668), updatedAt: new Date(1610317289668), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 2277ms\n2021-01-11T05:23:15.277+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:23:15.293+0700 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to GCGPOCHDB02:27017 (1 connections now open to GCGPOCHDB02:27017 with a 5 second timeout)\n2021-01-11T05:23:15.293+0700 I NETWORK [LogicalSessionCacheRefresh] Successfully connected to GCGPOCHDB01:27017 (1 connections now open to GCGPOCHDB01:27017 with a 5 second timeout)\n2021-01-11T05:23:15.308+0700 I NETWORK [LogicalSessionCacheRefresh] Successfully connected to GCGPOCHDB01:27017 (1 connections now open to GCGPOCHDB01:27017 with a 0 second timeout)\n2021-01-11T05:23:18.548+0700 I REPL [repl writer worker 12] applied op: CRUD { ts: Timestamp(1610317304, 1), t: 268, h: -1534264711087651767, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\" }, wall: new Date(1610317304723), o: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"e4860f96-7217-4391-860f-967217c391e7\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1600545361347\", id: \"e4860f96-7217-4391-860f-967217c391e7\", last_heartbeat_at: \"1610317304726\" }, createdAt: new Date(1610317304726), updatedAt: new Date(1610317304726), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 3274ms\n2021-01-11T05:23:18.549+0700 I COMMAND [LogicalSessionCacheRefresh] command config.system.sessions command: listIndexes { listIndexes: \"system.sessions\", cursor: {}, $db: \"config\" } numYields:0 reslen:433 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 3238022 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_msg 3238ms\n2021-01-11T05:23:18.549+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:23:25.698+0700 I REPL [repl writer worker 2] applied op: CRUD { ts: Timestamp(1610317336, 1), t: 268, h: 7555128536992133691, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\" }, wall: new Date(1610317336046), o: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"5524ee80-b206-44fb-a4ee-80b206e4fb0d\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1606707904084\", id: \"5524ee80-b206-44fb-a4ee-80b206e4fb0d\", last_heartbeat_at: \"1610317336047\" }, createdAt: new Date(1610317336047), updatedAt: new Date(1610317336047), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" 
} }, took 7149ms\n2021-01-11T05:23:25.700+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:23:25.705+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:23:28.137+0700 I NETWORK [listener] connection accepted from 10.36.8.88:54246 #912 (6 connections now open)\n2021-01-11T05:23:28.137+0700 I NETWORK [conn912] received client metadata from 10.36.8.88:54246 conn912: { driver: { name: \"MongoDB Internal Client\", version: \"3.6.12\" }, os: { type: \"Windows\", name: \"Microsoft Windows Server 2016\", architecture: \"x86_64\", version: \"10.0 (build 14393)\" } }\n2021-01-11T05:25:40.643+0700 I NETWORK [conn886] end connection 10.36.8.88:56948 (5 connections now open)\n2021-01-11T05:25:47.050+0700 I REPL [repl writer worker 1] applied op: CRUD { ts: Timestamp(1610317465, 1), t: 268, h: 3348409552460655452, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\" }, wall: new Date(1610317465718), o: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"e4860f96-7217-4391-860f-967217c391e7\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1600545361347\", id: \"e4860f96-7217-4391-860f-967217c391e7\", last_heartbeat_at: \"1610317465720\" }, createdAt: new Date(1610317465720), updatedAt: new Date(1610317465720), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 3048ms\n2021-01-11T05:26:57.013+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:26:57.018+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:26:57.023+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:26:57.027+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:29:10.688+0700 I REPL [repl writer worker 2] applied op: CRUD { ts: Timestamp(1610317717, 1), t: 268, h: 8156486121829943205, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\" }, wall: new Date(1610317717145), o: { _id: \"7f741cd5-e92c-46a8-b41c-d5e92c96a8f5\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"5524ee80-b206-44fb-a4ee-80b206e4fb0d\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1606707904084\", id: \"5524ee80-b206-44fb-a4ee-80b206e4fb0d\", last_heartbeat_at: \"1610317717145\" }, createdAt: new Date(1610317717145), updatedAt: new Date(1610317717145), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 33538ms\n2021-01-11T05:29:10.689+0700 I COMMAND [LogicalSessionCacheReap] command config.system.sessions command: listIndexes { listIndexes: \"system.sessions\", cursor: {}, $db: \"config\" } 
numYields:0 reslen:433 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 28083218 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_msg 28083ms\n2021-01-11T05:31:57.022+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:31:57.027+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:31:57.031+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:31:57.036+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:36:57.027+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:36:57.032+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:36:57.036+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:36:57.039+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:45:00.262+0700 E STORAGE [ApplyBatchFinalizerForJournal] WiredTiger error (-31802) [1610318700:262335][4832:140713114425808], WT_SESSION.log_flush: __win_file_sync, 355: d:\\programfiles\\data\\db\\journal\\WiredTigerLog.0000001368 handle-sync: FlushFileBuffers: The request could not be performed because of an I/O device error.\n\n: WT_ERROR: non-specific WiredTiger error\n2021-01-11T05:45:00.263+0700 F - [ApplyBatchFinalizerForJournal] Invariant failure: _waitUntilDurableSession->log_flush(_waitUntilDurableSession, \"sync=on\") resulted in status UnknownError: -31802: WT_ERROR: non-specific WiredTiger error at src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_session_cache.cpp 318\n2021-01-11T05:45:00.263+0700 F - [ApplyBatchFinalizerForJournal] \n\n***aborting after invariant() failure\n\n\n2021-01-11T05:45:00.263+0700 I REPL [repl writer worker 10] applied op: CRUD { ts: Timestamp(1610318444, 1), t: 268, h: -701885481189892543, v: 2, op: \"u\", ns: \"api_external.events\", ui: UUID(\"8beada50-2fe3-4564-bc21-bb3b1bc036e6\"), o2: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\" }, wall: new Date(1610318444806), o: { _id: \"1c29cfbe-7cf7-4ddd-a9cf-be7cf76ddd92\", type: \"GATEWAY_STARTED\", payload: \"{\"id\":\"e4860f96-7217-4391-860f-967217c391e7\",\"version\":\"gc.1.2 (build: ${env.BUILD_ID}) revision#${env.GIT_COMMIT}\",\"tags\":null,\"plugins\":[{\"id\":\"mock...\", properties: { started_at: \"1600545361347\", id: \"e4860f96-7217-4391-860f-967217c391e7\", last_heartbeat_at: \"1610318444794\" }, createdAt: new Date(1610318444794), updatedAt: new Date(1610318444794), _class: \"io.gravitee.repository.mongodb.management.internal.model.EventMongo\" } }, took 255447ms\n2021-01-11T05:45:00.263+0700 I COMMAND [LogicalSessionCacheReap] command config.system.sessions command: listIndexes { listIndexes: \"system.sessions\", cursor: {}, $db: \"config\" } numYields:0 reslen:433 locks:{ Global: { 
acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 77643972 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_msg 77644ms\n2021-01-11T05:45:00.263+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:48:18.885+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:48:19.015+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:48:19.020+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:48:19.023+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:48:19.027+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:48:19.031+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:48:19.034+0700 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for api_external/GCGPOCHDB01:27017,GCGPOCHDB01:27017,GCGPOCHDB02:27017\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\util\\stacktrace_windows.cpp(247) mongo::printStackTrace+0x43\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\util\\signal_handlers_synchronous.cpp(184) mongo::`anonymous namespace'::printSignalAndBacktrace+0x74\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\util\\signal_handlers_synchronous.cpp(240) mongo::`anonymous namespace'::abruptQuit+0x85\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] ucrtbase.dll raise+0x1e7\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] ucrtbase.dll abort+0x31\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\util\\assert_util.cpp(162) mongo::invariantOKFailed+0x228\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_session_cache.cpp(318) mongo::WiredTigerSessionCache::waitUntilDurable+0x391\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_recovery_unit.cpp(152) mongo::WiredTigerRecoveryUnit::waitUntilDurable+0x18\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\db\\repl\\sync_tail.cpp(282) mongo::repl::`anonymous namespace'::ApplyBatchFinalizerForJournal::_run+0x126\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe c:\\program files (x86)\\microsoft visual studio 14.0\\vc\\include\\thr\\xthread(247) std::_LaunchPad<std::unique_ptr<std::tuple<void (__cdecl mongo::repl::`anonymous namespace'::ApplyBatchFinalizerForJournal::*)(void) __ptr64,mongo::repl::A0x580d1f93::ApplyBatchFinalizerForJournal * __ptr64>,std::default_delete<std::tuple<void (__cdecl 
mongo::repl::`anonymous namespace'::ApplyBatchFinalizerForJournal::*)(void) __ptr64,mongo::repl::A0x580d1f93::ApplyBatchFinalizerForJournal * __ptr64> > > >::_Run+0x75\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe c:\\program files (x86)\\microsoft visual studio 14.0\\vc\\include\\thr\\xthread(210) std::_Pad::_Call_func+0x9\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] ucrtbase.dll o__realloc_base+0x60\n2021-01-11T05:48:51.059+0700 I CONTROL [ApplyBatchFinalizerForJournal] KERNEL32.DLL BaseThreadInitThunk+0x14\n2021-01-11T05:48:51.059+0700 F - [ApplyBatchFinalizerForJournal] Got signal: 22 (SIGABRT).\n2021-01-11T05:48:51.060+0700 F CONTROL [ApplyBatchFinalizerForJournal] *** unhandled exception 0x0000000E at 0x00007FFA50A64F38, terminating\n2021-01-11T05:48:51.060+0700 F CONTROL [ApplyBatchFinalizerForJournal] *** stack trace for unhandled exception:\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] KERNELBASE.dll RaiseException+0x68\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\util\\signal_handlers_synchronous.cpp(241) mongo::`anonymous namespace'::abruptQuit+0x9d\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] ucrtbase.dll raise+0x1e7\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] ucrtbase.dll abort+0x31\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\util\\assert_util.cpp(162) mongo::invariantOKFailed+0x228\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_session_cache.cpp(318) mongo::WiredTigerSessionCache::waitUntilDurable+0x391\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_recovery_unit.cpp(152) mongo::WiredTigerRecoveryUnit::waitUntilDurable+0x18\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe ...\\src\\mongo\\db\\repl\\sync_tail.cpp(282) mongo::repl::`anonymous namespace'::ApplyBatchFinalizerForJournal::_run+0x126\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe c:\\program files (x86)\\microsoft visual studio 14.0\\vc\\include\\thr\\xthread(247) std::_LaunchPad<std::unique_ptr<std::tuple<void (__cdecl mongo::repl::`anonymous namespace'::ApplyBatchFinalizerForJournal::*)(void) __ptr64,mongo::repl::A0x580d1f93::ApplyBatchFinalizerForJournal * __ptr64>,std::default_delete<std::tuple<void (__cdecl mongo::repl::`anonymous namespace'::ApplyBatchFinalizerForJournal::*)(void) __ptr64,mongo::repl::A0x580d1f93::ApplyBatchFinalizerForJournal * __ptr64> > > >::_Run+0x75\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] mongod.exe c:\\program files (x86)\\microsoft visual studio 14.0\\vc\\include\\thr\\xthread(210) std::_Pad::_Call_func+0x9\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] ucrtbase.dll o__realloc_base+0x60\n2021-01-11T05:48:51.068+0700 I CONTROL [ApplyBatchFinalizerForJournal] KERNEL32.DLL BaseThreadInitThunk+0x14\n2021-01-11T05:48:51.068+0700 I - [ApplyBatchFinalizerForJournal] \n2021-01-11T05:48:51.069+0700 I CONTROL [ApplyBatchFinalizerForJournal] writing minidump diagnostic file D:\\programfiles\\MongoDB\\bin\\mongod.2021-01-10T22-48-51.mdmp\n2021-01-11T05:48:52.524+0700 F CONTROL [ApplyBatchFinalizerForJournal] *** immediate exit due to unhandled 
exception\n", "text": "I have 4 mongoDB, 3 data node is candidate primary node and 1 arbiter node\nthis is log file when secondary node stop working, that i try to find what is the problem when this node turn off", "username": "Mc_Donald" }, { "code": "", "text": "Looks like one of your nodes crashed? Can you check your disk storage and database file write permissions as a first step?", "username": "Joe_Drumgoole" } ]
Replica node stop working
2021-01-21T10:11:01.568Z
Replica node stop working
2,632
null
[ "dot-net", "performance" ]
[ { "code": "", "text": "Hi everyone, we currently have a collection with approximately 4 million documents. One of our backend processes requires us to take those 4 million documents and apply one-to-one update operations to them using information from documents in other collections. Currently our idea is to load all the documents into memory in the backend (NetCore 3.1) and process them one by one, but we would like to know if, in your experience, there is any other mechanism that is more efficient, more secure and less expensive in terms of resources.", "username": "Jose_Alejandro_Benit" }, { "code": "", "text": "Take a look at the aggregation framework. This will allow you to process all the documents on the server without the requirement to load them all into memory or ship them to the client for processing.", "username": "Joe_Drumgoole" } ]
Recommended strategy for bulk processing in backend
2021-01-20T23:32:45.414Z
Recommended strategy for bulk processing in backend
1,716
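To make the aggregation suggestion concrete: the enrichment can be expressed as a $lookup into the other collection followed by a $merge that writes the updated documents back, so nothing is streamed into application memory. The thread's backend is .NET Core; the sketch below is an illustrative PyMongo version with assumed collection and field names.

# Sketch: server-side enrichment with $lookup + $merge, so the ~4M documents
# never leave the database. Collection and field names are assumptions.
# Note: $merge back into the collection being aggregated needs MongoDB 4.4+;
# on 4.2 merge into a staging collection instead.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.mydb

pipeline = [
    {"$lookup": {
        "from": "reference_data",           # the "other" collection
        "localField": "ref_id",
        "foreignField": "_id",
        "as": "ref",
    }},
    {"$unwind": {"path": "$ref", "preserveNullAndEmptyArrays": True}},
    {"$set": {"derived_field": "$ref.value"}},   # the per-document update
    {"$unset": "ref"},
    {"$merge": {"into": "big_collection", "on": "_id", "whenMatched": "replace"}},
]

db.big_collection.aggregate(pipeline)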
null
[ "cxx" ]
[ { "code": " #include <cstdlib>\n #include <iostream>\n #include <cstdint>\n #include <vector>\n\n #include <bsoncxx/json.hpp>\n #include <mongocxx/client.hpp>\n #include <mongocxx/stdx.hpp>\n #include <mongocxx/uri.hpp>\n #include <mongocxx/instance.hpp>\n #include <mongocxx/exception/exception.hpp>\n #include <bsoncxx/builder/stream/helpers.hpp>\n #include <bsoncxx/builder/stream/document.hpp>\n #include <bsoncxx/builder/stream/array.hpp>\n\n using bsoncxx::builder::stream::close_array;\n using bsoncxx::builder::stream::close_document;\n using bsoncxx::builder::stream::document;\n using bsoncxx::builder::stream::finalize;\n using bsoncxx::builder::stream::open_array;\n using bsoncxx::builder::stream::open_document;\n\n namespace mongo\n {\n \tusing namespace mongocxx;\n };\n\n void run()\n {\n \tstatic mongo::instance instance;\n }\n\n int main()\n {\n \ttry {\n \t\trun();\n \t\tstd::cout << \"connected ok\" << std::endl;\n \t} catch( const mongo::exception &e )\n \t{\n \t\tstd::cout << \"caught \" << e.what() << std::endl;\n \t}\n \treturn EXIT_SUCCESS;\n }\n", "text": "I`m not understend how build this driver, make it pls if u can and share\nI waste 3 days, but have:test.cpp:(.text+0xa): undefined reference to `__imp__ZN8mongocxx7v_noabi8instanceD1Ev’with compiling:", "username": "Dmitry_LLIAMAH" }, { "code": "", "text": "@Dmitry_LLIAMAH, the error seems to indicate a linker error. However, without more information it is not possible to provide a resolution. Please have a look at these instructions for requesting assistance. (They are from the C driver project, but the same concepts apply here.)", "username": "Roberto_Sanchez" } ]
Help with compilation C++ driver for MinGw64
2021-01-21T01:10:55.113Z
Help with compilation C++ driver for MinGw64
2,206
https://www.mongodb.com/…b67ce9ea7550.png
[ "aggregation", "security" ]
[ { "code": "col.aggregate([{$indexStats: {}}])\nnot authorized on <db-name> to execute command { aggregate: \"<col-name>\", pipeline: [ { $indexStats: {} } ]...\n$indexStats", "text": "Sometimes when I execute the following pipeline, I get this error. What permission does my user need to be able to use $indexStats? I tried looking here, but can’t find any. Update - found it here, sorry (screenshot: Screenshot 2021-01-20 at 19.41.08).", "username": "Alex_Bjorlig" }, { "code": "", "text": "Hi @Alex_Bjorlig, The $indexStats stage requires a specific privilege action:\nhttps://docs.mongodb.com/manual/reference/privilege-actions/#indexStats\nIf you don’t have it in the Atlas UI, I suggest you temporarily add a more powerful user with the built-in read permission on this database. Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What permissions do I need to use the $indexStats?
2021-01-20T18:28:50.162Z
What permissions do I need to use the $indexStats?
5,505
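On Atlas the built-in role from the screenshot covers this; on a self-managed deployment the same indexStats action can be granted through a custom role. A hedged sketch with PyMongo's command helper follows; the database, role and user names are placeholders.

# Sketch: grant the indexStats privilege action via a custom role on a
# self-managed deployment. Database, role and user names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://admin:secret@localhost:27017/?authSource=admin")

client.admin.command(
    "createRole", "indexStatsViewer",
    privileges=[{
        "resource": {"db": "mydb", "collection": ""},
        "actions": ["indexStats"],
    }],
    roles=[],
)

# Attach the role to the user that runs the $indexStats pipeline
# (the user is assumed to already exist in the admin database).
client.admin.command(
    "grantRolesToUser", "reportingUser",
    roles=[{"role": "indexStatsViewer", "db": "admin"}],
)

# The pipeline from the question should now be authorized:
stats = list(client.mydb["my-collection"].aggregate([{"$indexStats": {}}]))
print(stats)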
null
[ "graphql" ]
[ { "code": "", "text": "I have configured a one-to-many relationship between 2 collections. When trying to use a RelationInput to create 1 new item and link to old items, the result is that only the newly created item is linked and the old ones are unlinked. I tried to create the new id in the client and add it to the link array, with no success. Only the new item exists in the relation field array.\nThe only solution was to first create the related object entity and then update the relating entity with the link.", "username": "michael_schiller" }, { "code": "", "text": "Hi Michael, this is currently a limitation of GraphQL. Your workaround would be the suggested way to go about doing this today. If you’d like to add this as a suggestion, please do so here - Realm: Top (0 ideas) – MongoDB Feedback Engine", "username": "Sumedha_Mehta1" } ]
Using RelationInput to create a new item and link to existing ones
2021-01-11T09:32:42.180Z
Using RelationInput to create a new item and link to existing ones
1,955
null
[]
[ { "code": "", "text": "Is the MacBook Air M1 compatible with MongoDB?", "username": "Kristy_Liu" }, { "code": "", "text": "Hi Kristy, Following the Install MongoDB Community Edition on macOS instructions worked fine for me on my Mac w/ the M1 processor. Can you elaborate on the error you’re seeing? Regards,\nBrian", "username": "Brian_Leonard" } ]
I can’t install MongoDB on the new MacBook Air M1, please help!!
2021-01-20T19:33:09.572Z
I can’t install MongoDB on the new MacBook Air M1, please help!!
7,181
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that addresses some issues reported since 2.11.5 was released. The list of JIRA tickets resolved in this release is available at:\nhttps://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.6%20ORDER%20BY%20key%20ASC\nDocumentation on the .NET driver can be found at:\nThere are no known backwards breaking changes in this release.", "username": "Boris_Dogadov" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver 2.11.6 Released
2021-01-20T21:18:20.835Z
.NET Driver 2.11.6 Released
1,999
null
[]
[ { "code": "certifications: {\n type: 'list',\n objectType: 'Certification',\n optional: true\n},\n", "text": "I can’t get any optional arrays to work at all. With that in a schema file, I always get an error message: list Property ‘Person.certifications’ of type ‘array’ cannot be nullable. If I make it optional: false, it works fine. Additionally, the data being persisted is an empty array.", "username": "Gregg_Bolinger" }, { "code": "", "text": "Arrays of optional elements are not something currently supported, but it’s something we eventually want to support.", "username": "nirinchev" }, { "code": "", "text": "Good to know. Thanks for the info.", "username": "Gregg_Bolinger" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does Realm actually support optional arrays?
2021-01-20T17:45:30.492Z
Does Realm actually support optional arrays?
2,632
null
[ "data-modeling" ]
[ { "code": "", "text": "What would be a recommended approach to enable existing collections for multilanguage support?\nThe objects of these collections have several properties; some are relevant for translation, others aren’t.\nCRUD operations are continuously performed on all collections.\nShould the translation information be stored directly in the objects of the collection, or would it be better to e.g. have a dedicated text collection, which is somehow joined?\nA simple example would be very much appreciated.", "username": "Christoph_Rohatsch" }, { "code": "{ word : \n { \"en\" : \"hello\", \"fr\" : \"bonjour\" ... }\n}\n{ word : [ {\"k\" : \"en\" , v : \"hello\"}, {\"k\" : \"fr\" , v : \"bonjour\"}]}\n", "text": "Hi @Christoph_Rohatsch, Welcome to the MongoDB community. A common approach to this problem is storing an embedded document with a value per language, for example:\nAnother approach, better for searches, can be the attribute pattern:\nThen index and query {word.k : 1, word.v : 1}. Learn about the Attribute Schema Design pattern in MongoDB; this pattern is used to target similar fields in a document and reduce the number of indexes. Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Multilanguage enablement of existing collections
2021-01-20T13:08:00.865Z
Multilanguage enablement of existing collections
1,569
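A small PyMongo sketch of the attribute-pattern option above may help; it uses a hypothetical products collection and a translated name attribute (the collection and field names are assumptions). Translations are stored as k/v pairs, one compound index covers every language, and a query targets a single language.

# Sketch of the attribute pattern for translations described above.
# Collection and field names are illustrative only.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
products = client.shop.products

products.insert_one({
    "sku": "A-100",
    "price": 9.99,                       # property not relevant for translation
    "name": [                            # translated attribute as k/v pairs
        {"k": "en", "v": "hello"},
        {"k": "fr", "v": "bonjour"},
    ],
})

# One compound index serves every language.
products.create_index([("name.k", ASCENDING), ("name.v", ASCENDING)])

# Look up the document through its French translation.
doc = products.find_one({"name": {"$elemMatch": {"k": "fr", "v": "bonjour"}}})
print(doc)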
null
[ "dot-net", "backup" ]
[ { "code": "mongodump --db mydbname --out \\folderA\\folderB\\mongorestore --db mydbname --drop \\folderA\\folderB\\mydbname", "text": "Hello,\nis possible to use mongodump and mongorestore by the .net driver ?\nIn the shell to backup a single database i can write :\nmongodump --db mydbname --out \\folderA\\folderB\\and then to restore i write :\nmongorestore --db mydbname --drop \\folderA\\folderB\\mydbnameWorks like a charme.\nCan we do the same with the driver ?\nThanks a lot.", "username": "Marco_L" }, { "code": "Task.Run(() =>\n{\n System.Diagnostics.Process process = new System.Diagnostics.Process();\n System.Diagnostics.ProcessStartInfo startInfo = new System.Diagnostics.ProcessStartInfo();\n startInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;\n startInfo.FileName = \"cmd.exe\";\n //Important is that the argument begins with /C otherwise it won't work.\n startInfo.Arguments = @\"/C mongodump --db shop --out \\backup\\mongo\\\";\n process.StartInfo = startInfo;\n process.Start();\n while (!process.HasExited) ; \n if (process.ExitCode == 0)\n {\n MessageBox.Show(\"Backup OK !\");\n }\n else\n {\n MessageBox.Show($\"Backup fail ! Exit code: {process.ExitCode}\");\n }\n });", "text": "Hello guys, share my solution.\nHope this helps !\nMongodump and Mongorestore are applications, not MongoDB commands which is why i would have to run the executable on another thread for not freeze the UI if the operation fail.\nUsing Mongodump:", "username": "Marco_L" } ]
.NET driver backup database
2021-01-18T23:51:31.562Z
.NET driver backup database
2,888
null
[ "queries", "node-js" ]
[ { "code": "async function getRangeOfFuzes() {\n const fuzes = await fuze.find()\n let todayDate = new Date();\n let oneWeekDate = new Date(+new Date() + 7*24*60*60*1000);\n console.log(`todayDate is: ${todayDate}`);\n console.log(`oneWeekDate is: ${oneWeekDate}`);\n\n $elemMatch: {\n\n startTime: {\n $gte: todayDate\n $lte: oneWeekDate\n }\n }\n}\ngetRangeOfFuzes();\n", "text": "Hello,\nI’m a full stack development learner working on a group project. I’m currently attempting to create a function for our home page that uses .find() to pull all documents from our database using dynamic dates.On the home page of our app, we want to show all events from the database that fall between the day the page is loaded and 7 days from page load. I haven’t been able to find a way to get the $lte to be 7 days from now.Any help is greatly appreciated!Thanks,\nKeryHere is my current code:", "username": "Thestrals_EvolveU" }, { "code": "", "text": "Have you look at javascript - How to subtract days from a plain Date? - Stack Overflow", "username": "steevej" } ]
Using dynamic dates with .find
2021-01-20T19:32:29.355Z
Using dynamic dates with .find
2,950
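The linked answer boils down to computing "now plus seven days" before building the range filter. The thread is Node/Mongoose; the same idea expressed with Python's datetime.timedelta and PyMongo is sketched below, assuming a fuzes collection with a startTime date field as in the question.

# Sketch: find all events starting between now and seven days from now.
# The thread uses Node/Mongoose; this is the equivalent range filter in
# PyMongo, assuming a "fuzes" collection with a "startTime" date field.
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
fuzes = client.appdb.fuzes

today = datetime.utcnow()
one_week = today + timedelta(days=7)     # no manual millisecond arithmetic

upcoming = fuzes.find({"startTime": {"$gte": today, "$lte": one_week}})
for event in upcoming:
    print(event["startTime"], event.get("title"))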
null
[ "dot-net" ]
[ { "code": " var app = App.Create(\"foo\");λ dir /s\n Volume in drive C has no label.\n Volume Serial Number is C2FB-E58D\n\n Directory of ..\\bin\\Debug\\net5.0\n\n01/18/2021 03:21 PM <DIR> .\n01/18/2021 03:21 PM <DIR> ..\n10/15/2020 08:23 PM 466,944 MongoDB.Bson.dll\n12/10/2020 01:41 PM 305,152 Realm.dll\n01/18/2021 03:10 PM 22,841 Realm_POC.deps.json\n01/18/2021 03:10 PM 12,288 Realm_POC.dll\n01/18/2021 03:10 PM 142,848 Realm_POC.exe\n01/18/2021 03:10 PM 10,780 Realm_POC.pdb\n01/18/2021 03:10 PM 246 Realm_POC.runtimeconfig.dev.json\n01/18/2021 03:10 PM 147 Realm_POC.runtimeconfig.json\n01/15/2021 07:07 PM <DIR> ref\n02/14/2019 03:21 PM 176,640 Remotion.Linq.dll\n01/15/2021 07:07 PM <DIR> runtimes\n 9 File(s) 1,137,886 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\ref\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n01/15/2021 07:07 PM 6,656 Realm_POC.dll\n 1 File(s) 6,656 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n01/15/2021 07:07 PM <DIR> linux-x64\n01/15/2021 07:07 PM <DIR> osx-x64\n01/15/2021 07:07 PM <DIR> win-x64\n01/15/2021 07:07 PM <DIR> win-x86\n 0 File(s) 0 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\\linux-x64\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n01/15/2021 07:07 PM <DIR> native\n 0 File(s) 0 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\\linux-x64\\native\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n12/10/2020 01:40 PM 15,529,368 librealm-wrappers.so\n 1 File(s) 15,529,368 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\\osx-x64\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n01/15/2021 07:07 PM <DIR> native\n 0 File(s) 0 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\\osx-x64\\native\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n12/10/2020 01:33 PM 13,133,840 librealm-wrappers.dylib\n 1 File(s) 13,133,840 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\\win-x64\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n01/15/2021 07:07 PM <DIR> native\n 0 File(s) 0 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\\win-x64\\native\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n12/10/2020 01:37 PM 8,343,552 realm-wrappers.dll\n 1 File(s) 8,343,552 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\\win-x86\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n01/15/2021 07:07 PM <DIR> native\n 0 File(s) 0 bytes\n\n Directory of ..\\bin\\Debug\\net5.0\\runtimes\\win-x86\\native\n\n01/15/2021 07:07 PM <DIR> .\n01/15/2021 07:07 PM <DIR> ..\n12/10/2020 01:36 PM 7,024,128 realm-wrappers.dll\n 1 File(s) 7,024,128 bytes\n\n Total Files Listed:\n 14 File(s) 45,175,430 bytes\n 32 Dir(s) 38,008,614,912 bytes free", "text": "Hello,I’m currently trying to test out realm for my next project. I’ve created a very basic console app using .net framework 5.0.102 and realm nuget package (10.0.0-beta.3). Unfortunately right from the start: var app = App.Create(\"foo\");I’m getting an exception:TypeInitializationException: The type initializer for ‘Realms.Sync.AppHandle’ threw an exception.\nInner Exception 2:\nDllNotFoundException: Unable to load DLL ‘realm-wrappers’ or one of its dependencies: The specified module could not be found. (0x8007007E)IDE:\nMicrosoft Visual Studio Community 2019\nVersion 16.8.4Any ideas how to fix this? 
I do see wrappes being available in ‘runtimes’ folder:", "username": "Mike_Mike" }, { "code": "binobjdotnet", "text": "So to clarify - you are running a .NET 5.0 Console app on Windows 10 and you installed the Realm package via the package manager integrated inside VS? And you’re running the project from the VS UI (either with or without a debugger)? That should definitely be a supported scenario and NuGet should add references to the correct native dlls. The two things I can suggest trying out are:If none of that works, it would help a lot if you could archive your project and send it to us. While most of the issues we’ve seen with missing native dependencies have been due to NuGet caching/not restoring the correct thing, it’s certainly possible that there’s a corner case we haven’t covered that you’re hitting.Note: Unrelated to the issue you’re seeing, there’s a regression in 10.0.0-beta.3 which prevents establishing sync connection on Windows due to certificate validation failure. We’ve just released 10.0.0-beta.5 which resolves that, so when you get past the missing wrappers dll, might be worth upgrading to that one.", "username": "nirinchev" }, { "code": "var realm = Realm.GetInstance(config);", "text": "Nikola,\nThank you for your answer. That was the exact use case, as you described.\nI’ve managed to get this to work on different machine. Unfortunately after playing around with writes/updates my app crashes currently on\nvar realm = Realm.GetInstance(config);\nwithout an exception, just leaving this error code behind:(process 11616) exited with code -1073741819.", "username": "Mike_Mike" }, { "code": "", "text": "Is this with beta.3 or beta.5? And is it something reproducible/deterministic or does it crash randomly?", "username": "nirinchev" } ]
Realm .net sdk (10.0.0-beta.3) crashes on launch
2021-01-18T19:45:25.713Z
Realm .net sdk (10.0.0-beta.3) crashes on launch
2,373
null
[]
[ { "code": "", "text": "Hey, I’ve been thinking about using mongo for my project to log everything, but I couldn’t decide whether I should really use mongo for logging my data or not. Therefore, I want to ask here and get advice from some experienced people. Firstly, I want to log everything that happens while the application is working. Thus, there’d be a lot of logs: hundreds of thousands of log entries. Logs will not be that large in memory (a few fields like date, user, etc.)", "username": "Duck" }, { "code": "", "text": "Hi @Duck, To better assist you we need to understand how your application would access the data, and how you would search or update it, if at all. We have some patterns for time-series data, and it sounds like you’d be doing a similar task: Building with Patterns: The Bucket Pattern | MongoDB Blog. We also have a great blog with a logging example:\nhttps://www.mongodb.com/article/mongodb-schema-design-best-practices\nBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "My application only writes logs to mongo. I don’t get/update/delete logs right now.\nI’ve looked at the documents you attached. I really want to know which way is more effective for my project. Basically, I log everything about my project, and I want to know the answers to the questions I asked above. Especially, I’d like to know how to store logs for a long time (10 years) without a performance impact.", "username": "Duck" }, { "code": "", "text": "Hi Duck, In that case I think you can write the logs as single documents. Possibly partition the data into weekly or monthly collections. Drop collections once they are past 10 years. What is the volume of the events you log? How many per minute? Please note that MongoDB Atlas has a Data Lake archiving feature allowing you to archive collections to Amazon S3 buckets for cheap storage of large volumes. Consider this as well. Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using MongoDB for logging your data
2021-01-20T10:08:43.767Z
Using MongoDB for logging your data
6,966
null
[ "python", "connecting", "security" ]
[ { "code": "import requests\nfrom requests.auth import HTTPDigestAuth\nfrom ipify import get_ip\n\natlas_group_id = \"<your group ID aka project ID -- check the Project / Settings section inside Atlas>\"\natlas_username = \"<your atlas username/email, eg. [email protected]>\"\natlas_api_key = \"<your atlas API key>\"\nip = get_ip()\n\nresp = requests.post(\n \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{atlas_group_id}/whitelist\".format(atlas_group_id=atlas_group_id),\n auth=HTTPDigestAuth(atlas_username, atlas_api_key),\n json=[{'ipAddress': ip, 'comment': 'From PythonAnywhere'}] # the comment is optional\n)\nif resp.status_code in (200, 201):\n print(\"MongoDB Atlas whitelist request successful\", flush=True)\nelse:\n print(\n \"MongoDB Atlas whitelist request problem: status code was {status_code}, content was {content}\".format(\n status_code=resp.status_code, content=resp.content\n ),\n flush=True\n )\n", "text": "Hey there,we currently deploy our prototype on PythonAnywhere. We work with Django and use mongoengine for connections to MongoDB Atlas.The problem is, that Atlas requires to whitelist the IP but PythonAnywhere does not give me a static IP address - it depends on the time the code of the Django application runs.\nWhitelisting 0.0.0.0/0 is too risky as we don’t want to open connection from anywhere.PythonAnywhere recommends to use some sort of atlas_api_key to request a whitelist of the actual IP but I don’t know where I can generate this API key. Is this with a service like Realm? Or might this code example just be out of date? Is this approach even more secure?Here is the link to the code example I found on PythonAnywhereGetting a MongoDB server\nWe don't provide Mongo servers ourselves, so you'll need to get one from an\nexternal provider (many of our customers are using MongoDB Atlas).\nFor best performance, you should", "username": "Philipp_Wuerfel" }, { "code": "", "text": "Hi @Philipp_Wuerfel,The following to guide covers how to create the api key for your projects:https://docs.atlas.mongodb.com/configure-api-accessPlease let me know if that works.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "import os\nimport mongoengine\nfrom dotenv import load_dotenv\n\n# IP-Handling on MongoDB Atlas\nimport requests\nfrom requests.auth import HTTPDigestAuth\n\nload_dotenv() # initialize / load env variables\nDB_URI = os.getenv(\"TEST_DB_URI\")\ndb = mongoengine.connect(host=DB_URI)\n\n# whitelist current IP adress via API Key on MongoDB Atlas\n# source: https://help.pythonanywhere.com/pages/MongoDB/\nprint(\"Whitelisting...\")\natlas_group_id = os.getenv(\"ATLAS_GROUP_ID\")\natlas_public_key = os.getenv(\"ATLAS_PUBLIC_KEY\")\natlas_private_key = os.getenv(\"ATLAS_PRIVATE_KEY\")\n\n# alternative to receive external ip-adress: https://checkip.amazonaws.com, https://ident.me \nip = requests.get('https://ident.me').text.strip()\n\nprint(ip)\n\nresp = requests.post(\n \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{atlas_group_id}/whitelist\".format(atlas_group_id=atlas_group_id),\n auth=HTTPDigestAuth(atlas_public_key, atlas_private_key),\n json=[{'ipAddress': ip, 'comment': 'From PythonAnywhere'}] # the comment is optional\n)\n\nif resp.status_code in (200, 201):\n print(\"MongoDB Atlas whitelist request successful\", flush=True)\nelse:\n print(\n \"MongoDB Atlas whitelist request problem: status code was {status_code}, content was {content}\".format(\n status_code=resp.status_code, content=resp.content\n ),\n\n flush=True\n )\n", "text": "Thank you so much 
@Pavel_Duchovny - it worked! To make this thread complete:\nThe package ipify is still not updated and will not work for Python Versions newer than 3.8.\nI posted an updated version of the code which will work on Python 3.8 and used request to ident.me to receive my external ip-adress. I also updated the naming of the variables necessary for authentication as we use private - public key pair and not username - api key. This ressource from the documentation helped me to understand a little bit better what is going on:https://docs.atlas.mongodb.com/reference/api/whitelist-add-oneThe API Key can be created on the Project Level only - Organization Level is not necessary:\nScroll down to Create One API Key for One Project in https://docs.atlas.mongodb.com/configure-api-access", "username": "Philipp_Wuerfel" }, { "code": "", "text": "I still have a question though:\nAs PythonAnywhere IP adress is not static I can’t whitelist with the standard way. Allowing “access from anywhere” is not good practise and to handle this I use an API Key to request a whitelist entry from application level. Still I need to add the IP adress from the instance on PythonAnywhere to the IP Access List of the generated API Key. This again only works if I allow “access from anywhere” or at least a range of all IP adresses my PythonAnywhere instance might get as the IP adress is not static.Maybe I get this wrong and I am curious about the advantage here. I basically only allow access via dbuser if my application with the api key was able to add it’s external IP to the whitelist.The “hacker” now needs to steal private and public key value pair to open database network access and than also needs to steal dbuser - password pair to finally gain access to my db. Did I get this right?", "username": "Philipp_Wuerfel" }, { "code": "", "text": "@Philipp_Wuerfel, yea this workaround they offer is not super secure. Therefore, if you could run this application in a cloud container VPC peered to your cloud region it will be much better.Btw you can outsource the code that whitelist Ips to a realm webhook and keep your keys as secret.If you want to explore this way I recommend looking into my blog\nhttps://www.mongodb.com/article/building-service-based-atlas-managementThe idea is highlighted there for a different problem but its similar. So once you have this webhook your python app can run it providing the IP.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_DuchovnyThank you! I will take a look at it!\nFor now I wrote a script that regulary checks the ip ranges of aws and it notifies me in case of any changes.\nIt compares an update on https://ip-ranges.amazonaws.com/ip-ranges.jsonPythonAnywhere uses only us-east-1 or eu-central-1 so I limited the range of allowed ip adresses on the access list based on all possible ip ranges from these two server locations. If it needs an update I can run an update script to update my ip access list. It is not the most secure solution but at least better than allowing access from anywhere. In future I might move away to a service provider allowing me to set up a static external ip adress from which I can run my application from. I wanted to start my project on PythonAnywhere to have things simple in setup and costs.", "username": "Philipp_Wuerfel" }, { "code": "", "text": "@Pavel_DuchovnyThanks again! 
I used your blog and now I think I came up with a solid solution.The application on PythonAnywhere connects to a webhook created with the Realm service and secured by payload signature. On the webhook I have the api keys as secrets and this requests an ip whitelist for network access to my database for my application.Now I am independent from ip changes on PythonAnywhere while still having a solid security on my database as I only allow access from a single ip. This setup should be enough for prototyping.You have been a great help! ", "username": "Philipp_Wuerfel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
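To make the webhook idea concrete, here is a rough sketch of what such a Realm webhook function could look like. It assumes the Realm HTTP service supports digest authentication via a digestAuth flag and that the Atlas keys are stored as Values/Secrets; the value names, request body shape, and endpoint are placeholders taken from this thread, so check the Realm and Atlas API documentation before relying on it.

```js
// Realm incoming-webhook function: adds the caller-supplied IP to the Atlas whitelist
exports = async function (payload, response) {
  const body = EJSON.parse(payload.body.text()); // expects e.g. { "ip": "1.2.3.4" }

  const groupId = context.values.get("atlasGroupId");       // assumed Value names
  const publicKey = context.values.get("atlasPublicKey");
  const privateKey = context.values.get("atlasPrivateKey"); // backed by a Secret

  const result = await context.http.post({
    url: "https://cloud.mongodb.com/api/atlas/v1.0/groups/" + groupId + "/whitelist",
    username: publicKey,
    password: privateKey,
    digestAuth: true,                                        // assumption: Atlas API digest auth
    body: [{ ipAddress: body.ip, comment: "From PythonAnywhere" }],
    encodeBodyAsJSON: true
  });

  response.setStatusCode(result.statusCode);
};
```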
Secure way to connect to MongoDB Atlas from PythonAnywhere
2021-01-14T11:50:37.835Z
Secure way to connect to MongoDB Atlas from PythonAnywhere
4,792
null
[ "aggregation", "performance" ]
[ { "code": ".aggregate(pipeline, {explain: true})", "text": "I’m tracing down slow queries and have some raw execution plans, generated from .aggregate(pipeline, {explain: true}).Is there a tool somewhere - so I can import the raw JSON and get the information visualized - or do I need to read the JSON raw?", "username": "Alex_Bjorlig" }, { "code": "", "text": "Hi @Alex_Bjorlig,I was working on a similar plugin to compass to visualise explain from raw but it was never published.Can I ask you to open a https://feedback.mongodb.com so I can try and push this option forward?CC: @Massimiliano_Marcon Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny .Do you envision this is a feature of Compass?Do you know if it’s possible to explain an aggregation query in Compass today?", "username": "Alex_Bjorlig" }, { "code": "", "text": "Hi @Alex_Bjorlig,As far as I recall its only available in query document tab and not in aggregation builder.But my feature idea was to parse both aggregation and query.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Ok, nice! And would that be inside compass? (I just wan’t to make sure to put the feature request in the right place)", "username": "Alex_Bjorlig" }, { "code": "", "text": "Well you don’t have to explicitly say it , jist mention you would love to see an explain plan visualise tool like the one in compass query today ", "username": "Pavel_Duchovny" }, { "code": "", "text": "It’s here ", "username": "Alex_Bjorlig" } ]
Can I visualize a queryplan if I have the raw JSON?
2021-01-20T11:55:16.765Z
Can I visualize a queryplan if I have the raw JSON?
2,214
null
[ "mongoose-odm" ]
[ { "code": "GameSession : {\n _id: \"e055dbe445dda0332dc7cbd4c\"\n amount: 500,\n location: {type: mongoose.Schema.ObjectId, ref: 'Location'},\n madeBy: {type: mongoose.Schema.ObjectId, ref: 'User'},\n}\n\nUser: {\n _id: \"e055dfesd445dda0332dc7et54ge\"\n points: 5000\n}\nlet sessions = await GameSession.find({location: location._id}).populate('madeBy', '_id points').exec()sessions.madeBy.points = 200", "text": "I get the sessions and I populate the documentlet sessions = await GameSession.find({location: location._id}).populate('madeBy', '_id points').exec()I get the data correctly. Now if I make some changes to the populated data like…sessions.madeBy.points = 200Is there any way to save this change?", "username": "Soulaimane_Benmessao" }, { "code": "", "text": "I would look into the $set operator to update a document. Here you can set a new value of an existing field or add a new field to a document.It looks like you’re using mongoose they have an example of use $set.", "username": "tapiocaPENGUIN" } ]
Save/update changes of a populated document
2021-01-20T17:28:56.418Z
Save/update changes of a populated document
8,383
null
[ "java" ]
[ { "code": "MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 65017072099 not found on server", "text": "Hi everyone , lately i’ve developed a talend job where i read data from index and apply transformation and store it in another index , my problem is that when my job reaches 100k rows it gives me this error:MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 65017072099 not found on serveri noticed also that the number of rows read decreases with time before it returns the error!!Any idea whats wrong?", "username": "Heider" }, { "code": "com.mongodb.MongoCursorNotFoundException", "text": "Hello @Heider, welcome to the MongoDB Community forum.com.mongodb.MongoCursorNotFoundException definition says just that much - there is no additional information in the API docs. It is more likely that the cursor timed out.I browsed thru some online posts and they suggest the same. cursorTimeoutMillis is the Server Parameter that can be used to set the cursor timeout threshold. Increasing the value is likely to solve the issue.", "username": "Prasad_Saya" } ]
MongoCursorNotFoundException
2021-01-20T10:55:55.377Z
MongoCursorNotFoundException
5,465
null
[ "atlas-device-sync", "react-native" ]
[ { "code": "failed to validate upload changesets: failed to validate ArrayInsert instruction: cannot have index (1) greater than prior-size (0) (ProtocolErrorCode=212)ios vRealmJS/10.1.3react-native vRealmJS/10.0.1", "text": "I just upgraded Realm to 10.1.3 in my React Native app after encountering this error:failed to validate upload changesets: failed to validate ArrayInsert instruction: cannot have index (1) greater than prior-size (0) (ProtocolErrorCode=212)When testing on iOS simulators (I’ve tried 3-4 of them) and Android simulators, the requests come in with the correct SDK version in the Realm logs (e.g. ios vRealmJS/10.1.3), but when building the release version onto my device, the SDK version comes in as react-native vRealmJS/10.0.1. I tried nuking node_modules, pods, pod caches, build folders, and everything else I could think of, but still whenever I either build the app onto my device or generate an archive via Xcode, I still get this 10.0.1 version in the logs, which comes with the error mentioned above. Everything is resolved when building onto simulators, both in debug and release modes, but for some reason pushing it to the device or through app store connect doesn’t get the updated 10.1.3 version.Would really appreciate any guidance on this - I’ve been wrestling with this error all day and somehow releasing this Realm update has ended up being the roadblock.", "username": "Peter_Stakoun" }, { "code": "", "text": "My best guess is that your caches haven’t been cleared, and your app is still using v10.0.1. I suggest that you clear the iOS build folder and the Metro’s cache and rebuild the entire app.", "username": "Kenneth_Geisshirt" }, { "code": "failed to validate upload changesets: failed to validate ArrayInsert instruction: cannot have index (1) greater than prior-size (0) (ProtocolErrorCode=212)SDK: react-native vRealmJS/10.0.1npx react-native run-ios --configuration Release --devicenpx react-native run-ios --configuration ReleaseSDK: ios vRealmJS/10.1.3", "text": "I just tried clearing DerivedData and resetting metro cache. Same error:\nfailed to validate upload changesets: failed to validate ArrayInsert instruction: cannot have index (1) greater than prior-size (0) (ProtocolErrorCode=212) and SDK: react-native vRealmJS/10.0.1.\nTo build it I ran npx react-native run-ios --configuration Release --device. I then tried npx react-native run-ios --configuration Release (to build for simulator instead of device) and the simulator build worked perfectly fine with SDK: ios vRealmJS/10.1.3. Nothing changed between running those two builds.", "username": "Peter_Stakoun" }, { "code": "ios vRealmJS/10.1.2", "text": "We just tested this update on a different device and that one is coming in with SDK version ios vRealmJS/10.1.2. What’s interesting is that both of these incorrect versions were the first versions installed on the respective devices. 
Could the realm SDK version not be upgrading past the first version of realm installed on the device being used?", "username": "Peter_Stakoun" }, { "code": "", "text": "Hi Peter, have you tried uninstalling the old version of your app from the devices before installing the new version?", "username": "Andrew_Morgan" }, { "code": "", "text": "My guess is that uninstalling and reinstalling would probably fix it (I can give it a try now), but this doesn’t seem like a sustainable approach to pushing out updates, which is why I’m hoping to find the underlying cause.", "username": "Peter_Stakoun" }, { "code": "", "text": "Agreed - but it’s another data point.", "username": "Andrew_Morgan" }, { "code": "", "text": "Yep, uninstalling the app from the device and reinstalling the exact same build fixed the issue and requests are now coming in with SDK version 10.1.3.", "username": "Peter_Stakoun" }, { "code": "pod install", "text": "You might have to run pod install after upgrading (and remove the iOS build folder) and rebuild your app. In order to provide hot reload, React Native is caching a lot.", "username": "Kenneth_Geisshirt" }, { "code": "ios/buildDerivedData", "text": "I’ve tried clearing every cache and build folder I could find, and this issue persists (I also don’t have an ios/build folder (not entirely sure why), so I’m clearing the files out of DerivedData).What makes me think this isn’t a cache issue is that after archiving and uploading the app to app store connect, I can install it on two different devices and get two different realm versions based on what those devices previously had installed, although the build is the exact same.", "username": "Peter_Stakoun" } ]
Logs showing SDK version "react-native vRealmJS/10.0.1" coming from device after upgrading to 10.1.3
2021-01-17T05:42:19.416Z
Logs showing SDK version &ldquo;react-native vRealmJS/10.0.1&rdquo; coming from device after upgrading to 10.1.3
2,990
https://www.mongodb.com/…4_2_1024x512.png
[ "upgrading" ]
[ { "code": "mongodumpmongorestore", "text": "Hi,I’m planning to upgrade my application from using MongoDB 3.0.6 to MongoDB 4.4.The documentation states that:To upgrade from a version earlier than the 4.2-series, you must successively upgrade major releases until you have upgraded to 4.2- series. For example, if you are running a 4.0-series, you must upgrade first to 4.2 before you can upgrade to 4.4.This means that I need to upgrade from 3.0.6, to 3.2, 3.4, 3.6, 4.0, 4.2 and finally to 4.4I’m wondering if dumping the database using mongodump, then updating directly to mongoDB 4.4, and finally restoring the database using mongorestore can work, and if there any disadvantages or complications we need to be aware of. Any ideas?Thanks a lot!", "username": "Evan_L" }, { "code": "mongodumpmongorestoremongorestoremongorestoremongodumpmongorestore", "text": "I’m wondering if dumping the database using mongodump , then updating directly to mongoDB 4.4, and finally restoring the database using mongorestore can work, and if there any disadvantages or complications we need to be aware of. Any ideas?Hi @Evan_L,Successive in-place upgrades are the most thoroughly tested approach and will minimise downtime.However, if downtime is acceptable (and you’re open to hitting a few fixable speedbumps along the way), mongorestore is definitely a possibility.Please see my comment on Replace mongodb binaries all at once? - #3 by Stennie_X for more elaboration on alternatives to in-place upgrades.The mongorestore path is the last option discussed in my comment:If downtime is acceptable and you have a large number of major releases to upgrade through, you can also consider a migration using mongodump and mongorestore .This approach will require more testing and patience because you are still subject to major version compatibility changes and will encounter different issues depending on the provenance of your data. This approach will also not support upgrading user & auth schema, which is supported via the usual in-place upgrade path.Unlike an in-place upgrade, dump & restore will recreate all data files and indexes so you may run into some (fixable) errors. The most likely complaints will be due to stricter validation of index and collection options which would not cause an issue for an in-place upgrade. I definitely recommend testing this procedure in a staging/QA environment with a copy of your production data to ensure there are no unexpected issues that might otherwise delay your production upgrade.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upgrading from MongoDB 3.0 to 4.4
2021-01-20T13:11:39.848Z
Upgrading from MongoDB 3.0 to 4.4
4,262
null
[]
[ { "code": "\t\t\t$search : {\n\t\t\t\tindex : 'default',\n\t\t\t\tcompound: {\n\t\t\t\t\tmust: [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t'autocomplete': {\n\t\t\t\t\t\t\t\tquery: _options.query,\n\t\t\t\t\t\t\t\tpath: 'plainText',\n\t\t\t\t\t\t\t\tfuzzy: {\n\t\t\t\t\t\t\t\t\tmaxEdits: 2,\n\t\t\t\t\t\t\t\t\tprefixLength: 3,\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tequals: {\n\t\t\t\t\t\t\t\tpath: 'searchAccessList',\n\t\t\t\t\t\t\t\tvalue: _userID,\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$project: {\n\t\t\t\t'searchAccessLists': 0\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$skip: _options.skip || _options.defaultSkip\n\t\t},\n\t\t{\n\t\t\t$limit: Math.min((_options.limit || _options.defaultLimit), _options.maxLimit)\n\t\t},\n{\n\"mappings\": {\n\t\"dynamic\": false,\n\t\"fields\": {\n\t\t\"plainText\": {\n\t\t\t\"minGrams\": 3,\n\t\t\t\"tokenization\": \"nGram\",\n\t\t\t\"type\": \"autocomplete\"\n\t\t},\n\t\t\"searchAccessList\": {\n\t\t\t\"type\": \"objectId\"\n\t\t},\n\t\t\"tags\": {\n\t\t\t\"analyzer\": \"lucene.keyword\",\n\t\t\t\"type\": \"string\"\n\t\t}\n\t}\n}\n", "text": "Hi, I have working pipeline. However when I add $project stage it does not work. ‘false’ instead of ‘0’ does not work neither. Moving $project stage after $skip or $limit does not work.Does anyone know what can be the problem here?Pipeline is set up like this:Index definition:}", "username": "richtone" }, { "code": "searchAccessListssearchAccessList", "text": "May be it is because you havesearchAccessListsvssearchAccessList", "username": "steevej" }, { "code": "", "text": "Yes, sorry I found it right after submitting this thread, however I couldnt delete it before approval.", "username": "richtone" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$project stage not working
2021-01-20T10:08:27.144Z
$project stage not working
3,735
null
[ "devops" ]
[ { "code": "", "text": "I have installed and using MongoDB in my cloud hosting. While after i moved my domain into Cloudflare DNS, I’m not able to access the mongodb. Rest of all files and msql are be accessible.When i disabled the Proxied option and enable DNS Only. Its works all fine.Do I need to add any other settings to access mongodb from xcloudflare proxiedplease help.", "username": "Ajith_Prakash" }, { "code": "", "text": "Hi @Ajith_Prakash,Per the Cloudflare FAQ:By default, only A and CNAME records that handle web traffic (HTTP and HTTPs) can be proxied to Cloudflare. All other DNS records should be toggled to a gray cloud.The MongoDB Wire Protocol is a binary protocol and not suitable for HTTP/HTTPS proxying.You should use the “DNS Only” option and secure your deployment using available measures in the MongoDB Security Checklist. At a minimum you should enable & configure access control, enable TLS for network encryption, and limit network exposure via your firewall settings. Ideally your database deployment should only be directly accessible from a limited range of originating IPs (members of the same MongoDB deployment, application servers, and administrative hosts).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "We setup a firewall rule like below in our MongoDB Serversudo firewall-cmd --permanent --zone=public --add-rich-rule=‘rule family=“ipv4” source address=“MONGODB_SERVER_IP_HERE” port protocol=“tcp” port=“27017” accept’After we are able to connect our mongo db server remotely but while we enable the PROXIED options from cloudflare we couldn’t establish the connection.We have to enable the PROXIED option then only we are covered by CLOUDFLARE CDN and WAFplease provide a support. we are not able to see any similar public post, cloud flare community", "username": "Ajith_Prakash" }, { "code": "", "text": "Hi @Ajith_Prakash,Per the earlier note I quoted from the Cloudflare FAQ:By default, only A and CNAME records that handle web traffic (HTTP and HTTPs) can be proxied to CloudflareThe MongoDB Wire Protocol is a binary protocol and not suitable for use with HTTP/HTTPS proxying, CDNs (Content Delivery Networks), or Cloudflare’s WAF (Web Application Firewall).All of the options you are asking about are for web application security using HTTP or HTTPS protocols with text payloads that can be cached and/or inspected. You likely won’t find similar public discussion because this is not an applicable configuration for a database deployment.You should use Cloudflare’s “DNS Only” option and secure your database deployment using available measures in the MongoDB Security Checklist .Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "thanks @Stennie_X, for your reply.As per Cloud Flare, If we connect mongo db using a non proxied sub domain will work.But we are using a PHP library to connect the mongo db with our application but the library is only supporting IP address to connect mongo db, is any option to connect mongo db via a subdomain hostname. (eg: mongo.example.com)So it help me to use Primary Domain as proxied and subdomain (DNS only) can be connect to mongo along with the security which you mentioned.thanks\nAjith", "username": "Ajith_Prakash" }, { "code": "", "text": "But we are using a PHP library to connect the mongo db with our application but the library is only supporting IP address to connect mongo db, is any option to connect mongo db via a subdomain hostname. 
(eg: mongo.example.com)Hi @Ajith_Prakash,You can use hostnames or IPs in a standard MongoDB Connection String, but hostnames are more common for production applications.If you’re having trouble working out the connection syntax, please provide more information:Thanks,\nStennie", "username": "Stennie_X" } ]
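To illustrate Stennie's last point, a hostname works anywhere an IP does in a standard connection string. Shown here with the Node.js driver purely as an example (the same URI works from the PHP driver); the credentials and subdomain are placeholders.

```js
const { MongoClient } = require("mongodb");

// mongo.example.com would be the grey-cloud ("DNS only") subdomain pointing at the database host
const uri = "mongodb://appUser:<password>@mongo.example.com:27017/mydb?tls=true&authSource=admin";

const client = new MongoClient(uri);
client.connect(); // returns a promise in recent driver versions
```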
MongoDB not accessible from cloudflare Proxied
2020-11-05T17:43:54.659Z
MongoDB not accessible from cloudflare Proxied
4,913
null
[]
[ { "code": "Schema.index({ expire: 1 }, { expireAfterSeconds: 604800 })\nSchema.updateMany(query, { $set: { expire: Date.now() } })\n", "text": "I have made a TTL index on my Schema using:When I want documents to be deleted after the 7 days I use:Is this the correct way? Is there an easier way to delete specific documents after 7 days?\nWhen I want a document to expire do I need to set the TTL index value to a date or could I use a different value?", "username": "mental_N_A" }, { "code": "", "text": "Hi @mental_N_A,The ttl index is a good way to expire documents gradually like you mentioned.You can have the expire time set to seconds to live only and you can extend it via collMod command.There is a trick where you can set the expireAfterSeconds to 0 and than once the time specified on your field (eg. expire) is crossed the document is eligible for deletion immediately.So you can write each document with expire of now + 7 days , during document creation, and no need to bulk update.Let me know if that helps.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Deleting specific documents after 7 days
2021-01-20T12:24:10.835Z
Deleting specific documents after 7 days
4,844
null
[ "replication" ]
[ { "code": "{\n \"set\" : \"REPLICASETNAME\",\n \"date\" : ISODate(\"2021-01-18T06:32:58.873Z\"),\n \"myState\" : 2,\n \"term\" : NumberLong(282),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 4,\n \"writeMajorityCount\" : 4,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"lastCommittedWallTime\" : ISODate(\"2021-01-18T03:37:14.660Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"readConcernMajorityWallTime\" : ISODate(\"2021-01-18T03:37:14.660Z\"),\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"lastAppliedWallTime\" : ISODate(\"2021-01-18T03:37:14.660Z\"),\n \"lastDurableWallTime\" : ISODate(\"2021-01-18T03:37:14.660Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1610941034, 1),\n \"lastStableCheckpointTimestamp\" : Timestamp(1610941034, 1),\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"SERVER1:27017\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:57.494Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"Couldn't get a connection within the time limit\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1\n },\n {\n \"_id\" : 1,\n \"name\" : \"SERVER1:27018\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:57.493Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"Couldn't get a connection within the time limit\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1\n },\n {\n \"_id\" : 2,\n \"name\" : \"SERVER1:27019\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:57.494Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"Couldn't get a connection within the time limit\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1\n },\n {\n \"_id\" : 3,\n \"name\" : \"SERVER1:27020\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : 
NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:57.493Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"Couldn't get a connection within the time limit\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1\n },\n {\n \"_id\" : 4,\n \"name\" : \"SERVER2:27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 563,\n \"optime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"optimeDate\" : ISODate(\"2021-01-18T03:37:14Z\"),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"could not find member to sync from\",\n \"configVersion\" : 80072,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 5,\n \"name\" : \"server2:27018\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 441,\n \"optime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"optimeDate\" : ISODate(\"2021-01-18T03:37:14Z\"),\n \"optimeDurableDate\" : ISODate(\"2021-01-18T03:37:14Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:58.568Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2021-01-18T06:32:58.653Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 80072\n },\n {\n \"_id\" : 6,\n \"name\" : \"SERVER2:27020\",\n \"health\" : 1,\n \"state\" : 7,\n \"stateStr\" : \"ARBITER\",\n \"uptime\" : 441,\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:58.546Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2021-01-18T06:32:58.087Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 80072\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1610941034, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"JnS/QEMfZgZaDZezG44AVJ5yod4=\"),\n \"keyId\" : NumberLong(\"6859706153517973505\")\n }\n },\n \"operationTime\" : Timestamp(1610941034, 1)\n}\n{\n \"_id\" : \"REPLICASETNAME\",\n \"version\" : 68,\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"SERVER1:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 10,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 1,\n \"host\" : \"SERVER1:27018\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 5,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 2,\n \"host\" : \"SERVER1:27019\",\n \"arbiterOnly\" : true,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 0,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 3,\n \"host\" : \"SERVER1:27020\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 5,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n 
},\n {\n \"_id\" : 4,\n \"host\" : \"SERVER2:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 5,\n \"host\" : \"SERVER2:27018\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 6,\n \"host\" : \"SERVER2:27020\",\n \"arbiterOnly\" : true,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 0,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"5e450995dc745aa0d45e8d74\")\n }\n}\n2021-01-18T06:36:26.998+0000 I ELECTION [replexec-7] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:36:36.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27020\n2021-01-18T06:36:36.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27018\n2021-01-18T06:36:36.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27017\n2021-01-18T06:36:36.994+0000 I CONNPOOL [Replication] Connecting to SERVER1:27019\n2021-01-18T06:36:37.192+0000 I ELECTION [replexec-7] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:36:48.172+0000 I ELECTION [replexec-7] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:36:56.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27018\n2021-01-18T06:36:56.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27017\n2021-01-18T06:36:56.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27020\n2021-01-18T06:36:56.994+0000 I CONNPOOL [Replication] Connecting to SERVER1:27019\n2021-01-18T06:36:59.118+0000 I ELECTION [replexec-6] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:37:06.544+0000 I NETWORK [conn173] end connection 127.0.0.1:34860 (156 connections now open)\n2021-01-18T06:37:09.613+0000 I ELECTION [replexec-6] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:37:16.159+0000 I NETWORK [listener] connection accepted from 172.28.0.1:47278 #174 (157 connections now open)\n2021-01-18T06:37:16.160+0000 I NETWORK [conn174] received client metadata from 172.28.0.1:47278 conn174: { application: { name: \"MongoDB Shell\" }, driver: { name: \"MongoDB Internal Client\", version: \"4.4.0\" }, os: { type: \"Linux\", name: \"Ubuntu\", architecture: \"x86_64\", version: \"18.04\" } }\n2021-01-18T06:37:16.190+0000 I ACCESS [conn174] Successfully authenticated as principal vvgcaameig on admin from client 172.28.0.1:47278\n2021-01-18T06:37:16.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to 
SERVER1:27017\n2021-01-18T06:37:16.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27020\n2021-01-18T06:37:16.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27018\n2021-01-18T06:37:16.994+0000 I CONNPOOL [Replication] Connecting to SERVER1:27019\n2021-01-18T06:37:19.924+0000 I ELECTION [replexec-6] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:37:24.291+0000 I NETWORK [conn174] end connection 172.28.0.1:47278 (156 connections now open)\n2021-01-18T06:37:30.959+0000 I ELECTION [replexec-7] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:37:36.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27020\n2021-01-18T06:37:36.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27018\n2021-01-18T06:37:36.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27017\n2021-01-18T06:37:36.994+0000 I CONNPOOL [Replication] Connecting to SERVER1:27019\n2021-01-18T06:37:38.720+0000 I NETWORK [listener] connection accepted from SERVER2IP:31560 #175 (157 connections now open)\n2021-01-18T06:37:38.720+0000 I NETWORK [conn175] received client metadata from SERVER2IP:31560 conn175: { application: { name: \"MongoDB Shell\" }, driver: { name: \"MongoDB Internal Client\", version: \"4.4.0\" }, os: { type: \"Linux\", name: \"Ubuntu\", architecture: \"x86_64\", version: \"18.04\" } }\n2021-01-18T06:37:38.757+0000 I ACCESS [conn175] Successfully authenticated as principal vvgcaameig on admin from client SERVER2IP:31560\n2021-01-18T06:37:41.175+0000 I ELECTION [replexec-6] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:37:51.868+0000 I ELECTION [replexec-7] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2021-01-18T06:37:56.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27018\n2021-01-18T06:37:56.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27017\n2021-01-18T06:37:56.648+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to SERVER1:27020\n2021-01-18T06:37:56.994+0000 I CONNPOOL [Replication] Connecting to SERVER1:27019\n\n\n\n\n\n\n\n\n\n\n\n\n{\n \"set\" : \"REPLICASETNAME\",\n \"date\" : ISODate(\"2021-01-18T06:32:58.873Z\"),\n \"myState\" : 2,\n \"term\" : NumberLong(282),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 4,\n \"writeMajorityCount\" : 4,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"lastCommittedWallTime\" : ISODate(\"2021-01-18T03:37:14.660Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"readConcernMajorityWallTime\" : ISODate(\"2021-01-18T03:37:14.660Z\"),\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"lastAppliedWallTime\" : ISODate(\"2021-01-18T03:37:14.660Z\"),\n \"lastDurableWallTime\" : ISODate(\"2021-01-18T03:37:14.660Z\")\n },\n 
\"lastStableRecoveryTimestamp\" : Timestamp(1610941034, 1),\n \"lastStableCheckpointTimestamp\" : Timestamp(1610941034, 1),\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"SERVER1:27017\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:57.494Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"Couldn't get a connection within the time limit\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1\n },\n {\n \"_id\" : 1,\n \"name\" : \"SERVER1:27018\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:57.493Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"Couldn't get a connection within the time limit\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1\n },\n {\n \"_id\" : 2,\n \"name\" : \"SERVER1:27019\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:57.494Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"Couldn't get a connection within the time limit\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1\n },\n {\n \"_id\" : 3,\n \"name\" : \"SERVER1:27020\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:57.493Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"Couldn't get a connection within the time limit\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1\n },\n {\n \"_id\" : 4,\n \"name\" : \"SERVER2:27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 563,\n \"optime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"optimeDate\" : ISODate(\"2021-01-18T03:37:14Z\"),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"could not find member to sync from\",\n \"configVersion\" : 80072,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 5,\n 
\"name\" : \"dev.instasafe.io:27018\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 441,\n \"optime\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1610941034, 1),\n \"t\" : NumberLong(282)\n },\n \"optimeDate\" : ISODate(\"2021-01-18T03:37:14Z\"),\n \"optimeDurableDate\" : ISODate(\"2021-01-18T03:37:14Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:58.568Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2021-01-18T06:32:58.653Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 80072\n },\n {\n \"_id\" : 6,\n \"name\" : \"dev.instasafe.io:27020\",\n \"health\" : 1,\n \"state\" : 7,\n \"stateStr\" : \"ARBITER\",\n \"uptime\" : 441,\n \"lastHeartbeat\" : ISODate(\"2021-01-18T06:32:58.546Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2021-01-18T06:32:58.087Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 80072\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1610941034, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"JnS/QEMfZgZaDZezG44AVJ5yod4=\"),\n \"keyId\" : NumberLong(\"6859706153517973505\")\n }\n },\n \"operationTime\" : Timestamp(1610941034, 1)\n}\n", "text": "So totally I have 7 replica members. 4 in SERVER1 and 3 in SERVER2. Due to some issue, SERVER1 has shutdown. Although 3 members still remain in SERVER2 (2S-1A) election is not taking place and I am not sure why.This is the rs.status() in the secondary in SERVER2This is the rs.conf()The logs of both SERVER2 secondary 1 and 2 looks like thisAs per the logs it keeps trying to connect to SERVER1 although it is down. Why not try SERVER2 members itself? There is 2 Secondaries and 1 Arbiter and it can easily do an election", "username": "Shrinidhi_Rao" }, { "code": "", "text": "Hello @Shrinidhi_Rao,In a replic-set, in case a primary node goes down, one of the secondary nodes take its place, as primary; this process is called as failover. The failover mechanism also means that a majority of the nodes must be available for a primary to be elected. The available nodes decide which node becomes a primary, thru the process of election.In a seven member replica-set (with all members eligible to vote) the majority is 4. There must be majority voting members for an election to take place and elect a new primary.In your case, without the 4 voting members (of SERVER1) the election is not possible with just the remaining 3 members to elect a new primary.", "username": "Prasad_Saya" }, { "code": "", "text": "What @Prasad_Saya said is correct: you need to be able to reach a majority of the voting members in a Replica Set to be able to trigger an election.Extreme example: you have 50 nodes in your RS. Only 7 voting members in it (maximum). If you are unlucky and 4 of these voting members go down, you will be stuck with 46 nodes that can’t elect a Primary unless you reconfigure the Replica Set.You can read more about elections and failover operation in MongoDB’s documentation.Also, I wanted to ask you why you had 4 nodes on a single node and 3 on another node. That’s not a production ready setup because it’s breaks the first principale of the Replica Sets which is High Availability. This is the #1 reason for RS to exist. 
That’s why we usually recommend for production clusters to put the different nodes in 3 different data centers which are completely independent from one another (or at least different regions if you are in a cloud provider, like eu-west-1, eu-west-2 and eu-west-3).So, if you lose SERVER1 for one reason or another, you actually lose 4 nodes and your majority… So even if your Primary was on SERVER2 when this happens, because it can no longer reach the majority of the nodes, it will step down and become secondary and you will be in the situation you are in at the moment: read only. Which means your cluster isn’t Highly Available, because just one failing hard drive can bring everything down.Also, SERVER1 has to share its hardware (RAM, CPU, SSD, Network) with the 4 mongods that you have running on this machine, which isn’t ideal. It’s really only as good as one mongod with access to 100% of the hardware.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "That was my development server and I was in the midst of moving the 1P-2S-1A replica set from SERVER1 to SERVER2 since server1 is unstable.My idea was to create replicas in server2 and then slowly disconnect the ones in server1 until everything is moved.Not sure if that is the best method but it seemed safe to me. But I did not know that a minimum of 4 is needed. I read the documentation and my interpretation of “majority of the votes” was whoever got the most votes from the remaining voters. Guess I didn’t understand it properly. Still I wonder why, though?", "username": "Shrinidhi_Rao" }, { "code": "", "text": "Thanks, I learnt something new today. Although I still do not understand the reason for this rule.", "username": "Shrinidhi_Rao" }, { "code": "", "text": "A MongoDB Replica Set always needs to be able to reach the majority of the voting nodes to maintain a Primary. So if you have a 3-node RS - 2 need to be alive at all times. If you have 5 nodes, 3. 7 nodes => 4.If that wasn’t the case, let’s imagine the following scenario:\nSERVER 1: 4 nodes.\nSERVER 2: 3 nodes.\nSo 7 nodes total. Let’s imagine I just need 3 to have a Primary instead of 4. Let’s now imagine that you have a network partition between SERVER 1 and 2. But they can still communicate with your driver.On SERVER 1: 4 nodes can still see each other. They can elect a Primary.\nOn SERVER 2: 3 nodes can still see each other. Because we decided for the example that 3 was enough, they will ALSO elect a Primary… You now have your driver connecting to 2 Primaries and writing to one or the other randomly… Soon the content of your database doesn’t make any sense anymore, as you are basically reading and writing randomly on what is now 2 separate and independent Replica Sets.This is called a split-brain issue and you NEVER EVER want that. This is one of the main reasons why this rule exists. It will prevent you from ever having to deal with this situation.In the “real” world on SERVER 2, because these 3 nodes can’t reach the majority of the nodes, they won’t be able to elect a Primary and you are safe to recover once the network partition is fixed. Meanwhile, normal activity can carry on on your SERVER1 with its 4 nodes (but that’s really still a bad idea!).I hope this helps!If you want to learn more, I really recommend that you have a look at this free training on MongoDB University, which covers all these concepts.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. 
Start training with MongoDB University for free today.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
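For anyone who hits this read-only state and needs a primary back before the lost members return, the documented escape hatch is a forced reconfiguration run on one of the surviving members. A rough mongo-shell sketch follows; the host filter is illustrative and should keep only the members that are actually reachable.

```js
// run on a surviving data-bearing member (SERVER2 in this thread)
cfg = rs.conf();

// keep only the reachable members so the remaining set can form its own majority
cfg.members = cfg.members.filter(function (m) {
  return m.host.indexOf("SERVER2") === 0;
});

// force is required because no primary exists; use with care and run it on only one member
rs.reconfig(cfg, { force: true });
```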
Election fails although 3 nodes are available!
2021-01-18T06:51:20.204Z
Election fails although 3 nodes are available!
5,132
null
[ "queries" ]
[ { "code": "", "text": "What does\n“winningPlan” : {\n“stage” : “EOF”means ? id did expect colscan order idxscanThanks", "username": "Kiron_Ip_Yarda" }, { "code": "queryPlanner.namespaceexecutionStats isEOFexecutionStages", "text": "Hi @Kiron_Ip_Yarda,EOF (end-of-file) means no more data can be read. If this is indicated as the winning plan, I expect you haven’t queried the intended namespace (perhaps didn’t use the intended database?) or the target collection has not been created yet. I would confirm the queryPlanner.namespace value in your explain plan is as expected.If collection data exists, an explain with executionStats will include an isEOF indication in executionStages when a stage indicates there are no further results to return.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
"winningPlan" : { "stage" : "EOF"
2021-01-19T20:04:14.620Z
“winningPlan” : { “stage” : “EOF”
5,002
null
[]
[ { "code": "", "text": "MongoDb Enterprise for having the storage engine WiredTiger and In-Memory Storage Engine or only In-Memory Storage Engine?", "username": "Andre_Luiz" }, { "code": "mongod", "text": "Welcome to the MongoDB community @Andre_Luiz!MongoDB Enterprise includes the default WiredTiger storage engine as well an In-Memory Storage Engine and Encrypted Storage Engine. The In-Memory and Encrypted storage engines are both based on WiredTiger, so share some common configuration options, metrics, and behaviours.The In-Memory Storage Engine supports a niche set of use cases where all data and indexes can be kept in memory without persisting to disk – this avoids disk I/O for more predictable latency of database operations, but limits the size of your working set to an in-memory cache. The Encrypted Storage Engine natively encrypts data files at rest, so all data files are fully encrypted from a filesystem perspective and data only exists in an unencrypted state in memory and during transmission.Storage engine configuration is per mongod, but typically all members of a deployment should use the same storage engine configuration with the exception of some In-Memory deployments which may need data persistence.More generally, MongoDB Enterprise includes the same features as MongoDB Community edition with added operational features for enterprise requirements including advanced security (Kerberos & LDAP), auditing, and additional storage engines. See: Upgrade MongoDB Community to MongoDB Enterprise. For production usage MongoDB Enterprise requires an Enterprise Advanced subscription which includes commercial support, operational tooling (Cloud/Ops Manager and Enterprise Operator for Kubernetes), access to on-demand training, and access to other tools like the MongoDB Connector for BI.Regards,\nStennie", "username": "Stennie_X" } ]
MongoDB Enterprise storage engines
2021-01-19T15:19:25.144Z
MongoDB Enterprise storage engines
1,921
null
[]
[ { "code": "{\n \"_id\": 1,\n \"adj\": -5,\n \"grades\": [\n {\n \"grade\": 70,\n \"mean\": 75,\n \"std\": 6\n },\n {\n \"grade\": 75,\n \"mean\": 90,\n \"std\": 4\n },\n {\n \"grade\": 75,\n \"mean\": 85,\n \"std\": 6\n }\n ]\n},{\n \"_id\": 2,\n \"adj\": -4,\n \"grades\": [\n {\n \"grade\": 80,\n \"mean\": 75,\n \"std\": 6\n },\n {\n \"grade\": 77,\n \"mean\": 90,\n \"std\": 3\n },\n {\n \"grade\": 75,\n \"mean\": 85,\n \"std\": 4\n }\n ]\n}\n", "text": "Hi\nI do have collection called “student” which has following data (provided in json format).I can update the “grade” in grades array by usingdb.students.updateMany({},{inc:{\"grades..grade\":-5}})but instead of “-5” I want to use “adj” field of document . How can I do that?\nI tried to usedb.students.updateMany({},{inc:{\"grades..grade\":’$adj’}})but I am getting errorThanks", "username": "Dhruvesh_Patel" }, { "code": "", "text": "Hi @Dhruvesh_Patel,I think you should be able to use aggregation pipeline update with $add instead of the $inc.https://docs.mongodb.com/manual/tutorial/update-documents-with-aggregation-pipeline/Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I am sorry , I have tried few different variations and it’s not working. Is it possible you can provide working example of it?thanks", "username": "Dhruvesh_Patel" }, { "code": "$mapgrades$addgradeadj$mergeObjectsgradedb.students.updateMany({},\n[{\n $set: {\n grades: {\n $map: {\n input: \"$grades\",\n in: {\n $mergeObjects: [\n \"$$this\",\n { grade: { $add: [\"$$this.grade\", \"$adj\"] } }\n ]\n }\n }\n }\n }\n}])\n", "text": "@Dhruvesh_Patel,You can do something like this, $map to iterate loop of grades array, $add to sum grade and adj fields, $mergeObjects to merge current object with updated grade field", "username": "turivishal" }, { "code": "", "text": "Thanks it works. It’s compatible with mongodb version 4.2 but not with 4.0", "username": "Dhruvesh_Patel" }, { "code": "", "text": "You are right it is starting from MongoDB v4.2.", "username": "turivishal" } ]
Update array element with field value of document
2021-01-15T20:50:10.607Z
Update array element with field value of document
2,338
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of 1.4.5 of the MongoDB Go Driver.This release contains several bug fixes. For more information please see the release notes.You can obtain the driver source from GitHub under the 1.4.5 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "benjirewis" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.4.5 Released
2021-01-19T20:17:50.210Z
MongoDB Go Driver 1.4.5 Released
1,794
null
[ "aggregation" ]
[ { "code": "$accumulatorfunction accumulateFn(someState) {\n someState.fooBar = 2;\n return someState;\n}\n// vs\nfunction accumulateFn(someState) {\n return {\n fooBar: 2\n }\n}\n\nstate.path.to.some.object += 3; return state;", "text": "Hi everyone. Thanks for the $accumulator - it’s amazing. We are building some amazing things with it, in combination with the group aggregation step. It feels like super-powers!I was wondering, if it’s safe to do something like the following, where the state is mutated and then returned:(I have some pretty complex $accumulator’s that would be very simple to maintain, if I can do something like state.path.to.some.object += 3; return state;. Thanks in advance ", "username": "Alex_Bjorlig" }, { "code": "", "text": "hi @Alex_Bjorlig, it is safe. The aggregation and Javascript use different formats internally, it’s not holding on to any Javascript values between calls (the accumulator’s internal state is a variable in C++, which has nothing to do with Javascript).", "username": "Katya" }, { "code": "", "text": "Ok - thanks for clarification ", "username": "Alex_Bjorlig" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
When using an $accumulator is it safe to "modify" the existing state and return it?
2021-01-12T14:03:20.383Z
When using an $accumulator is it safe to &ldquo;modify&rdquo; the existing state and return it?
1,575
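For readers who have not used $accumulator before, the sketch below shows the mutate-and-return pattern being discussed. The collection and field names ("games", "team", "score") are placeholders, not from the thread; the accumulator itself follows the documented init/accumulate/merge/finalize shape and requires MongoDB 4.4 or newer.

```javascript
// Group games by team and compute an average score, modifying the state object
// inside accumulate() and returning it, which is the pattern confirmed safe above.
db.games.aggregate([
  {
    $group: {
      _id: "$team",
      avgScore: {
        $accumulator: {
          init: function () { return { count: 0, sum: 0 }; },
          accumulate: function (state, score) {
            state.count += 1;   // mutate the existing state in place...
            state.sum += score;
            return state;       // ...and return the same object
          },
          accumulateArgs: ["$score"],
          merge: function (a, b) {
            return { count: a.count + b.count, sum: a.sum + b.sum };
          },
          finalize: function (state) {
            return state.count > 0 ? state.sum / state.count : 0;
          },
          lang: "js"
        }
      }
    }
  }
]);
```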
null
[ "server" ]
[ { "code": "", "text": "Hi,We run Mongo 3.6.20 (yes, working towards running 4.X) and are encountering a bug in WiredTiger 3.1.1 that appears to have been fixed in WT 3.2.1 based on the bug tracker.\nI’ve looked at the upgrade guide for WiredTiger (WiredTiger: Upgrading WiredTiger applications) and I don’t believe we violate any of the concerns listed.Has anyone run Mongo with an upgraded WiredTiger dependency?\nWe already build our own binaries so there’s not a meaningful overhead for modifying the source.Thanks.", "username": "Nate_Brennand" }, { "code": "", "text": "It may compile (haven’t tried), but the biggest risk is of course that each version of MongoDB vendors and tests with a specific snapshot of the WT library. I would certainly not recommend this configuration.", "username": "Daniel_Pasette" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Running Mongo 3.6.20 w/ WiredTiger 3.2.1?
2021-01-14T06:44:20.658Z
Running Mongo 3.6.20 w/ WiredTiger 3.2.1?
2,639
null
[ "queries" ]
[ { "code": "// cleanup\ndb.incTest.drop();\n\n// insert testdata\ndb.incTest.insert([\n { '_id' : 1, 'key' : 'lastGivenSystemID', 'value' : 1234567890123456 }\n]);\n\n// update SystemID by +1\ndb.incTest.findOneAndUpdate(\n { key: \"lastGivenSystemID\" },\n { $inc: { value: 1 } },\n { returnNewDocument: true }\n)\n", "text": "HelloI have a single value which gets constantly updated. I use findOneAndUptaed since I want to get the field “value” back in my response.\nThe questions are;Is there a simple way to prevent the concurrent updates? Maybe even to the lack of performance e.g. exclusive locking the document?The final goal is that every thread gets a unique SystemID back.", "username": "michael_hoeller" }, { "code": "", "text": "The Atomicity and Transactions documentation says:In MongoDB, a write operation is atomic on the level of a single document, even if the operation modifies multiple embedded documents within a single document.Is there a simple way to prevent the concurrent updates? Maybe even to the lack of performance e.g. exclusive locking the document?In some scenarios you can use Multi-Document Transactions. This requires MongoDB deployments with versions 4.0 (for replica-sets) and 4.2 (for sharded clusters). Note that this feature is not available on standalone deployments.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Prasad_SayaIn some scenarios you can use Multi-Document Transactions. This requires MongoDB deployments with versions 4.0 (for replica-sets) and 4.2 (for sharded clusters).thanks for the update, since the threads are complete independent Mulit-Doc Updates will not help. The use case is that single request hammer on this field, the final goal is that every thread gets a unique SystemID back.Concerning Atomicity: sure a write is atomic but what about the query part of the update ? What happens when two request want to update and “find” the same document. The does the copy on write process allow for two versions of document? If so I think we have optimistic locking her, so the last one wins? But would be return values one would be an update SystemID, the other?? The same SystemID? 
Or will it fail?", "username": "michael_hoeller" }, { "code": "private void go() throws InterruptedException {\n\t// Create an instance of MongoClient with default values - \n\t// with a connection pool of 100 (default for Java driver).\n\ttry(MongoClient mongoClient = MongoClients.create()) {\n\t\tMongoCollection<Document> coll = mongoClient.getDatabase(\"test\").getCollection(\"th1\");\n\t\tBson update = inc(\"value\", new Integer(1));\n\t\tFindOneAndUpdateOptions updateOptions = new FindOneAndUpdateOptions()\n\t\t\t\t\t\t\t\t\t\t\t\t\t.returnDocument(ReturnDocument.AFTER);\n\t\tBson queryFilter = eq(\"key\",\"lastGivenSystemID\");\n\t\ttestWithConcurrency(coll, queryFilter, update, updateOptions);\n\t}\n}\n\t\nprivate void testWithConcurrency(MongoCollection<Document> coll, \n\t\t\t\t\t\t\t\t\tBson queryFilter, \n\t\t\t\t\t\t\t\t\tBson update,\n\t\t\t\t\t\t\t\t\tFindOneAndUpdateOptions updateOptions)\n\t\tthrows InterruptedException {\n\n\tint numberOfThreads = 5000;\n\tExecutorService service = Executors.newFixedThreadPool(20);\n\tCountDownLatch latch = new CountDownLatch(numberOfThreads);\n\n\tfor (int i = 0; i < numberOfThreads; i++) {\n\t\tservice.submit(() -> {\n\t\t\tupdate(coll, queryFilter, update, updateOptions);\n\t\t\tlatch.countDown();\n\t\t});\n\t}\n\tlatch.await();\n\tservice.shutdown();\n\n\t// This verified on different thread pool sizes and number of created threads for update.\n\tSystem.out.println(\">>> Verify numberOfThreads, returnedValues: \" + numberOfThreads + \", \" + set.size());\n}\n\n// NOTE: The results were the same without the synchronization on this method.\nprivate synchronized void update(MongoCollection<Document> coll, \n\t\t\t\t\t\t\t\t\tBson queryFilter, \n\t\t\t\t\t\t\t\t\tBson update,\n\t\t\t\t\t\t\t\t\tFindOneAndUpdateOptions updateOptions) {\n\tDocument result = coll.findOneAndUpdate(queryFilter, update, updateOptions);\n\tInteger i = ((Double) result.get(\"value\")).intValue();\n\tset.add(i); // set is an instance of CopyOnWriteArraySet\n\t//System.out.println(\">>> \" + Thread.currentThread() + \": \" + i);\n}", "text": "If 1000 clients (each client as a thread) does this update operation and returns the updated value, will the value be unique each time? Yes, I think it will.I tried the update concurrently on the same operation from a Java client, and after each update I get a new (and unique value) returned by the update.", "username": "Prasad_Saya" }, { "code": "", "text": "Yes, the query and write part of the update are guaranteed to be atomic, so you should not be using any additional locking on the application side, the correct thing will happen server-side.", "username": "Asya_Kamsky" } ]
Preventing concurrent updates to ensure every thread gets a unique ID value
2021-01-15T10:34:09.104Z
Preventing concurrent updates to ensure every thread gets a unique ID value
11,331
null
[ "golang" ]
[ { "code": "type User struct {\n ID string `bson:\"id,$oid\"`\n Name string `bson:\"id\"`\n DateOfBirth int `bson:\"dob,$date,omitemtpy\"`\n LastLogin *time.Time `bson:\"lastLogin,$timestamp,omitempty\"` \n} \n", "text": "Hi Team, thanks for all the work on the driver! Much appreciated!One of the things that can be improved in the driver in my opinion is the type leakage. The driver has an internal type system that is sort of forced onto users. primitive.ID is the most common example. Rather than hiding that fact that mongo runs on [12]byte ids internally, the hex string <–> [12]byte conversion and error handling is pushed out to library users. Ostensibly this is because bson != json, but, at some point all we care about is json–that’s what we are sending across the internet. So, this conversion between bson/extended json/json has become the responsibility of the developer using mongo.Because it complicates writing, testing, and debugging the business logic, I have a strong preference to push this conversion as close to the database layer as possible. That way the json-to-bson and bson-to-json is happening in one place and isn’t strewn throughout other data manipulation code.Ok there are my reasons haha, thanks for getting this far. My suggestion to fix this problem is to implement extended json struct tags. Like this:Struct tags formatted in this way could serve the same purpose as extended json. The json <-> bson translation information is just listed in the struct rather than in UnmarshalBSON functions, or decoders, or extended json translation functions. In tests this would allow users to work with these structs as they would any other. The use of go’s type system in the structs keeps them working with all the other go libraries out there. Let me know if you have any questions. Also, I’d be happy to help write this.", "username": "Dustin_Currie" }, { "code": "", "text": "Also, I haven’t given any thought to the performance implications of doing this. Maybe preformace would make this an unworkable solution.", "username": "Dustin_Currie" }, { "code": "", "text": "I ended up doing this myself. It works great.", "username": "Dustin_Currie" }, { "code": "", "text": "Hi @Dustin_Currie,\nI believe this could be accomplished through the use of custom codecs. But using struct tags to describe specific BSON types does seem convenient.Note, not all BSON types have a single extended JSON keyword: specifications/extended-json.rst at master · mongodb/specifications · GitHub. For example: string/bool/document/array/null do not have explicit keywords. $code is used in both the code and codeWithScope BSON types.I am glad to hear you got this working! If you would like to submit a PR, the Go driver team would be happy to take a look.Sincerely,\nKevin", "username": "Kevin_Albertson" } ]
Extended JSON Struct Tags
2020-12-18T17:59:27.919Z
Extended JSON Struct Tags
3,913
null
[ "atlas-triggers" ]
[ { "code": "{\n \"updateDescription.updatedFields\": {\n \"status\": \"\"\n }\n}\n", "text": "For my USERS collection, I want to trigger a function when a document is inserted or updated, but only if the “status” field of the inserted/updated document is not equal to an empty string.Reading the documentation, I believe with the $match expression, I can look to see if the value of “status” is CHANGED to an empty string using the expression below:But that is not what I want. I want to fire the trigger only if the value for “status” field AFTER update or insert is NOT an empty string. It could have been an empty string before update as well.The solution so far for me is to let the trigger get fired, but then in the function I check for the value of the “status” field to do what I need to do, but that means the trigger will be fired too many unnecessary times since the “status” field has an empty string 80% of the time.", "username": "Akbar_Bakhshi" }, { "code": "{\n \"updateDescription.updatedFields.status\": {\n \"$ne\": \"\"\n }\n}\n", "text": "Hi @Akbar_Bakhshi,You can use the $ne operator:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,\nThanks for the response. However, I think with this expression, the trigger will still be fired if the “status” value was originally an empty string and after the document is updated, it’s still an empty string (the update happened on other fields). Right?What I am trying to accomplish is to check the “status” value AFTER THE UPDATE and no matter what the previous value was, if the new value is equal to an empty string, I do not want to fire the trigger. Is that possible?", "username": "Akbar_Bakhshi" }, { "code": "", "text": "Edit - I think Pavel’s answer is correct here, I made the mistake of thinking this post was the same question asked hereHi Akbar - I don’t believe this is doable today. Do you mind adding it here for our Triggers feedback - Realm: Top (0 ideas) – MongoDB Feedback EngineThis will allow us to track interest around more Triggers flexibility in the way you described", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I see.Ok I just did: Firing a trigger if the added or updated document has a field with specific value.Thank you!", "username": "Akbar_Bakhshi" }, { "code": "{\n'fullDocument.status' : { \"$ne\" : \"\" }\n}\n", "text": "Hi @Akbar_Bakhshi,Have you tried in that case filtering on the main fullDocument (set update Lookup to true):Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,\nI think that’s it. It works now the way I want it to. I did not know that we have access to the fullDocument in the Realm UI. That’s great!Thanks for the help!", "username": "Akbar_Bakhshi" }, { "code": "", "text": "Seems like this is already accounted for. We do have access to fullDocument in the Realm UI and so we can do the check in the advanced section using what Pavel suggested.", "username": "Akbar_Bakhshi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Trigger when a field in document has a specific value
2021-01-16T16:33:53.219Z
Realm Trigger when a field in document has a specific value
3,880
null
[ "c-driver" ]
[ { "code": " ==3726== Invalid read of size 1\n==3726== at 0x4842B60: memmove (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n==3726== by 0x4A81C82: memcpy (string_fortified.h:34)\n==3726== by 0x4A81C82: bson_string_append (bson-string.c:143)\n==3726== by 0x49F9C72: _append_platform_field (mongoc-handshake.c:495)\n==3726== by 0x49F9C72: _mongoc_handshake_build_doc_with_application (mongoc-handshake.c:560)\n==3726== by 0x4A1A11A: _build_ismaster_with_handshake (mongoc-topology-scanner.c:232)\n==3726== by 0x4A1A11A: _mongoc_topology_scanner_get_ismaster (mongoc-topology-scanner.c:263)\n==3726== by 0x4A1A2BF: _begin_ismaster_cmd (mongoc-topology-scanner.c:291)\n==3726== by 0x4A1AC78: mongoc_topology_scanner_node_setup_tcp (mongoc-topology-scanner.c:836)\n==3726== by 0x4A1AF57: mongoc_topology_scanner_node_setup (mongoc-topology-scanner.c:960)\n==3726== by 0x4A1B18E: mongoc_topology_scanner_start (mongoc-topology-scanner.c:1083)\n==3726== by 0x4A15126: mongoc_topology_scan_once (mongoc-topology.c:765)\n==3726== by 0x4A15126: _mongoc_topology_do_blocking_scan (mongoc-topology.c:797)\n==3726== by 0x4A157F4: mongoc_topology_select_server_id (mongoc-topology.c:1030)\n==3726== by 0x49DCD90: _mongoc_cluster_select_server_id (mongoc-cluster.c:2704)\n==3726== by 0x49E169F: _mongoc_cluster_stream_for_optype (mongoc-cluster.c:2750)\nmongoc_init();\nuri = mongoc_uri_new_with_error(\"mongodb://127.0.0.1:27017\", &error);\nreturn mongoc_client_new_from_uri(uri);\nmongoc_client_destroy(mongoClient);\nmongoc_cleanup();\nreturn mongoc_client_get_collection(mongoClient, database, collectionName);\n", "text": "I wrote some funtions that uses mongodb-c-driver. and I tried to write some tests with GTest.in each test I init and destroy mongoc with ( mongoc_init() and mongoc_cleanup()).When I run one test, everything goes fine but when I run two tests or more I’m getting some invalid reads :Do I need to stub all mongoc-driver fuctions to do some unit/integration tests to my functions ?Mongo Init Function :Mongo Cleanup functionFunctions to test", "username": "KamelA" }, { "code": "mongoc_cleanup()mongoc_init()mongoc_init()mongoc_cleanup()", "text": "Hello @KamelA!in each test I init and destroy mongoc with ( mongoc_init() and mongoc_cleanup()).Once mongoc_cleanup() is called, it is invalid to call mongoc_init() again. See Initialization and cleanup — libmongoc 1.23.2mongoc_init() should be called at the start of the process, and mongoc_cleanup() at the end.", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to not stub mongo-c-driver whiling using GTests?
2021-01-18T11:25:04.449Z
Is it possible to not stub mongo-c-driver whiling using GTests?
2,555
null
[ "java" ]
[ { "code": "", "text": "I’m using mongodb-driver-core-3.8.2 and want to stop logging from org.mongodb.driver.cluster. None of the 3 or 4 suggestions found on the Internet work. Most comments on the Internet suggest this: Logger.getLogger(“org.mongodb.driver”).setLevel(Level.SEVERE);\nwhich doesn’t work for whatever reason.In particular, I want to stopBlockquote\n2020-04-23 13:22:22.697 INFO 19154 — [ngodb.net:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address:27017=cluster0-shard-00-01-ue9rr.mongodb.net, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 6]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=48192417, setName=‘Cluster0-shard-0’, canonicalAddress:27017=cluster0-shard-00-01-ue9rr.mongodb.net, hosts=[cluster0-shard-00-00-ue9rr.mongodb.net:27017, cluster0-shard-00-01-ue9rr.mongodb.net:27017, cluster0-shard-00-02-ue9rr.mongodb.net:27017], passives=, arbiters=, primary=‘cluster0-shard-00-02-ue9rr.mongodb.net:27017’, tagSet=TagSet{[Tag{name=‘region’, value=‘US_EAST_1’}, Tag{name=‘provider’, value=‘AWS’}, Tag{name=‘nodeType’, value=‘ELECTABLE’}]}, electionId=null, setVersion=3, lastWriteDate=Thu Apr 23 13:22:20 CDT 2020, lastUpdateTimeNanos=917683913678847}\nBlockquote\nwhich is sent to output every few seconds.", "username": "Brian_Sheely" }, { "code": "LoggerLevelOFFlogger.setLevel(Level.OFF)", "text": "Try using the Logger's Level OFF; this is a special level that can be used to turn off logging. Usage: logger.setLevel(Level.OFF).", "username": "Prasad_Saya" }, { "code": "", "text": "Finally found the solution…logging.level.org.mongodb=warn\nlogging.level.org.springframework.boot.autoconfigure.mongo.embedded=warn", "username": "Brian_Sheely" }, { "code": "", "text": "Unfortunately, that didn’t work either.", "username": "Brian_Sheely" }, { "code": "", "text": "Work for me. Thank you…", "username": "Nuwan_Sameera" }, { "code": "Logger logger = Logger.getLogger(\"org.mongodb.driver\");\nlogger.setLevel(Level.SEVERE);\nLevel.SEVEREOFFLevel.WARN", "text": "To suppress MongoDB Java Driver logging, I tried this and works fine:I used Level.SEVERE instead of OFF, so that the severe errors can be logged and noted, ignoring all the other informational and warning messages. Depending upon your need you can also use other levels like Level.WARN, for example.I think it is not a good idea to turn off logging altogether - some level of logging needs to be retained to capture those errors and warnings.", "username": "Prasad_Saya" } ]
How do you stop Java driver logging?
2020-04-23T19:39:38.073Z
How do you stop Java driver logging?
10,911
null
[ "stitch" ]
[ { "code": "", "text": "I’m using Stitch to create a email/password login system. I’m using the Stitch Email/Password provider with the “user confirmation method” setting of “send a confirmation email”.My app is working to the point where I can submit my app’s register form, then see my entry in the Stitch Users screen as a pending user.After registering, Stitch sends a confirmation email with a link to “Confirm email”.I’ve setup my confirmation page according to this guide:\nhttps://docs.mongodb.com/stitch/authentication/userpass/#confirm-a-new-user-s-email-addressIn the Chrome dev tools, I can see that the token and token ID are being sent, such as:{“token”:“token was here”,“tokenId”:“token id was here”}But, stitch is returning the error:{“error”:“userpass token is expired or invalid”,“error_code”:“UserpassTokenInvalid”,“link”:“https://stitch.mongodb.com/ groups/link continues”}Please let me know if you have any troubleshooting advice.", "username": "Ed_Talmadge" }, { "code": "tokentokenIdUserPasswordAuthProviderClient", "text": "Hi @Ed_Talmadge, welcome!But, stitch is returning the error:It’s been a while since you posted this question, have you found a solution yet ?\nThere are two parts to the error, first is expired token and the second is invalid token. The token is valid within 2 hours, so please make sure that the email that you actioned is the one that is sent by Stitch within the time period.The second case, is invalid token. Could you post the code snippet that you used to pass token and tokenId to UserPasswordAuthProviderClient to confirm ?Regards,\nWan.", "username": "wan" }, { "code": "export const attemptSignupConfirmation = () => async (dispatch: any) => {\n try {\n // Confirm the user's email/password account\n // See: https://docs.mongodb.com/stitch/authentication/userpass/#confirm-a-new-user-s-email-address\n // Parse the URL query parameters\n const url = window.location.search;\n const params = new URLSearchParams(url);\n const token = await params.get(\"token\");\n const tokenId = await params.get(\"tokenId\");\n console.log(`token: ${token}, tokenId: ${tokenId}`);\n if (!token || !tokenId) {\n return;\n }\n\n // Confirm the user's email/password account\n const emailPassClient = Stitch.defaultAppClient.auth.getProviderClient(\n UserPasswordAuthProviderClient.factory\n );\n\n await emailPassClient.confirmUser(token, tokenId);\n console.log(\"dispatch SIGNUP_CONFIRMATION_SUCCESS\");\n dispatch({\n type: SIGNUP_CONFIRMATION_SUCCESS\n });\n } catch (err) {\n // dispatch(setAlert(err, \"danger\"));\n dispatch({\n type: SIGNUP_CONFIRMATION_ERROR\n });\n }\n};\n", "text": "Thanks Wan. No, I have not found a solution yet. To answer your questions:There are two parts to the error, first is expired token and the second is invalid token. The token is valid within 2 hours, so please make sure that the email that you actioned is the one that is sent by Stitch within the time period.After registering, I am immediately opening the confirm user email and clicking the Confirm Email button.The second case, is invalid token. 
Could you post the code snippet that you used to pass token and tokenId to UserPasswordAuthProviderClient to confirm ?Here is the React/Redux action creator I’m using to pass the token and tokenId to UserPasswordAuthProviderClient.\nIt is using the code example from: https://docs.mongodb.com/stitch/authentication/userpass/#confirm-a-new-user-s-email-address", "username": "Ed_Talmadge" }, { "code": "userpass token is expired or invalidconfirmed", "text": "Thank you for the code snippet. That looks correct.\nGiven your code snippet and a valid URL that’s provided by Stitch, the only way I could reproduce the userpass token is expired or invalid is if I tried to execute the same token after it’s been validated/confirm (i.e. executing twice).If you register a new user email/pwd, and you click the link from the email, does the user status changed to confirmed in Stitch’s Users tab ? If so, there’s a possibility that your React code may have executed/triggered twice.If you still encountering this issue, could you provide a minimal reproducible example ?Regards,\nWan.", "username": "wan" }, { "code": "confirmed", "text": "Thanks again Wan. To answer your questions:If you register a new user email/pwd, and you click the link from the email, does the user status changed to confirmed in Stitch’s Users tab ?NoIf you still encountering this issue, could you provide a minimal reproducible example ?Yes, I am still encountering this issue. I will work on a minimal, reproducible example and post it here.", "username": "Ed_Talmadge" }, { "code": "Pending ConfirmationconfirmUser(token, tokenId)Pending ConfirmationPending User LoginconfirmUser(token, tokenId)Pending User Login", "text": "Hi Ed, hope you are well.I may have encountered the same issue you are having and I may have an answer for you:When a user registers themselves for the first time they are assigned a status called Pending Confirmation. Once confirmUser(token, tokenId) has been called, the status moves from Pending Confirmation to Pending User Login, but the user still remains in the Pending category.Subsequent calls to confirmUser(token, tokenId) if the user is already in Pending User Login will result in the error message you mentioned!All that’s left to do is for the user to login and the user will be confirmed.Hope that helps!", "username": "Philip_Blaquiere" } ]
Stitch JavaScript SDK confirmUser method returns: userpass token is expired or invalid
2020-03-13T00:17:14.968Z
Stitch JavaScript SDK confirmUser method returns: userpass token is expired or invalid
4,583
null
[ "java", "android", "kotlin" ]
[ { "code": "", "text": "I use realm-java on Android.\nI am creating a profile function, but I’m not sure how to use Realm correctly.When the profile is renewed I delete the old value and store the new one, but when I fetch the value from Realm, sometimes the old value is returned.To reproduce, my test repository is below, and I attached a video showing the problem. https://github.com/shinsan/realm_test/When the thread id changes, the old value sometimes appears, so if you try to reproduce it, please use a low-memory device such as a Nexus 5 simulator.\nA lower-spec AVD (such as Nexus 4 or 5, RAM 512MB, VM heap 40MB) makes it easier to reproduce because the process id changes more readily.I think the Realm instance is a singleton and transactions are thread-safe, so there should only ever be one value.My setup: Kotlin + Android Studio, Realm Java 10.3", "username": "111168" }, { "code": "", "text": "I solved this; I had simply forgotten to close the Realm instance.", "username": "111168" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting a deleted value from realm-java threads
2021-01-18T03:04:59.046Z
Getting a deleted value from realm-java threads
2,268
null
[ "node-js", "mongodb-shell" ]
[ { "code": "mongoshmaster> mongosh@ evergreen-release /mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh\n> node scripts/evergreen-release.js \"compile\"\n\nmongosh: running command 'compile' with config: {\n version: '0.0.0-dev.0',\n appleNotarizationBundleId: 'com.mongodb.mongosh',\n rootDir: '/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh',\n input: '/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/packages/cli-repl/lib/run.js',\n buildVariant: '',\n execInput: '/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/packages/cli-repl/dist/mongosh.js',\n outputDir: '/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/dist',\n project: undefined,\n revision: undefined,\n branch: undefined,\n isCi: false,\n platform: 'linux',\n repo: { owner: 'mongodb-js', repo: 'mongosh' },\n isPatch: false\n}\n/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/scripts/evergreen-release.js:33\n (err) => process.nextTick(() => { throw err; }));\n ^\n\nError: Segment key is required\n at Object.writeAnalyticsConfig [as default] (/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/packages/build/lib/analytics.js:21:15)\n at Object.generateInput [as default] (/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/packages/build/lib/generate-input.js:10:30)\n at Object.compileExec [as default] (/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/packages/build/lib/compile-exec.js:30:35)\n at release (/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/packages/build/lib/release.js:61:37)\n at async runRelease (/mnt/4bcabbf7-6928-4c6d-bad5-5e01e41e9625/jwoehr/MongoDB/mongosh/scripts/evergreen-release.js:28:3)\nnpm ERR! code ELIFECYCLE\nnpm ERR! errno 1\nnpm ERR! mongosh@ evergreen-release: `node scripts/evergreen-release.js \"compile\"`\nnpm ERR! Exit status 1\nnpm ERR! \nnpm ERR! Failed at the mongosh@ evergreen-release script.\nnpm ERR! This is probably not a problem with npm. There is likely additional logging output above.\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /home/jwoehr/.npm/_logs/2021-01-18T03_49_33_922Z-debug.log\nnpm ERR! code ELIFECYCLE\nnpm ERR! errno 1\nnpm ERR! mongosh@ compile-exec: `npm run evergreen-release compile`\nnpm ERR! Exit status 1\nnpm ERR! \nnpm ERR! Failed at the mongosh@ compile-exec script.\nnpm ERR! This is probably not a problem with npm. There is likely additional logging output above.\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /home/jwoehr/.npm/_logs/2021-01-18T03_49_33_952Z-debug.log\n", "text": "Not having much luck lately building mongosh from the repo master branch.", "username": "Jack_Woehr" }, { "code": "mongoshSEGMENT_API_KEY=\"dummy\" node scripts/evergreen-release.js \"compile\"\n", "text": "@Jack_Woehr as part of some recent changes to simplify our release process, we are now expecting that the API key for the service we are using for telemetry is set. 
To build mongosh for yourself, you should be able to get around that by running:", "username": "Massimiliano_Marcon" }, { "code": " internal/modules/cjs/loader.js:1033\n throw err;\n ^\n\nError: Cannot find module 'adm-zip'\nRequire stack:\n- /work/jwoehr/MongoDB/mongosh/packages/build/lib/tarball.js\n- /work/jwoehr/MongoDB/mongosh/packages/build/lib/compile-and-zip-executable.js\n- /work/jwoehr/MongoDB/mongosh/packages/build/lib/release.js\n- /work/jwoehr/MongoDB/mongosh/packages/build/lib/index.js\n- /work/jwoehr/MongoDB/mongosh/scripts/evergreen-release.js\n at Function.Module._resolveFilename (internal/modules/cjs/loader.js:1030:15)\n at Function.Module._load (internal/modules/cjs/loader.js:899:27)\n at Module.require (internal/modules/cjs/loader.js:1090:19)\n at require (internal/modules/cjs/helpers.js:75:18)\n at Object.<anonymous> (/work/jwoehr/MongoDB/mongosh/packages/build/lib/tarball.js:9:35)\n at Module._compile (internal/modules/cjs/loader.js:1201:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1221:10)\n at Module.load (internal/modules/cjs/loader.js:1050:32)\n at Function.Module._load (internal/modules/cjs/loader.js:938:14)\n at Module.require (internal/modules/cjs/loader.js:1090:19) {\n code: 'MODULE_NOT_FOUND',\n requireStack: [\n '/work/jwoehr/MongoDB/mongosh/packages/build/lib/tarball.js',\n '/work/jwoehr/MongoDB/mongosh/packages/build/lib/compile-and-zip-executable.js',\n '/work/jwoehr/MongoDB/mongosh/packages/build/lib/release.js',\n '/work/jwoehr/MongoDB/mongosh/packages/build/lib/index.js',\n '/work/jwoehr/MongoDB/mongosh/scripts/evergreen-release.js'\n ]\n}\n", "text": "Thanks, @Massimiliano_Marcon … that seems to error out as follows:", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongosh build failure
2021-01-18T05:34:53.678Z
Mongosh build failure
2,590
null
[ "dot-net" ]
[ { "code": "decimal'Incompatible MarshalAs detected in parameter named 'value'. Please refer to MCG's warning message for more information.'\n", "text": "We’ve decided to update to 10.* version in order to be able to use decimal support and the rest of the new features. We were ready to make the first release with the new SDK, and soon after we made our release build we discovered that the application crashes with the following exception when built in release mode:This only happens in release mode though, both on iOS and UWP. This is a major blocker for as and we would have to revert the effort of 2 weeks of refactoring if we decide to revert the changes. Anything we could do to work around this issue?Here you can find a sample project that demonstrates the issue.", "username": "Gagik_Kyurkchyan" }, { "code": "", "text": "Let’s discuss in the Github issue and I’ll post an update here once we know what’s going on.", "username": "nirinchev" }, { "code": "", "text": "This was resolved and will be fixed with the next release.", "username": "nirinchev" }, { "code": "", "text": "The fix has been release with 10.0.0-beta.5.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm .NET 10.0.0-beta.3 crashes on UWP release build
2020-12-21T10:12:45.430Z
Realm .NET 10.0.0-beta.3 crashes on UWP release build
2,210
null
[ "dot-net", "connecting" ]
[ { "code": " private void Init()\n { \n client = new MongoClient(\"mongodb://localhost:27017\"); \n } \n private bool CheckConnectionToBatabase()\n { \n //verify is the server is up and running and if the client is connected to it?\n var t = client.GetDatabase(\"prodcuts\");\n //return true or false\n } \n\n private void InsertDocument(BsonDocument document)\n {\n if(CheckConnectionToBatabase())\n collection.InsertOne(document);\n }\n\n private void MongoDb_Load(object sender, EventArgs e)\n {\n Init();\n GetDatabase(\"shop\");\n GetCollection(\"products\");\n }", "text": "Hello everyone, i’m new to MongoDB.\nI’m writing a new application in my desktop, but i cannot undestand how to check if the client is connected to the server before to make any query.\nIf i stop the service in windows, in the shell the connection to “mongo” fail.\nSo in the driver how can check this ? Thanks a lot.", "username": "Marco_L" }, { "code": "var client = new MongoClient(connectionString);\nvar server = client.GetServer();\nserver.Ping();", "text": "I think you can use the Ping method, something like", "username": "Robert_Walters" }, { "code": " private bool CheckConnectionToBatabase()\n {\n var database = new BsonDocument();\n var timeoutCancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMilliseconds(2000)); //2000ms timeout token\n try\n {\n var databases = client.ListDatabasesAsync(timeoutCancellationTokenSource.Token).Result;\n databases.MoveNextAsync(); // Force MongoDB to connect to the database.\n if (client.Cluster.Description.State == ClusterState.Connected)\n {\n // Database is connected.\n return true;\n }\n else\n return false;\n }\n catch(Exception e)\n {\n return false;\n } \n }\n", "text": "Hi Robert and thanks for your answer. Unfortunately the GetServer() method is not available on the last release. So i figured out of the problem with another approach.\nI test the connection using stop and start service on the shell, and i see that if the server goes down, the client is able to reconnect automatically.So, i share my solution to test if a valid connection is ready before sent a “crud” command :", "username": "Marco_L" } ]
.NET driver connection
2021-01-17T10:50:06.816Z
.NET driver connection
1,950
null
[ "atlas-triggers" ]
[ { "code": "let counter = 0;\nlet q = db[collection_name].find({\"theField\": myvalue}); // Perform a find \nlet total = q.count();\nif(total > 0){\nlet bulk = db[collection_name].initializeUnorderedBulkOp();\n// iterate over the collection\nq.forEach(function(d) {\n counter++;\n bulk.find({ _id: d._id }).remove();\n if (counter % 500 == 0) { // Execute when we have 500 documents in our bulk operation\n result = bulk.execute();\n sleep(1000); // Sleep for time\n bulk = db[collection_name].initializeUnorderedBulkOp(); // Initialize on collection for next iteration\n }\n});\n//last block\nif(counter > 0){\n bulk.execute();\n}\n", "text": "Hello ,\nI am new in MongoDB Atlas realm, this is why I am asking for help on the following context:\nI have a Mongo shell script that is removing data by bulk, I have tested it manually and it works\nSO, I need to schedule this task to be executed one time per week,I wanted to know if it is possible to adapt the shell script to be run using realm feature (creating a function and scheduling it through a trigger)Do you have any example to help me to implement this kind of task?Thank you for your help\nAnd best regards\nSandrinehere is a piece of the script:", "username": "Sandrine_Lhadj_Mohan" }, { "code": "const myCollection = context.services.get(\"mongodb-atlas\").db(\"myDbName\").collection(\"myCollectionName\");\n", "text": "Hi Sandrine - yes you should be able to do this via a scheduled trigger which will run a Realm Function at a scheduled time. Realm functions can provide the functionality you just described.To get a collection from the function, you can use the following snippet:For the rest of your code, you might find the snippets in the MongoDB Data Access docs useful as they provide Realm Function snippets for various CRUD operations.", "username": "Sumedha_Mehta1" } ]
Could you help to implement a scheduling of remove data from databases by realm
2021-01-18T19:44:49.152Z
Could you help to implement a scheduling of remove data from databases by realm
2,980
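As a concrete starting point, a scheduled trigger can call a function along these lines. This is a hedged sketch rather than a drop-in port of the shell script: the data source name ("mongodb-atlas"), the database and collection names, and the filter value are placeholders, and it uses a single deleteMany instead of the original bulk loop. If throttling is needed, the 500-document batching from the shell script can be reproduced inside the function in the same way.

```javascript
// Realm function to be invoked by a weekly scheduled trigger.
exports = async function () {
  const collection = context.services
    .get("mongodb-atlas")            // name of your linked data source
    .db("myDbName")                  // placeholder database name
    .collection("myCollectionName"); // placeholder collection name

  // Delete every matching document server-side; nothing is streamed back
  // to the function.
  const result = await collection.deleteMany({ theField: "myvalue" });
  console.log("Deleted " + result.deletedCount + " documents");
  return result.deletedCount;
};
```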
https://www.mongodb.com/…5_2_1024x377.png
[ "realm-studio" ]
[ { "code": "", "text": "Since the 5.0.0 release we are unable to use Realm Studio to open Realms freshly created with 5.0.0 or Realms that were previously 4.x.x.The error is: Invalid top arrayIt works with all projects that have not updated to 5.0.0macOS 10.15.x (and 10.14.x)\nXCode 11.3.1\nRealm Studio 3.10.0Realm Studio Crash2430×896 482 KB", "username": "Jay" }, { "code": "", "text": "Please manually download the 3.11.0 release of Realm Studio from Release 3.11.0 · realm/realm-studio · GitHub this has the ability to open Realms using the new format, but please be aware that opening Realms with this version of Studio will also irreversibly upgrade “old” Realm files, making them unable to open via “older” SDKs.", "username": "kraenhansen" }, { "code": "", "text": "@kraenhansen Thank you. That version has allowed us to access our Realms again.Now if you could just fix the issue with 5.0.0 not returning correct results when filtering…", "username": "Jay" }, { "code": "", "text": "I have the same problem but can’t use the latest App on Mac because the App isn’t signed correctly.Screenshot 2021-01-13 at 11.28.27552×636 93.7 KB", "username": "Sonja_Meyer" }, { "code": "", "text": "That’s weird. I just downloaded it and it’s working correctly for us. Did you try contextual click->open? aka right click->open?", "username": "Jay" }, { "code": "", "text": "Seems to work with that, yes, thank you. Why Apple why?", "username": "Sonja_Meyer" }, { "code": "", "text": "There’s a feature in macOS called “Gatekeeper” that gives you a level of security against opening apps from unsigned developers. By default it just won’t open them.However, to provide flexibility, right-click open circumvents Gatekeeper and allows you to open unsigned apps.", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Studio Crash: Invalid top array
2020-05-26T19:36:50.776Z
Realm Studio Crash: Invalid top array
5,187
https://www.mongodb.com/…8_2_1024x512.png
[ "atlas-device-sync" ]
[ { "code": "", "text": "We have downloaded and created the given applicationContribute to mongodb-university/realm-tutorial-react-native development by creating an account on GitHub.We have followed the following stepsWe need this offline capability so a user can easily access the data in an offline appin Firestore / AWS Amplify this functionality is easily working so can you please help to resolve this problem as soon as possible.", "username": "Saurabh_Modh" }, { "code": "", "text": "There are a number of other posts here about the same thing - with solutions. Did you check them out?See this one Open synced local database when completely offline and another one with additional info Local Realm open after synchronized on realm cloud", "username": "Jay" } ]
Offline functionality does not work in Task Tracker App
2021-01-17T10:09:38.077Z
Offline functionality does not work in Task Tracker App
3,313
null
[ "indexes", "performance" ]
[ { "code": "db.collection.createIndex({ guildID: 1 })db.collection.createIndex({ userID: 1 }){ guildID: 'string', userID: 'string' }db.collection.createIndex({ guildID: 1, userID: 1 })", "text": "I currently have these two indexes:\ndb.collection.createIndex({ guildID: 1 })\ndb.collection.createIndex({ userID: 1 })My most common query to my database is { guildID: 'string', userID: 'string' }Should I make a compound index of db.collection.createIndex({ guildID: 1, userID: 1 })? Or would having two single indexes be faster?", "username": "mental_N_A" }, { "code": "{userID: 1, guildID : 1}", "text": "With 2 single field indexes, only one will be used to satisfy the query. Documents will have to be fetched to verify if the other field matches.With a compound index of the 2 fields, the index can be used to satisfied the whole query. Only documents matching the whole query will be fetch.Depending on other queries you might be better off with an index {userID: 1, guildID : 1}.", "username": "steevej" } ]
Single vs compound index speed
2021-01-18T15:14:43.287Z
Single vs compound index speed
9,353
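To make the comparison concrete, the snippet below creates the compound index for the common query and verifies it with explain(). Field and collection names come from the question; the expectation noted in the comments is the standard behaviour of a compound index, not a measurement from this thread.

```javascript
// Compound index matching the most frequent query shape.
db.collection.createIndex({ guildID: 1, userID: 1 });

// Verify: the winning plan should be an IXSCAN on guildID_1_userID_1, and
// totalDocsExamined should match nReturned (no extra document fetches, unlike
// the single-field-index case where the second predicate is checked per document).
db.collection
  .find({ guildID: "some-guild", userID: "some-user" })
  .explain("executionStats");
```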
null
[ "aggregation", "queries", "text-search" ]
[ { "code": "{\n firstObject: {\n firstField: \"Some value\",\n secondField: \"Some value\",\n },\n secondObject: {\n firstField: \"Some value\",\n secondField: \"Some value\",\n }\n}\nfirstObjectsecondObjectdb.collection.createIndex({ firstObject: \"text\", secondObject: \"text\" });\ntext", "text": "Document structure like this:I want to create text index on both objects firstObject and secondObject because both object having dynamic fields, I have tried this:But unfortunately this is not working, as per documentation mention, it work on string and array of string:MongoDB provides text indexes to support text search queries on string content. text indexes can include any field whose value is a string or an array of string elements.Is there any other option to work my scenario? I don’t want to mention each and every object’s field manually in create index because all keys are dynamic.", "username": "turivishal" }, { "code": "", "text": "I think that with Building with Patterns: The Attribute Pattern | MongoDB Blog you could have an index on k that will support your queries.", "username": "steevej" }, { "code": "", "text": "Thank you, yah that was the last option i thought after my post,I figured that 2 options:I will work around which is efficient and better for my database", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the alternate option for text index on object field?
2021-01-18T09:04:05.873Z
What is the alternate option for text index on object field?
3,065
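For reference, option 1 from the reply above (the Attribute Pattern) can look roughly like this. The document shape, collection name, and keys are illustrative assumptions rather than the poster's actual schema; the point is that a single compound index on the k/v pair covers searches across all dynamic fields.

```javascript
// Dynamic fields stored as {k, v} pairs in a single array.
db.items.insertOne({
  attributes: [
    { k: "firstObject.firstField", v: "Some value" },
    { k: "secondObject.secondField", v: "Another value" }
  ]
});

// One multikey compound index serves every attribute.
db.items.createIndex({ "attributes.k": 1, "attributes.v": 1 });

// Query any field/value pair through the same index.
db.items.find({
  attributes: { $elemMatch: { k: "firstObject.firstField", v: "Some value" } }
});
```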
null
[]
[ { "code": " import hashlib;\n import hmac;\n import datetime;\n import paho.mqtt.client as mqtt\n import jsonimport calendar\n import requests\n\n mqqt_server=#enter your mqqt server here\n mqqt_port=1883\n\n def on_connect(mqttc, obj, flags, rc):\n print(\"Connected with result code \"+str(rc))\n client.subscribe(\"/iotdemo/temp\")\n # The callback for when a PUBLISH message is received from the server.\n\n def on_message(client, userdata, msg):\n v=str(msg.payload.decode('utf8')) #string comes in as bytes so need to convert it\n sample=json.loads(v)\n t=str(sample['time'])\n t=t[1:]\n t=t[:-1]\n t=t.split(\",\")\n time_t=datetime.datetime(int(t[0]), int(t[1]), int(t[2]), int(t[4]), int(t[5]), int(t[6]), int(t[7]))\n timestamp_utc = calendar.timegm(time_t.timetuple())\n print('Processing sample : ' + str(sample['value']))\n body='{ \"device\":\"' + str(sample['device']) + '\", \"sample_date\" : \"' + time_t.strftime(\"%Y-%m-%d\") + '\", \"value\":\"' + str(sample['value']) + '\", \"time\":\"' + repr(timestamp_utc) + '\" }'\n secret = #b'Enter your MongoDB Stitch Webhook password here'\n hash = hmac.new(secret, body.encode(\"utf-8\"), hashlib.sha256)\n url = #enter your Stitch Web API URL here\n header={\"Content-Type\":\"application/json\",\"X-Hook-Signature\":\"sha256=\" + hash.hexdigest() }\n myResponse = requests.post(url,headers=header, data=body )\n print (myResponse.status_code)\n # uncomment to debug\n #def on_log(mqttc, obj, level, string)\n :# print(string)print('Connecting to MQQT broker')\n client = mqtt.Client()\n client.on_message = on_message\n client.on_connect = on_connect\n #client.on_log = on_log\n client.connect(mqqt_server, mqqt_port, 60)\n client.loop_forever()\n", "text": "Hi\nI want to recreate an example on IoT devices with MongoDB but I have an issue. The example is from 2018 and there was not yet introduced Realm it was stitch apps , so my issue is that I have everything done once that was last year and it worked perfectly transfering data from ESP8266 to Gateway (raspberry pi ) and than to the MongoDB. But this year I got stuck the ESP8266 does connect and send data to the gateway(raspberry pi) but the gateway does not connect to the MondoDB.So here is the example: Example And the configuration to the raspberry pi is done as it says I changed in the fields where it askes for information of the HTTP service and still noting happened. I don`t know if the issue is in the raspberry pi or the configuration of the function on the http service.", "username": "Nikola_Jovanoski" }, { "code": "", "text": "I’ll look to update this article soon, for now though, check the name of the Realm service under “Linked Data Sources”. It can be something other than \"mongodb-atlas’ in that case you’ll have to update your function with the service name.e.g. check the name of the service in the codeexports = function(payload,response) {\nconst mongodb = context.services.get(“mongodb-atlas”);", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB IoT Example Issue
2020-09-25T08:06:06.014Z
MongoDB IoT Example Issue
2,907
null
[ "mongoose-odm", "performance" ]
[ { "code": "$nin", "text": "I have a collection with 100 million documents. I want to delete every document where the guildID isn’t in an array (the array has 200k elements). I’m using mongoose.Would it be faster to:Stream all the data from the collection (using mongoose). Once I recieve each document I check if the guildID is in the array. If it’s not in the array I use deleteOne().Use deleteMany with $nin. I’m currently using this method and it’s taking 10 minutes to delete 1 million documents.", "username": "mental_N_A" }, { "code": "", "text": "I would guess that deleteMany is much faster.Both scenarios assume the deletion of the 10 million documents that matches the delete criteria. This is to simplify the understanding.Scenario 1 - streamScenario 2 - deleteManyThat’s a very interesting problem. Please do follow up on your findings.My question is what is the source of the array of 200k elements? Is it another collection in the same database?I would try to find a way to use $in rather than $nin by getting the guildID that needs to be deleted rather than using the guildID that needs to be kept. Specially if an index on guildID exists.", "username": "steevej" } ]
Is it faster to stream the data and use deleteOne or use deleteMany
2021-01-18T03:05:26.069Z
Is it faster to stream the data and use deleteOne or use deleteMany
5,104
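Below is a sketch of the second scenario with the inversion suggested in the reply: delete by the guildIDs that should go, using $in, rather than $nin against the 200k keep-list. Variable names are placeholders ("keepIds" stands for the existing array), the batch size is arbitrary, and an index on { guildID: 1 } is assumed so each pass is an index scan rather than a full collection scan.

```javascript
// keepIds: the existing array of ~200k guildIDs to keep (placeholder).
const keepSet = new Set(keepIds);

// Which guildIDs actually present in the collection are NOT in the keep list.
const toDelete = db.collection.distinct("guildID").filter(function (id) {
  return !keepSet.has(id);
});

// Delete in chunks so each command stays reasonably sized.
const BATCH = 1000;
for (let i = 0; i < toDelete.length; i += BATCH) {
  const res = db.collection.deleteMany({
    guildID: { $in: toDelete.slice(i, i + BATCH) }
  });
  print("Deleted " + res.deletedCount);
}
```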
null
[ "node-js", "connecting" ]
[ { "code": "const express = require(\"express\");const mongoose = require(\"mongoose\");const app = express();// Connect to DBconst mongoURI = \"mongodb+srv://MyUserName:<MyPassword>@cluster0.lbb8n.mongodb.net/<DbImUsing>?retryWrites=true&w=majority\";mongoose.connect(mongoURI, { useNewUrlParser: true, useUnifiedTopology: true,});// Routeapp.set(\"view engine\", \"ejs\");app.get(\"/\", (req, res) => { res.render(\"index\");});// Endpointapp.post(\"/shortUrls\", (req, res) => {});app.listen(process.env.PORT || 5000);(node:5058) UnhandledPromiseRejectionWarning: MongoError: bad auth : Authentication failed.`\n ` at MessageStream.messageHandler (/Users/fridavbg/Desktop/MovingWorlds/urlShortv2/node_modules/mongodb/lib/cmap/connection.js:268:20)`\n ` at MessageStream.emit (events.js:315:20)`\n ` at processIncomingData (/Users/fridavbg/Desktop/MovingWorlds/urlShortv2/node_modules/mongodb/lib/cmap/message_stream.js:144:12)`\n ` at MessageStream._write (/Users/fridavbg/Desktop/MovingWorlds/urlShortv2/node_modules/mongodb/lib/cmap/message_stream.js:42:5)`\n ` at writeOrBuffer (_stream_writable.js:352:12)`\n ` at MessageStream.Writable.write (_stream_writable.js:303:10)`\n ` at TLSSocket.ondata (_stream_readable.js:719:22)`\n ` at TLSSocket.emit (events.js:315:20)`\n ` at addChunk (_stream_readable.js:309:12)`\n ` at readableAddChunk (_stream_readable.js:284:9)`\n ` at TLSSocket.Readable.push (_stream_readable.js:223:10)`\n ` at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23)`\n \n (Use node --trace-warnings ... to show where the warning was created)`\n (node:5058) \n\n UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)\n\n (node:5058) [DEP0018] \n DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.`\n", "text": "HI there!!So Im new to this whole developing world and would be grateful if anyone could help me out.\nI’m trying to connect a nodejs app to mongodb(atlas) with mongoose. It was all working fine until I created a new user for the database, and now all my users get bad auth: Authentication failed.\nThings I have doubled checkedNODEJS APP - index.js\nconst express = require(\"express\");\nconst mongoose = require(\"mongoose\");\nconst app = express();// Connect to DB\nconst mongoURI = \"mongodb+srv://MyUserName :<MyPassword>@cluster0.lbb8n.mongodb.net/<DbImUsing>?retryWrites=true&w=majority\";mongoose.connect(mongoURI, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});// Route\napp.set(\"view engine\", \"ejs\");\napp.get(\"/\", (req, res) => {\n res.render(\"index\");\n});// Endpoint\napp.post(\"/shortUrls\", (req, res) => {});\napp.listen(process.env.PORT || 5000);Error message I am getting is this:If anyone could point me in a direction to solve this, I would be grateful!!", "username": "Frida_Persson" }, { "code": "", "text": "The < and > are not part of a valid URI. It is customary to surround parts that needs to be changed within the less-than and then greater-than signs. It is called a place holder. 
If, as you write, your user is atlasAdmin then the URI must reflect that rather than having MyUserName. Ditto for the MyPassword. It has to be the real password.", "username": "steevej" }, { "code": "", "text": "Thanks Steeve!!Yes, it was the small < >. feel a bit bad that it was so simple. But at least I got a few hours in reading the documentation ", "username": "Frida_Persson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Bad auth : Authentication failed. - Nodejs / MongoDB atlas
2021-01-17T19:32:36.086Z
Bad auth : Authentication failed. - Nodejs / MongoDB atlas
92,571
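For anyone hitting the same error, the corrected connection string simply drops the angle brackets around the real values. Everything below is illustrative (made-up credentials and database name); reading the URI from an environment variable avoids committing credentials, and a password containing special characters must be URL-encoded.

```javascript
// No < or > around the actual username, password, or database name.
const mongoURI =
  process.env.MONGO_URI ||
  "mongodb+srv://appUser:s3cretPassw0rd@cluster0.lbb8n.mongodb.net/urlShortener?retryWrites=true&w=majority";

mongoose
  .connect(mongoURI, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log("Connected to Atlas"))
  .catch((err) => console.error("Connection failed:", err.message));
```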
null
[ "data-modeling" ]
[ { "code": "{ \n _id: ObjectId(\"...\"),\n name: \"Rusty Shortsword\", \n type: \"weapon\",\n subtype: \"sword\",\n description: \"It looks like it hasn't been used in awhile.\",\n stackable: false, \n maximum_carry_limit: 2147483647, // int32 max safe value \n merchant_value: 0,\n requirements: {\n { k: \"strength\", v: 3 },\n { k: \"quest\", v: 17 }\n },\n modifiers: {\n { k: \"accuracy\", v: 1 }\n }\n}\n{\n _id: ObjectId(\"...\"),\n item_id: ObjectId(\"...\"), // Item Prototype _id\n owned_by: ObjectId(\"...\"), // Character _id\n durability: 100.0, \n created_by: ObjectId(\"...\"), // Character _id \n created_at: new Date()\n}\n{\n _id: ObjectId(\"...\"),\n ...\n}\n", "text": "Hey everyone, so I’m learning MongoDB to build the structure to my game and as I get closer to completing M320 - Data Modeling (MongoDB University) I’ve built the first version of my data structure. I was wondering if the community could give their opinion.This is a very simple structure used to represent items owned by a player in a game. I wanted a way to make it so that there was never any stale data and that any items that are updated are immediately reflected in future queries.So I came up with a structure where there is three collections with different types of documents.The Item Prototype which contains static information about an item. Things like name, description, type, requirements and so on. These are things that are going to be read often, but written very rarely.The Item is basically an instance of the Item Prototype this is an item that actually exists in the game. It contains some data specific to the item such as who owns it, who’s created it, the durability, etc.The Character is nearly irrelevant here, but needed as the parent to the Item.So here I will layout my structure…Item Prototype – Formatted Code LinkItem – Formatted Code LinkCharacter – Formatted Code LinkItem Prototype → Item will be a One-to-Zillions setup, where Item → Character will be a many-to-one.", "username": "Christian_Tucker" }, { "code": "", "text": "Hi @Christian_Tucker,Welcome to MongoDB community.Happy to discuss thus data design.Thinking in relationship ways for MongoDB is good but shouldn’t be the main aspects.I think the main questions is how is your application access the data and what are the critical paths for its performance (reads of character data and its items or updating items)My assumption is that reads will favor writes in this use case and you rather get the character and its inventory, at least first amount of items as quick as possible. Correct me if I am wrong.In this case wouldn’t it make sense to keep last added items inside the character? 
Including the immutable items shown on the screen like its “name” and “description”If user has a large amount of items you should outsource them to items collection by pulling a document out of a character and placing them in the items collection for when user clock on next items for example.You can still keep the prototype collection for the extended information, but it feels a waste to query it for every time an item needs to project its name so this could be placed in every character array.Let me know what you think.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "item-id, amount", "text": "Hey @Pavel_Duchovny thanks for the response… I will provide some information about the way the data is used below…When information about the character is read, we do not need to access any of the items that belong to the character, it’s only when a character is “selected” from our game interface that the character’s inventory (items) are loaded. Because of the way we’ve decided to setup our inventory system, a character can have an “infinite” amount of items at any given time. This means that there can be thousands of items that belong to a given character.Item reads will only happen during login, but writes will happen throughout the user’s session.One of the reasons I wanted to seperate out the item documents into being their own non-duplicated data was for tracking. Based on other data that is not shown here I am able to determine where a specific item came from, every character that has ever possessed the item and where the item was removed from the game. (Destroyed, Consumed, Sold to a vendor, etc.)Because of wanting to keep track of all this data, this means that I need to keep item information for every item instance in the game, instead of just having a subdocument of item-id, amount in a characters document / seperate inventory collection.–The prototype collection doesn’t need to be queried every time a user wants to get item information, instead, this data will sync’d to a memory database at runtime. The collection in this case is simply used to form a relation for memory-lookup and an easy way to allow our creatives to balance, design, and make changes to existing items for future updates. It will never be used in a $lookup and has a primary use-case of just being the raw-storage for a caching mechanism.", "username": "Christian_Tucker" }, { "code": "", "text": "Hi @Christian_Tucker,If you are confident that this schema is best design for you data access go for it.Thanks,\nPavel", "username": "Pavel_Duchovny" } ]
Review on data structure for game appreciated!
2021-01-16T19:31:26.438Z
Review on data structure for game appreciated!
2,387
null
[ "mongodb-shell", "indexes" ]
[ { "code": "mongodb.collection.createIndex({ joinedAt: 1 }, { background: true })", "text": "I use mongo to get to my Ubuntu mongo shell. After sending the function db.collection.createIndex({ joinedAt: 1 }, { background: true }) my mongo shell just stops working/responding.", "username": "mental_N_A" }, { "code": "", "text": "Open another session and check the log\nor use this\ndb.currentOp() to check the progress", "username": "Ramachandra_Tummala" } ]
Mongo shell stops after sending function to create background index
2021-01-18T00:47:10.313Z
Mongo shell stops after sending function to create background index
1,745
null
[]
[ { "code": "", "text": "So I basically want to host my own MongoDB database on my own hosting computer rather then using AWS, google hosting and such like hosting providers, how to do it? I did some research and couldnt really find anything. Any reply would be appreciated, thanks! ", "username": "SomeoneOnTheInternet" }, { "code": "", "text": "I did some research and couldnt really find anything.You did not found anything because there is nothing special about it. Aside from mongod specific configuration there is nothing particular about it. Just like any other server platform. Install, configure and then run, just like a web server.", "username": "steevej" }, { "code": "", "text": "Hi @SomeoneOnTheInternet,The Installation Tutorials in the MongoDB manual have instructions for installing in supported operating systems (Linux, Windows, macOS, …).Installation in a desktop or laptop environment is similar to installing in a hosted environment, but hosting providers will often have specific firewall and network configuration steps.I strongly recommend reviewing the MongoDB Security Checklist to ensure your deployment is properly secured.If you run into any installation or configuration issues, this is a good forum to ask questions in. Including some information on your environment (specific O/S version, MongoDB server version, tool/driver version, …) and the problem you are encountering (steps to reproduce, expected outcome, and actual outcome) will help get faster and more relevant suggestions.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I asked a simular question last month where my post was derailed by solitiations of commercial offerings which were off-topic to the original question.I hope my reply can signal boost your question and we can get some more detailed suggestions and guidance on hardware/Ops.", "username": "Gabriel_Fair" }, { "code": "", "text": "oBut how do I host it? I’ts not like a file which you can host directly so thats what’s confusing me, do you possibly have any info sources available [eg websites, pdf’s… etc]?", "username": "SomeoneOnTheInternet" }, { "code": "", "text": "I would recommend that you take some courses from https://university.mongodb.com/. I think you are missing some fundamental concepts about hosting and servers in general. The courses will help you. Start with M001 and then you may try M103. Once you master the material covered in those 2 basic courses you should be able to deploy a server which is the first step of hosting.", "username": "steevej" }, { "code": "", "text": "This is exactly what i needed, thank you so much!", "username": "SomeoneOnTheInternet" } ]
How to host your own MongoDB server without using hosting providers?
2021-01-16T20:27:34.873Z
How to host your own MongoDB server without using hosting providers?
23,305
null
[ "security", "configuration" ]
[ { "code": "mongodX509mongo shellWhere X is in the set: {\n Server.pem, Server.cert, Client.pem, Client.cert,\n IntermedCA.pem, RootCA.pem}\n", "text": "I am trying to set up a stand alone mongod server using X509 and then connect to it from mongo shell. Below is a description of what I do. It all goes well up to a certain point, but when I reach the time to launch mongo shell, troubles show up. Since I am no mongodb expert and neither am I an openssl guru, I may well be making some basic mistake on the way. I hope someone with more experience will take a look and shed some light on the issue.The root CA certificate (RootCA.pem) is created here, using:$ openssl req -x509 -newkey rsa:4096 -days 3653 -keyout RootCA.key.pem -out RootCA.pem -subj /C=US/ST=NY/O=RootCAThen an intermediate certificate (IntermedCA.pem) is created, using: openssl req -config openssl.cnf -new -newkey rsa:4096 -nodes -keyout IntermedCA.key.pem -out IntermedCA.req.pem -subj /C=US/ST=DC/O=IntermedCA\n openssl x509 -req -days 1096 -in IntermedCA.req.pem -CA RootCA.pem -CAkey RootCA.key.pem -extfile openssl.cnf -extensions v3_ca -set_serial 01 -out IntermedCA.pemA server certificate (Server.pem) is created, using: openssl req -new -newkey rsa:4096 -nodes -keyout Server.key.pem -out Server.req.pem -subj /C=US/ST=CA/O=ServerCA\n openssl x509 -req -days 365 -in Server.req.pem -CA IntermedCA.pem -CAkey IntermedCA.key.pem -extensions v3_ca -set_serial 01 -out Server.pemA client certificate (Client.pem) is created, using: openssl req -new -newkey rsa:4096 -nodes -keyout Client.key.pem -out Client.req.pem -subj /C=US/ST=MA/O=ClientCA\n openssl x509 -req -days 365 -in Client.req.pem -CA IntermedCA.pem -CAkey IntermedCA.key.pem -extensions v3_ca -set_serial 01 -out Client.pemThe trust chain (TrustChain.pem) for the verification is created, using:$ cat IntermedCA.pem RootCA.pem > TrustChain.pemWe must then create a certificate of an adequate shape for mongod to work, using:$ cat Server.key.pem Server.pem > Server.certThe order of Server.key.pem and Server.pem in the command above does not matter.The mongod server can then be launched using:$ mongod --tlsMode requireTLS --tlsCertificateKeyFile Server.cert --tlsCAFile IntermedCA.pem --dbpath /mnt/mongoDB-One/DB_X509 --logpath /mnt/mongoDB-One/DB_X509/mongod.log --forkUp to this point, all seems to be working perfectly, as far as I can see.To check, I run: ps -ef | grep mongod\nubuntu 2142 1 1 13:39 ? 
00:00:31 mongod --tlsMode requireTLS --tlsCertificateKeyFile Server.cert --tlsCAFile IntermedCA.pem --dbpath /mnt/mongoDB-One/DB_X509 --logpath /mnt/mongoDB-One/DB_X509/mongod.log --fork\nubuntu 2365 2124 0 14:14 pts/0 00:00:00 grep --color=auto mongod\nand also: sudo netstat -tulpn | grep mongod\ntcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 2142/mongod \nThen I do for the client the same as I did for the server:$ cat Client.key.pem Client.pem > Client.certAnd then I try to launch mongo shell, using:$ mongo --tls --tlsCertificateKeyFile Client.cert --tlsCAFile IntermedCA.pem\nMongoDB shell version v4.4.2\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n{“t”:{\"date\":\"2021-01-10T14:18:31.198Z\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23256, \"ctx\":\"js\",\"msg\":\"SSL peer certificate validation failed\",\"attr\":{\"error\":\"SSL peer certificate validation failed: unable to get issuer certificate\"}}\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: unable to get issuer certificate :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\nAs one can see, this last command fails.And this is the log:$ tail -3 /mnt/mongoDB-One/DB_X509/mongod.log\n{“t”:{\"$date\":“2021-01-10T14:18:31.201+00:00”},“s”:“E”, “c”:“NETWORK”, “id”:23256, “ctx”:“conn6”,“msg”:“SSL peer certificate validation failed”,“attr”:{“error”:“SSL peer certificate validation failed: unable to get issuer certificate”}}\n{“t”:{\"$date\":“2021-01-10T14:18:31.201+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22988, “ctx”:“conn6”,“msg”:“Error receiving request from client. Ending connection from remote”,“attr”:{“error”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate validation failed: unable to get issuer certificate”},“remote”:“127.0.0.1:58650”,“connectionId”:6}}\n{“t”:{\"date\":\"2021-01-10T14:18:31.201+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn6\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:58650\",\"connectionId\":6,\"connectionCount\":0}}\nOne last detail which may be useful:Running this command:$ openssl verify -CAfile TrustChain.pem Xalways returns:X: OK", "username": "Michel_Bouchet" }, { "code": "mongomongod", "text": "SSL peer certificate validation failed: unable to get issuer certificateI think you are missing --host in your mongo\nFor TLS/SSL you have to use --host\nThe mongo shell verifies the certificate presented by the mongod instance against the specified hostname and the CA file.", "username": "Ramachandra_Tummala" }, { "code": "mongo --tls --tlsCertificateKeyFile Client.cert --tlsCAFile IntermedCA.pem --host 127.0.0.1", "text": "So do you mean the command should be this?mongo --tls --tlsCertificateKeyFile Client.cert --tlsCAFile IntermedCA.pem --host 127.0.0.1I just tried, it did not work as you can see below:$ mongo --tls --tlsCertificateKeyFile Client.cert --tlsCAFile IntermedCA.pem --host 127.0.0.1\nMongoDB shell version v4.4.2\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n{“t”:{\"date\":\"2021-01-12T08:06:01.339Z\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23256, \"ctx\":\"js\",\"msg\":\"SSL peer certificate validation failed\",\"attr\":{\"error\":\"SSL peer certificate validation failed: unable to get issuer certificate\"}}\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SSLHandshakeFailed: SSL peer 
certificate validation failed: unable to get issuer certificate :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n", "username": "Michel_Bouchet" }, { "code": "", "text": "openssl req -x509 -newkey rsa:4096 -days 3653Please try with FQDN\nWhen you generated root CA certificate what value was given to CN?\nCN should match --host in your mongo command", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I am not sure what CN is. But in my post (the first one above) you can see how I generate the root CA and tell me if this right or not. I wrote in detail all the commands I used to generate all the certificates, starting with the root CA.There is only one detail I forgot; I used this command:$ sed s/CA:FALSE/CA:TRUE/ < /etc/ssl/openssl.cnf > openssl.cnfto create the openssl.cnf file for the IntermedCA generation.But this another subject.", "username": "Michel_Bouchet" }, { "code": "/usr/local/share/ca-certificates/sudo update-ca-certificates--tlsCAFile ", "text": "SSL peer certificate validation failed: unable to get issuer certificateThe root CA needs to be in the system CAs or specified on the command line.\nOn ubuntu/debian you can add it to /usr/local/share/ca-certificates/ and execute sudo update-ca-certificatesTo specifiy it with mongod use the flag --tlsCAFile ", "username": "chris" }, { "code": "--tlsCAFile IntermedCA.pem\n$ sudo update-ca-certificates \nUpdating certificates in /etc/ssl/certs...\n0 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n$\nupdate-ca-certificates", "text": "OK, thank you for this tip. As you can see in my first post above, I am using thison the command line. Is the intermediate certificate not enough in this case?And trying the other approach, after copying the root CA to /usr/local/share/ca-certificates/ as you suggest, this is what I see running the update-ca-certificates command.Is this what is expected?By searching the net about update-ca-certificates, I have been able to make some slight progress, but it is still not working. I am not aware of what should match what in this whole process, and I am not aware of what is missing when I see messages like:SSL peer certificate validation failed: unable to get issuer certificateor:connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: unable to verify the first certificate", "username": "Michel_Bouchet" }, { "code": ".crt.pem.crtman 8 update-ca-certificates/usr/local/share/ca-certificates/cat key.pem cert.pem intermediates.pem > server.pem", "text": "Hi @Michel_Boucheton the command line. Is the intermediate certificate not enough in this case?No, It needs to be the root of the trust chain.Is this what is expected?No. The only file included in the update are ones ending in .crt. Copy the .pem over to a .crt\nman 8 update-ca-certificatesFurthermore all certificates with a .crt extension found below /usr/local/share/ca-certificates are also included as implicitly\ntrusted.Just reading over the details of your setup steps.Your server certificate needs the intermediate concatenated to it too. Alternatively add the intermediate to your system’s /usr/local/share/ca-certificates/ too.\ncat key.pem cert.pem intermediates.pem > server.pem", "username": "chris" }, { "code": "", "text": "OK. I will try, but what do you call key.pem and cert.pem in your last expression ?", "username": "Michel_Bouchet" }, { "code": "", "text": "They would be the server’s certificate/key pair. 
I refer to this step from your setup.We must then create a certificate of an adequate shape for mongod to work, using:$ cat Server.key.pem Server.pem > Server.cert", "username": "chris" }, { "code": "-subj option parameteropensslmongodmongo shell", "text": "So if I am following correctly, you mean that instead of:$ cat Server.key.pem Server.pem > Server.certI need to do:$ cat Server.key.pem Server.pem IntermedCA.pem > Server.certI did as well, without being sure it was needed :$ cat Client.key.pem Client.pem > Client.certDo I also need to do:$ cat Client.key.pem Client.pem IntermedCA.pem > Client.certIs there any condition that I need to match concerning the -subj option parameter when running openssl for the different certificates? Like equality requirement or difference requirement?After I will try to launch mongod an mongo shell as I am already doing.Please let me know if you still see a problem.", "username": "Michel_Bouchet" }, { "code": "", "text": "Yes the client should also have the intermediate.Check out MongoDB Courses and Trainings | MongoDB University it is a great course on the topic of mongdb security.", "username": "chris" }, { "code": "", "text": "Thank you for the tip. I have already completed this course.But as you must know following a course is one thing. Facing the real world is usually a few steps above Now I am trying to polish what I learned, by making some real world use of my knowledge.Beside, the course is indeed good as you write; but it does not go into details on how you should prepare X509 certificates. They basically come up ready made. This is one thing I had to search from other resources, and as you can see I still have a lot to learn.… I will try again and see what happens …", "username": "Michel_Bouchet" }, { "code": "$ mongo --tls --tlsCertificateKeyFile Client.cert --tlsCAFile IntermedCA.pem\nMongoDB shell version v4.4.2\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n{\"t\":{\"$date\":\"2021-01-16T03:23:33.124Z\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23256, \"ctx\":\"js\",\"msg\":\"SSL peer certificate validation failed\",\"attr\":{\"error\":\"SSL peer certificate validation failed: unable to get issuer certificate\"}}\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: unable to get issuer certificate :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n$", "text": "Well, even after trying again using this new way; I still get this result when launching mongo shell:", "username": "Michel_Bouchet" }, { "code": " --tlsCAFile IntermedCA.pem", "text": " --tlsCAFile IntermedCA.pemIf the root ca is now installed in the system correctly this should not be needed. Using this it should be the root ca not an intermediate.", "username": "chris" }, { "code": "", "text": "So the commands to launch mongod an mongo shell should become this?mongod --tlsMode requireTLS --tlsCertificateKeyFile Server.cert --tlsCAFile RootCA.pem --dbpath /mnt/mongoDB-One/DB_X509 --logpath /mnt/mongoDB-One/DB_X509/mongod.log --forkand:mongo --tls --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pemI may have to double check about the “root ca installed in the system correctly”. Because I am not too confident about that.But do I need both? 
I mean “root ca installed in the system” and the option --tlsCAFile above?\nOr only one is enough?… For the time being I have tried running, (starting mongod with --tlsCAFile RootCA.pem) without checking the “root ca installation in the system” :$ mongo --tls --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pemand it still does not work. I will check the “root ca installation in the system” and see …", "username": "Michel_Bouchet" }, { "code": "The server certificate does not match the host name. Hostname: localhost does not match SAN(s): mongo-0-a, mongo-0-b, mongo-0-c CN: Your Cert CN ", "text": "It is very likely that one or both certs you are using are not correct.openssl is a great tool but I find it cumbersome for using as a CA. I would recommend you find a CA Suite you like and use that as a CA. After trying a few I use Hashicorp Vault but that has its own special learning curve.Another thing, when you run mongo you will want to use a hostname that matches the server’s certificate subject, or one of the SANs, as the next thing will be a handshake error due to a hostname mismatch.\ne.g.\nThe server certificate does not match the host name. Hostname: localhost does not match SAN(s): mongo-0-a, mongo-0-b, mongo-0-c CN: Your Cert CN Beside, the course is indeed good as you write; but it does not go into details on how you should prepare X509 certificates. They basically come up ready made. This is one thing I had to search from other resources, and as you can see I still have a lot to learn.I seem to remember having to to it, but courses change and memory is fallible(I did it 2017).", "username": "chris" }, { "code": "update-ca-certificatesCNopensslsubjectAltName", "text": "@chrisIndeed there was a mistake and I am now able to connect to the database with mongo shell. I use this command to launch the daemon:mongod --tlsMode requireTLS --tlsCertificateKeyFile Server.cert --tlsCAFile RootCA.pem --auth --dbpath /mnt/mongoDB-One/DB_X509 --logpath /mnt/mongoDB-One/DB_X509/mongod.log --forkAnd then I use this one to fire mongo shell:mongo --tls --host localhost --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pemThank you very much for all your help. You certainly gave me a number of valuable advices to reach this point.I still have to solve a few issues. First check the handling of update-ca-certificates to properly use it. And also though I can set my CN when running openssl to use this to connect:mongo --tls --host 127.0.0.1 --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pemI am not yet able to use both localhost and 127.0.0.1, and I am not able to use the IP (192.168.1.2) either. I have read that I should make use of subjectAltName, but I haven’t figured out how to do it. What I tried at this point failed.", "username": "Michel_Bouchet" } ]
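On the open subjectAltName question: with the certificate chain built in this thread, one way to add SANs is to put them in a small extensions file and re-issue the server certificate with `-extfile`, then rebuild `Server.cert` the same way as before. The names and IPs below are just the ones mentioned here, and once a SAN list is present, hostname matching generally uses it instead of the CN, so every name you intend to connect with should be listed:

```sh
# Hypothetical SAN extension file; adjust the DNS names / IPs to your setup.
cat > server_san.cnf <<'EOF'
subjectAltName = DNS:localhost, IP:127.0.0.1, IP:192.168.1.2
EOF

# Re-issue the server certificate with the SAN extension attached,
# then rebuild the combined file used by mongod.
openssl x509 -req -days 365 -in Server.req.pem \
    -CA IntermedCA.pem -CAkey IntermedCA.key.pem -set_serial 02 \
    -extfile server_san.cnf -out Server.pem
cat Server.key.pem Server.pem IntermedCA.pem > Server.cert
```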
MongoDB 4.4.2 / X509 -> Can't connect with mongo shell!
2021-01-11T04:58:19.894Z
MongoDB 4.4.2 / X509 -> Can't connect with mongo shell!
10,037
https://www.mongodb.com/…4_2_1024x512.png
[ "security", "configuration" ]
[ { "code": "", "text": "Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.I am trying to re-set my username and password by following this tutorial:O/S: Windows 10\nhowever, i can’t restart the MongoDB instance as advised here:Do I need to restart all over again from C:\\ ?", "username": "Nobody" }, { "code": "", "text": "What issue are you facing when starting mongod?\nPlease show full command you used\nMake sure dbpath dir exists and no other mongod running on the same port", "username": "Ramachandra_Tummala" }, { "code": "spring.data.mongodb.uri=mongodb://root:rootpassword@localhost:27017/testdb\nspring.data.mongodb.authentication-database=admin\nspring.data.mongodb.username=myUserAdmin\nspring.data.mongodb.password=abc123\nC:\\Program Files\\MongoDB\\Server\\4.4\\bin>mongo --port 27017\nMongoDB shell version v4.4.1\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"54a136cb-2358-4751-bd56-6be311dfc86f\") }\nMongoDB server version: 4.4.1\n---\nThe server generated these startup warnings when booting:\n 2021-01-14T00:39:32.164+08:00: ***** SERVER RESTARTED *****\n 2021-01-14T00:39:34.467+08:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n---\n---\n Enable MongoDB's free cloud-based monitoring service, which will then receive and display\n metrics about your deployment (disk utilization, CPU, operation statistics, etc).\n\n The monitoring data will be available on a MongoDB website with a unique URL accessible to you\n and anyone you share the URL with. MongoDB may use this information to make product\n improvements and to suggest MongoDB products and deployment options to you.\n\n To enable free monitoring, run the following command: db.enableFreeMonitoring()\n To permanently disable this reminder, run the following command: db.disableFreeMonitoring()\n---\n> use admin\nswitched to db admin\n> db.createUser(\n... {\n... user: \"myUserAdmin\",\n... pwd: \"abc123\",\n... roles: [ { role: \"userAdminAnyDatabase\", db: \"admin\" } ]\n... }\n...\n... db.adminCommand( { shutdown: 1 } )\n<dependency>\n\t\t<groupId>org.springframework.boot</groupId>\n\t\t<artifactId>spring-boot-starter-data-mongodb</artifactId>\n\t</dependency>\n\t<!-- https://mvnrepository.com/artifact/org.mongodb/mongo-java-driver -->\n\t<dependency>\n\t\t<groupId>org.mongodb</groupId>\n\t\t<artifactId>mongo-java-driver</artifactId>\n\t\t<version>3.12.7</version>\n\t</dependency>", "text": "Hi Ramachandra,My latest issue is I can’t get SpringBoot to recognise the uri in application.propertiesBelow is the command I have entered:Could you let me know how to check if the new password and username is accepted by mongodb?The error I get from Spring Boot isDescription:Failed to configure a DataSource: ‘url’ attribute is not specified and no embedded datasource could be configured.Reason: Failed to determine a suitable driver classbut I have gotten all the right dependencis in my maven pom.xml", "username": "Nobody" }, { "code": "", "text": "Please check your application.properties\nfor uri param you are using root but in the next line you are using different userid/passwordIf your user creation command went successfully you can check by login as that usermongo -u myUserAdmin -p abc123 --authenticationDatabase admin --port 27017", "username": "Ramachandra_Tummala" } ]
C:\Program Files\MongoDB\Server\4.4\bin>mongod --port 27017 --dbpath
2021-01-16T09:36:52.054Z
C:\Program Files\MongoDB\Server\4.4\bin>mongod --port 27017 --dbpath
4,525
null
[ "capacity-planning" ]
[ { "code": "", "text": "I am a computational social science phd student (and citizen scientist) and love mongodb! All my research data is organized using mongodb (much to the chagrin of my advisor that would prefer to use flat files, which I begrudgingly admit can be much faster sometimes for analysis purposes).This data collection project started out small but quickly grew into something I’m barely able to wield control of.I have read every article and documentation regarding optimizing and speeding up my database and they have been very useful. They are mostly OS and Software (index, query) related. But I have reached a point where my hardware setup is holding me back from analyzing this data.I am trying to grok where my bottlenecks are so I can make cost-benefit determinations regarding hardware options and changes.Can I get some advice on things I can do to make my research project more successful? Any advice could really help me. While limited on funds, If I knew more about various hardware optimizations, I could determine if it is worth spending my student-loan money on some upgrades.I have one research digital anthropology project regarding social media interactions that are stored in 1 database with 4 collections with just the primary shard (no sharding). On a computer solely dedicated to mongodb. And I have many useful columns indexed. The media collection can be ignored, that setup is straightforward and while it doesn’t (to my knowledge) have any performance impact, I don’t even use it in my analysis.Other than occasional data fixes, this database is offline and thus does not handle transactions, and is used for data science. (WORM)MongoDB version: 3.6.21Processor: 16 core Intel Xeon CPU E5-2630 v3 @ 2.40GHzMemory: 64gb DDR4 ECCSwap: Dedicated 64gb ssdGPU: GTX 1070, 8gb vRAM, 1920 CUDA coresOS: Ubuntu 18.04OS Disk: Dedicated 16gb ssdDatabase Location: 4TB LVM2 formatted XFS (4- 1TB Seagate Constellation 7200RPM)In addition to any feedback from the community, I do have some specific questions as well:What would the benefit be if I were to use a cluster of raspberry pi’s instead?I did not consider sharding due to the additional complexity and risk when I was first setting up this project. But based on what I have read, it seems that I stand to gain significant performance by sharding the database. But I’m unsure of the various ways I could achieve this. Kubernetes, Raspberry pi’s, some dell Optiplex 7040 thin clients, etc… I am also worried about corrupting the data somehow and not noticing it until too late.Based on watching my resource monitor during various operations and tasks, it is clear that I am I/O bottlenecked. Would it be worth it to move to shards or stick with my computer detailed above and use SSDs for the LVM2 instead? My neophyte knowledge prevents me from making a good mental comparison of the trade-off.Would it be smart to upgrade to version 4? Last year I tried to upgrade to mongodb version 4 and learned the hard way to carefully read the instructions. And lost about three weeks when the database had to be rebuilt using a single thread. (Mongo really needs multithreaded rebuilds!)What would the trade-off be for creating a sharded cluster using shards that were not identical? 
Could I unintentionally cause a config bottleneck?", "username": "Gabriel_Fair" }, { "code": "", "text": "Hi @Gabriel_Fair,That sounds like a fascinating project and we have a great series of blogs on performance improvement including sharding configuration:\nPerformance Best Practices: Benchmarking | MongoDB BlogI recommend you go through them.Having said that, I cannot understand why would someone want to.manage and tune HW without specific knowledge of the software when they can Use MongoDB Atlas and all its tools to scale up or harizontal seamlessly.If the cluster does not require to be up all the time you cab pause it. We offer students packages and other promotions like Day Zero so you could try the product with topped up credits.I think this is the solution I would go for using NVME storage for example.MongoDB Student PackGet your ideas to market faster with a developer data platform built on the leading modern database. MongoDB makes working with data easy.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Before looking at any hardware optimizations I would take a look at the queries an indexes first. More often than not it’s the area where you get the biggest payback in term of performance.", "username": "Rhys_Campbell" }, { "code": "", "text": "Thank you @Pavel_Duchovny for the best practices. Very useful when I start dabbling in using shards.I have learned a lot from administering my mongodb database. I have a background in enterprise HDFS admin, so it was neat to learn something new. I am also a tinkerer. All my phones are jailbroken and I’ve gotten debian running on my kitchen oven. So manage and tuning HW is like managing my truck and tuning it so I can get better mileage and performance. It also helps me better defend and recommend technology solutions to my colleagues when they come to me with a problem.Thank you for your recommendation of trying Atlas. I guess I did not consider it since I do not have to pay for electricity, egress or ingress with my self managed solution. So in the two years of using this project I have had to pay zero dollars. I would be spooked to move everything to Atlas, only to find out 6 months later that my department’s volatile funding for students is canceling my grant, and then have to pay more to get everything off Atlas.I looked around the web for an Atlas cost calculator and couldn’t find one.Also my use case is analysis and discovery. Which means I don’t yet fully understand my data. So a lot of my quieres run for a bit only to come back empty. Which is why I’m worried about trying Atlas and blowing through my budget with queries that in hindsight were pretty dumb. But of course, I would be more careful if I were to start using Atlas. I will consider it.", "username": "Gabriel_Fair" }, { "code": "", "text": "Hi @Gabriel_FairI appreciate your interest in managing MongoDB and mastering it, so if you like it keep on going.I would say that going to 4.0 or 4.2 will open a new world of opportunity to you as they introduce new aggregation/new indexes, like wild card and materialized views which can assist you digest your arbitrary queries better .I suggest you to look for our whats new blogs at MongoDB BlogNow regarding atlas you can open a free account and play with topologies and see the hourly and monthly bill for your deployment. 
In atlas you are paying for the time the cluster is up (when paused you pay a small storage fee)So there is no billing per queries , you can run 100 or 1M and the cost will be the same.I suggest you to explore this option, perhaps. You can consider offloading some small workload to ease the current system or try some new version without upgrading the current env.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "storage.wiredTiger.engineConfig.directoryForIndexes", "text": "I saw your reply in the linked thread. I did not participate at first because using Atlas was making so much technical sense. But it looks like it is out of the question. So I will try to give you some of my ideas.The most important factor afterBefore looking at any hardware optimizations I would take a look at the queries an indexes first.for performance is to have your working set (which included the appropriate indexes) in memory, otherwise you end up with disk I/O bottleneck. The indexes are taken care:And I have many useful columns indexed.and you are aware ofit is clear that I am I/O bottlenecked.Absolutely none. In particular with:4x Raspberry v4 (model B-8GbSee https://docs.mongodb.com/manual/core/wiredtiger/ for the cache size calculation. If your I/O is your bottleneck then it means your working set is roughly bigger than ( (64 - 1) / 2) = 30.5 Gb. With 4 RPI, you can have 4 x ( (8-1)/2 ) = 14 Gb of WT cache. To get close to 30.5Gb of WT cache you would need more than 8 RPI and you would need to need to implement sharding for the data itself but your would need so RPI to run the config server. Having a cluster of machine does not help performance if you do not shard, it helps availability only. ButI did not consider sharding due to the additional complexity and risk when I was first setting up this projectI agree with the additional complexity. Do not consider sharding UNLESS there is no other way. So forget your 4 RPI or your 4 jail broken phones. I am not here to put down RPI. I love RPI. I own 3 and would not consider anything else for what I do with them.Sinceit is clear that I am I/O bottleneckedusing your disks in RAID 0 configuration might be a better avenue that having them in a single logical volume but what you gain in performance you loose in resiliency. But I suspect you do much more read than write and you are not live so if you have a good backup you do not need that much resiliency. You could do RAID 0 if and only if your current LVM2 is less than 50% usage.The best way to remove I/O bottleneck would be to increase RAM.If you have budget for SSDs you might want to look at storage.wiredTiger.engineConfig.directoryForIndexes so that index files are stored in different disks so that reads of indexes that do not fit the WT cache are faster.If I can conclude:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
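For readers wanting to try the `directoryForIndexes` idea from the last reply: it is a storage option in the mongod config, and it only applies to data files created after it is set, so in practice it means an initial sync or a dump/restore into a fresh dbPath rather than flipping it on in place. A sketch of the relevant fragment (paths and the cache figure are examples, not tuned recommendations for this workload):

```yaml
storage:
  dbPath: /mnt/mongoDB-One/DB
  wiredTiger:
    engineConfig:
      cacheSizeGB: 31            # roughly the default 50% of (RAM - 1 GB), made explicit
      directoryForIndexes: true  # index files go under <dbPath>/index, collections under <dbPath>/collection
```

With that layout, a common approach is to mount (or symlink) the index subdirectory onto an SSD while the collection files stay on the HDD volume.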
Ideal hardware setup for a PhD researcher drowning with a huge dataset?
2020-12-06T19:45:49.975Z
Ideal hardware setup for a PhD researcher drowning with a huge dataset?
4,032
null
[ "atlas-device-sync" ]
[ { "code": "dbdb", "text": "I read on Realm Sync documentation that realms should be opened asynchronously on the very first app open and otherwise opened synchronously. When I implemented this, no data populated to the cloud whenever realms were opened synchronously, although the changes happened on device just fine. What I’m thinking now is:Am I thinking about sync vs async realms correctly? Is there a better pattern to handle this case?", "username": "Peter_Stakoun" }, { "code": "", "text": "The pattern suggested in the documentation does work so we need to understand the difference between that and what you’re doing.It would be helpful to see what code you’re using to do the initial async open and then the code used to open it synchronously. Also include your platform.", "username": "Jay" }, { "code": "shouldConnectAsynctruedatabaseexport const connect = async (shouldConnectAsync?: boolean) => {\n if (!app.currentUser) {\n return Promise.reject(new Error('Current user not found'))\n }\n\n const partitionValue = getPartition() // This gets the partition value for the current user\n\n if (!partitionValue) {\n return Promise.reject(new Error('Authorization failed'))\n }\n\n const config = {\n sync: {\n user: app.currentUser,\n partitionValue,\n },\n schema,\n }\n\n database = shouldConnectAsync ? await Realm.open(config) : new Realm(config)\n}\n", "text": "I’m using React Native. Here’s the function that I call to open a connection to Realm.shouldConnectAsync is passed in as true when the partition value is changed (on login) and the database variable is exported from this file and used throughout the app.", "username": "Peter_Stakoun" }, { "code": "", "text": "Looks like this was actually related to this error: Keep getting BadChangeset Error (ProtocolErrorCode=212) in Realm Sync. Looks like a complete coincidence that it started happening right after we made this change to opening sync vs async. Seems to be resolved with realm 10.1.3.", "username": "Peter_Stakoun" }, { "code": "ios vRealmJS/10.1.3react-native vRealmJS/10.0.1", "text": "This seems to work just fine on simulators once upgraded to 10.1.3, but for some reason whenever it’s loaded onto an iOS device (release mode), the same issue arises. The SDK version showing up when on the simulators is ios vRealmJS/10.1.3 (which I assume is correct) and the one coming in from the device is react-native vRealmJS/10.0.1 (this one has the bug). I’ve tried clearing node_modules, the ios build folder, pods, and it still isn’t using the same SDK version.", "username": "Peter_Stakoun" }, { "code": "", "text": "I have also noticed that you cannot use aysncOpen() when trying to open a Realm on a background thread. asyncOpen always forces the callback on the primary thread. Which means that if you want to open Realm to write to it on a background thread, you need to use Realm.open() instead.I wrote a Medium article on this last moth.MongoDB Realm is the leading offline-first synchronizing platform for developing collaborative cross-platform applications that do not…\nReading time: 6 min read\nRichard", "username": "Richard_Krueger" } ]
When to open realms async vs sync
2021-01-15T23:11:28.014Z
When to open realms async vs sync
4,938
null
[ "performance" ]
[ { "code": "", "text": "Hi! Wanted to know why am I getting different execution times for the same query I ran on different servers and on my local machine. For instance, I am getting 0.3 seconds on my laptop, 4 seconds on an ubuntu server and 1.8 seconds on another ubuntu server. All of them ran the same code on the same Atlas instance and same database. The time remains the same (±0.2 seconds) on multiple tries, for all of them.Shouldn’t it take the same amount of time regardless of where it is run from since the actual execution takes place on cloud server? Or does it depend on something?", "username": "Amin_Memon" }, { "code": "", "text": "Hello Amin,\nCould you provide some more information…\nAre the specs for each machine you ran it on the same? For example RAM/CPU.What were you running the query from? Were you connected to the Mongo shell, Compass, or through an application?If you are running from the shell you could add the .explain(“executionStats”) to the end of the query to get a more detailed breakdown.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Are the specs for each machine you ran it on the same? For example RAM/CPU.No. The specs are different:But hardware shouldn’t have any effect right? Since query execution takes place on Atlas servers?What were you running the query from? Were you connected to the Mongo shell, Compass, or through an application?I am running the queries from Django/Celery using pymongo (pymongo · PyPI), using the Atlas connection url (mongodb+srv://username:*******@clustername.abcd.mongodb.net/?retryWrites=true&w=majority)If you are running from the shell you could add the .explain(“executionStats”) to the end of the query to get a more detailed breakdown.I’ll see if I can do this from pymongo", "username": "Amin_Memon" }, { "code": "", "text": "How did you determine your execution time?Yes, the processing is happening on the Atlas instance but a few things need to be considered.", "username": "steevej" } ]
MongoDB Atlas - Different execution times when query is run from different servers
2021-01-15T09:41:55.695Z
MongoDB Atlas - Different execution times when query is run from different servers
4,151
null
[ "backup", "atlas" ]
[ { "code": "", "text": "I read in the documentation that “For multi-region Atlas clusters, Atlas stores backup data for the cluster in a data center specific to the geographical location of the cluster’s Preferred Region.”. Would anyone know if it is possible to replicate these snapshots to another region? For example, say my Preferred Region is us-east-1, could I then replicate those snapshots to us-west-2 for disaster recovery purposes?", "username": "Gary_Hampson" }, { "code": "", "text": "I am also looking to do this, did you find a good solution at all?", "username": "sean_redmond" }, { "code": "", "text": "No, I haven’t received any reply nor have I come up with any solution as yet.", "username": "Gary_Hampson" }, { "code": "", "text": "Hi @Gary_Hampson and @sean_redmond,Atlas currently does not have a feature to replicate backup snapshots to a second region.There is an existing feature suggestion in the MongoDB Feedback Engine you can watch, upvote, and comment on: Atlas backup to second region.Regards,\nStennie", "username": "Stennie_X" } ]
Continuous Backup Snapshot Replication?
2020-03-27T21:35:58.169Z
Continuous Backup Snapshot Replication?
2,494
null
[ "node-js", "java", "monitoring" ]
[ { "code": "", "text": "I want to get the output of printReplicationInfo or getReplicationInfo using Java or Node JS but I could not get can any one help me out.", "username": "Pradeep_kumar.K" }, { "code": "db.printReplicationInfo()db.getReplicationInfo()mongomongo> db.printReplicationInfo\nfunction() {\n var result = this.getReplicationInfo();\n if (result.errmsg) {\n var isMaster = this.isMaster();\n if (isMaster.arbiterOnly) {\n print(\"cannot provide replication status from an arbiter.\");\n return;\n } else if (!isMaster.ismaster) {\n print(\"this is a secondary, printing secondary replication info.\");\n this.printSecondaryReplicationInfo();\n return;\n }\n print(tojson(result));\n return;\n }\n print(\"configured oplog size: \" + result.logSizeMB + \"MB\");\n print(\"log length start to end: \" + result.timeDiff + \"secs (\" + result.timeDiffHours + \"hrs)\");\n print(\"oplog first event time: \" + result.tFirst);\n print(\"oplog last event time: \" + result.tLast);\n print(\"now: \" + result.now);\n}\ndb.printReplicationInfo()db.getReplicationInfo()isMastermongoexec mongo --eval \"db.printReplicationInfo()\" --quiet", "text": "Welcome to the MongoDB community @Pradeep_kumar.K!The db.printReplicationInfo() and db.getReplicationInfo() shell helpers are implemented in JavaScript in the mongo shell. These administrative helpers aren’t part of the standard driver API, but you can implement the same logic using any driver.Since these helper functions are written in JavaScript in the mongo shell, you can invoke the helpers without parentheses to see the underlying JavaScript code.For example:From this source example you can see that db.printReplicationInfo() uses data from db.getReplicationInfo() and the output of the isMaster command.If you don’t want to reimplement this functionality in your preferred language driver, another option would be to capture mongo output using your language’s shell exec equivalent to call something like mongo --eval \"db.printReplicationInfo()\" --quiet.Regards,\nStennie", "username": "Stennie_X" }, { "code": "db.printReplicationInfo()mongo.exe --eval=db.printReplicationInfo() --quietprivate static void doCommand() \n\t\tthrows IOException, InterruptedException {\n\n\tfinal String [] cmd = { \"mongo.exe\", \"--port=30001\", \"--eval=db.printReplicationInfo()\", \"--quiet\" };\n\n\tProcessBuilder ps = new ProcessBuilder(cmd);\n\tps.redirectErrorStream(true);\n\tProcess pr = ps.start();\n\t\n\ttry(BufferedReader in = new BufferedReader(\n\t\t\tnew InputStreamReader(pr.getInputStream()))) {\n\t\tString line;\n\t\twhile ((line = in.readLine()) != null) {\n\t\t\tSystem.out.println(line);\n\t\t}\n\t\tpr.waitFor();\n\t\tSystem.out.println(\"Done.\");\n\t}\n}\nconfigured oplog size: 990MB\nlog length start to end: 251secs (0.07hrs)\noplog first event time: Tue Jan 05 2021 12:42:54 GMT+0530 (India Standard Time)\noplog last event time: Tue Jan 05 2021 12:47:05 GMT+0530 (India Standard Time)\nnow: Tue Jan 05 2021 12:47:15 GMT+0530 (India Standard Time)\nDone.\ncmddb.getReplicationInfo()--port", "text": "Hello @Pradeep_kumar.K,Based upon @Stennie_X 's note, here is a way to run the db.printReplicationInfo() command using Java code. The Java code basically runs the command mongo.exe --eval=db.printReplicationInfo() --quiet from the OS command line and prints the output.For the above code I got an output like this:Just substitute the cmd variable for the db.getReplicationInfo() command or any other command you want to run. 
You can also use other parameters like --port as per your requirement.", "username": "Prasad_Saya" }, { "code": "", "text": "I am getting below error.java.io.IOException: Cannot run program “mongo.exe”: CreateProcess error=2, The system cannot find the file specified\nat java.lang.ProcessBuilder.start(Unknown Source)\nat TestMongo.doCommand(TestMongo.java:1517)\nat TestMongo.main(TestMongo.java:1539)\nCaused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified\nat java.lang.ProcessImpl.create(Native Method)\nat java.lang.ProcessImpl.(Unknown Source)\nat java.lang.ProcessImpl.start(Unknown Source)", "username": "Pradeep_kumar.K" }, { "code": "cmd", "text": "Instead of the “mongo.exe” try using the complete path to it, e.g., “C:\\mongodb-server-4.2.8\\bin\\mongo.exe” in the variable cmd.", "username": "Prasad_Saya" }, { "code": "", "text": "This works thanks for the help.", "username": "Pradeep_kumar.K" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
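Since the thread title also asks about Node JS, here is a hedged sketch of the same idea with the Node.js driver, following the suggestion to reimplement the helper's logic rather than shelling out: read the first and last entries of `local.oplog.rs` plus its `collStats`, and derive the figures the shell helper prints. The function name, connection handling and error handling are illustrative only:

```js
const { MongoClient } = require("mongodb");

async function getReplicationInfo(uri) {
  const client = await MongoClient.connect(uri);
  try {
    const local = client.db("local");
    const oplog = local.collection("oplog.rs");

    const stats = await local.command({ collStats: "oplog.rs" });
    const first = await oplog.find().sort({ $natural: 1 }).limit(1).next();
    const last = await oplog.find().sort({ $natural: -1 }).limit(1).next();

    // A BSON timestamp keeps seconds-since-epoch in its high 32 bits.
    const tFirst = first.ts.getHighBits();
    const tLast = last.ts.getHighBits();

    return {
      logSizeMB: stats.maxSize / (1024 * 1024), // configured oplog size
      timeDiffSecs: tLast - tFirst,             // log length start to end
      tFirst: new Date(tFirst * 1000),          // oplog first event time
      tLast: new Date(tLast * 1000),            // oplog last event time
      now: new Date()
    };
  } finally {
    await client.close();
  }
}
```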
How to get data for printReplicationInfo using Java or Node JS?
2021-01-04T07:10:05.999Z
How to get data for printReplicationInfo using Java or Node JS?
3,099
null
[ "aggregation" ]
[ { "code": "countAllOrders: { $sum: 1 },countCompletedOrders: { $sum: { $cond: { if: $eq: [ \"$order.status\", \"completed\" ], then: 1, else: 0 } } },", "text": "Hey guys,\nI have ecommerce orders and I’d like to aggregate data by\ncustomer_id,\nstoring aggregated data by customer_id AND by order_status,\nand in different calculated fields the count of that specific status ofI use $group for grouping orders by customer_id, I know how to calculate “countAllOrders”:\ncountAllOrders: { $sum: 1 },\nbut I was not able to figure out how can I calculate:\n“countAllUniqueOrders” and “countCompletedOrders”.\nIn my case the expression for\n“countAllUniqueOrders” is:\nfilter by the given customer_id and count the distnict values of order_ids related,\nin case of\n“countCompletedOrders” is:\nfilter by the given customer_id and where order.status = “completed” and count them.I was trying $group, $bucket and $facet, but the latest 2 is not appropriate: I can’t set the number of buckets in advance and I want to output every filtered documents, sub-stages ( $facet doesn’t allow it AFAIK ) to separate collections.Basically how should I modify\n{ $sum: 1 }\nin my case?\nI tried like this:\ncountCompletedOrders: { $sum: { $cond: { if: $eq: [ \"$order.status\", \"completed\" ], then: 1, else: 0 } } },\nbut it drops an “Unknown error” like others\nThank you!", "username": "Vane_T" }, { "code": "{\n $group :\n {\n _id :\n {\n \"customer\" : \"$customer_id\" ,\n \"status\" : \"$order.status\" \n } ,\n \"count_by_customer_and_status\" : { \"$sum\" : 1 }\n }\n}\n", "text": "The followingaggregated data by customer_id AND by order_statusandcustomer_id and where order.status = “completed”indicate that you want to start withYou will not get exactly what you want but it should then be possible to group all documents of a customer into a single one with $push.", "username": "steevej" }, { "code": "\"customer\" : \"$customer_id\" ,\"customer_x\" : \"$customer_id\" ,", "text": "Hi @steevej,Thank you for your response! I’m sorry, I’m a noob, I don’t get your concept. I tried it but it doesn’t provide what I want.\nAlso, the stage output preview documents only show the unique status values but not the unique customer_id values, so it looks it is NOT grouped by customer_id ( I checked it )\nEDIT:\nI modified line:\n\"customer\" : \"$customer_id\" ,\nto:\n\"customer_x\" : \"$customer_id\" ,\nand it now groups by both!\nPls. note I DON’T have such a field named “customer” !But thank you again! ", "username": "Vane_T" }, { "code": "", "text": "It is always better to share a sample of the input documents so that we can test what we propose as solution. 
Otherwise, we simply give pointers because creating the input documents is time consuming and time is a limited resource.", "username": "steevej" }, { "code": "{\n\t\t\"$group\" : {\n\t\t\t\"_id\" : \"$_id.customer_x\",\n\t\t\t\"counts\" : {\n\t\t\t\t\"$push\" : {\n\t\t\t\t\t\"status\" : \"$_id.status\",\n\t\t\t\t\t\"count\" : \"$count\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n", "text": "unique status values but not the unique customer_id valuesIt gives the unique combination of the 2.You can then group all the status counts of one customer with another group such as:", "username": "steevej" }, { "code": "{\n \"$group\" : {\n \t\"_id\" : \"$_id.customer_x\",\n \t\"counts\" : {\n \t\t\"$push\" : {\n \t\t\t\"status\" : \"$_id.status\",\n \t\t\t\"count\" : \"$count_by_customer_and_status\"\n \t\t}\n \t}\n }\n}\n", "text": "@steevejthank you, it works For others who also need such help, the correct code in the second $group is:", "username": "Vane_T" }, { "code": "\"count\" : \"$count_by_customer_and_status\"\n\"count\" : \"$count\"\ngroup_by_customer_and_status =\n{\n $group :\n {\n _id :\n {\n \"customer_x\" : \"$customer_id\" ,\n \"status\" : \"$order.status\" \n } ,\n \"count_by_customer_and_status\" : { \"$sum\" : 1 }\n }\n} ;\ngroup_by_customer =\n{\n \"$group\" :\n {\n _id\" : \"$_id.customer_x\",\n \"counts\" :\n {\n \"$push\" :\n {\n \"status\" : \"$_id.status\",\n \"count\" : \"$count_by_customer_and_status\"\n }\n }\n }\n} ;\npipeline = [ group_by_customer_and_status , group_by_customer ] ;\n", "text": "Yes. I made a typo. It isvsAs a side note, put your code and sample document between 2 lines of triple back ticks. This forum will keep your formatting, highlight the syntax, and more importantly, will produce the correct quotes, double quotes and you will not lose some dollar signs in the rendered page.In the previous post _id.customer_x and _id.status really needs the dollar sign but we do not see them in the rendered page. 
So the real final (hopefully) pipeline is:", "username": "steevej" }, { "code": "", "text": "That works like a charm!\nI added a $unwind and a $addFields stages at the end of that and the aggregations are now in distinct documents, with some further aggregations", "username": "Vane_T" }, { "code": " $group: {\n _id: {\n \"email\": \"$order.billing.email\",\n \"globalOrderId\": \"$global.order_id\",\n \"status\": \"$order.status\"\n },\n \"countEach\": {\n \"$sum\": 1\n }\n }\n{\n \"_id\": {\n \"$oid\": \"5ffb36c87f9b06d538fa78a3\"\n },\n \"order\": {\n \"billing\": {\n \"email\": \"[email protected]\",\n \"phone\": \"+420 296 631 111\"\n },\n \"id\": {\n \"$numberInt\": \"1465\"\n },\n \"status\": \"pending\",\n \"currency\": \"EUR\",\n \"date_created\": {\n \"$date\": {\n \"$numberLong\": \"1610298684000\"\n }\n },\n \"date_modified\": {\n \"$date\": {\n \"$numberLong\": \"1610299059000\"\n }\n },\n \"total\": {\n \"$numberDouble\": \"140.94\"\n },\n \"customer_id\": {\n \"$numberInt\": \"7\"\n },\n \"customer_ip_address\": null,\n \"customer_user_agent\": null,\n \"customer_note\": \"VA customer note\",\n \"payment_method\": \"other\",\n \"date_paid\": null,\n \"date_completed\": null\n },\n \"numVerify\": {\n \"valid\": true,\n \"international_format\": \"+420296631111\",\n \"country_prefix\": \"+420\",\n \"country_code\": \"CZ\",\n \"location\": \"Praha\"\n },\n \"businessMeta\": {\n \"client_id\": \"0001\",\n \"client_name\": \"Acme Co\",\n \"webshop_id\": \"00001\",\n \"webshop_name\": \"exampleshop.com\"\n },\n \"global\": {\n \"order_id\": \"00001-0000-1465\",\n \"customer_id\": \"00001-00001-7\"\n }\n}\n/**\n * outputFieldN: The first output field.\n * stageN: The first aggregation stage.\n */\n{\n \"groupedByEmail\":\n [\n { $match: { \"_id.email\": { $exists: 1 } } },\n { $group: {\n _id: \"$_id.email\",\n \"counts_ByEmail\":\n {\n $push:\n {\n \"globalOrderId\": \"$_id.globalOrderId\",\n \"status\": \"$_id.status\",\n \"count_ByEmail\": \"$countEach\"\n }\n }\n }\n }\n ],\n \"groupedByGOrderId\":\n [\n { $match: { \"$_id.globalOrderId\": { $exists: 1 } } },\n { $group: {\n _id: \"$_id.globalOrderId\",\n \"counts_ByGOrderId\":\n {\n $push:\n {\n \"email\": \"$_id.email\",\n \"status\": \"$_id.status\",\n \"count_ByGOrderId\": \"$countEach\"\n }\n }\n }\n }\n ],\n \"groupedByStatus\":\n [\n { $match: { \"$_id.email\": { $exists: 1 } } },\n { $group: {\n _id: \"$_id.status\",\n \"counts_ByStatus\":\n {\n $push:\n {\n \"globalOrderId\": \"$_id.globalOrderId\",\n \"status\": \"$_id.status\",\n \"count_ByStatus\": \"$countEach\"\n }\n }\n }\n }\n ]\n}\n", "text": "Hi @steevejtried to tweak your suggestion and I have 2 problems,My source document example is:since I added one more items to { _id } in first $group stage, I want to use $facet instead of 2nd $group, when the different sub-pipelines contains the different $groups ( similarly to you 2nd $group stage).My issue is I got a syntax error and I can not figure out what is wrong.\nThe content in the 2nd stage, in the $ facet block is:EDIT: The latest error message is “unknown top level operator: $_id.globalOrderId” -EDIT END\nEDIT 2: When I commented out all 3 $match stages, it works properly, so there must be the problem! 
-EDIT 2 END\nSo what is wrong with it and you might noticed that I extended your solution from grouping by 2 items to by 3, so how would you do it instead of mine?\nThanks a lot!", "username": "Vane_T" }, { "code": "", "text": "is totally beginner: in your two $group stage in some cases you use “” around “_id”, and “$group”, in some cases you don’t. Why is the difference?Cut-n-paste error from my part. I always use double quotes around literals.In the $match, you should not use the dollar sign for the field names _id.email and _id.globalOrderId.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
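For anyone skimming to the fix: dropping the leading "$" from the field paths in the three `$match` stages is all the `$facet` needs, roughly as in the fragment below (only the `$match` lines change, the `$group` stages stay as posted; the third sub-pipeline presumably meant to test the status key, which is shown here as an assumption):

```js
$facet: {
  groupedByEmail: [
    { $match: { "_id.email": { $exists: true } } },
    // ...$group by "$_id.email" as in the original post
  ],
  groupedByGOrderId: [
    { $match: { "_id.globalOrderId": { $exists: true } } },
    // ...$group by "$_id.globalOrderId"
  ],
  groupedByStatus: [
    // assumption: this stage was meant to check the status key, not email
    { $match: { "_id.status": { $exists: true } } },
    // ...$group by "$_id.status"
  ]
}
```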
How to - complex (for me) aggregation pipeline
2021-01-10T14:13:19.219Z
How to - complex (for me) aggregation pipeline
4,535
https://www.mongodb.com/…e_2_1024x333.png
[ "aggregation" ]
[ { "code": "const agg = [\n {$match: {}},\n {$addFields: {\n avgGrowth: {\n $ceil: {\n $avg: '$history.growth'\n }\n }\n }\n },\n {\n $merge: {\n into: { db: \"mydb\", coll: \"test\" },\n on: '_id',\n whenMatched: 'replace'\n },\n }\n]", "text": "I’m trying to perform addFields and merge to take an entire collection every month and update it with new calculated fields. I’m getting the below error:image1076×350 18.5 KBHere’s the code:", "username": "lmb" }, { "code": "", "text": "Hi @lmb,I believe the issue is that with your version (4.2) you cannot do a merge back to the aggregate collection.It is allowed only from 4.4Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Merge Aggregation error with AddFields
2021-01-15T20:40:37.129Z
Merge Aggregation error with AddFields
1,643
null
[ "replication" ]
[ { "code": "", "text": "Hello Dears,\nI added a new mongo member to my replica set ,when the member exceeded 280 GB and after 10 hours , the new replication member starting again from scratch,\nI can’t detect the root case for restarting from scratch ,and I don’t know if the problem became from the replication parameters such as “initialSyncTransientErrorRetryPeriodSeconds” ,“oplogInitialFindMaxSeconds”,\nDoes anyone have a concern or recommendation about my case?,\nNotes : my database exceeded 3.5 TB , version 4.4\nThanks a lot", "username": "Abdelrahman_N_A" }, { "code": "initialSyncTransientErrorRetryPeriodSeconds10", "text": "Hard to determine without logs from the added member. If you post these the relevant information for the restart of sync.Fault toleranceIf a secondary performing initial sync encounters a non-transient (i.e. persistent) network error during the sync process, the secondary restarts the initial sync process from the beginning.Starting in MongoDB 4.4, a secondary performing initial sync can attempt to resume the sync process if interrupted by a transient (i.e. temporary) network error, collection drop, or collection rename. The sync source must also run MongoDB 4.4 to support resumable initial sync. If the sync source runs MongoDB 4.2 or earlier, the secondary must restart the initial sync process as if it encountered a non-transient network error.By default, the secondary tries to resume initial sync for 24 hours. MongoDB 4.4 adds the initialSyncTransientErrorRetryPeriodSeconds server parameter for controlling the amount of time the secondary attempts to resume initial sync. If the secondary cannot successfully resume the initial sync process during the configured time period, it selects a new healthy source from the replica set and restarts the initial synchronization process from the beginning.The secondary attempts to restart the initial sync up to 10 times before returning a fatal error.", "username": "chris" }, { "code": "", "text": "The chunks collection may be caused the problem, the size of collection exceeded 1.5 TB ,{“t”:{\"$date\":“2021-01-13T22:39:39.179+04:00”},“s”:“I”, “c”:“INITSYNC”, “id”:21183, “ctx”:“ReplCoordExtern-4”,“msg”:“Finished cloning data. 
Beginning oplog replay”,“attr”:{“databaseClonerFinishStatus”:“InitialSyncFailure: CallbackCanceled: Error cloning collection ‘MyDB.fs.chunks’ :: caused by :: Initial sync attempt canceled”}}\n{“t”:{\"$date\":“2021-01-13T22:39:39.179+04:00”},“s”:“I”, “c”:“INITSYNC”, “id”:21191, “ctx”:“ReplCoordExtern-4”,“msg”:“Initial sync attempt finishing up”}\n{“t”:{\"$date\":“2021-01-13T22:39:39.179+04:00”},“s”:“I”, “c”:“INITSYNC”, “id”:21192, “ctx”:“ReplCoordExtern-4”,“msg”:“Initial Sync Attempt Statistics”,“attr”:{“statistics”:{“failedInitialSyncAttempts”:2,“maxFailedInitialSyncAttempts”:10,“initialSyncStart”:{\"$date\":“2021-01-11T21:33:46.425Z”},“initialSyncAttempts”:[{“durationMillis”:42270609,“status”:“MaxTimeMSExpired: error fetching oplog during initial sync :: caused by :: Error while getting the next batch in the oplog fetcher :: caused by :: operation exceeded time limit”,“syncSource”:“10.74.4.24:27017”,“rollBackId”:1,“operationsRetried”:1,“totalTimeUnreachableMillis”:2245},{“durationMillis”:93613792,“status”:“MaxTimeMSExpired: error fetching oplog during initial sync :: caused by :: Error while getting the next batch in the oplog fetcher :: caused by :: operation exceeded time limit”,“syncSource”:“10.74.4.24:27017”,“rollBackId”:1,“operationsRetried”:10,“totalTimeUnreachableMillis”:19056}],“appliedOps”:0,“initialSyncOplogStart”:{\"$timestamp\":{“t”:1610536774,“i”:5}},“initialSyncOplogFetchingStart”:{\"$timestamp\":{“t”:1610536773,“i”:7}},“totalTimeUnreachableMillis”:15213,“databases”:{“databasesCloned”:1,“admin”:{“collections”:3,“clonedCollections”:3,“start”:{\"$date\":“2021-01-13T11:19:34.542Z”},“end”:{\"$date\":“2021-01-13T11:19:37.545Z”},“elapsedMillis”:3003,“admin.system.version”:{“documentsToCopy”:2,“documentsCopied”:2,“indexes”:1,“fetchedBatches”:1,“start”:{\"$date\":“2021-01-13T11:19:34.705Z”},“end”:{\"$date\":“2021-01-13T11:19:35.612Z”},“elapsedMillis”:907,“receivedBatches”:1},“admin.system.users”:{“documentsToCopy”:2,“documentsCopied”:2,“indexes”:2,“fetchedBatches”:1,“start”:{\"$date\":“2021-01-13T11:19:35.612Z”},“end”:{\"$date\":“2021-01-13T11:19:37.221Z”},“elapsedMillis”:1609,“receivedBatches”:1},“admin.system.keys”:{“documentsToCopy”:3,“documentsCopied”:3,“indexes”:1,“fetchedBatches”:1,“start”:{\"$date\":“2021-01-13T11:19:37.221Z”},“end”:{\"$date\":“2021-01-13T11:19:37.545Z”},“elapsedMillis”:324,“receivedBatches”:1}},“MyDB.fs.chunks”:{“documentsToCopy”:4135045,“documentsCopied”:8019,“indexes”:3,“fetchedBatches”:302,“start”:{\"$date\":“2021-01-13T12:17:01.011Z”},“receivedBatches”:302}}},“config”:{“collections”:0,“clonedCollections”:0},“test”:{“collections”:0,“clonedCollections”:0}}}}}", "username": "Abdelrahman_N_A" }, { "code": "db.printReplicationInfo()...\n\"initialSyncAttempts\": [\n {\n \"durationMillis\": 42270609,\n \"status\": \"MaxTimeMSExpired: error fetching oplog during initial sync :: caused by :: Error while getting the next batch in the oplog fetcher :: caused by :: operation exceeded time limit\",\n \"syncSource\": \"10.74.4.24:27017\",\n \"rollBackId\": 1,\n \"operationsRetried\": 1,\n \"totalTimeUnreachableMillis\": 2245\n },\n {\n \"durationMillis\": 93613792,\n \"status\": \"MaxTimeMSExpired: error fetching oplog during initial sync :: caused by :: Error while getting the next batch in the oplog fetcher :: caused by :: operation exceeded time limit\",\n \"syncSource\": \"10.74.4.24:27017\",\n \"rollBackId\": 1,\n \"operationsRetried\": 10,\n \"totalTimeUnreachableMillis\": 19056\n }\n ],\n...\n", "text": "Looks like it may be related to fetching the oplog. 
It could be ageing out before before this new secondary can replicate it.It could be worth checking the output of db.printReplicationInfo()", "username": "chris" }, { "code": "uncaught exception: Error: error: {\n \"topologyVersion\" : {\n \"processId\" : ObjectId(\"5ffcc41079e2c352127fb36d\"),\n \"counter\" : NumberLong(2)\n },\n \"operationTime\" : Timestamp(0, 0),\n \"ok\" : 0,\n \"errmsg\" : \"Oplog collection reads are not allowed while in the rollback or startup state.\",\n \"code\" : 13436,\n \"codeName\" : \"NotMasterOrSecondary\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1610606466, 4),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}", "text": "I tried to run the command “db.printReplicationInfo()” , but because the server status is startup2 ,the server raised error,", "username": "Abdelrahman_N_A" }, { "code": "", "text": "That should be run on your existing cluster.", "username": "chris" }, { "code": "", "text": "the problem fixed when I increased the following parameters\ndb.adminCommand( { setParameter: 1, initialSyncTransientErrorRetryPeriodSeconds: 864000 } )\ndb.adminCommand( { setParameter: 1, oplogInitialFindMaxSeconds: 600 } )", "username": "Abdelrahman_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
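The failures quoted above are oplog-fetch timeouts during a multi-day initial sync, and the thread resolves them by raising two server parameters. A minimal shell sketch of the checks that usually accompany that fix; the resize value is an assumption for illustration, not something the thread actually ran:

```js
// On the sync source (the member the new node copies from):
db.printReplicationInfo()      // shows the configured oplog size and the time window it covers

// If that window is shorter than the expected initial-sync duration, the oplog can be
// grown online (size is in MB; the 50 GB below is purely illustrative):
db.adminCommand({ replSetResizeOplog: 1, size: 51200 })

// Parameters the thread ultimately raised on the syncing member:
db.adminCommand({ setParameter: 1, initialSyncTransientErrorRetryPeriodSeconds: 864000 })
db.adminCommand({ setParameter: 1, oplogInitialFindMaxSeconds: 600 })
```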
Why is the new replication member not continuing the replication?
2021-01-13T13:41:11.895Z
Why is the new replication member not continuing the replication?
5,190
null
[ "atlas-device-sync" ]
[ { "code": "failed to validate upload changesets: failed to validate ArrayInsert instruction: cannot have index (30) greater than prior-size (0) (ProtocolErrorCode=212) { \"partition\": \"farm_${id}\" }", "text": "Keep getting BadChangeset Error (ProtocolErrorCode=212) in Realm SyncI currently had developing using MongoDB Realm in React Native, and I keep running into this error in my MongoDB Realm logs:failed to validate upload changesets: failed to validate ArrayInsert instruction: cannot have index (30) greater than prior-size (0) (ProtocolErrorCode=212)I created the partition doing the following:\n { \"partition\": \"farm_${id}\" }I’ve searched around, but haven’t seen anything about this error message in topics.I’m using RealmJS 10.12.1 and React Native 0.63.3.I hope that someone helps to solve this issue.", "username": "Henike_Voss" }, { "code": "", "text": "We believe you have encountered a bug and we are currently working on a fix.", "username": "Kenneth_Geisshirt" }, { "code": "", "text": "@Henike_Voss We have released Realm JavaScript v10.1.3 with a fix.", "username": "Kenneth_Geisshirt" }, { "code": "", "text": "@Kenneth_Geisshirt thank you very much. I will test it and soon send you feedback.", "username": "Henike_Voss" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Keep getting BadChangeset Error (ProtocolErrorCode=212) in Realm Sync
2021-01-08T00:25:32.171Z
Keep getting BadChangeset Error (ProtocolErrorCode=212) in Realm Sync
2,509
null
[ "monitoring" ]
[ { "code": "", "text": "Hi!We have started experiencing fairly high response times on our M10 cluster (around 1000-1300ms, previously 200-300ms) the past month. This cluster has not been publicly deployed, and the only connections that should have been made to the cluster is from our automated tests and our development team (whose homes are the only whitelisted IPs).While troubleshooting this using the shard analytics dashboards we found something weird. Even though we haven’t touched the cluster or its connected realm apps for the past month, it has constantly had around 60 connections to it. We tried to restart the cluster by pausing it for 1h, and then resuming it, but the connections were still there.Furthermore, when looking at the Memory graph, our virtual memory has been around 2-3GB, while our resident memory has not ever exceeded 550MB, which according to the information tab could be because of the amount of open connections to the database.We’re considering upgrading our cluster to M30, but before we are still interested in knowing what might be causing this. Are these kind of mysterious connections common? Can they affect our response times this much?Thanks in advance!", "username": "clakken" }, { "code": "", "text": "These sound like internal connections required for the distributed system to maintain internal availability heartbeats, as well as for MongoDB Atlas’ built in monitoring data", "username": "Andrew_Davidson" } ]
Atlas Cluster connections and memory allocation that can not be explained
2021-01-14T14:25:15.847Z
Atlas Cluster connections and memory allocation that can not be explained
1,891
null
[ "event" ]
[ { "code": "", "text": "Hi everyone, MongoDB.live 2021, our biggest conference of the year where the data community comes to connect, explore, and learn, is back on July 13-15 and we’re looking for speakers from the community to inspire attendees by introducing them to new technologies, ideas, and solutions.We are accepting submissions for multiple formats from deep live tutorials to 15-minute lightning talks across both technical and non-technical tracks below:Whether you are a first-time or experienced speaker, we want to hear from you. We offer professional speaker coaching and workshops for all accepted speakers and we will be there to support you every step of the way.Need some help with ideation and writing your abstract? Check out our free MongoDB University course, “Crafting Conference Abstracts” and this article on \" How to Write a Compelling Event Submission \".Call for Speakers for MongoDB.live closes on February 11, 2021 - learn more and submit your talk here.Comment below if you have any questions. We look forward to seeing submissions from our community!", "username": "Celina_Zamora" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Calling speakers for MongoDB.live 2021
2021-01-15T19:13:07.284Z
Calling speakers for MongoDB.live 2021
3,139
null
[]
[ { "code": "", "text": "Hi,I have a requirement where I am migrating data from Kafka using Kafka-Connect and dumping the data to collection A. And, I am transforming the data present in collection A using Aggregation commands such as $project, $map and $merge, final step is to merge the transformed data from collection A to collection B.There are around 1612790 documents in collection A (18.91GB size), now when I click on “merge documents” button only some 30K documents are being pushed to collection B and later the merge aggregation command throws error: “An unknow error occured”.Please let me know why this error is occurring ? I am not sure what is the exact issue. I suspect it could be the size of the data which it is not able to migrate in one shot.Kindly help me out on this issue.Thanks,\nVinay", "username": "Vinay_Gangaraj" }, { "code": "", "text": "Hi @Vinay_Gangaraj,When you say pressing a button named “merge documents” are you referring to running it via compass or atlas data explorer?If yes I assume that there are 2 limitations in place:What I would recommend is to take the pipeline and use it in a mongo shell or any language script like python or js.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Merge Aggregation command in Mongo Shell throws error
2021-01-15T08:15:05.588Z
Merge Aggregation command in Mongo Shell throws error
2,042
null
[]
[ { "code": "", "text": "Hi there, I’ve been through the docs a number of times now but I just can’t figure out the answer to my question. Realm seems all or nothing in one direction, which would make it not usable for my project, but maybe people can shed light on what I found to not be documented well. My use case:Like I said above, it seems clear that you can take an app that was initially offline only and make it cloud sync only, but I can’t find anything about turning that on and off as needed, or rather even any documentation on the transferring of offline to cloud sync at all. I only see it talked about here on the forums.", "username": "Steven_Lambe" }, { "code": "", "text": "I can only assume no one knows the answer to this, or its not possible at this point. So does anyone know if you can turn sync on and off programatically as needed?", "username": "Steven_Lambe" }, { "code": "-suspend-resume", "text": "@Steven_Lambe There is a connection -suspend and connection -resume API which will allows you to programmatically control the syncing. However, I’m not sure that’s what you are looking for.Non-sync realms and Sync realms have different file formats and are opened with different configuration parameters. There is no API to convert between the two so you’ll to copy data from one file to another realm file manually.", "username": "Ian_Ward" }, { "code": "", "text": "I’m essentially trying to not force users to signup for basic use, but if they want to pay a subscription fee, then they sign up and can start to sync. Until then though, I want them to have full local abilities without blockers. General flow:A suspend and resume API might work, but I presume that requires an account from the get go which I’m trying to avoid? I’m trying to put up as few barriers to entry as possible. Talking to many in my target audience they are still people who if they are prompted to create an account on first open to use the app essentially offline, they bounce.", "username": "Steven_Lambe" }, { "code": "", "text": "Hmm - one thing you might consider is anonymous authentication -\nhttps://docs.mongodb.com/realm/ios/authenticate#anonymous-authenticationThat way you can log them in from the start in the code without the user being aware", "username": "Ian_Ward" }, { "code": "", "text": "It likely wouldn’t with the clause of: “An Anonymous user object is not intended to be reused, and once a user logs out, they will not be able to retrieve any previous user data.”I want to give users the ability to essentially use the app offline indefinitely if the so choose, and optionally turn on syncing, and at that create an account. If the user’s phone, or their identity provider, logged them out for some reason, all their data would essentially be lost. That’s why I’d like to keep it all local until they choose to turn on syncing.Essentially what I’m getting is it is all or nothing when it comes to syncing. You’re either an offline DB, or an online DB that requires authentication of the user, making it not viable for my usage.", "username": "Steven_Lambe" }, { "code": "let taskResults = syncdRealm.objects(TaskClass.self) //this reads all sync'd tasks\nlet realm = try! Realm() //this will create a local default.realm file\ntry! 
realm.write {\n for task in taskResults {\n let localTask = realm.create(TaskClass.self, value: task, update: .all) //write out the data\n realm.add(localTask)\n }\n}\n", "text": "I really think Realm fits this use case pretty easily.Let me clarify a couple of things:If the user’s phone, or their identity provider, logged them out for some reason, all their data would essentially be lostRealm is an offline first database - meaning that data is always stored locally first and then sync’d to the cloud. If the user disconnects from their account, their data is still on the device and not lost.You’re either an offline DB, or an online DB that requires authentication of the user,Right! According to how the use case was described above, that’s exactly this use case; When they want to use sync’d data, they have an account and when they are local only, they don’t. Sound like a good fit!For your use case, if the users starts off local-only, they don’t need any kind of account. When they choose to move to sync, they would create an account and their data would sync.If later they decide they no longer want sync’d data, it would not be difficult to move back to local only storage not requiring an account, and you could remove the online data.Let me speak to this with some Code (Swift in this case) as talking to sync’d data vs local data is just a few different lines of code:Reading sync’d data is pretty much three stepsyour_app.login(credentials: creds) { result in }Realm.asyncOpen(configuration: config) { result in }let realm = try! Realm(configuration: user!.configuration(partitionValue: somePartition))\nlet mySyncdTasks = realm.objects(TaskClass.self) //read the sync’d tasksReading local data it’s pretty much two stepslet myLocalRealm = try! Realm()let myTasks = realm.objects(TaskClass.self)when the user switches from sync to local, all of the data is already stored locally so copying it from the local sync Realm file to a local only file would be pretty painless - the internal file formats are slightly different as @Ian_Ward mentioned above so that step is required.The code to move sync’d data to local data is just this:I would encourage you to write some code to get the feel for Realm and explore it for this use case.If there’s something we didn’t address, please let us know as it really does seem like a good fit to me.", "username": "Jay" }, { "code": "my_app", "text": "That sounds a lot more like what I expected to be able to do. To clarify a few of things though:For your use case, if the users starts off local-only, they don’t need any kind of account.I would be working in React Native, and I kind of get the top half, but I more understand the last code example.Any help clarifying the above would be appreciated. I found your initial code examples lacking context (I haven’t gone deep with Swift, but I think it’s more just me wondering what my_app was supposed to be and the like). If you are able to example in JS great, otherwise Swift is great as well and maybe just a bit more of a verbose example like your online to offline one might help me understand what you mean?", "username": "Steven_Lambe" }, { "code": "%22Task%20Tracker%22", "text": "If I start with offline only DBs, and transition to a synced DB does the underlying database file remain, or does the synced database file have the same name therefore overwriting it?No - it is not overwritten. When you transition from a local database to sync’d the filename will be different. 
The local database can be called anything you want but by default it’s called default.realm. Whereas sync’d realms are auto-named according to their partition key; so for example an Object whose partition key is Task Tracker will have a local file name of %22Task%20Tracker%22. Additionally the path to the file stored on the device will be different.I’m honestly still not clear how the transition from offline to online works. Are you referencing using the standalone RealmDB, and then transition to the Sync RealmDB and vice versa? Are they separate libraries? Do you have to essentially do the inverse of your online to offline example or does it just read the existing data and go from there?You are in control of that process. If you read in realm results from the local realm, connect to the sync’d realm and write the objects, the files and structure is taken care of for you - the local sync files are created and the objects are written to the server. So yes, if you follow that pattern, it’s the inverse of going from sync to local. It’s really not a big deal - I updated an app we use internally based on the Task Tracker demo to be either online or offline and it took about 30 minutes, including updating the UI.initially recommended using Anon Auth to start users offIf you are starting uses off storing the data locally, there is no authentication at all so you do not need Anonymous Users in any way.but I am curious if there are any foreseen issues if a user then returns to “sync mode”?Nope. Whether local or online how your code interacts with Realm objects are the same. The difference is how to ‘open a connection’ to realm as it’s different between a local file connection and a sync connection. The documentation has all of that laid out.I found your initial code examples lacking contextlol. Well, the context was in the steps, not the code! The code wasn’t really the important part as steps are identical across the board which was really the point of that. The code itself is swift but if you review the documentation, you will find the corresponding steps and code in React.Any further examples would just be a duplicate of he existing documentation so I think that’s a great place to start. The Realm Getting Started Guide is available. Once you have a basic app framework coded and feel more comfortable then you can implement sync’ing and the BETA documentation is a great place to start. Sync Data", "username": "Jay" }, { "code": "", "text": "No code examples required, I think you clarified what I was missing in this reply well. I was missing what the process was when you were talking about transitioning to online mode and I wasn’t sure if it was supposed to be “magic” somehow or if it was a more programatic move. I much prefer a more programatic move so there is more control as it seems there is.Thank you.", "username": "Steven_Lambe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
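Since the original poster is on React Native, here is a rough Realm JS counterpart of Jay's Swift snippet, going in the opposite direction (copying the local file into a synced realm once the user opts into sync). The app ID, task model and partition scheme are assumptions, not part of the thread:

```js
import Realm from "realm";

const app = new Realm.App({ id: "<your-realm-app-id>" });        // placeholder app id

// Assumed models: the synced copy carries the partition field, the local one does not.
const LocalTask = { name: "Task", primaryKey: "_id",
                    properties: { _id: "objectId", title: "string" } };
const SyncedTask = { name: "Task", primaryKey: "_id",
                     properties: { _id: "objectId", partition: "string", title: "string" } };

async function upgradeToSync(email, password) {
  const user = await app.logIn(Realm.Credentials.emailPassword(email, password));
  const partition = `user=${user.id}`;                            // assumed partition scheme

  const localRealm = await Realm.open({ schema: [LocalTask] });   // the existing local file
  const syncedRealm = await Realm.open({
    schema: [SyncedTask],
    sync: { user, partitionValue: partition },
  });

  const tasks = localRealm.objects("Task");
  syncedRealm.write(() => {
    for (const t of tasks) {
      // upsert each local object into the synced realm
      syncedRealm.create("Task", { _id: t._id, partition, title: t.title },
                         Realm.UpdateMode.Modified);
    }
  });

  localRealm.close();
  syncedRealm.close();
}
```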
Does Realm Fit My Use Case?
2021-01-05T19:16:58.876Z
Does Realm Fit My Use Case?
3,847
null
[ "dot-net" ]
[ { "code": "", "text": "Hi,I have a question regarding to Definitions and Builders. How I can append $comment when working with FilterDefinition like FilterDefinition filter = Builders.Filter.Eq(x => x.Id, entity.Id) where TEntity are my POCOs objects?", "username": "Harnier" }, { "code": "var filter = Builders<Test>.Filter.Eq(x=> x.Foo, \"baz\");\n\nFindOptions MyFindOptions = new FindOptions();\nMyFindOptions.Comment = \"token-001\";\n\nvar document = collection.Find(filter, MyFindOptions).First();\n", "text": "Hi @Harnier, and welcome to the forum!How I can append $comment when working with FilterDefinitionYou can try to append the comment as an option after the filter, for example:Regards,\nWan.", "username": "wan" }, { "code": "protected OperationResult<UpdateResult> UpdateMany(FilterDefinition<TEntity> filter, UpdateDefinition<TEntity> update,\n UpdateOptions updateOptions = null, long? aggregateVersion = null)\n{\n var result = Collection.UpdateMany(filter, update, updateOptions ?? new UpdateOptions());\n\n .....\n\n return result.AsSuccessResult();\n}\n", "text": "Hi @wan,\nThanks for the quick response. I should have been more specific when I posted my question. As you explained earlier, working with reading operations is pretty straight forward.\nHowever my question is regarding to write operations like update and delete. Let’s assume I have this method to update multiple documents:In this case UpdateOptions class doesn’t have support for the Comment property like FindOptionsBase class has. In a previous post you mentioned that there is a ticket DRIVERS-742 to add support to this\nbut I don’t have access to see its status. Is it already resolved? If the answer is no, the only workaround for updates is the $comment query operator?", "username": "Harnier" } ]
How I can append $comment when working with FilterDefinition
2021-01-11T17:34:05.457Z
How I can append $comment when working with FilterDefinition
2,586
null
[ "atlas-triggers" ]
[ { "code": "", "text": "I have a Database trigger that runs a function when a new document in collection A is added or updated, and it works fine. Is there a way to fire the trigger only if a field in my other collection B has a specific value?An example of what I am trying to do is to fire a trigger when a user document in my USERS collection is added or updated, but only if a store document in my STORES collection has a value of true for “open” field. Let’s assume that I have to keep USERS and STORES as two separate collections!", "username": "Akbar_Bakhshi" }, { "code": "matchUSERS$lookupUSERSSTORESUSERSSTORES", "text": "Hi Akbar, welcome to the forum!When creating a database trigger, you can provide a match pattern that filters the change, but it will only have visibility of the USERS document. I haven’t tested to see whether you could include a $lookup in that statement to check on a different collection.I’d tackle this in one of two ways:", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi Andrew,\nI actually originally took the 1st route that you suggested, but then I realized that the condition on the other collection is not met most of the times and so the trigger would get fired every time, but the function would not run half the times or more and so that’s too many unnecessary triggers.I think I am going to take a variation of the second route that you suggested. It seems to be a more efficient way for my case.Thanks for the help.", "username": "Akbar_Bakhshi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Conditional Database Trigger on collection A based on a document's field value in collection B
2021-01-14T18:16:37.147Z
Conditional Database Trigger on collection A based on a document&rsquo;s field value in collection B
3,341
null
[ "app-services-user-auth" ]
[ { "code": "https://stitch.mongodb.com/api/client/v2.0/app/savory-backend-ailsq/location\n{\"error\":\"cannot find app using Client App ID 'savory-backend-ailsq'\"}\nhttps://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login?cookie=true\n{\"error\":\"error getting root API entity: IP address 18.192.255.128 is not allowed to access this resource.\",\"error_code\":\"AuthError\"}\nnslookup$ nslookup 18.192.255.128\nServer:\t\t192.168.1.1\nAddress:\t192.168.1.1#53\n\nNon-authoritative answer:\n128.255.192.18.in-addr.arpa\tname = ec2-18-192-255-128.eu-central-1.compute.amazonaws.com.\n\nAuthoritative answers can be found from:\n192.18.in-addr.arpa\tnameserver = x2.amazonaws.com.\n192.18.in-addr.arpa\tnameserver = pdns1.ultradns.net.\n192.18.in-addr.arpa\tnameserver = x4.amazonaws.org.\n192.18.in-addr.arpa\tnameserver = x3.amazonaws.org.\n192.18.in-addr.arpa\tnameserver = x1.amazonaws.com.\n", "text": "Hi there.Since at least Fri Jan 15 5:30 UTC, my app is having troubles connecting to the Realm backend.My app is a javascript webapp, and right now the following request is returning 404.Error responseI am not able to access Realm web console either. I am able to login at cloud.mongodb.com but when I try to switch to the Realm tab, it just redirects back to Atlas.From the chrome network tab, it looks like some Auth error. This url appears in the network trace and is returning 401.ResponseThis is not my IP address. A nslookup seems to indicate it as an AWS address.The status page is green: https://status.cloud.mongodb.com/Can anyone help?Thanks,\nAkshay", "username": "Akshay_Kumar" }, { "code": "", "text": "Looks like the problem is specific to my region? I am not able to access from India, but I am able to access it over a US VPN.", "username": "Akshay_Kumar" }, { "code": "", "text": "Can confirm that I’m having this same issue.Can’t login to my Realm apps without a US VPN", "username": "Tanuj_Wadhi" }, { "code": "", "text": "I got a confirmation from support that this is a known issue. They are looking into it.", "username": "Akshay_Kumar" }, { "code": "", "text": "Thanks @Akshay_Kumar!", "username": "Tanuj_Wadhi" }, { "code": "", "text": "Looks like it’s resolved now.For such incidents, having a status update on the status page would be really good.", "username": "Akshay_Kumar" }, { "code": "", "text": "Hi @Akshay_Kumar and @Tanuj_Wadhi,The fastest way to get support for urgent MongoDB Cloud service issues (including Atlas & Realm) is to contact the support team directly by logging into your account. The support team can investigate your cluster logs, collect additional information, and correlate patterns of reported problems so an incident report can be created.Once this issue was confirmed, a public incident report was created and updated on the MongoDB Cloud Status site.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "It seems like the status page was updated too late.Support confirmed to me that there is an ongoing issue as early as 6:30am UTC but the timestamp on the incident page says that it started at 8:15am UTC.", "username": "Akshay_Kumar" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Realm seems down? My app is not able to connect, cannot login to web console
2021-01-15T05:55:30.403Z
Realm seems down? My app is not able to connect, cannot login to web console
5,540
null
[ "aggregation" ]
[ { "code": "{\n \"name\":...,\n ...\n \"administration\" : [\n {\"name\":...,\"job\":...},\n {\"name\":...,\"job\":...}\n ],\n \"shareholder\" : [\n {\"name\":...,\"proportion\":...},\n {\"name\":...,\"proportion\":...},\n ]\n}\ndb.collection.aggregate([\n {\"$match\" : \n {\n \"$or\" : \n [\n {\"name\" : {\"$regex\": \"Keyword\"}}\n {\"administration.name\": {\"$regex\": \"Keyword\"}},\n {\"shareholder.name\": {\"$regex\": \"Keyword\"}},\n ]\n }\n },\n])\n{\"name\" : {\"$regex\": \"Keyword\"}}{\"$project\" : \n {\n \"_id\":false,\n \"name\" : true,\n \"__regex_type__\" : \"name\"\n }\n },\n{\"administration.name\" : {\"$regex\": \"Keyword\"}}\"__regex_type__\" : \"administration.name\"{\"$project\" : \n {\n \"_id\":false,\n \"name\" : true,\n \"__regex_type__\" : \n {\n \"$switch\":\n {\n \"branches\":\n [\n {\"case\": {\"$regexMatch\":{\"input\":\"$name\",\"regex\": \"Keyword\"}},\"then\" : \"name\"},\n {\"case\": {\"$regexMatch\":{\"input\":\"$administration.name\",\"regex\": \"Keyword\"}},\"then\" : \"administration.name\"},\n {\"case\": {\"$regexMatch\":{\"input\":\"$shareholder.name\",\"regex\": \"Keyword\"}},\"then\" : \"shareholder.name\"},\n ],\n \"default\" : \"Other matches\"\n }\n }\n }\n },\nSELECT name,administration.name,shareholder.name,(\n CASE\n WHEN name REGEXP(\"Keyword\") THEN \"name\"\n WHEN administration.name REGEXP(\"Keyword\") THEN \"administration.name\"\n WHEN shareholder.name REGEXP(\"Keyword\") THEN \"shareholder.name\"\n END\n)AS __regex_type__ FROM db.mytable WHERE \n name REGEXP(\"Keyword\") OR\n shareholder.name REGEXP(\"Keyword\") OR\n administration.name REGEXP(\"Keyword\");\n", "text": "Thank you for first.MongoDB Version:4.2.11I have a piece of data like this:I want to match some specified data through regular expressions: For a example:I want to set a flag when the $or operator successfully matches any condition, which is represented by a custom field, for example: {\"name\" : {\"$regex\": \"Keyword\"}} Execute on success:{\"administration.name\" : {\"$regex\": \"Keyword\"}} Execute on success: \"__regex_type__\" : \"administration.name\"I try do this:But $regexMatch cannot match the array,I tried to use $unwind again, but returned the number of many array members, which did not meet my starting point.I want to implement the same function as mysql this SQL statement in mongodb, like this:what should I do? Confused me for a long time.\nMaybe this method is stupid, but I don’t have a better solution.\nIf you have a better solution, I would appreciate it!!! 
Thank you!!!", "username": "binn_zed" }, { "code": "$facetdb.test.aggregate([\n {\n $facet: {\n name_match: [\n { $match: { name : { $regex: \"...\" } } },\n { $project: { name: 1, __regex_type__: \"name\" } }\n ],\n admin_name_match: [\n { $match: { \"administration.name\": { $regex: \"...\" } } },\n { $project: { name: 1, __regex_type__: \"admin_name\" } }\n ]\n }\n },\n { \n $project: {\n result: {\n $switch: {\n branches: [\n { case: \n { $gt: [ { $size: \"$name_match\" }, 0 ] }, \n then: { $arrayElemAt: [ \"$name_match\", 0 ] } \n },\n { case: \n { $gt: [ { $size: \"$admin_name_match\" }, 0 ] }, \n then: { $arrayElemAt: [ \"$admin_name_match\", 0 ] } \n }\n ],\n default: \"Other Matches\"\n }\n }\n }\n },\n]).pretty()\n{\n \"result\" : {\n \"_id\" : 1,\n \"name\" : \"john\",\n \"__regex_type__\" : \"admin_name\"\n }\n}\n$facet", "text": "Hello @binn_zed, welcome to the MongoDB Community forum.You can use $facet stage to match each of the conditions and project the result like in the following aggregation.This prints an output like this:This gives the required output, I think. You can also use other ways to project after the $facet stage.", "username": "Prasad_Saya" }, { "code": "db.base.aggregate([\n {\"$match\" : \n {\n \"$or\" : \n [\n {\"name\" : {\"$regex\": \"...\"}},\n {\"administration.name\": {\"$regex\": \"...\"}},\n {\"shareholder.name\": {\"$regex\": \"...\"}},\n ]\n }\n },\n {\n $facet: {\n name_match: [\n { $match: { name : { $regex: \"...\" } } },\n { $project: { name: 1, __regex_type__: \"name\" } }\n ],\n admin_name_match: [\n { $match: { \"administration.name\": { $regex: \"...\" } } },\n { $project: { name: 1, __regex_type__: \"admin_name\" } }\n ],\n shareholder_name_match: [\n { $match: { \"shareholder.name\": { $regex: \"...\" } } },\n { $project: { name: 1, __regex_type__: \"shareholder_name\" } }\n ]\n}\n },\n { \n$project: {\n \"name_match\":true,\n \"admin_name_match\":true,\n \"shareholder_name_match\":true\n}\n },\n])\n**_id name __regex_tyepe__**\n", "text": "hey.First thank youAfter reading your answer, I am first convinced by your wisdom, This is really beautiful\nBut the $switch behind the latter $project only limits the last result set, which is not what I want. 
In the end, I did it like this.\nThis will return a matching array for each corresponding item, but I prefer it not in an array format, but in this format:Only add one string of matching type per lineAnyway thank you ", "username": "binn_zed" }, { "code": "", "text": "And how to limit the result set in each $facet, for example, I need to limit the total number of records to 10, ie if I use {$limit 10} in the $facet, it will become the result of the limit per condition, not all Sum of resultsThanks again", "username": "binn_zed" }, { "code": "db.test.aggregate([\n { \n $match : {\n $or : [\n { name : { $regex: \"…\" }},\n { \"administration.name\": { $regex: \"…\" }},\n { \"shareholder.name\": { $regex\": \"…\" }},\n ]\n }\n },\n {\n $facet: {\n name_match: [\n { $match: { name : { $regex: \"...\" } } },\n { $project: { name: 1, regex_type: \"name\" } }\n ],\n admin_name_match: [\n { $match: { \"administration.name\": { $regex: \"...\" } } },\n { $project: { name: 1, regex_type: \"admin_name\" } }\n ],\n shareholder_name_match: [\n { $match: { \"shareholder.name\": { $regex: \"…\" } } },\n { $project: { name: 1, regex_type: \"shareholder_name\" } }\n ]\n }\n },\n { \n $project: { \n result: { \n $concatArrays: [ \"$name_match\", \"$admin_name_match\", \"$shareholder_name_match\" ] \n }\n }\n },\n { \n $unwind: \"$result\" \n },\n { \n $replaceRoot: { newRoot: \"$result\" } \n }\n])\n{ \"_id\" : 2, \"name\" : \"pete\", \"regex_type\" : \"name\" }\n{ \"_id\" : 1, \"name\" : \"john\", \"regex_type\" : \"admin_name\" }\n{ \"_id\" : 3, \"name\" : \"jim\", \"regex_type\" : \"admin_name\" }\n$limit$match$facet", "text": "Here is the solution:The output looks like this:I need to limit the total number of records to 10,Add the $limit stage after the initial $match stage (before the $facet stage).", "username": "Prasad_Saya" }, { "code": "", "text": "Sir, it is an honor to receive your suggestions. This solved my problem.I added $limit 10 after $match, and the result still returns> 10 data\n3853×572 22.3 KB\nI do not know why.", "username": "binn_zed" }, { "code": "", "text": "i got it.\nAdd $limit10 to the end of the aggregate ", "username": "binn_zed" }, { "code": "_id", "text": "I added $limit 10 after $match, and the result still returns> 10 dataThat is because a document may have more than one match - e.g., name match and admin match (you can know this by verifying the _id field’s value). When there is more than one match, they show as multiple documents in the result.", "username": "Prasad_Saya" }, { "code": "", "text": "I have one more question for you. This is very simple for you. What if I need to sort the matches?\nFor example: I need to match the matching degree to sort:\n1.name: “keyword” -> exact match\n2.$regex: “^keyword” -> match at the beginning of the line\n3.$regex: “.+keyword.+” -> middle match\n4.regex: \"keyword \" -> match at the end of the lineThen according to the matching degree in ascending order, from small to", "username": "binn_zed" }, { "code": "", "text": "Yes, I see that you want to sort on matching degree; that may be possible. I don’t have a clear thought about it right away. How would you go about it, for example, in pseudo-code?", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query problem: how to get the matching items of the $or operator
2021-01-09T07:56:04.750Z
Query problem: how to get the matching items of the $or operator
14,946
null
[]
[ { "code": "", "text": "Sharding dont work for one Collection wiht Error:2021-01-14T18:13:13.898+0100 I SHARDING [Balancer] Balancer move lidl_index.SourceFile: [{ sourceFileId.country: MinKey, sourceFileId.source: MinKey, XXXx.filename: MinKey }, { sourceFileId.country: “DE”, sourceFileId.source: “l”, sourceFileId.filename: “2020/0668/12/28/receipts/0668_20XXXXXXX_Transaction_7a3ce786-17e5-4cad-beec-d138ff97801d.xml” }), from rs6, to shard-rs3 failed :: caused by :: CommandFailed: commit clone failed :: caused by :: Location40650: detected change of term from 14 to -1Any Idea ?", "username": "Kiron_Ip_Yarda" }, { "code": "", "text": "Hello @Kiron_Ip_Yarda,Reading the description it looks like you are hitting this bug: https://jira.mongodb.org/browse/SERVER-35658Also please confirm MongoDB version you are using to understand more.", "username": "Aayushi_Mangal" }, { "code": "", "text": "We have Mongodb 3.64", "username": "Kiron_Ip_Yarda" } ]
Sharding: CommandFailed: commit clone failed
2021-01-14T20:17:39.516Z
Sharding: CommandFailed: commit clone failed
1,518
null
[ "atlas-device-sync", "graphql" ]
[ { "code": "", "text": "Hi!I have a question regarding Realm Sync.\nHere is the use case:\ndocuments stored in Atlas Cluster have some data associated with them in a third-party service and a client should be able to retrieve data from that third-party service along with documents.In case of GraphQL API this can be solved via a custom resolver with a function that will query the data from third party, so the client will receive all the data in response.\nI wounder is there a way to achieve similar behavior in case of Realm Sync?", "username": "Ivan_Bereznev" }, { "code": "", "text": "Hi Ivan, welcome to the forum!I can see a couple of options for this:", "username": "Andrew_Morgan" }, { "code": "", "text": "Thank you for the quick answer!\nI was assuming that there are only options you’ve listed, but it’s always better to have an authoritative opinion instead of just assuming.\nLikely the first option is the way to go for my use case, as offline access is important.", "username": "Ivan_Bereznev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Sync: is there anything like Custom Resolvers for GQL API
2021-01-14T19:24:01.931Z
Realm Sync: is there anything like Custom Resolvers for GQL API
2,122
null
[ "queries" ]
[ { "code": "<h3>Introduction:</h3>\n<p>The purpose of this instruction is to provide directions for using billview. </p>\n<h3>Details:</h3> \n<p><b><span>To access BillViewer use: </span></b></p> \n<ul> \n <li>Select or the drop down available in CFE under links or </li> \n <li><span>Copy and paste <u> <a target=\\\"_blank\\\" href=\\\"http://10.204.175.97/billview/login.asp\\\">http://10.204.175.97/billview/login.asp</a></u></span>&nbsp;into a browser window\n <ul> \n <li>Select the <a href=\\\"/km/view/kforce99073\\\" target=\\\"_blank\\\">3 letter client code</a> from the drop down list</li> \n <li>Select “<b>Login”</b></li> \n </ul> </li> \n</ul> \n<p>On the <b>Main Query screen:</b></p> \n<ul> \n <li>Enter one of the following&nbsp; to find a participant\n", "text": "Hi All,I have a document which is having urls. I need make search a word which exist in url , but the same word is also exist in document.Now i need to create a search which search in url only.billview is appearing in url and other place of document.I need to search billview in url only. How do i achieve it ?Please help .", "username": "Amita_Rawat" }, { "code": "\"/billview/\"", "text": "Hello @Amita_Rawat, welcome to the MongoDB Community forum.You can try searching for \"/billview/\", the string along with the slash (\"/\") character - note the slashes happen to be there only within the URL.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks Prasad. But we have two text index on two different fields of collection. I need to this full search only on one field. As your query is searching on both the field. How to we perform this url search only on one field which is having text index.Any help will be appreciated.", "username": "Amita_Rawat" }, { "code": "", "text": "You can try a regex search on the specific field.", "username": "Prasad_Saya" } ]
Full Search Issue in url
2021-01-12T10:33:32.265Z
Full Search Issue in url
2,278
null
[]
[ { "code": "", "text": "Hi,\nIn a previous post How to enable profiling? - C# Driver - #6 by wan was mentioned that there is a ticket DRIVERS-742 for all MongoDB drivers to support the Comment property in UpdateOptions class. Are we going to see the Comment property in BulkWriteOptions class as well? Is that ticket resolved already?", "username": "Harnier" }, { "code": "", "text": "Hi @Harnier,Are we going to see the Comment property in BulkWriteOptions class as well? Is that ticket resolved already?For MongoDB .NET/C# driver, please see a more specific ticket tracker CSHARP-1541: Add support for $comment for profiling. Feel free to add yourself as a watcher or upvote the ticket to be notified on the ticket progress.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Support for Comment property in UpdateOptions class
2021-01-14T20:19:12.342Z
Support for Comment property in UpdateOptions class
1,408
null
[ "aggregation", "mongoose-odm" ]
[ { "code": "", "text": "Hi,\nI have multiple databases In MongoDB that each of them has its collections such as:Users Database has these collections:\n1. usersInfo 2.usersReq\nOR\nLicenses Database has these collections:\n1. licenseInfo 2.licenseHistoriesand …one of the basically Qry s for me is:\nHow can I use Aggregation ben multiple Databases without populate or after/befor populate?Do you have any idea to do this?stackoverflow Link", "username": "mehdi_parastar" }, { "code": "", "text": "Hi @mehdi_parastar and welcome,How can I use Aggregation ben multiple Databases without populate or after/befor populate?As of current version of MongoDB (v4.4), there is no way to perform cross-database lookup operation. There is an open ticket to track this request SERVER-34935, please feel free to upvote or add yourself as a watcher to receive notification updates on the ticket progress.Do you have any idea to do this?I’d suggest to redesign the schema. At least to merge the two databases into one.Also, I’d suggest to design the schema based on how your application will process the data. Currently it looks like it’s based on entities, i.e. usersInfo, usersReq, licenseInfo, etc. Depending on your use case, you may be able to embed some of the information. I’d highly recommend to review Building with patterns: A summary as a good reference starter.Regards,\nWan", "username": "wan" } ]
Aggregate Across Databases
2021-01-06T08:00:27.177Z
Aggregate Across Databases
7,450
null
[]
[ { "code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"title\": [\n {\n \"minGrams\": 3,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\n// Creating the Movies model\nconst Movies = mongoose.model(\"Movies\", new mongoose.Schema({}), \"movies\");\n\n// Impplementing autocomplete search\napp.get(\"/search\", async (req, res) => {\n try {\n let result = await Movies.aggregate([\n {\n $search: {\n autocomplete: {\n path: \"title\",\n query: req.query.title,\n fuzzy: {\n maxEdits: 2,\n prefixLength: 3,\n },\n },\n },\n },\n ]);\n res.status(200).json({\n status: \"success\",\n results: result.length,\n data: { result },\n });\n } catch (error) {\n console.log(error);\n }\n});\n", "text": "I am using mongodb’s sample movie database (https://docs.atlas.mongodb.com/sample-data/sample-mflix#std-label-sample-mflix) to experiment with mongodb’s autocomplete functionality. The search always returns an empty array. I am using Atlas on the free tier.I have set up a Search Index as follows:The model and the search query are setup as follows:I am using postman to run test queries and a sample query is: 127.0.0.1:3030/search?title=blackThe model can be queried using .find(), for example, and returns the full collection of documents.Any and all help is greatly appreciated.", "username": "Eest_Said" }, { "code": "", "text": "Hi @Eest_Said,I have exactly the same problem. .find() works fine with and without params and .aggregate() works with a “$match” stage, but when using autocomplete it only returns an empty array.Did you find any solution for this issue?", "username": "LudiG" }, { "code": "", "text": "@LudiG. Apologies for the delay in responding. Unfortunately I did not find a solution to utilise mongodb’s autocomplete feature. I ended up using another approach to address the issue I was facing. What are you trying to use the feature for? Best of luck.", "username": "Eest_Said" }, { "code": "", "text": "@Eest_Said no worries!\nI am still trying to figure out the error, but couldn’t track it down yet. At first I thought it’s an issue of mongoose returning plain javascript instead of mongoose objects when using aggregate (mongoose documentation), but it works fine with a $match stage so this shouldn’t be the problem.I want to use it for suggestions in a text search. It’s pretty much a straight forward application scenario for autocomplete. What did you use instead?", "username": "LudiG" }, { "code": " {\n \"$search\": {\n \"index\": \"<your-index-name>\",\n \"autocomplete\": {\n \"query\": `${req.query.name}`,\n \"path\": \"name\"\n }\n }\n }\n", "text": "@Eest_Said I finally happened to solve it!!The issue was that I used a different index name than “default” when creating the index and the autocomplete documentation (autocomplete) didn’t mention that this name has to be specified if it deviates from “default”. However I found a small note here in step 8, where it says that you need to specify the index name in the parameters of $search.Hence, when specifying the index like this it worked:", "username": "LudiG" }, { "code": "newQueryArray.splice(-1, 1, new RegExp());let result = await FruitTest.find({\n\n tags: {\n\n // Filter must inclupde ALL array items.\n\n // Final array item is regex - see newQueryArray above.\n\n $all: newQueryArray,\n\n },\n\n });\n", "text": "@LudiG wonderful! I can’t wait to test this out.So I used a simple regex to query the DB. My requirement is to allow searching of consecutive values. 
For example, the user might be searching for all entries that contain apples AND bananas so a search for ‘apples,b’ should return all entries that contain apples AND all entries starting with ‘b’. Because I use an array to store the progressive search, the key line of code to capture the current search value is …newQueryArray.splice(-1, 1, new RegExp(^${currentQueryString}));Then the DB is queried with a .find() using the $all operator.Thanks again @LudiG and I will try an implement what you have suggested over the weekend.", "username": "Eest_Said" } ]
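Pulling the thread's resolution together: the $search stage returns results once the non-default Atlas Search index name is spelled out. The index name below is an assumption:

```js
let result = await Movies.aggregate([
  {
    $search: {
      index: "movies_autocomplete",   // required whenever the index is not called "default"
      autocomplete: { path: "title", query: req.query.title },
    },
  },
  { $limit: 10 },
]);
```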
mongoDB autocomplete returns empty array
2020-12-28T08:14:16.690Z
mongoDB autocomplete returns empty array
4,547
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.12-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.11. The next stable release 4.2.12 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.12-rc0 is released
2021-01-14T21:27:12.511Z
MongoDB 4.2.12-rc0 is released
2,574
null
[]
[ { "code": "", "text": "Hey folks,I’m Scott , an independent full stack software engineer from New York City. I’m currently working on CrowdPower - an automated user engagement tool for SaaS companies. It’s a simple low-cost option for tracking users and automating messages to them. Thanks to @Manuel_Meyer for accepting CrowdPower into the startup program.This is my first project built with MongoDB (and also VueJS), and I don’t know how I could have built this without it. With regards to Mongo, I’m most interested in learning about design patterns and performance optimizations.Looking forward to seeing what everyone is working on.Thanks,\nScott", "username": "Scott_Weiner" }, { "code": "", "text": "Hi @Scott_Weiner! Welcome to the community & congrats on your acceptance into the startup program Would love to hear more about what you’ve learned so far and how you used that to build cool things like CrowdPower.Cheers,Jamie", "username": "Jamie" }, { "code": "", "text": "Thanks Jamie. It was challenging to grasp the concept of a document store at first, coming from building MySQL applications. But it’s been a tremendous help to not have to worry about each record matching a schema exactly or writing migration scripts for every field added to a table.I was able to build a rules engine to segment customers, a drag and drop email builder, and a marketing automation platform fairly easily with Mongo. It would have been a challenge to do all this with MySQL.", "username": "Scott_Weiner" }, { "code": "", "text": "Hello @Scott_Weiner, welcome to the MongoDB Community forum.With regards to Mongo, I’m most interested in learning about design patterns and performance optimizations.These two topics are very important in building an application (performance - everybody wants it!). Design of the data can affect the performance. Designing NoSQL document based data is different than that of the tabular data design, though few principles (e.g., relationships) are common. Having knowledge of any kind of data design helps grasipng he principles of document data design.That said, I learnt my document data design and about performance from the MongoDB University courses - I had found them quite comprehensive. In addition, there is information from documentation, blogs and webinars. I hope you will look into these and find something suitable to your needs.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks. I did do a couple of those courses when I was just getting started. Incredibly helpful. Will have to check out some others.", "username": "Scott_Weiner" } ]
Hello from New York City
2021-01-11T02:09:45.292Z
Hello from New York City
2,141
null
[ "atlas-functions" ]
[ { "code": "", "text": "Given a synced realm with the Email/Password authentication provider enabled, is it possible via a server function to search all app users by email address ?", "username": "Mauro" }, { "code": "UserUserUserexports = function({user}) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const userCollection = db.collection(\"User\");\n \n const partition = `user=${user.id}`;\n const defaultLocation = context.values.get(\"defaultLocation\");\n const userPreferences = {\n displayName: \"\"\n };\n \n console.log(`user: ${JSON.stringify(user)}`);\n \n const userDoc = {\n _id: user.id,\n partition: partition,\n userName: user.data.email,\n userPreferences: userPreferences,\n location: context.values.get(\"defaultLocation\"),\n lastSeenAt: null,\n presence:\"Off-Line\",\n conversations: []\n };\n \n return userCollection.insertOne(userDoc)\n .then(result => {\n console.log(`Added User document with _id: ${result.insertedId}`);\n }, error => {\n console.log(`Failed to insert User document: ${error}`);\n });\n};\nUserUserUserUser", "text": "Hi @Mauro - welcome to the MongoDB community forum!How I’d handle this would be to have a User collection that contains a document for each registered user (chances are that you’ll want to store some extra data for each user anyway. You can optionally link the User collection to the Realm user through custom data (in which case, the user token contains a snapshot of that custom data – refreshed every time the user logs in).You can auto-populate the User collection by setting up a Realm trigger that executes when a new user is registered:\n\nimage1396×795 72.8 KB\nThis is the code from that trigger:You then have to decide how you want the mobile app to access the data from the User collection:This article describe a chat app I built that uses option 1. It describes the data architecture used and includes a link to the repo containing both the iOS (SwiftUI) and Realm apps.", "username": "Andrew_Morgan" }, { "code": "", "text": "@Andrew_Morgan, thank you! Your solution works well.", "username": "Mauro" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Function & App Users
2021-01-14T05:18:31.564Z
Realm Function &amp; App Users
2,123
null
[]
[ { "code": "", "text": "Hey there,\nI’m Philipp and I love nature, surfing, beachvolleyball, food, computer games, shiny led projects, decent techno and much more. I am an engineer that shifted to software development which is now my full time passion.\nA bit over a year ago I decided with friends, which I met at a hackaton, that we will found a startup to tackle climate change issues. We develop a public webplatform and a software to bring structure and light in the sustainability jungle of companies to help them tackle aims like carbon neutrality or sustainable supply chains.\nFor this we need to build huge datasets - scrape data in the web - build data pipelines - do a lot of networking - … - it’s a long road.\nWe are all very motivated, in good faith, sometimes too young and unexperienced - but waiting is simply not an option for us ;). This is luckily not a big issue as working / helping out together with our network and experienced partners / communities already gave us the boost we need to become more and more successful with our vision.It is always a pleasure to tackle issues together in a strong community and maybe we can share some traveltime on different roads - always happy to help and thankful for help See you soon and please stay safe and soundPhilipp ", "username": "Philipp_Wuerfel" }, { "code": "", "text": "Hi @Philipp_Wuerfel, Welcome to the MongoDB community!Being from the West Coast, USA, I’m not terribly familiar with surfing outside of California and Hawaii Where do you prefer to surf?Would love to learn more about your climate change startup too!", "username": "Jamie" }, { "code": "", "text": "Hey Jamie!Thank you !\nI love to surf Atlantic coast line of France / Spain / Portugal → Most famous would be Hossegor (La Graviere or Peniche in Portugal) but also surfed the baltic sea (which is only possible during storm, onshore-windswell) which results in a quite cold and different experience \nNext time around May it will be the cold arctic north for me (Norway - Lofoten) as I will stay there a while visiting my brother and working remotely.´\nI’d realy love to experience surfing California and Hawaii but it is too far away Regarding the startup:\nOur first prototype: https://www.ecosearch.tech/\nThis will be our first access point to our platform. Every company will get a profile based on our sustainability data in our database. So basically every human should be able to check whether a company is active in this field and motivate them in case they are not.\nIn addition we consider giving companies the opportunity to be more transparent e.g. with data publications or trustful partnerships e.g. famous ngo’s and of course we offer them to work with us on possible solutions to tackle their sustainability issues.We have quite a few ideas in mind which we currently discuss with some companies in Germany.But we need to solve quite big problems and narrow them down → divide and conquer!\nBig questions are:\nHow do we know if the companies data is affected by Greenwashing?\nHow do we verify if data is correct?\nHow do we messure impact?\n…\nMuch things to do ", "username": "Philipp_Wuerfel" }, { "code": "", "text": "Thanks for sharing! I wonder if there are green certification orgs that you can work with to build a legitimacy-based model that you could then set metrics against What does the business model look like for your co? 
How do you plan to use the data to motivate businesses to change their practices (certification, PR, consumer pressure, etc.)?", "username": "Jamie" }, { "code": "", "text": "Exactly! We continuously evolve those partnerships but this takes time - networking - trust - … .\nThere are multiple motivations and we will not need to increase pressure as this will most likely increase anyways because of ongoing political changes, laws, general awareness of beeing part of the problem and scared of consquences, to just name a few. There are also quite a lot certifications out there and we will not add a new one to the jungle. We want to motivate companies to pick the good ones and if they participate let others know that they do!The main problem we see is, that companies willing to change are overwhelmed and every decision/investment is compared to profit. Planting trees to become climate neutral does sadly not fix the problem. We can’t plant that many trees (we burn them anyway) and we can’t wait 30 years to let them do their decarbonization. Companies need to reduce emissions, compensation is a part of it, but not the solution.Just as an example: Studies say in average around 70 to 80% of company emissions are so called scope 3 emissions. This means they are somewhere in the supply chain out of company environment and hard to measure but they need to radically reduce as soon as possible to prevent paris convention from failing.How nice would it be, if you could see which companies in your supply chain could be replaced by more sustainable ones? And how nice would it be, that this directly improves your own sustainability score because other companies up the ladder will pick you because you are more sustainable?Ecosearch is just a small piece to give informations on the surface to bring companies to a network of solution providers which are all part of the movement to face the crisis and not ignore it until it is too late.", "username": "Philipp_Wuerfel" } ]
Hello - Hello and happy new year from Berlin
2021-01-05T19:17:28.849Z
Hello - Hello and happy new year from Berlin
2,620
https://www.mongodb.com/…16c231127932.png
[ "mongodb-shell" ]
[ { "code": "", "text": "Hi,I am just started learning Mongo University M001 Course and I am trying to connect the MongoDB Cloud Atlas DB using my local system instead of integrated IDE environment provided by Course.I have installed the MongoDB Community Edition. and I have set it up by executing “C:\\Program Files\\MongoDB\\Server\\4.4\\bin\\mongo.exe” in Windows CMD.Then entered all below commands and none of them working.mongo “mongodb+srv://sandbox.y8aoy.mongodb.net/admin” --username m001-student\nmongo “mongodb+srv://m001-student:[email protected]/admin” (Used by Course)\nmongo mongodb+srv://m001-student:[email protected]/adminfor the first two commands, im getting (uncaught exception:test955×442 18.4 KB :\n@(shell):1:6) error.\nand for the last one im getting response as three dots like …can anyone assist me in connecting to MongoDB Atlas.", "username": "Amirhossein_Moravvej" }, { "code": "", "text": "You should use MongoDB University course specific forum for the course related question.You are running the mongo command while you are already in the mongo shell. You have to exit the mongo shell and run your command in the bash prompt of the IDE.", "username": "steevej" } ]
Connecting MongoDB Atlas to Local System Command Prompt / SyntaxError: unexpected token: string literal
2021-01-14T17:51:47.358Z
Connecting MongoDB Atlas to Local System Command Prompt / SyntaxError: unexpected token: string literal
2,981
null
[ "sharding" ]
[ { "code": "", "text": "Having issues with timeouts just running simple list collection functions after restoring a sharded cluster. I have 6 shards + configdb. After restoring all the data directories and start the cluster everything starts and connects but mongo shell will not pull data, yet I can see sh.status() output. Most of the errors on the mongo shell are “NetworkInterfaceExceededTimeLimit” running something simple like “show collections”. I see on the configdb it has a bunch of errors in general about \" *Command on database config timed out waiting for read concern to be satisfied.**\". I did not restore this cluster with 3 replicas each like the original cluster. It only has a single replica per shard and 1 configdb and all have an appropriate PRIMARY elected. Does a restored cluster need all replicas restored even if each shard has a primary host? This is version 3.6.16.", "username": "Todd_Vernick" }, { "code": "", "text": "I am seeing the same problem with 3.6, did you find a resolution?", "username": "Jonathan_Stairs" } ]
Timeouts connecting to restored sharded cluster
2020-08-25T20:53:24.076Z
Timeouts connecting to restored sharded cluster
2,220
null
[ "graphql" ]
[ { "code": "{\n \"field_name\": \"xxx\",\n \"function_name\": \"xxxfunc\",\n \"id\": \"7f50d150df84be4a7ecd25bc\",\n \"description\": \"Lorem ipsum\",\n \"input_type_format\": \"custom\",\n \"input_type\": {\n \"title\": \"xxxtitle\",\n \"type\": \"object\",\n \"properties\": {\n \"xxx\": {\n \"bsonType\": \"string\"\n },\n \"xxx\": {\n \"bsonType\": \"string\"\n },\n \"xxx\": {\n \"bsonType\": \"string\"\n }\n }\n },\n \"on_type\": \"xxxtype\",\n \"payload_type\": {\n \"type\": \"object\",\n \"title\": \"xxx\",\n \"properties\": {\n \"result\": {\n \"bsonType\": \"boolean\"\n },\n \"error\": {\n \"bsonType\": \"string\"\n }\n }\n },\n \"payload_type_format\": \"custom\"\n}", "text": "Hi!I’m trying to extend the documentation of my Mongo Realm GraphQL API. This far the barebone Graphiql-interface has been enough, but I now wish to add descriptions to each query and custom resolver.I tried to add the field “description” to my Custom Resolver, but the matching query in Graphiql Documentation Explorer still shows “No description”. As usual I can’t find any documentation on how the Custom Resolver json-files should be structured, so I’ll ask here instead. How can I populate my Graphiql Documentation Explorer with descriptions?This is an example of what I’ve tried:", "username": "petas" }, { "code": "", "text": "Hey @petas - unfortunately there is no way to modify the custom resolver description field at the moment. I would encourage you to add that in our feedback forum. We monitor that pretty closely to help influence our features/roadmap.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks @Sumedha_Mehta1. For anyone with the same wish, I posted the suggestion on the feedback forum here: GraphiQL Description field – MongoDB Feedback Engine", "username": "petas" } ]
Add description to Custom Resolver
2021-01-12T14:27:16.711Z
Add description to Custom Resolver
2,395
null
[ "atlas-device-sync", "app-services-data-access" ]
[ { "code": "parentIdcollection2permissions", "text": "Hi all!I’m building an iOS app with Realmk using Sync.\nI’m currently trying to implement data permissions to restrict access of the users to the data.\nAs I saw in this post, collection level permissions are not yet available, so the only way to manage read/write access for users is to do it with sync permission. Am I right?As far as I understand (from here), the management of the read/write permissions with sync is based on the comparison between the user id or a list of values which can be store in a collection and the partition key defined when enabling Sync.If my understanding is correct, I have the following problem: I have several collections, each representing a group containing other collections. I’m currently using the partition key to know which collection is included in which other collection, for example:Collection1:\nGroup1(id=“group1”, parentId=“root”), Group2(id=“group2”, parentId=“root”)Collection2:\nGroup1.2(id=“group1.1”, parentId=“groupe1”), Groupe1.2(id=“group1.2”, parentId=“group1”),\nGroup2.1(id=“group2.1”, parentId=“groupe2”)with partition key = parentIdSo with this architecture, let’s say I have 100 groups in collection2, if I want my user to have read access to 80 groups, does it mean that I need to store all the partitionId of all the groups in a specific collection (like the permissions one in this example)?I also have a second question: is there a way to let a user have read/write access only to objects in a collection that he created?Thanks for your help!", "username": "Julien_Chouvet" }, { "code": "partition: parentId=123345canReadPartitions: [\"parentId=12345\", \"parentId=54321\", \"otherAttribute=Fred\"]\nparentIdparentIdparentId: \"group1\"parentId: \"group1.1\"parentId: \"group1.1\"partition: \"user=fred\"", "text": "Hi Julien, you’re correct that MongoDB Realm Sync uses the partition value to determine what data gets syncd to the mobile app. Your only allowed a single partition key attribute for all collections, but than be an ecoded key-value pair e.g., partition: parentId=123345) which opens up any number of schemes.You can include arrays in your User document to specify what partitions a user can access. e.g.For your Realm sync permissions, you then call a Realm function (passing in the partition value) to determine whether that partition is in the current user’s approved list.In your case, if parentId is the only attribute then maybe store a list of parentIds in the user document. Note that you could also have your Realm function match patterns - e.g. so if they have parentId: \"group1\" in their list then it syncs parentId: \"group1.1\", parentId: \"group1.1\", …To only sync collections that the user has created, you could set the partition key to “user=fred”, or store the group id in a list within the user’s document. You could implement either scheme using database triggers. It feels like partition: \"user=fred\" could be the simpler approach.We’ve a new guide on the data architecture/partitioning used for a mobile chat app which may give you more ideas on how to make sync partitioning work best for you.", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi @Andrew_Morgan, thanks for your help!I read the mobile chat app example which is very clear and helped me to better understand the use of key-value pair for partition, and the use of triggers to check sync permissions.I just have a question: I understand that setting sync permissions allow to increase the security of the app. 
However, for my use-case I was wondering what is the difference between using sync permissions and the key-value partitions, as you did in the chat app, and keeping the partition value to know which group is included in which parent group (as explained in my first post) and adding another parameter to store the id of the user who created the group or “shared” if it is a group for all users. In this way, in my swift code I just need to filter my realm objects to match this new parameter = currentUser.id or “shared”.\nFor my use-case this solution appears to be easier than creating sync permissions, triggers and so on. However, I would like to know if it is reliable and safe to do that.Thanks again for your help!", "username": "Julien_Chouvet" }, { "code": "", "text": "Hi @Julien_Chouvet, I think there are a couple of potential disadvantages in relying on a filter in your query vs. restricting things through sync permissions (which means your local realm would contain all of the data for all groups, your filter would just restrict what data the application acted on):", "username": "Andrew_Morgan" }, { "code": "", "text": "Ok I see. I didn’t know that filtering objects implies that the local realm will contain all the data.\nThanks again for your help! ", "username": "Julien_Chouvet" }, { "code": "partitionValue: \"user=\\(user.id)\"", "text": "Hello again @Andrew_Morgan,Sorry to bother you with my questions but as I was reading again your chat app example, I was wondering what is the point of setting a permission for the User because when you open the Sync, you specify partitionValue: \"user=\\(user.id)\". Thus, it is impossible for a user to access the data of other users.Thanks.", "username": "Julien_Chouvet" }, { "code": "", "text": "It’s a security thing. If someone hacked or cloned the mobile app, the (secure) backend ensures that they can only sync data that the logged in user is entitled to.", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm permissions - Manage users' rights
2021-01-11T15:28:58.285Z
Realm permissions - Manage users' rights
5,417
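Following up on the permission discussion above, here is a minimal sketch of the kind of Realm function mentioned for the sync "read" rule. The canReadPartitions array on the user's custom data is an assumption carried over from the example in the thread; the function and field names are illustrative, not part of any SDK:

    exports = function (partition) {
      // context.user is the user attempting to sync; custom_data is their linked user document.
      const customData = context.user.custom_data || {};
      const allowed = customData.canReadPartitions || [];
      // Allow the sync only if the requested partition value is in the approved list.
      return allowed.includes(partition);
    };

The sync configuration would then call this function for its read permission, so even a cloned or hacked client could not pull partitions outside that list.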
null
[ "aggregation", "performance" ]
[ { "code": ".aggregate()", "text": "If a write a really long aggregation pipeline in my application, will the aggregation pipeline itself be transferred to mongodb on each .aggregate() call, or is the query itself cached?", "username": "Alex_Bjorlig" }, { "code": "", "text": "Hi @Alex_BjorligWe do cache the plans of a query, however to compare a cache on the server we need to receive the aggregation pipeline to the server.If you want to avoid that and the pipeline code is static you can use a view to define it.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Ok thanks @Pavel_Duchovny - in my case it’s not static so I will not use a view then.", "username": "Alex_Bjorlig" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Will MongoDB cache a big aggregation query?
2021-01-14T09:32:56.233Z
Will MongoDB cache a big aggregation query?
3,794
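To illustrate the view suggestion above for the case where the pipeline is static: a view stores the pipeline on the server, so the client only sends the view name rather than the whole pipeline on every call. A minimal mongo shell sketch with made-up collection and field names:

    // Created once; the pipeline then lives server-side.
    db.createView("activeOrderTotals", "orders", [
      { $match: { status: "active" } },
      { $group: { _id: "$customerId", total: { $sum: "$amount" } } }
    ]);

    // Clients query the view like a normal read-only collection.
    db.activeOrderTotals.find({ total: { $gt: 100 } });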
https://www.mongodb.com/…385df81c3abd.png
[ "atlas-functions", "atlas-triggers" ]
[ { "code": "", "text": "I have created a trigger with linked function, it will trigger when a document insert in specified collection,Total Requests: 32\nimage807×140 5.32 KB\nTotal Logs: 8\nimage834×394 32.4 KB\nApp shows Total Requests: 220\nimage789×213 10.2 KB\nIs there any way to see total requests count logs entry?", "username": "turivishal" }, { "code": "", "text": "From my own observations, the reported number of requests doesn’t necessarily give an instantaneous count (I’ve just logged in, synced some data and executed a trigger without the number of reports increasing).From the docs, these are the different types of events:There should be a log entry for most of these events.", "username": "Andrew_Morgan" } ]
How does Realm calculate the requests count?
2021-01-13T14:13:35.934Z
How does Realm calculate the requests count?
2,654
null
[]
[ { "code": "db.Col.aggregate( [\n { $match: { \"Symbol\" : \"x\" }},\n { $group: { _id: \"$GroupTime\", total: { $size: \"$MyArray\" } } }\n ])\n", "text": "Size dont work with aggregation\nMongodb 4.2.8return error\n“errmsg” : “unknown group operator ‘$size’”,\n“code” : 15952,", "username": "alexov_inbox" }, { "code": "$size$project$addFields", "text": "“errmsg” : “unknown group operator ‘$size’It is not a group operator. These are the group operators: Accumulators $group. But, you can use the $size in a $project (or $addFields) stage.What is it you are trying to do?", "username": "Prasad_Saya" }, { "code": "", "text": "how will be look like query with $addFields ?", "username": "alexov_inbox" }, { "code": "$sizeMyArray{ _id: 1, MyArray: [ \"apples\", \"bananas\", \"oranges\" ] }{ $addFields: { arraySize: { $size: \"$MyArray\" } } }arraySize: 3", "text": "In general, you use the $size operator with a field MyArray in a document: { _id: 1, MyArray: [ \"apples\", \"bananas\", \"oranges\" ] }The size of the array can be determined as follows:{ $addFields: { arraySize: { $size: \"$MyArray\" } } }This will get the output as: arraySize: 3", "username": "Prasad_Saya" } ]
$size doesn't work with aggregation
2021-01-12T12:54:10.052Z
$size doesn't work with aggregation
6,482
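Putting the two answers above together, the pipeline from the question could be rewritten so the array length is computed per document with $addFields and then accumulated with a real group operator. A sketch that keeps the original field names and assumes the goal is the summed array sizes per GroupTime:

    db.Col.aggregate([
      { $match: { Symbol: "x" } },
      // $size is an expression operator, so evaluate it per document first...
      { $addFields: { arraySize: { $size: "$MyArray" } } },
      // ...then aggregate with an accumulator such as $sum inside $group.
      { $group: { _id: "$GroupTime", total: { $sum: "$arraySize" } } }
    ]);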
https://www.mongodb.com/…_2_1024x385.jpeg
[ "java", "connecting" ]
[ { "code": "", "text": "I am not able to connect to atlas mongo db. Screenshot 2021-01-13 at 4.41.22 PM1645×619 328 KB\nShared screen shot of error.", "username": "Pankaj_Kumar_Pandey" }, { "code": "", "text": "Hi @Pankaj_Kumar_Pandey and welcome in the MongoDB Community !Can you connect from your current machine to this Atlas cluster?Can you please double check your user & password and also double check that the IP address you are trying to connect from is correctly set in the IP Access List? You should have at least 2 as I guess you are not running Debezium locally ─ so this means you are connecting from another IP address.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "mongodb://...\nmongodb+srv://...\n;; QUESTION SECTION:\n;_mongodb._tcp.merchant-service.me2wy.mongodb.net. IN ANY\n\n;; ANSWER SECTION:\n_mongodb._tcp.merchant-service.me2wy.mongodb.net. 59 IN\tSRV 0 0 27017 merchant-service-shard-00-00.me2wy.mongodb.net.\n_mongodb._tcp.merchant-service.me2wy.mongodb.net. 59 IN\tSRV 0 0 27017 merchant-service-shard-00-01.me2wy.mongodb.net.\n_mongodb._tcp.merchant-service.me2wy.mongodb.net. 59 IN\tSRV 0 0 27017 merchant-service-shard-00-02.me2wy.mongodb.net.\n", "text": "I suspect you try to connect with the URIrather than the seed list styleThe DNS entry merchant-service.me2wy.mongodb.net is not an host is a replica set cluster seedlist address. If you want to connect with a mongodb:// URI you have to specify the hosts of your replica set. The hosts for your cluster are", "username": "steevej" } ]
Not able to connect to Atlas MongoDB cluster from Debezium
2021-01-13T19:11:18.272Z
Not able to connect to Atlas MongoDB cluster from Debezium
3,714
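For reference, a seed-list (non-SRV) connection string built from the three hosts listed above would look roughly like the following; the user, password and replica set name are placeholders and should be taken from the Atlas UI:

    mongodb://<user>:<password>@merchant-service-shard-00-00.me2wy.mongodb.net:27017,merchant-service-shard-00-01.me2wy.mongodb.net:27017,merchant-service-shard-00-02.me2wy.mongodb.net:27017/?ssl=true&replicaSet=<replicaSetName>&authSource=admin&retryWrites=true&w=majority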
null
[]
[ { "code": "", "text": "We have officially launched our Gaming + MongoDB community with our MongoDB expert panel @nraboy @yo_adrienne @Karen_Huaulme, and @JimB —where they discuss all things MongoDB and gaming development.Do you have gaming development questions? Or curious about how to use MongoDB’s technologies to minimize friction in game development? Ask your questions here!Event Information:\nHere’s the link to the event, we will provide the recording on this page once it’s uploaded— so keep an eye out. Make sure to join the community today to keep up to date with all events occurring in the future!", "username": "Celina_Zamora" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Gaming + MongoDB: Ask your questions!
2021-01-12T19:27:52.083Z
Gaming + MongoDB: Ask your questions!
3,007
null
[]
[ { "code": "", "text": "I am auditing all database activity via audit using config setting auditAuthorizationSuccess: true. With this all crud activity is captured including the data values. Some of this info is nppi which shows as plain text in a json audit log. Is there any way to mask this data in the audit log to be able to show the crud operation but masking some values and displaying others? Or, is audit an all-or-nothing function displaying all values all the time? If it’s not possible in audit, is there another way to mask the data before it goes out to the file’s consumers i.e Splunk?", "username": "JamesT" }, { "code": "", "text": "Welcome to the community @JamesT!Can you confirm the specific version of MongoDB server you are using and whether this is self-hosted or managed (eg MongoDB Atlas)?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I have 4.0.18 and 4.2.8 in-house rhel 7 servers. Both versions perform the same.", "username": "JamesT" }, { "code": "", "text": "Hi James,Did you end up with an answer? I think you can use regex in the db audit configuration json config.thanks\nNorm", "username": "nchan" }, { "code": "", "text": "I have not; and, would you be able to provide an example of how you’d setup the regex to do this.", "username": "JamesT" } ]
Data masking in the audit log
2020-07-14T15:58:30.454Z
Data masking in the audit log
2,133
null
[]
[ { "code": "", "text": "I want to know how to create database in python using pymongo in atles,\nmongodb+srv://:@.mongodb.net/?retryWrites=true&w=majority\nIt is basic format of connection,and it directly connect database,and we can create collection but,how to create database with pymongo\nPlease help ,in atles, not in localhost", "username": "Shubham_Kumar_Bansal" }, { "code": "client = pymongo.MongoClient(\"mongodb+srv://:@.mongodb.net/?retryWrites=true&w=majority/\")\nserver = client[\"yourDatabase\"]\nserver", "text": "In MongoDB a database is not created until it gets any type of content, so you simply need to define it as you are defining an existing database:then insert any type of data using server and you’re done", "username": "Silvano_Hirtie" }, { "code": "", "text": "Thanks,I really get help,now I can go more,to learn", "username": "Shubham_Kumar_Bansal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create a database in Atlas with PyMongo
2021-01-10T11:06:52.882Z
Create a database in Atlas with PyMongo
1,150
null
[ "node-js", "change-streams" ]
[ { "code": " while (await changeStreamIterator.hasNext()) {\n let change = await changeStreamIterator.next();\n if (change == null)\n console.log(\"end\");\n}\n", "text": "http://mongodb.github.io/node-mongodb-native/3.1/api/ChangeStream.html according to this document, the next function must return null if no exist anymore document but for me not working.\nthis is my code :", "username": "Pejman_Azad" }, { "code": "let change = await changeStreamIterator.next();changemycollconst database = client.db(\"test\");\nconst collection = database.collection(\"coll\");\nconst changeStream = collection.watch();\n\nwhile (await changeStream.hasNext()) {\n let change = await changeStream.next();\n if (change == null) console.log(\"event doc was null!!\");\n console.log(change); // notified change event document\n} \nmycoll{\n _id: {\n _data: '825FFED...D986AE41A80004'\n },\n operationType: 'delete',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1610536524 },\n ns: { db: 'test', coll: 'mycoll' },\n documentKey: { _id: 5ffed280937038d986ae41a8 }\n}\nnull", "text": "Hello @Pejman_Azad, welcome to the MongoDB Community forum.Within the while loop, in the statement let change = await changeStreamIterator.next(); the variable change has a Change Event document.The loop waits until there is a change in the collection (or database or the deployment). The change event document will have the info about the change (i.e., about the update on the collection).For example:There is a collection called mycoll and as I do insert, update or modify operations on it, the change stream on the collection will notify the changes:When a document is inserted/updated/deleted in the mycoll the change event document is printed. For example, a delete generates this change document:… the next function must return null if no exist anymore document but for me not working.In case of a condition where the change event document cannot be generated then there will be a null - I suspect it might be some kind of error or abnormal condition (I coudn’t find any documentation on this).Meanwhile the change stream cursor remains open until one of these occur - the cursor is explicitly closed or an invalidate event occurs - and then exists the loop.", "username": "Prasad_Saya" } ]
ChangeStream not working
2021-01-13T08:42:32.406Z
ChangeStream not working
4,517
null
[ "dot-net" ]
[ { "code": "", "text": "I’m wondering, is there any way to set data types while converting JSON to BSON with c# driver?\nI already tried this :BsonDocument doc = BsonDocument.Parse(\"{ t : -54542356354645452145 }\");and this is the result!‘Value was either too large or too small for an Int64.’Data is in JSON and receives by a REST API, I can’t define any C# model for this data. There are so many irregularities.Thanks in advance!", "username": "Nima_Niazmand" }, { "code": "BsonDocumentDecimal128BsonDecimal128", "text": "Hello @Nima_Niazmand, welcome to the MongoDB Community forum.The number is too big to be converted to a int or a long - you can store it as NumberDecimal BSON type. You may have to create a BsonDocument with the number field of type Decimal128 / BsonDecimal128 and save to the database.", "username": "Prasad_Saya" }, { "code": "", "text": "Do you mean modifying the content of the JSON before converting to BSON? I can’t modify the JSON, I have no idea what is inside them.", "username": "Nima_Niazmand" }, { "code": "", "text": "Extract the fields of the JSON and convert to BSON document. The JSON data is as strings (key-value). Based upon the key the value can be formatted to appropriate BSON.I can’t modify the JSON, I have no idea what is inside them.What do you intend to do with the data? Just parse and store as it comes. In such case store all fields as strings - and convert them later based upon the field name.", "username": "Prasad_Saya" }, { "code": "", "text": "Well… That’s what I’m looking for but, the parser does not accept any config/mapping! Data are already stored in SQLSERVER as strings. There is no point in converting them later. Besides, there are more than 100 million records. I have no idea about the future of the data. I’m the migrator! I guess somebody chose the wrong database for storing JSON data! now I have to manually convert them!", "username": "Nima_Niazmand" }, { "code": "", "text": "I guess I have to answer my own question! There are two options. Since C# driver’s serialization is based on a wrong assumption. I have to write my own parser. it would be tricky but anything is better than what’s shipped with the driver or using a real JSON document database like I_have_no_idea_DB.", "username": "Nima_Niazmand" } ]
Is there any way to set data types while converting JSON to BSON with c# driver?
2021-01-12T07:21:41.382Z
Is there any way to set data types while converting JSON to BSON with c# driver?
5,349
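As a point of comparison with the thread above, the value from the question does fit within the 34 significant decimal digits of Decimal128; in the mongo shell the equivalent insert would be:

    // NumberDecimal takes the value as a string, so no Int64 overflow occurs.
    db.coll.insertOne({ t: NumberDecimal("-54542356354645452145") });

The C# side would need the corresponding Decimal128/BsonDecimal128 type, as suggested earlier in the thread.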
null
[ "backup" ]
[ { "code": "mongod --dbpath \"my-files-path\" --repair{\"t\":{\"$date\":\"2021-01-12T17:03:47.956-08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":21028, \"ctx\":\"initandlisten\",\"msg\":\"Collection validation\",\"attr\":{\"results\":{\"ns\":\"REDACTED.checkins\",\"nInvalidDocuments\":0,\"nrecords\":576055,\"nIndexes\":3,\"keysPerIndex\":{\"_id_\":576055,\"encrypted_user_id_1\":576055,\"date_1_encrypted_user_id_1\":576055},\"indexDetails\":{\"_id_\":{\"valid\":true},\"encrypted_user_id_1\":{\"valid\":true},\"date_1_encrypted_user_id_1\":{\"valid\":true}}}}}\n{\"t\":{\"$date\":\"2021-01-12T17:03:47.956-08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":21027, \"ctx\":\"initandlisten\",\"msg\":\"Repairing collection\",\"attr\":{\"namespace\":\"REDACTED.patterns\"}}\n{\"t\":{\"$date\":\"2021-01-12T17:03:47.967-08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22327, \"ctx\":\"initandlisten\",\"msg\":\"Verify succeeded. Not salvaging.\",\"attr\":{\"uri\":\"table:collection-39--6970771647752306069\"}}\n\n\n{\"t\":{\"$date\":\"2021-01-12T17:03:48.044-08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":21028, \"ctx\":\"initandlisten\",\"msg\":\"Collection validation\",\"attr\":{\"results\":{\"ns\":\"admin.system.version\",\"nInvalidDocuments\":0,\"nrecords\":1,\"nIndexes\":1,\"keysPerIndex\":{\"_id_\":1},\"indexDetails\":{\"_id_\":{\"valid\":true}}}}}", "text": "I have a zipped directory full of .wt files that we would like to import into Atlas. They were exported from either mLab or Atlas on 23 December 2020. I tried running\nmongod --dbpath \"my-files-path\" --repair\nin an attemtp to get a db running so we could run mongoexport, and got this log. Running “show dbs” and “show databases” in a mongo client running against the local instance that should have this data, and seeing 0 GB present.\nThe full log is available via DM but includes a lot of “Verify succeeded. Not salvaging.”Unfortunately this backed up data is all on a windows machine and since it is healthcare data we are very reluctant to move it elsewhere using any transport options we have thought of yet.", "username": "CWW" }, { "code": "", "text": "Hi @CWWWelcome to MongoDB community.Have you tried to just start this mongod (without --repair) and use a mongodump and mongorestore to atlas?If the files you have are not corrupted starting the relevant version instance should not be a problem.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yes, I tried that. mongod declined to start. I didn’t save the error but I can replicate it if it might be useful.Unfortunately we are not sure what version of mongod the files were exported from, but I believe that they were extracted from mLab as part of the mLab shutdown.I don’t know how to see the version of mongod that the .wt files were exported from in the files themselves- I’ve been trying to read them with bsondump but I don’t understand them. Do you think there’s a way to tell what version of mongod should be used from the files themselves?", "username": "CWW" }, { "code": "", "text": "@CWW,Seeing the error might help, as incompatible binaries might error our starting a different version dbPath", "username": "Pavel_Duchovny" } ]
Importing to Atlas from .wt files
2021-01-13T02:42:03.546Z
Importing to Atlas from .wt files
4,240
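A rough outline of the dump-and-restore path suggested above, once a mongod of a compatible version starts cleanly against the copied files; the paths, port and Atlas URI below are placeholders:

    # 1. Start a local mongod on the restored dbPath (match the original major version).
    mongod --dbpath "my-files-path" --port 27018

    # 2. Dump everything from that local instance.
    mongodump --port 27018 --out ./dump

    # 3. Restore the dump into the target Atlas cluster.
    mongorestore --uri "mongodb+srv://<user>:<password>@<cluster>.mongodb.net" ./dump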
null
[]
[ { "code": "$sample$sample", "text": "What is pseudo-random cursor?I was looking at the “$sample” manual and the word “pseudo-random cursor” appeared.So I searched it, but I couldn’t find out.If all the following conditions are met, $sample uses a pseudo-random cursor to select documents:", "username": "Kim_Hakseon" }, { "code": "$sample", "text": "Hi @Kim_Hakseon,The pseudo-random description means that the underlying process of selecting documents for a $sample stage meeting the listed conditions has a deterministic sequence based on an internal seed value from a Pseudorandom number generator (PRNG) rather than being truly random (in the statistical sense).However, this will be sufficiently random for the purposes of sampling (and successive queries will return different results).The importance of the section you quoted is that this approach is generally more efficient and scalable than the alternative described immediately after:If any of the above conditions are NOT met, $sample performs a collection scan followed by a random sort to select N documents. In this case, the $sample stage is subject to the sort memory restrictions.Wikipedia has a more detailed description of “True” vs pseudo-random.In particular:While a pseudorandom number generator based solely on deterministic logic can never be regarded as a “true” random number source in the purest sense of the word, in practice they are generally sufficient even for demanding security-critical applications.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you for your great explanation.I was also curious about the difference between the random sort and the pseudo-random while reading the $sample, but both questions have been solved.It was really interesting.Thank you very much.\nHappy New Year~ ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is pseudo-random cursor?
2021-01-13T05:50:00.398Z
What is pseudo-random cursor?
2,930
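For a concrete picture of the optimized path discussed above, a $sample stage that meets the documented conditions (it is the first stage, N is less than 5% of the documents, and the collection holds more than 100 documents) looks like this in the shell:

    // Returns 5 pseudo-randomly chosen documents; repeated runs give different results.
    db.mycoll.aggregate([ { $sample: { size: 5 } } ]);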
null
[ "data-modeling", "java", "spring-data-odm" ]
[ { "code": "PlacesUsersReviewsjavaspringbootmongodb @Document(collection = \"user\")\npublic class User {\n\n @Id\n private String id;\n @Indexed(unique = true, direction = IndexDirection.DESCENDING, dropDups = true)\n private String email;\n private String password;\n private String fullname;\n \n private byte[] image;\n private String imagename;\n \n private boolean enabled;\n @DBRef\n private Set<Role> roles;\n\tprivate String firstname;\n\tprivate String lastname;\n\t\n\t@DateTimeFormat(iso = DateTimeFormat.ISO.DATE)\n\tprivate Date lastPasswordResetDate;\n}\n@Document(collection = \"place\")\npublic class Place {\n\n\tpublic Place() {}\n\t\n\tprivate String place_name;\n\t\n\t@Id \n\tprivate String id;\n\tprivate Double longitude;\n\t\n\tprivate Double latitude;\n\t\n\t\n\tprivate String municipalitySubdivision;\n\tprivate String municipality;\n\tprivate String countrySecondarySubdivision;\n\tprivate String countryTertiarySubdivision;\n\tprivate String countrySubdivision;\n\tprivate String country;\n\t\n\tprivate String category;\n\t\n}\n@Document(collection = \"review\")\npublic class Review {\n\n\tpublic Review() {\n\t}\n\n\t@Id\n\tprivate String id;\n\n\t@TextIndexed(weight=7)\n\tprivate String first_point;\n\n\t@TextIndexed(weight=6)\n\tprivate String second_point;\n\n\t@TextIndexed(weight=5)\n\tprivate String third_point;\n\n\t@TextIndexed(weight=4)\n\tprivate String fourth_point;\n\n\t@TextIndexed(weight=3)\n\tprivate String fifth_point;\n\n\t@TextIndexed(weight=2)\n\tprivate String sixth_point;\n\n\t@TextIndexed(weight=1)\n\tprivate String seventh_point;\n\n\tprivate Place place;\n\t\n\t@DBRef\n\tprivate User user;\n\n\t\n\tprivate String date;\n\t\n\t@TextScore\n\tprivate Float textScore;\n}\n@Document(collection = \"role\")\npublic class Role {\n\n @Id\n private String id;\n \n @Indexed(unique = true, direction = IndexDirection.DESCENDING, dropDups = true)\n private String role;\n}\n", "text": "Hello All,I have a very simple use case, in which I have some Places and some Users who can write some Reviews.I have implemented the code using java, springboot and mongodb. Here are my main classes:Here is the place class:and here is the review class:Here is my Role class:In this project, the review is not a simple text, but is composed of multiple texts which have different weights.\nthe user can search for a keyword and the reviews which are more relevant are sorted according to the their scores.As I am totally new to mongodb, I have read the documentations about DBRef, Textscore, etc. and have designed it as mentioned. I am wondering if I have used them correctly and my implementation is reasonable or if I have to make big changes to it.\nI should also mention that the code works without any problem, but I am wondering if it will also work when it is live as a web page and many users want to use it at the same time.Your help would be appreciated.", "username": "dokhtar_ma" }, { "code": "// User collection\n{\n \"_id\": ObjectId(\"123456\"), // ID field of user collection.\n \"name\": \"Hello man\"\n}\n\n// Review Collection\n{\n \"_id\": ObjectId(\"2345678\"), // ID field of review collection.\n \"reviewText\": \"Hello this is my review.\",\n \"placeId\": ObjectId(\"123123\"), // ID of place document.\n \"userID\": ObjectID(\"123456\") // ID of User document.\n}\n", "text": "Hi @dokhtar_ma,Your approach is pretty much decent and the models look good. 
A few points that I would mention, which will make your application better -Then when you want to fetch the reviews of particular user, you can simply search and do a $lookup to perform join.Do tell me if your package made the links automatically or did it embed the document.\nCheers…!", "username": "shrey_batra" } ]
Is my database design acceptable?
2021-01-12T12:09:32.453Z
Is my database design acceptable?
4,441
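To make the manual-reference suggestion above concrete: once a review stores a plain userID (and placeId) instead of a @DBRef, the join can be done with $lookup. A mongo shell sketch using the collection and field names from the example documents in the thread (the shortened ids are illustrative):

    db.review.aggregate([
      { $match: { placeId: ObjectId("123123") } },
      // Join each review with the user document referenced by its userID.
      { $lookup: {
          from: "user",
          localField: "userID",
          foreignField: "_id",
          as: "author"
      } },
      { $unwind: "$author" }
    ]);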