image_url
stringlengths 113
131
⌀ | tags
sequence | discussion
list | title
stringlengths 8
254
| created_at
stringlengths 24
24
| fancy_title
stringlengths 8
396
| views
int64 73
422k
|
---|---|---|---|---|---|---|
[
"aggregation",
"node-js"
] | [
{
"code": "",
"text": "Hello Mongo-maniacsFor lack of a better place I’ll post it here.I’d like to announce that I’ve started a “Build real world applications” series use MongoDB, Nodejs, ExpressJS and other Javascript technologies.Let me demonstrate a few simple database queries on MongoDB using their Aggregation Pipeline Framewor...",
"username": "Melody_Maker"
},
{
"code": "",
"text": "Here are the currently published articles on using MongDB to interact with the Database.\nI have a new tip going up very day.MongoDB: Inserting Data - Series #08Intro In the last article in the series I showed how to establish a connection to MongoDB....MongoDB: Connect to the Mongo Driver - Series #07Intro This is our lowest level code for establishing a connection to the Mongo database....MongoDB - insert embedded documents - series #06Intro Let's say a user wants to send us some correspondence. [aside] Do we want to sto...NodeJS, MongoDB - “OR” queries - series #05Intro When querying the database in MongoDB, all \"Matches\" you perform are conjunctions (i...NodeJS, ExpressJS, MongoDB - Paginate - series #04Intro A quick example on actually a very important feature: \"paginate\" Always paginate yo...NodeJS, ExpressJS, Redis, MongoDB - series #03Intro These two code snippets have the identical behavior. What does the code do? See if...MongoDB - Aggregation Framework - series #02Intro Note You know, designing systems is about shaping data. You want the tools that faci...MongoDB - Aggregation Framework - series #01Let me demonstrate a few simple database queries on MongoDB using their Aggregation Pipeline Framewor...",
"username": "Melody_Maker"
}
] | I've started a "Building Real World Apps with MongoDb Aggregation Framework" Series | 2021-02-05T03:21:51.561Z | I’ve started a “Building Real World Apps with MongoDb Aggregation Framework” Series | 2,000 |
|
null | [
"replication",
"devops"
] | [
{
"code": "",
"text": "Hello community, I’m new to mongoDB.\nI want to implement transaction feature to one of my small project.\nis it possible && okay to have below structure of replica set to minimize number of nodes?server1 - Web application server + mongos(router) + primary\nserver2 - secondary\nserver3 - secondary",
"username": "Joohoon_Maeng"
},
{
"code": "",
"text": "Hi @Joohoon_Maeng,Welcome to the MongoDB Community.It’s good practice to keep mongos (router) on the application server. But keeping primary on the application server wouldn’t be a good option I believe.Why it’s not a good option:Both of above points will require amount of RAM possibly equal to your working data set if you want to get best performance.In the case that you’ve mentioned, The application server would be already taking up some of your RAM to process the request / response. So it would be a good option to keep your Primary on separate node.You’ve mentioned mongos but haven’t mentioned config server? I assume you’re running config server on the separate instance as well if you’re intended to run / apply sharding.I am sure that you would have already checked on why to shard and when to shard. If not, This documentation will help you to decide considering the factors mentioned on that page. https://docs.mongodb.com/manual/sharding/.I hope it answers your question!",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Wow! this is a mind blowing answer!\nThank you.Can I ask one more question? is it okay to have config on the application server also?\nIt’ll look like :\napplication server - mongos / config\nserver1 - primary\nserver2 - secondary\nserver3 - secondary",
"username": "Joohoon_Maeng"
},
{
"code": "",
"text": "You’re welcome @Joohoon_Maeng.The good option would be to have config server separate and you should also setup replica set for your config server (while running in production) so you can have one node always available in case of any failure.Thanks,\nViraj",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "@Joohoon_Maeng\nSharding config requires a full replicaset so you’'ll need another 3 db hosts for that.If you are not sharding anytime soon stick with a simple replicaset and scale it up until you hit bottlenecks.",
"username": "chris"
},
{
"code": "",
"text": "I want to add some clarifications to what @viraj_thakrar wrote.Yes, all read operations, when using default readConcern and readPreference, will be performed on the primary. However, all data bearing nodes in a replica set (not only the primary) handle all the write operations and also have all the indexes.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @chris\nIf I’m not sharding, meaning I won’t need a config server. I see.\nSo in this case I’ll need 3 nodes for dbs, 4 total including WAS.\nThank you!",
"username": "Joohoon_Maeng"
},
{
"code": "",
"text": "Hi @steevej\nCan you elaborate little more on ‘all data bearing nodes in a replica set handle all the write operations’ ?\naccording to the docs(https://docs.mongodb.com/manual/core/replica-set-write-concern/), it seems primary by default is the one handles write data operation.",
"username": "Joohoon_Maeng"
},
{
"code": "",
"text": "The sole purpose of having secondaries (data bearing) and replication is that your data is replicated. So if the primary update document d1, then this update must be replicated on all secondaries. Otherwise your data is not replicated so you lose the update if the primary goes down.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Setup replica set in 3 nodes including web server | 2021-02-09T07:55:52.757Z | Setup replica set in 3 nodes including web server | 3,635 |
null | [] | [
{
"code": "ClientOutOfLineExecutor::~ClientOutOfLineExecutor() noexcept {\n invariant(_isShutdown);\n}\n{\"t\":{\"$date\":\"2021-02-09T02:33:45.899+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn165377\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"XXXXXXX\",\"client\":\"conn165377\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.6.3\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"4.19.0-9-amd64\"},\"platform\":\"'Node.js v10.19.0, LE (legacy)\"}}}\n{\"t\":{\"$date\":\"2021-02-09T02:33:45.907+01:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23079, \"ctx\":\"listener\",\"msg\":\"Invariant failure\",\"attr\":{\"expr\":\"_isShutdown\",\"file\":\"src/mongo/db/client_out_of_line_executor.cpp\",\"line\":58}}\n{\"t\":{\"$date\":\"2021-02-09T02:33:45.913+01:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23080, \"ctx\":\"listener\",\"msg\":\"\\n\\n***aborting after invariant() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2021-02-09T02:33:45.913+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"listener\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n{\"t\":{\"$date\":\"2021-02-09T02:33:45.917+01:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":31430, \"ctx\":\"listener\",\"msg\":\"Error collecting stack trace\",\"attr\":{\"error\":\"unw_get_proc_name(559C30C01C41): unspecified (general) error\\nerror: unw_step: unspecified (general) error\\nunw_get_proc_name(559C30C01C41): unspecified (general) error\\nerror: unw_step: unspecified (general) error\\n\"}}\n{\"t\":{\"$date\":\"2021-02-09T02:33:45.917+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31431, \"ctx\":\"listener\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"559C30C01C41\",\"b\":\"559C2DF13000\",\"o\":\"2CEEC41\"}],\"processInfo\":{\"mongodbVersion\":\"4.4.3\",\"gitVersion\":\"913d6b62acfbb344dde1b116f4161360acd8fd13\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"4.19.0-14-amd64\",\"version\":\"#1 SMP Debian 4.19.171-2 (2021-01-30)\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"559C2DF13000\",\"elfType\":3,\"buildId\":\"6C8A93F8D2B544901FC58C1CCD203AEA182627B5\"}]}}}}\n{\"t\":{\"$date\":\"2021-02-09T02:33:45.917+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"listener\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"559C30C01C41\",\"b\":\"559C2DF13000\",\"o\":\"2CEEC41\"}}}\n",
"text": "Hi, all\nI have a problem with mongo 4.4.3, which crashes around 32.000 connections.\nArrived towards the 32k, mongo makes an error of assertion (on this piece of code) and this crashes the server.Log :Has anyone ever had the problem ? Do you have any idea where it came from / how to solve it ?\nThanks",
"username": "Luke_Skywalker"
},
{
"code": "systemctl edit mongod.service[Service]\nLimitNOFILE=infinity #<<< HERE\nLimitNPROC=infinity #<<< HERE\n\nPermissionsStartOnly=true\nExecStartPre=/bin/sh -c \"echo never > /sys/kernel/mm/transparent_hugepage/enabled\"\nExecStartPre=/bin/sh -c \"echo never > /sys/kernel/mm/transparent_hugepage/defrag\"\n",
"text": "Arrived towards the 32k, mongo makes an error of assertion (on this piece of code) and this crashes the server.Hi all\nWe found the “issue”. Just forgot to increase the service limits with systemctl edit mongod.serviceMongo 4.3 gave us the message “Too many open files” while Mongo 4.4 crashed without this information.Hope this can help someone in the future ",
"username": "Luke_Skywalker"
}
] | Mongo v4.4.3 crash "invariant in client_out_of_line_executor.cpp" | 2021-02-09T12:25:37.275Z | Mongo v4.4.3 crash “invariant in client_out_of_line_executor.cpp” | 3,360 |
null | [
"queries"
] | [
{
"code": "[\n {\n name: \"contact_1\",\n business_address: \"street 1\"\n },\n {\n name: \"contact_2\",\n personal_address: \"street 2\"\n },\n {\n name: \"contact_3\",\n number: 123456\n }\n]\n",
"text": "Dear community,is there a way to query collections with wildcard on attributes (keys)?If we have the collection below, and I would like to retrieve all entries having attributes with a key containing “_address”, would there be any way to achieve this in a query?:The query should lead to retrieval of contact_1 and contact_2.",
"username": "Christoph_Rohatsch"
},
{
"code": "[\n {\n \"name\": “contact_1” ,\n \"tags\": [ \"business\" ] ,\n \"address\" : “street 1”\n },\n {\n \"name\": “contact_2”,\n \"tags\" : [ \"personnel\" , \"family\" ]\n \"address\" : “street 2”\n },\n {\n \"name\": “contact_3”,\n \"tags\" : [ \"personnel\" , \"friend\" ] ,\n \"number\" : 123456\n }\n]\n",
"text": "Take a look atPersonally, I would change the schema using the attribute pattern, Building with Patterns: The Attribute Pattern | MongoDB Blog, to something like:One of the advantages is that the sub-document schema is consistent. This helps when coding. For example, the same simpler code can be used to print an address. Otherwise you need logic to determine if you want to access the field business_address or personal_address. This also helps reduce the number of indexes since the fields have the same name. Otherwise you need an index for business_address and one for personal_address.",
"username": "steevej"
}
] | Query mongodb with wildcard on keys | 2021-02-09T15:38:47.778Z | Query mongodb with wildcard on keys | 4,522 |
null | [
"developer-hub",
"careers"
] | [
{
"code": "",
"text": "I recently posted my top 10 tips for making remote work actually work right now:https://www.mongodb.com/developer/article/10-tips-making-remote-work-actually-work/I’m also going to be presenting this topic as the morning keynote at Devnexus on Feb 17. Registration is free. And, oh yeah, LeVar Burton is going to be there as well.https://devnexus.com/presentations/5865/How is remote work going for you? Loving it? Hating it? I’d love to hear your tips! ",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "Hi @Lauren_Schaefer\nThanks for sharing these links to the article!",
"username": "Soumyadeep_Mandal"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | New Article & Keynote: Top 10 Tips for Making Remote Work Actually Work Right Now | 2021-02-09T12:38:12.495Z | New Article & Keynote: Top 10 Tips for Making Remote Work Actually Work Right Now | 5,456 |
null | [
"aggregation",
"change-streams"
] | [
{
"code": "",
"text": "I have been exploring the change stream API and workarounds for my usecase.\nWhat I have understood is that I can detect any change in my collection via change stream “watch” function. I am trying to figure out that how can I read two or more fields from a collection and apply aggregation in the collection and get some result and merge result to another collection. All this based on every change stream event occurring. Every time a change occurs in the collection, I want to re-aggregate the data and store it in another collection again via upsert.I can do that by Triggers but these are supported in Atlas Only. What other options can I possibly utilize to get this done on my on Premise MongoDB.",
"username": "MWD_Wajih_N_A"
},
{
"code": "",
"text": "Hello @MWD_Wajih_N_A, Change Stream is the only option for the requirement you have:Every time a change occurs in the collection, I want to re-aggregate the data and store it in another collection again via upsert.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Yes, I am exploring it. As per my understanding, it will aggregate only the incoming record where as I am looking for a way to re aggregate the whole data.\nFor Instance, consider There are two collections A and B. There are three fields in A i.e., xid, xinvoice, xamount. Now, one xid can have multiple xinvoice with certain xamount. A want to sum all amounts and store it in B. Now, if any update occur in any of the xinvoice, I want to re aggregate all xinvoice’s amount as per the updated value and store this updated sum in the same field in B.\nThis aggregation involves the historic data in the collection and updating only changed field. Not just aggregating the change stream document but the whole data.",
"username": "MWD_Wajih_N_A"
},
{
"code": "",
"text": "The Change Events document has the details of the change (e.g., an update event has info about the document key and the updated / removed fields). This is the information you can use to perform further querying (or aggregation) and the update to another collection. You can define a transaction and perform the aggregation and update atomically.",
"username": "Prasad_Saya"
}
] | Aggregation on historic data, based on every change stream on On Premise MongoDB | 2021-02-09T11:53:47.307Z | Aggregation on historic data, based on every change stream on On Premise MongoDB | 2,637 |
null | [
"queries",
"indexes"
] | [
{
"code": "// User Collection\n{\n _id,\n employeeInfo: [{\n employeeId, \n businessId, <-- From Business Collection\n roles: [],\n locations: [\"lat, lng\", ...],\n }],\n}\n\n// Business Collection\n{\n _id,\n...\n}\nemployeeInforolebusinessIduser_businessesBusinesses",
"text": "My current data model is as follows (relevant fields only):My application will show the businesses a user owns or works for (employeeInfo) along with their role (position) when they sign in, allowing them to select which business to operate the application from. I’m guessing I need to index the businessId field for a user_businesses index, however don’t I need to query the Businesses collection to return a list of them?How would I go about accomplishing this? Should I modify my existing model? Currently it seems like I’d need to do multiple queries, first on Users, then on Businesses. How can I avoid this? Apologies if my understanding isn’t quite clear. Any additional learning material is greatly appreciated! ",
"username": "Andrew_W"
},
{
"code": "",
"text": "Hello @Andrew_W, here are some points.Don’t I need to query the Businesses collection to return a list of them?Not necessarily. The user collection, in addition to id, can also store the business name. This will work fine as the business names change rarely (and you will be storing business name in both collections).If you think the business names are going to change often or business name cannot be included in users collection, then you need to query the business collection to get the business name - this can be done using a single Aggregation $lookup query (a “join” operation). In such a case only the business id is stored in the user collection. Note that lookup queries generally perform slower than that of a single collection query.I’m guessing I need to index the businessId field for a user_businesses indexIf you are querying on the business id field, an index will be useful in terms of query performance. Indexes on array fields are called as Multikey Indexes.",
"username": "Prasad_Saya"
},
{
"code": "insert",
"text": "Thank you @Prasad_Saya – This certainly gives me something to think about. I’ll do some tests and see where it gets me. I’m guessing, then, that on the creation of a new business, I’ll need to do an two insert calls on both the user and business collection. This is OK? Seems like it’s the only way to make sure records are synced.",
"username": "Andrew_W"
},
{
"code": "insert",
"text": "I’m guessing, then, that on the creation of a new business, I’ll need to do an two insert calls on both the user and business collectionI think that a given business may or may not be associated with a user. So, when a user is created then the existing business’s id and name are included for that user. In case its a new business, then both collections are affected.And, dont forget that when you make an update of the business name, both collections need to be updated.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Any additional learning material is greatly appreciated!Data modeling (or data design) plays an important part of designing an application. So do the indexes defined on the collections. Both aspects can affect the performance and maintainability of an application. These topics are vast, in general, but I cannot tell what specific topics you are looking for. The links below are documentation links:At the top of this page (if you press the Home button on your keyboard you will get there ) there is a Menu, which has links to various resources, including videos, blog posts, etc, and you can search for any specific topics within. You can also search for “mongodb blog posts data models” and “mongodb blog posts indexes or performance” in the Google search box, you will find many useful links.Finally, MongoDB University has free online classes both for Data Modeling and Performance (and Indexes). It is another way to enhance your knowledge in the respective subjects.",
"username": "Prasad_Saya"
}
] | Index from One Document to Query Another? | 2021-02-08T14:41:33.954Z | Index from One Document to Query Another? | 1,823 |
null | [
"replication",
"installation"
] | [
{
"code": "",
"text": "Hi Team,We are trying to replicate the MongoDB from one windows server to another windows server. Basically for any fail over / High availability.We followed steps from link - https://ulyaoth.com/tutorials/how-to-install-mongodb-3-4-in-replication-on-windows-server-2016/ but somehow we are unable to see DB’s from Secondary Machine which was replication from Primary. Please help us on this.",
"username": "NithiN_Katta"
},
{
"code": "",
"text": "Hello @NithiN_Katta, welcome to the MongoDB Community forum.In general, you should be following the tutorial to deploy the replica-set (or any of the other available tutorials for replica-sets) to deploy one.Please verify the steps from the tutorial and see if you had followed all of them. Also, tell about the hardware and the MongoDB versions you are using.",
"username": "Prasad_Saya"
},
{
"code": "rs.status()\n",
"text": "It would be useful if you could share the output of",
"username": "steevej"
},
{
"code": "r2schools:PRIMARY> rs.status()\n{\n \"set\" : \"r2schools\",\n \"date\" : ISODate(\"2021-02-02T05:53:25.883Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(5),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 1,\n \"writeMajorityCount\" : 1,\n \"votingMembersCount\" : 1,\n \"writableVotingMembersCount\" : 1,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1612245201, 1),\n \"t\" : NumberLong(5)\n },\n \"lastCommittedWallTime\" : ISODate(\"2021-02-02T05:53:21.067Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1612245201, 1),\n \"t\" : NumberLong(5)\n },\n \"readConcernMajorityWallTime\" : ISODate(\"2021-02-02T05:53:21.067Z\"),\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1612245201, 1),\n \"t\" : NumberLong(5)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1612245201, 1),\n \"t\" : NumberLong(5)\n },\n \"lastAppliedWallTime\" : ISODate(\"2021-02-02T05:53:21.067Z\"),\n \"lastDurableWallTime\" : ISODate(\"2021-02-02T05:53:21.067Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1612245171, 1),\n \"electionCandidateMetrics\" : {\n \"lastElectionReason\" : \"electionTimeout\",\n \"lastElectionDate\" : ISODate(\"2021-02-02T05:35:50.893Z\"),\n \"electionTerm\" : NumberLong(5),\n \"lastCommittedOpTimeAtElection\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"lastSeenOpTimeAtElection\" : {\n \"ts\" : Timestamp(1611897984, 1),\n \"t\" : NumberLong(4)\n },\n \"numVotesNeeded\" : 1,\n \"priorityAtElection\" : 1,\n \"electionTimeoutMillis\" : NumberLong(10000),\n \"newTermStartDate\" : ISODate(\"2021-02-02T05:35:50.897Z\"),\n \"wMajorityWriteAvailabilityDate\" : ISODate(\"2021-02-02T05:35:50.918Z\")\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"localhost:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 1056,\n \"optime\" : {\n \"ts\" : Timestamp(1612245201, 1),\n \"t\" : NumberLong(5)\n },\n \"optimeDate\" : ISODate(\"2021-02-02T05:53:21Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1612244150, 1),\n \"electionDate\" : ISODate(\"2021-02-02T05:35:50Z\"),\n \"configVersion\" : 1,\n \"configTerm\" : 5,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612245201, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1612245201, 1)\n}\nr2schools:PRIMARY>\n",
"text": "",
"username": "NithiN_Katta"
},
{
"code": "members\" : [...",
"text": "Hi @NithiN_Katta, the status shows that there is only one member in the replica-set and it is a Primary node.members\" : [...",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Exactly, even after giving multiple servers in rs.add we see below error msg. Not sure what is going wrong here, please suggest.Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2\"",
"username": "NithiN_Katta"
},
{
"code": "",
"text": "If there is any document for replica/fail over between server to server please provide me the link, tried using all links from google, still its same",
"username": "NithiN_Katta"
},
{
"code": "rs.status()",
"text": "In the links provided in my earlier post “other available tutorials for replica-sets” there is this topic called as “Add Members to a Replica Set”. Since, you already have a replica-set with one member (your rs.status() shows that), you can add a second member (or more as needed) to the replica-set.",
"username": "Prasad_Saya"
},
{
"code": " rs0:PRIMARY> rs.config()\n{\n\t\"_id\" : \"rs0\",\n\t\"version\" : 2,\n\t\"term\" : 1,\n\t\"protocolVersion\" : NumberLong(1),\n\t\"writeConcernMajorityJournalDefault\" : true,\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"host\" : \"localhost:27017\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 1,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t}\n\t],\n\t\"settings\" : {\n\t\t\"chainingAllowed\" : true,\n\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\"heartbeatTimeoutSecs\" : 10,\n\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\"catchUpTimeoutMillis\" : -1,\n\t\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\t\"getLastErrorModes\" : {\n\t\t\t\n\t\t},\n\t\t\"getLastErrorDefaults\" : {\n\t\t\t\"w\" : 1,\n\t\t\t\"wtimeout\" : 0\n\t\t},\n\t\t\"replicaSetId\" : ObjectId(\"601bf79d6dc7b0a48a730b18\")\n\t}\n}\nrs0:PRIMARY> rs.add('mongo-0-b')\n{\n\t\"operationTime\" : Timestamp(1612445708, 1),\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2\",\n\t\"code\" : 103,\n\t\"codeName\" : \"NewReplicaSetConfigurationIncompatible\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1612445708, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t}\n}\nrs0:PRIMARY> var c = rs.conf()\nrs0:PRIMARY> c.members[0].host='mongo-0-a:27017'\nmongo-0-a:27017\nrs0:PRIMARY> rs.reconfig(c)\n{\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1612445905, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1612445905, 1)\n}\nrs0:PRIMARY> rs.add('mongo-0-b')\n{\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1612445940, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1612445940, 1)\n}\n",
"text": "Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2\"As your replicaset was initialised with localhost as the member you will have to reconfigure the replicaset.\nYou need to use the hostname for this, one that other members will use to connect on.This is the state you are at, localhost member and unable to add a new member:Now, to reconfigure to use hostname instead of localhost:After the reconfigue, adding a new member works:",
"username": "chris"
},
{
"code": "rs.conf()C:\\Program Files\\MongoDB\\Server\\4.4\\bin>mongo\nMongoDB shell version v4.4.3\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"5c18cf32-dc14-4acc-b17e-cecdc904b9b1\") }\nMongoDB server version: 4.4.3\n---\nThe server generated these startup warnings when booting:\n 2021-02-03T21:39:24.567-06:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n 2021-02-03T21:39:24.568-06:00: This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\n---\n---\n Enable MongoDB's free cloud-based monitoring service, which will then receive and display\n metrics about your deployment (disk utilization, CPU, operation statistics, etc).\n\n The monitoring data will be available on a MongoDB website with a unique URL accessible to you\n and anyone you share the URL with. MongoDB may use this information to make product\n improvements and to suggest MongoDB products and deployment options to you.\n\n To enable free monitoring, run the following command: db.enableFreeMonitoring()\n To permanently disable this reminder, run the following command: db.disableFreeMonitoring()\n---\nr2schools:PRIMARY> rsconf={_id:\"cbre\",members:[{_id:0,host:\"localhost:27017\",_id:1,host:\"10.71.101.187:27017\",_id:2,host:\"10.71.136.191:27017\"}]}\n{\n \"_id\" : \"cbre\",\n \"members\" : [\n {\n \"_id\" : 2,\n \"host\" : \"10.71.136.191:27017\"\n }\n ]\n}\nr2schools:PRIMARY> rs.initiate(rsconf)\n{\n \"operationTime\" : Timestamp(1612763918, 1),\n \"ok\" : 0,\n \"errmsg\" : \"already initialized\",\n \"code\" : 23,\n \"codeName\" : \"AlreadyInitialized\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612763918, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\nr2schools:PRIMARY> rs.add(\"10.71.101.186:27017\")\n{\n \"operationTime\" : Timestamp(1612763928, 1),\n \"ok\" : 0,\n \"errmsg\" : \"Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2\",\n \"code\" : 103,\n \"codeName\" : \"NewReplicaSetConfigurationIncompatible\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612763928, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\nr2schools:PRIMARY> rs.config()\n{\n \"_id\" : \"r2schools\",\n \"version\" : 1,\n \"term\" : 7,\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"localhost:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"60128e486bd6d3eea3782ebd\")\n }\n}\nr2schools:PRIMARY> 
rs.add(\"10.71.101.187:27017\")\n{\n \"operationTime\" : Timestamp(1612763958, 1),\n \"ok\" : 0,\n \"errmsg\" : \"Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2\",\n \"code\" : 103,\n \"codeName\" : \"NewReplicaSetConfigurationIncompatible\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612763958, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\n",
"text": "rs.conf()PLease find the output:",
"username": "NithiN_Katta"
},
{
"code": "rsconf={_id:\"cbre\",members:[{_id:0,host:\"localhost:27017\",_id:1,host:\"10.71.101.187:27017\",_id:2,host:\"10.71.136.191:27017\"}]}",
"text": "rsconf={_id:\"cbre\",members:[{_id:0,host:\"localhost:27017\",_id:1,host:\"10.71.101.187:27017\",_id:2,host:\"10.71.136.191:27017\"}]}You cannot mix localhost with non-localhost. Use the lan ip or hostname of the host instead of localhost.",
"username": "chris"
},
{
"code": "mongo > rsconf={_id:\"cbre\",members:[{_id:0,host:\"localhost:27017\",_id:1,host:\"10.71.101.187:27017\",_id:2,host:\"10.71.136.191:27017\"}]}\nmongo > rsconf\n{\n\t\"_id\" : \"cbre\",\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"host\" : \"10.71.136.191:27017\"\n\t\t}\n\t]\n}\nmongo > rsconf={_id:\"cbre\",members:[{_id:0,host:\"localhost:27017\"},{_id:1,host:\"10.71.101.187:27017\"},{_id:2,host:\"10.71.136.191:27017\"}]}\n{\n\t\"_id\" : \"cbre\",\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"host\" : \"localhost:27017\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"host\" : \"10.71.101.187:27017\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"host\" : \"10.71.136.191:27017\"\n\t\t}\n\t]\n}\n",
"text": "In addition, your syntax is wrong. You are missing braces between members.[ Edited to illustrate my comment above posted from my tablet ]As you have seen:gives the following value as you postedSo you only have 1 object in the array members. And since you cannot have 2 fields with the same key in a single document you end up with a document that contains the value of the last occurrence of a key. What you really wanted, (with the correct bracing) is:But the above will still be wrong despite having the correct syntax becauseYou cannot mix localhost with non-localhost. Use the lan ip or hostname of the host instead of localhost.",
"username": "steevej"
},
{
"code": "localhost",
"text": "Here is a post with an answer for the issue being discussed, and it says:As per the error message says, you have added the Primary node of the replica set as localhost . The rule is all nodes of the replica set should be localhost or all should be hostname/ip address…",
"username": "Prasad_Saya"
},
{
"code": "r2schools:PRIMARY> rsconf={_id:\"r2schools\",members:[{_id:0,host:\"10.71.101.186:27017\",_id:1,host:\"10.71.101.187:27017\",_id:2,host:\"10.71.136.191:27017\"}]}\n{\n \"_id\" : \"r2schools\",\n \"members\" : [\n {\n \"_id\" : 2,\n \"host\" : \"10.71.136.191:27017\"\n }\n ]\n}\nr2schools:PRIMARY> rs.initiate(rsconf)\n{\n \"operationTime\" : Timestamp(1612837719, 1),\n \"ok\" : 0,\n \"errmsg\" : \"already initialized\",\n \"code\" : 23,\n \"codeName\" : \"AlreadyInitialized\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612837719, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\nr2schools:PRIMARY> rs.add(\"10.71.101.186:27017\")\n{\n \"operationTime\" : Timestamp(1612837769, 1),\n \"ok\" : 0,\n \"errmsg\" : \"Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2\",\n \"code\" : 103,\n \"codeName\" : \"NewReplicaSetConfigurationIncompatible\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612837769, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\nr2schools:PRIMARY> rs.config()\n{\n \"_id\" : \"r2schools\",\n \"version\" : 1,\n \"term\" : 7,\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"localhost:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"60128e486bd6d3eea3782ebd\")\n }\n}\nr2schools:PRIMARY> rs.add(\"10.71.101.187:27017\")\n{\n \"operationTime\" : Timestamp(1612837789, 1),\n \"ok\" : 0,\n \"errmsg\" : \"Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2\",\n \"code\" : 103,\n \"codeName\" : \"NewReplicaSetConfigurationIncompatible\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612837789, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\nr2schools:PRIMARY> rs.add(\"10.71.136.191:27017\")\n{\n \"operationTime\" : Timestamp(1612837789, 1),\n \"ok\" : 0,\n \"errmsg\" : \"Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2\",\n \"code\" : 103,\n \"codeName\" : \"NewReplicaSetConfigurationIncompatible\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612837789, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\nr2schools:PRIMARY> rs.status()\n{\n \"set\" : \"r2schools\",\n \"date\" : ISODate(\"2021-02-09T02:30:04.835Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(7),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 1,\n \"writeMajorityCount\" : 1,\n \"votingMembersCount\" : 1,\n \"writableVotingMembersCount\" : 1,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1612837799, 1),\n \"t\" : 
NumberLong(7)\n },\n \"lastCommittedWallTime\" : ISODate(\"2021-02-09T02:29:59.837Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1612837799, 1),\n \"t\" : NumberLong(7)\n },\n \"readConcernMajorityWallTime\" : ISODate(\"2021-02-09T02:29:59.837Z\"),\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1612837799, 1),\n \"t\" : NumberLong(7)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1612837799, 1),\n \"t\" : NumberLong(7)\n },\n \"lastAppliedWallTime\" : ISODate(\"2021-02-09T02:29:59.837Z\"),\n \"lastDurableWallTime\" : ISODate(\"2021-02-09T02:29:59.837Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1612837759, 1),\n \"electionCandidateMetrics\" : {\n \"lastElectionReason\" : \"electionTimeout\",\n \"lastElectionDate\" : ISODate(\"2021-02-04T03:39:24.812Z\"),\n \"electionTerm\" : NumberLong(7),\n \"lastCommittedOpTimeAtElection\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"lastSeenOpTimeAtElection\" : {\n \"ts\" : Timestamp(1612250005, 1),\n \"t\" : NumberLong(6)\n },\n \"numVotesNeeded\" : 1,\n \"priorityAtElection\" : 1,\n \"electionTimeoutMillis\" : NumberLong(10000),\n \"newTermStartDate\" : ISODate(\"2021-02-04T03:39:24.818Z\"),\n \"wMajorityWriteAvailabilityDate\" : ISODate(\"2021-02-04T03:39:24.868Z\")\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"localhost:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 427841,\n \"optime\" : {\n \"ts\" : Timestamp(1612837799, 1),\n \"t\" : NumberLong(7)\n },\n \"optimeDate\" : ISODate(\"2021-02-09T02:29:59Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1612409964, 1),\n \"electionDate\" : ISODate(\"2021-02-04T03:39:24Z\"),\n \"configVersion\" : 1,\n \"configTerm\" : 7,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1612837799, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1612837799, 1)\n}\n",
"text": "Even after provide IP/HOST i still see same issue,somehow still it reads localhost. In windows HOST file also i have set to IP address.",
"username": "NithiN_Katta"
},
{
"code": "",
"text": "The changes you made did not get applied\nYour rs.status still shows localhost\nUnless you run reconfig changes will not happen\nPlease check the note from Prasad_Saya",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "@NithiN_KattaI posted 5 days ago stating what your issue is and how to resolve it by reconfiguring your replicaset.\nThis is the easiest non-impacting way to resolve your issue.Another option is to wipe your data and initiate the replicaset correctly.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help setting up replication between Windows servers | 2021-02-01T04:47:41.942Z | Help setting up replication between Windows servers | 4,470 |
null | [
"node-js"
] | [
{
"code": "async createProductCode (): Promise<string> {\n let productCode;\n let product;\n\n do {\n const randomVal = getRandomInt(1000000, 9999999);\n productCode = `${randomVal}`;\n product = await (Product as any).countDocumentsWithDeleted({ code: productCode });\n } while (product && product > 0);\n\n return productCode;\n}\n",
"text": "As part our requirements we are being asked to create a property which has a unique value, in the context of the collection. We can’t use the Mongo ID, since the UX team for our product indicates the value needs to be a number only, with a certain defined length.Accepting theses requirements that we need to work with, can anyone suggest an approach of creating a unique id?The current approach is to create the product number is as follows:The issue is that between the time the product code is created and it is used it may be used by another operation. As a workaround we have ensured that production creation is done synchronously, to avoid this issue.Can anyone suggest a better way to create a unique field value, that could be done in parallel?",
"username": "Andre-John_Mas"
},
{
"code": "$randdb.randomSamples.updateOne(\n {_id : ObjectId()},\n [{$set: {\n productCode: {\n $trunc : [ {$multiply: [\n {\n $rand: {}\n },\n 100000000000000\n ]\n }, -1]\n}\n}}], {upsert : true}\n)\n{ _id: ObjectID(\"60226a260439653f31ecd23a\"),\n productCode: 57465943495130 }\n{ _id: ObjectID(\"60226a490439653f31ecd23b\"),\n productCode: 87144984542740 }\n",
"text": "Hi @Andre-John_Mas,Welcome to MongoDB Community.I found a creative way to do it using pipeline updates and $rand aggregation. Those updateOne are atomic.I am using “Upsert” with a new _id to mimic an “insert” operation if needed:This creates random productCodes:Thanks,\nPavel",
"username": "Pavel_Duchovny"
}
] | Creating random unique field value, via NodeJS? | 2021-02-05T20:50:58.801Z | Creating random unique field value, via NodeJS? | 5,438 |
null | [
"aggregation"
] | [
{
"code": "collection = {\n \"Link\":link,\n \"theAssociatedList\":myList\n}\nmyquery = { \"Link1\": uri }\nmyd = firstCollection.find(myquery)\ntheOutputList= myd[0][\"theAssociatedList\"]\n[A1,A2,...,A3], [B1,B2,...,B3], and [C1,C2,...,C3][A1,B1,C1,A2,B2,C2,A3,B3,C3,...]",
"text": "I currently have 3 mongodb collections that are in the following format:Here is an example query where I provide a link and retrieve the associated list:Where theOutputList is equal to [A1,A2,…,A3].After querying all three collections, I receive lists [A1,A2,...,A3], [B1,B2,...,B3], and [C1,C2,...,C3] as output, given inputs, Link1, Link2, and Link3. How can I receive [A1,B1,C1,A2,B2,C2,A3,B3,C3,...] in the same step as the query to save runtime, or some other fast solution?",
"username": "lilg263"
},
{
"code": "myquery = { \"Link1\": uri }\nfirstCollection.aggregate([{$match: {\n Link : \"http://myexample.com\"\n}},\n// Get second collections\n {$lookup: {\n from: 'secondCollection',\n pipeline : [{\"$match\" : {\n Link : \"http://myexample.com\"\n}}],\n as: 'secondCollection'\n}}, {$lookup: {\n from: 'thirdCollection',\n pipeline : [{\"$match\" : {\n Link : \"http://myexample.com\"\n}}, {$project : {theAssociatedList:1}}],\n as: 'thirdCollection'\n}},\n// Create 3 arrays\n {$project: {\n Link : 1,\n theAssociatedList1 : \"$theAssociatedList\",\n theAssociatedList2 : { $first : \"$secondCollection.theAssociatedList\" },\n theAssociatedList3 : { $first : \"$thirdCollection.theAssociatedList\" }\n}},\n\n// Zip them in the needed order \n{$project: {\n Link : 1,\n theAssociatedList: {\n $zip : {\n inputs : [\"$theAssociatedList1\", \"$theAssociatedList2\",\"$theAssociatedList3\"],\n useLongestLength : true\n }\n }\n}}, \n// Make one array of them\n{$project: {\n Link : 1,\n theAssociatedList : {$concatArrays: [ \n { $arrayElemAt : [\"$theAssociatedList\",0] }, \n {$arrayElemAt : [\"$theAssociatedList\",1]}, \n {$arrayElemAt : [\"$theAssociatedList\",2]}]\n }\n}}])\n[ { _id: \n { _bsontype: 'ObjectID',\n id: <Buffer 60 22 5a c4 7d 5d d6 62 64 6d e6 42> },\n Link: 'http://myexample.com',\n theAssociatedList: [ 'A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'B3', 'C3' ] } ]\n",
"text": "Hi @lilg263,Welcome to MongoDB Community.So this is possible to achieve via a complex aggregation.However I believe from coding perspective it is better to do it on the client side after doing 3 queries using your programming langugue.If you still need the aggregation see the following:This will output the following in my tests:Thanks,\nPavel",
"username": "Pavel_Duchovny"
}
] | One output given 3 inputs for 3 different collections | 2021-02-07T10:06:45.041Z | One output given 3 inputs for 3 different collections | 1,790 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hello all,\nI am writing a js scripts in my_test_scripts.js but when I run the scripts by:\nmongo localhost:27017/test my_test_scripts.js,\nI get some log messge below:\nMongoDB shell version v4.2.0\nconnecting to: mongodb://localhost:27017/tests?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“8796fc44-221d-4862-8d6f-e439fd77b227”) }\nMongoDB server version: 4.2.12\nI don’t know what’s happening in this case, is there any way to debug the scripts?Many thanks,James",
"username": "Zhihong_GUO"
},
{
"code": "var doc = { a: \"str-2\" }\ndoc = db.test.insertOne(doc)\nvar doc = { a: \"str-2\" }\ndoc = db.test.insertOne(doc)\nprintjson(doc)\ndb.test.findOne()printjson(db.test.findOne())mongosh",
"text": "I don’t know what’s happening in this case, is there any way to debug the scripts?You can try adding some debug statements in your JavaScript file. I have tried this and it works fine. For example, if the script is:When you run it, you wont see any output indicating if the insert is success (or failure). By adding the following you can see what is the result of running the script:Similarly, with the statement db.test.findOne() can be coded as printjson(db.test.findOne()) so that you can see the output of the query.NOTE: The new (Beta) MongoDB Shell mongosh has the Retrieve Shell Logs feature. The logs for each session are stored and you can view them.",
"username": "Prasad_Saya"
}
] | How to debug shell scripts | 2021-02-09T01:36:42.609Z | How to debug shell scripts | 4,081 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "C# .NET MongoDB DriverAttempt to perform serverStatus command to a cloud.\nStudio 3T shows Server Status commas successfully.\nSame code works ok to the local MongoDB.Var command = new CommandDocument {{ “serverStatus”, 1 } };\nVar result = db.RunCommand(command);Returned exception “Duplicate element name ‘metrics’.”\nAny possible solution?Regards,\nYoav",
"username": "yoav_maor"
},
{
"code": "db.version()mongo",
"text": "Welcome to the MongoDB Community Forums @yoav_maor!This is an unusual message and we’ll need some more details to try to reproduce the issue:Is your MongoDB deployment in the cloud self-managed or a Database-as-a-Service (DBaaS) offering via a cloud provider?What specific version of MongoDB server are you using (as reported by db.version() when connected to your deployment via the mongo shell)?What specific version of the MongoDB C# driver are you using?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi,\nThanks for answering.The MongoDB was created Create a free tier cluster automatically for a new user on the cloud for free testing. It contains sample data only created as part of the automatic build.Cloud provider: AWS, N. Virginia (us-east-1)Db.versison = 4.4.3MongoDB C# = 2.10.4Since it contains test data only I can provide the connection string.Thanks,\nYoav",
"username": "yoav_maor"
}
] | serverStatus command caused exception "Duplicate element name 'metrics'." | 2021-02-08T17:33:16.786Z | serverStatus command caused exception “Duplicate element name ‘metrics’.” | 3,019 |
null | [
"atlas-functions"
] | [
{
"code": "const offerersSearchResolver = async (input) => {\n const db = context.services.get('mongodb-atlas').db('xxx');\n const users = db.collection('xxx');\n const skills = input.skills || [];\n return (await users\n .aggregate([\n {$match: {\n $text: { $search: input.searchTerm },\n },},\n { $sort: { score: { $meta: 'textScore' }, 'fullName.last': 1, 'fullName.first': 1 } }\n ])\n .toArray());\n};\nexports = offerersSearchResolver;\n(Location17313) $match with $text is only allowed as the first pipeline stage",
"text": "I have the following code to do a text search on a field:It only works as a system user.\nIf I select “Application Authentication” I get (Location17313) $match with $text is only allowed as the first pipeline stage.Please help, I can’t run this query as a system user!edit: I can even imagine how it happens: Realm probably inserts some conditions to enforce privileges and it upsets $text.",
"username": "dimaip"
},
{
"code": "",
"text": "I just realised that a resolver’s response is being filtered according to rules even when it’s running as System, so it solves the problem for me!\nThough would have been nice to update the docs and mention this problem.",
"username": "dimaip"
},
{
"code": "",
"text": "Are you using Atlas for this query?",
"username": "Marcus"
},
{
"code": "mongodb-atlas",
"text": "@Marcus, considering his service named mongodb-atlas i am pretty sure he is.They should definitely consider Atlas Search as you probably wanted to advisehttps://docs.atlas.mongodb.com/atlas-search/",
"username": "Pavel_Duchovny"
},
{
"code": "$matchtext",
"text": "Dimitri, you should be using Atlas Search instead of $match & $text.One thing that you eliminate here is you don’t have to sort by text score, as results are sorted by relevance by default.Your relevance will also be improved. Take a look and let me know if you have any questions: https://docs.atlas.mongodb.com/reference/atlas-search/query-syntax",
"username": "Marcus"
},
{
"code": "",
"text": "I apologise for slow response, missed the notifications.\nThanks for the hint, I’ll look into Atlas Search!",
"username": "dimaip"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Text search only works as a system user | 2021-02-05T09:31:28.707Z | Text search only works as a system user | 2,397 |
[
"graphql"
] | [
{
"code": "",
"text": "I wish to query properties on an array with Realm Graphql. From the screenshot, you can see that I’m trying to filter out “alias” that equal “metaTitle”. However, the results tab displays objects with “metaTitle”. I feel I’m close but the syntax is incorrect. This page has useful examples, however, these methods did not work for me.\nimage1332×532 36.7 KB\n",
"username": "Anthony_H"
},
{
"code": "fields_nin : { alias_nin : \"metaTitle\" }",
"text": "Hi @Anthony_HWelcome to MongoDB community.Have you tried: fields_nin : { alias_nin : \"metaTitle\" } ?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "contents(query: {fields: {alias_in: \"isFeatured\"}})contents(query: {fields: {alias_in: \"isFeatured\", boolean:true}})",
"text": "Thanks, that works!Hope you don’t mind a followup question. The following query returns results.contents(query: {fields: {alias_in: \"isFeatured\"}})however, when I add additional parameters. No data is returned.contents(query: {fields: {alias_in: \"isFeatured\", boolean:true}})",
"username": "Anthony_H"
},
{
"code": " { AND: [{ fields : {alias_in : \"isFeatured\" }, { fields : { boolean_in : true} }] }",
"text": "@Anthony_H,Perhaps use an and expression:\n { AND: [{ fields : {alias_in : \"isFeatured\" }, { fields : { boolean_in : true} }] }Thanks",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Array Queries with Realm GraphQL | 2021-02-08T12:08:40.308Z | Array Queries with Realm GraphQL | 2,966 |
|
null | [
"app-services-user-auth",
"realm-web"
] | [
{
"code": "resetPassword(password, token, tokenId)\nasync resetPassword(token, tokenId, password) {\n const appRoute = this.fetcher.appRoute;\n await this.fetcher.fetchJSON({\n method: \"POST\",\n path: appRoute.emailPasswordAuth(this.providerName).reset().path,\n body: { token, tokenId, password },\n });\n}\n",
"text": "After following the code example on how to perform the password reset and debugging for quite some time, I discovered that the docs (examples and SDK docs) are incorrect. They both list the parameters aswhen in actuality, the function is this:I’m just posting this information for the benefit of others who might run into this in the future. Don’t know if the docs team will see this and get things updated.",
"username": "Justin_Jarae"
},
{
"code": "",
"text": "Hi Justin!I’m on the docs team here at MongoDB. Thanks for bringing this to our attention! We’ve actually fixed this in a feature branch where we’re putting together some other docs improvements: https://github.com/mongodb/docs-realm/blob/new-ia/source/sdk/node/examples/manage-email-password-users.txt#L231 … so expect the fix for this in the next week or so. You can also report issues like this using the docs feedback widget (the little “feedback” tab in the bottom right of docs pages) to tell us when you notice other discrepancies like this. We try our best to keep code snippets as accurate as possible, but occasionally issues do slip through the cracks, so we really appreciate it when users like you let us know about these things.-Nate",
"username": "Nathan_Contino"
},
{
"code": "",
"text": "Thanks, Nate! Glad you and the team were already on top of it. I’ll be sure to use the feedback feature in the future. Though I’d hate to give the doc a bad rating just for some incorrect code. Don’t want y’all getting yelled at because the code changed.",
"username": "Justin_Jarae"
}
] | Password Reset returning 400 - Docs are wrong | 2021-02-09T00:37:22.958Z | Password Reset returning 400 - Docs are wrong | 1,968 |
null | [
"atlas-device-sync",
"app-services-user-auth"
] | [
{
"code": "",
"text": "If a user is logged in with anonymous credentials so that they can access some realm sync data, but they have not yet authenticated with an identity provided by any other means (i.e. email/password, “local-pass”), how would you check to see if they have an identity provider other than anonymous?So far the best I can do is check the RLMUser.identities array of strings to see. Is there a better way?(MongoDB Realm iOS SDK 10.4.1)",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "Hello Eric_Lightfoot, welcome back!So my question to this, is are you using the OAuth? Because a really easy way to situate something like user authentication would be the built in OAUTH capabilities such as JWT, Google, Facebook, and so on.More can be found about OAUTH with Realm here.",
"username": "Brock_GL"
},
{
"code": "RLMUserRealmSwift.App.currentUser.anonymous.emailPasswordcurrentUser == nil",
"text": "Hi!I plan to use several of the providers in the future, but for now I’m speaking specifically about the RLMUser object that is held in RealmSwift.App.currentUser on the iOS client.I’m using it as my source of truth for the app’s authentication state. The problem I have is that - because I support anonymous users - I need a way to tell if the currentUser was gotten by .anonymous credentials or .emailPassword (or others). So in other words, a strategy that goes beyond checking if currentUser == nil",
"username": "Eric_Lightfoot"
},
{
"code": "user.data.identities",
"text": "You can also self-populate this value in custom user data when the user first registers by using an authentication trigger, but the user.data.identities field in the user object should also have what you’re looking for.",
"username": "Sumedha_Mehta1"
}
] | Suggest a way to check the RLMUser object for what type of credentials they currently have | 2021-02-04T05:30:20.469Z | Suggest a way to check the RLMUser object for what type of credentials they currently have | 2,071 |
null | [
"app-services-user-auth"
] | [
{
"code": "users",
"text": "We have an existing database that I’ve cloned into a test cluster to enable Realm (Sync). Our users authenticate to our apps that talk to this db via credentials in a users collection. I’m trying to determine the best approach to managing their authentication to Realm since it seems to be separate. We need this to be automated and transparent to the end users so their experience stays the same. I’m assuming an API key is probably the best way to go as we can generate these. But wanted to get community advice.",
"username": "Gregg_Bolinger"
},
{
"code": "",
"text": "Hey Gregg, if this authentication system is used in other services/fully built out, would JWT Auth be sufficient for your use case? You can pass in a JWT for a logged in user and Realm will use that to authenticate them with Realm.",
"username": "Sumedha_Mehta1"
}
] | Some Confusion on Authentication options for Realm | 2021-02-01T21:05:23.427Z | Some Confusion on Authentication options for Realm | 1,584 |
[
"atlas-functions"
] | [
{
"code": "",
"text": "I create a fresh project, 1 vanilla package installed and get this error when I try and upload node_modules.zip file. (attached screenshot).\n\nimage1499×664 54.3 KB\n\nThe text is:Failed to upload node_modules.zip: unknown: Unexpected token, expected ( (38:6) 36 | let body = ‘’ 37 | response.setEncoding(‘utf8’) > 38 | for await (const chunk of response) { | ^ 39 | body += chunk 40 | } 41 |I see a couple of other folks with this problem and no answers. Has this been addressed?",
"username": "Ragno_Cucina"
},
{
"code": "",
"text": "Hey @Ragno_Cucina - what dependency were you trying to upload? Dependencies with Functions is still in beta so you might be trying to upload one that is unsupported/in progress for now. If there is a dependency that you would like to see supported that isn’t on this list, please feel free to suggest it here.",
"username": "Sumedha_Mehta1"
}
] | Error uploading realm functions dependencies | 2021-02-01T19:28:52.288Z | Error uploading realm functions dependencies | 1,789 |
|
null | [
"aggregation"
] | [
{
"code": "{\n _id: ObjectID\n ... other fields that I still want returned\n feedback: [{ <- this array may be null or empty\n _id: ObjectID\n _recordStatus : String <- I want to filter by the value of this field which also may or may not exist\n ... other fields I want returned\n }]\n}\n{\n $unwind: \"$feedback\",\n },\n {\n $match: {\n \"feedback._recordStatus\": {\n $ne: \"deleted\",\n },\n },\n },\n {\n $group: {\n _id: {\n feedback_id: \"$feedback._id\",\n },\n feedback: {\n $push: \"$feedback\",\n },\n },\n }\n",
"text": "Hey,I have a collection where each document might have a subdocument array.I’m already using an aggregation pipeline to return the documents, but I want to filter the subdoc array before returning it.I’m currently doing this in code after the database query, but was trying to investigate doing it in the database instead.I’ve got something that works, however it only returns the subdoc array, and none of the original parent document items. Ideally I want all the original fields, and a filtered result set of the subdoc array.Here’s a vague schema of my data:And here’s the aggregation steps I’ve tried so far:This works, but discards all the original parent fields (except _id).Any help would be amazing!",
"username": "Seb_Toombs"
},
{
"code": "{\"aggregate\" \"testcoll\",\n \"pipeline\"\n [{\"$addFields\"\n {\"feedback\"\n {\"$switch\"\n {\"branches\"\n [{\"case\" {\"$isArray\" [\"$feedback\"]},\n \"then\"\n {\"$filter\"\n {\"input\" \"$feedback\",\n \"as\" \"record\",\n \"cond\" {\"$ne\" [\"$$record._recordStatus\" \"deleted\"]}}}}\n {\"case\" {\"$ne\" [{\"$type\" \"$feedback\"} \"missing\"]},\n \"then\" \"$feedback\"}],\n \"default\" \"$$REMOVE\"}}}}],\n \"cursor\" {},\n \"maxTimeMS\" 1200000}\n{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$addFields\": {\n \"feedback\": {\n \"$switch\": {\n \"branches\": [\n {\n \"case\": {\n \"$isArray\": [\n \"$feedback\"\n ]\n },\n \"then\": {\n \"$filter\": {\n \"input\": \"$feedback\",\n \"as\": \"record\",\n \"cond\": {\n \"$ne\": [\n \"$$record._recordStatus\",\n \"deleted\"\n ]\n }\n }\n }\n },\n {\n \"case\": {\n \"$ne\": [\n {\n \"$type\": \"$feedback\"\n },\n \"missing\"\n ]\n },\n \"then\": \"$feedback\"\n }\n ],\n \"default\": \"$$REMOVE\"\n }\n }\n }\n }\n ],\n \"cursor\": {},\n \"maxTimeMS\": 1200000\n}\n",
"text": "HelloMQL has $filter to do that array operationif array filter\nelse if exist old_value\nelse $$REMOVE(is system variable,i used it , if feeback not exists,to not add the field keep document\nas it was)If the recordStatus dont exists document passes,because it cant be = “deleted”\nyou can change it,and if not exists remove the member,here it keeps it.Missing : , but more concise printSame query valid JSON",
"username": "Takis"
},
{
"code": "",
"text": "Amazing! Thanks so much. I’d been wondering whether addFields was what I needed.I didn’t know how to use switch here either.Works perfectly I think!",
"username": "Seb_Toombs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Filter subdocument array but return all parent fields | 2021-02-08T23:36:59.447Z | Filter subdocument array but return all parent fields | 4,779 |
null | [] | [
{
"code": "",
"text": "I am attempting to import a CSV into an existing class but it fails to read one of the fields, which contains postcode data such as “0820”, as a string. The CSV file is similar to the following;…csv\nid,name,addressLine1,addressLine2,suburb,state,postcode,manager,managerPhone,managerEmail,active\n“24a74f5f-d418-4c82-9185-6e0b3fb6742c”,\"MX HALLAM Logistics “,“74-80 Melverton Drive”,“Unit 10”,“Hallam”,“Vic”,“3803”,“Barney Rubble”,“1300 116 339”,“[email protected]”,1\n“50f69c47-8e51-4448-97d6-23b67edc5b38”,“MX NEWCASTLE Logistics”,“194 Cormorant Rd”,” \",“Kooragang”,“NSW”,“2304”,“Fred Flintstone”,“1300 116 339”,“[email protected]”,1\n…The error displayed is “Depot.postcode must be of type ‘string’, got ‘number’ (3803)”. How can I force the import procedure to recognise this as a string?",
"username": "Raymond_Brack"
},
{
"code": "id.string(),name.string(),addressLine1.string(),addressLine2.string(),suburb.string(),state.string(),postcode.string(),manager.string(),managerPhone.string(),managerEmail.string(),active.boolean()\n\"24a74f5f-d418-4c82-9185-6e0b3fb6742c\",\"MX HALLAM Logistics \",\"74-80 Melverton Drive\",\"Unit 10\",\"Hallam\",\"Vic\",\"3803\",\"Barney Rubble\",\"1300 116 339\",\"[email protected]\",1\n\"50f69c47-8e51-4448-97d6-23b67edc5b38\",\"MX NEWCASTLE Logistics\",\"194 Cormorant Rd\",\" \",\"Kooragang\",\"NSW\",\"2304\",\"Fred Flintstone\",\"1300 116 339\",\"[email protected]\",1\nmongoimport --drop -d test -c coll --type csv --headerline --columnsHaveTypes --file lol.csv \n2021-02-08T18:27:19.100+0100\tconnected to: mongodb://localhost/\n2021-02-08T18:27:19.100+0100\tdropping: test.coll\n2021-02-08T18:27:19.133+0100\t2 document(s) imported successfully. 0 document(s) failed to import.\ntest:PRIMARY> db.coll.find().pretty()\n{\n\t\"_id\" : ObjectId(\"6021747779ecf1295f3ab01c\"),\n\t\"id\" : \"24a74f5f-d418-4c82-9185-6e0b3fb6742c\",\n\t\"name\" : \"MX HALLAM Logistics \",\n\t\"addressLine1\" : \"74-80 Melverton Drive\",\n\t\"addressLine2\" : \"Unit 10\",\n\t\"suburb\" : \"Hallam\",\n\t\"state\" : \"Vic\",\n\t\"postcode\" : \"3803\",\n\t\"manager\" : \"Barney Rubble\",\n\t\"managerPhone\" : \"1300 116 339\",\n\t\"managerEmail\" : \"[email protected]\",\n\t\"active\" : true\n}\n{\n\t\"_id\" : ObjectId(\"6021747779ecf1295f3ab01d\"),\n\t\"id\" : \"50f69c47-8e51-4448-97d6-23b67edc5b38\",\n\t\"name\" : \"MX NEWCASTLE Logistics\",\n\t\"addressLine1\" : \"194 Cormorant Rd\",\n\t\"addressLine2\" : \" \",\n\t\"suburb\" : \"Kooragang\",\n\t\"state\" : \"NSW\",\n\t\"postcode\" : \"2304\",\n\t\"manager\" : \"Fred Flintstone\",\n\t\"managerPhone\" : \"1300 116 339\",\n\t\"managerEmail\" : \"[email protected]\",\n\t\"active\" : true\n}\n",
"text": "Hi @Raymond_Brack,Step 1Transform the CSV header line so it looks like this:Step 2ResultNoteBecause I can handle any type I want with this, I can deal with the boolean “active” for example.Another option:If you are looking for an over engineered CSV Bazooka, you can give this a shot! I did this for a hackathon and it’s capable to handle any CSV you throw at it. It can regroup column together (if you have a date split on multiple columns day, month, year for example) and you can basically execute python code for each column so the parsing can be a lot smarter.Smart data import for CSV files into MongoDB. Contribute to MaBeuLux88/MongoSlurp development by creating an account on GitHub.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "class Depot: Object {\n @objc dynamic var state = \"\"\n @objc dynamic var postcode = \"\"\n @objc dynamic var manager = \"\"\n}\n",
"text": "I am attempting to import a CSV into an existing classIs this a question about importing data using Realm Studio or something else?I am asking because I took your data and condensed it to this to isolate the postal codestate,postcode,manager\n“Vic”,“3803”,“Barney Rubble”\n“NSW”,“2304”,“Fred Flintstone”I then created a matching Realm object in my Swift projectLastly, I ran the project with created the Realm file with no Depot objects, then opened Realm Studio and imported the file (named Depot.csv) and it worked correctly.import2236×400 56.8 KBSo there may be some other issue in the file you’re importing",
"username": "Jay"
},
{
"code": "",
"text": "My bad - I should have noted that I am using Realm Studio on Windows to import the data.",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "Oops I didn’t notice the “MongoDB Realm” topic .\nThat being said, if you can script with mongoimport, it will be faster I think.",
"username": "MaBeuLux88"
}
] | Import CSV Data | 2021-02-08T00:53:56.918Z | Import CSV Data | 4,793 |
null | [
"on-premises"
] | [
{
"code": "",
"text": "Hi\nI try it install MongDB Charts on my Centos 7.\nMy docker version is 19.03.6\nAfter run docker stack deploy -c charts-docker-swarm-19.12.2.yml mongodb-charts\ndocker service ls always show this:\nID NAME MODE REPLICAS IMAGE PORTS\nattyf7r08cx0 mongodb-charts_charts replicated 0/1 Quay *:80->80/tcp, *:443->443/tcpand docker service log attyf7r08cx0 print nothing!\nso I try to run docker run Quay\ngot this:\n parsedArgs\n installDir (‘/mongodb-charts’)\n log\n salt\n productNameAndVersion ({ productName: ‘MongoDB Charts Frontend’, version: ‘1.9.1’ })\n gitHash (undefined)\n supportWidgetAndMetrics (undefined)\n tileServer (undefined)\n tileAttributionMessage (undefined)\n rawFeatureFlags (undefined)\n stitchMigrationsLog ({ completedStitchMigrations: })\n featureFlags ({})\n chartsMongoDBUri failure: ENOENT: no such file or directory, open ‘/run/secrets/charts-mongodb-uri’\n tokens failure: ENOENT: no such file or directory, open ‘/mongodb-charts/volumes/keys/charts-tokens.json’\n encryptionKeyPath failure: ENOENT: no such file or directory, open ‘/mongodb-charts/volumes/keys/mongodb-charts.key’\n lastAppJson ({})\n stitchConfigTemplate\n libMongoIsInPath (true)Have anyone got the same errors? And how to fix it ?\nThanks!",
"username": "He_Qingwei"
},
{
"code": "",
"text": "Looks like you missed a step:\nhttps://docs.mongodb.com/charts/19.12/installation#create-a-docker-secret-for-the-metadata-databaseAlso as you may have seen on the compose download page:MongoDB Charts On-Premises will be end of life on September 1, 2021",
"username": "chris"
},
{
"code": "",
"text": "hi, @chris, thanks for replying.\nI do remember to execute the Step 6:\ndocker run --rm Quay charts-cli test-connection ‘mongodb://172.17.0.1’\nMongoDB connection URI successfully verified.Before starting Charts, please create a Docker Secret containing this connection URI using the following command:\necho “mongodb://172.17.0.1” | docker secret create charts-mongodb-uri -\n[root@VM-186-157-centos /data/mongodb/mongodb-charts]# echo “mongodb://172.17.0.1” | docker secret create charts-mongodb-uri -\npv92cnbt4hlf9otoqx6ghngedI followed every step guide and this worked on my Ubuntu18.04LTS.\nBut it failed on Centos 7.",
"username": "He_Qingwei"
},
{
"code": "docker rundocker stack deploy",
"text": "Hi @He-Qingwei -The errors you saw when you started using docker run are expected, as the volumes and secrets are not mounted when you start this way. It is possible to mount the volumes with additional parameters, but it would likely be easier to figure out why docker stack deploy didn’t work.The most common cause of the 0/1 replicas is that the image is still in the process of being pulled down. Did you try this again once the image was definitely available locally?Tom",
"username": "tomhollander"
},
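{
"text": "If you do want to experiment with docker run directly, the missing files can be supplied by hand. This is a sketch, not an official procedure: plain docker run has no Swarm secrets, so the URI file is bind-mounted to the path the container expects:",
"code": "# charts-mongodb-uri is a local file containing only the connection string.
docker run --rm \\
  -v \"$(pwd)/charts-mongodb-uri:/run/secrets/charts-mongodb-uri:ro\" \\
  -v charts-keys:/mongodb-charts/volumes/keys \\
  -p 80:80 -p 443:443 \\
  quay.io/mongodb/charts:19.12.2"
},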
{
"code": "stack deploydocker pull quay.io/mongodb/charts:19.12.2docker imagesREPOSITORY TAG IMAGE ID CREATED SIZE\nmongo latest ca8e14b1fda6 2 weeks ago 493MB\nquay.io/mongodb/charts 19.12.2 bfd64537eef0 6 months ago 714MB\ndocker stack deploy",
"text": "Hi, @tomhollander\nThanks for replying.\nBefore stack deploy, I run docker pull quay.io/mongodb/charts:19.12.2 for getting charts image manually.\nHere is my docker images outputs:Do any else images are required for docker stack deploy ?Best wishes, Qingwei",
"username": "He_Qingwei"
},
{
"code": "error msg=\"Failed creating ingress network: network sandbox join failed: subnet sandbox join failed for \\\"10.255.0.0/16\\\": overlay subnet 10.255.0.0/16 failed check with host route table: requested network overlaps wit\nFeb 08 10:54:23 VM-14-228-centos dockerd[1346]: time=\"2021-02-08T10:54:23.665071455+08:00\" level=warning msg=\"Peer operation failed:Unable to find the peerDB for nid:m66sf37g0wcbfxpu11rp6ykrh op:&{3 m66sf37g0wcbfxpu11rp6ykrh [] [] [] [] false false false func1}\"\nFeb 08 10:54:27 VM-14-228-centos dockerd[1346]: time=\"2021-02-08T10:54:27.315458585+08:00\" level=info msg=\"initialized VXLAN UDP port to 4789 \"\nFeb 08 10:54:27 VM-14-228-centos dockerd[1346]: time=\"2021-02-08T10:54:27.607537671+08:00\" level=warning msg=\"failed to deactivate service binding for container mongodb-charts_charts.1.wlvlodd7th7z25koq5og3dh4y\" error=\"No such container: mongodb-charts_charts.1.wlvlodd7th7z25koq5og3dh4y\" module=node/agent node.id=\nFeb 08 10:54:27 VM-14-228-centos dockerd[1346]: time=\"2021-02-08T10:54:27.786356910+08:00\" level=warning msg=\"Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count m66sf37g0wcbfxpu11rp6ykrh], retrying....\"\nFeb 08 10:54:27 VM-14-228-centos dockerd[1346]: time=\"2021-02-08T10:54:27.786408737+08:00\" level=error msg=\"Failed creating ingress network: network sandbox join failed: subnet sandbox join failed for \\\"10.255.0.0/16\\\": overlay subnet 10.255.0.0/16 failed check with host route table: requested network overlaps wit\nFeb 08 10:54:27 VM-14-228-centos dockerd[1346]: time=\"2021-02-08T10:54:27.786431349+08:00\" level=warning msg=\"Peer operation failed:Unable to find the peerDB for nid:m66sf37g0wcbfxpu11rp6ykrh op:&{3 m66sf37g0wcbfxpu11rp6ykrh [] [] [] [] false false false func1}\"\nFeb 08 10:54:27 VM-14-228-centos dockerd[1346]: time=\"2021-02-08T10:54:27.897323149+08:00\" level=warning msg=\"Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count 80ujoovfx0qf5t6h89rqvy2dc], retrying....\"\n",
"text": "And here is some logs @tomhollander :",
"username": "He_Qingwei"
}
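{
"text": "The last log line suggests the Swarm ingress subnet 10.255.0.0/16 collides with an existing host route. One possible remedy (an assumption based on that message, not a verified fix for this machine) is to recreate the ingress network on a non-overlapping subnet:",
"code": "# Remove the default ingress network (confirm the prompt) and recreate it on a free subnet.
docker network rm ingress
docker network create --driver overlay --ingress --subnet 10.11.0.0/16 ingress
# Then redeploy the stack.
docker stack deploy -c charts-docker-swarm-19.12.2.yml mongodb-charts"
}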
] | Fail to install mongodb charts on Centos | 2021-02-06T13:11:02.500Z | Fail to install mongodb charts on Centos | 5,389 |
null | [] | [
{
"code": "",
"text": "I’m Brock, nice to meet you all, I’m a Realm TSE and I’m looking forward to learning from you all, and maybe even being of help.I’ve worked in the industry roughly 12 years, somewhat around 16 if you include moseying around with tech as a teen.My background in the past has typically been infosec, DevOps, a whole bunch of Linux (and Unix/Mac) with some large influences pertaining to Unreal Engine, Unity3D, GODOT, and working with development teams on iOS and Android applications.I look forward to supporting the community!Thank you,Brock.",
"username": "Brock_GL"
},
{
"code": "",
"text": "Hi @Brock_GL,\nWelcome to the MongoDB Developer Community Forum!\nNice to meet you! Have a great day!",
"username": "Soumyadeep_Mandal"
},
{
"code": "",
"text": "Thank you @Soumyadeep_Mandal, it’s appreciated.I’m learning a lot and I hope to grow as a stronger source of information as well for y’all.",
"username": "Brock_GL"
}
] | 🌱 Hello! Pleased to meet everyone! | 2021-02-05T02:03:31.919Z | :seedling: Hello! Pleased to meet everyone! | 3,454 |
null | [] | [
{
"code": "",
"text": "Can anyone tell me how to connect using mongo shell?",
"username": "Pritam_Das"
},
{
"code": "",
"text": "Hi @Pritam_Das,\nWelcome to the MongoDB Developer Community Forum!\nPlease read and follow these documentations & guides\nhttps://docs.mongodb.com/manual/mongo/\nI hope these resources will be helpful to solve your issue!",
"username": "Soumyadeep_Mandal"
},
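{
"text": "For a quick illustration, typical shell connections look like this (the hostname and user are placeholders, not values from this thread):",
"code": "# Local server on the default port:
mongo

# Remote deployment (e.g. an Atlas cluster); --password with no value prompts for it:
mongo \"mongodb+srv://cluster0.example.mongodb.net/test\" --username alice --password"
},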
{
"code": "",
"text": "Thank you so much @Soumyadeep_Mandal",
"username": "Pritam_Das"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | connect mongo shell | 2021-02-08T17:34:14.602Z | connect mongo shell | 1,681 |
null | [
"installation"
] | [
{
"code": "",
"text": "I cant uninstall mongoDB because mongodb-windows-x86_64-4.4.1-signed.msi not found.How can i download this file?I just founded 4.4.3 version.Please help\nThanks",
"username": "Gergely_Karl"
},
{
"code": "",
"text": "Hi @Gergely_Karl,\nWelcome to the MongoDB Developer Community Forum!\nPlease visit this link and your device will be automatically detected and after visiting the page just click the download button to download and you can customize among the available versionsDownload MongoDB Community Server non-relational database to take your next big project to a higher level!\nOR\nClick on this link to download the latest version as of today is 4.4.3 Windows MSI\nhttps://fastdl.mongodb.org/windows/mongodb-windows-x86_64-4.4.3-signed.msiI hope it will help you to solve your issue!",
"username": "Soumyadeep_Mandal"
},
{
"code": "",
"text": "The latest patch version 4.4.3 should be fine as a replacement for 4.4.1 as the only changes are bug fixes.But if you have stringent requirements for this version or many previous ones you can find them in the archived releases. But that is only the zip versions.\nimage1436×544 73.8 KB\nFor the msi have a look in:Try MongoDB Atlas products free. Developers can choose to use in the cloud or download locally. Either way, our software makes it easy to work with data.",
"username": "chris"
}
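{
"text": "If the download URL pattern shown above for 4.4.3 also holds for 4.4.1 (an unverified assumption), the older installer would be at:",
"code": "https://fastdl.mongodb.org/windows/mongodb-windows-x86_64-4.4.1-signed.msi"
}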
] | Cant un/reinstall mongoDB.mongodb-windows-x86_64-4.4.1-signed.msi not found | 2021-02-08T17:33:32.690Z | Cant un/reinstall mongoDB.mongodb-windows-x86_64-4.4.1-signed.msi not found | 2,815 |
null | [
"crud",
"indexes"
] | [
{
"code": " \"Type\" : 1.0,\n\n\"app\" : 1.0,\n\n\"event\" : 1.0,\n\n\"createdAt\" : -1.0\ndb.getCollection('<collection name>').update(\n\n{\"Type\" : \"<Type>\",\n\n \"app\" : \"<App>\",\n\n \"event\" : \"<testEvent>\",\n\n $and:[{\"createdAt\":{$gte: ISODate(\"2020-10-26T00:00:00.000Z\")}},\n\n {\"createdAt\":{$lte: ISODate(\"2020-12-26T00:00:00.000Z\")}}]},\n\n{$set:{\"isRead\" : true}},\n\n{multi:true}\n\n)\n",
"text": "I’m looking to execute an update query in a Production Mongo DB. Version 3.0.14. DB cluster with Primary and 4 Replicas with i3.2xlarge.\nThere are 2.2 M records to be updated. Can this have a considerable performer impact on the DB instances .Following are the indexes.This is a similar sample query",
"username": "Amalka_Pathirage"
},
{
"code": "",
"text": "Hello @Amalka_Pathirage, welcome to the MongoDB Community forum.The update operation has a query filter with multiple fields. To access the documents efficiently based upon these fields a Compound Index is required. Indexing on individual fields (single field indexes) wont be of that much use - at most one or maybe even two indexes will be used.You can verify if the query uses an index (or indexes) by generating a query plan using the db.collection.explain on the update method. For the details of the explain’s output see Explain Results.Also see Indexing Strategies. This has multiple topics, and Create Queries that Ensure Selectivity is a topic of interest, that how you can organize the fields in a compound index. With a large dataset, a compound index with four fields occupies space on disk and memory (RAM) while in operation - see the topic Ensure Indexes Fit in RAM.I suggest you try some indexing (as suggested above) and tests in a test environment with test datasets and see what the explain results show. Based upon the findings you can figure what indexing will be of use.",
"username": "Prasad_Saya"
}
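{
"text": "As a concrete sketch of the compound index suggested above, following the equality-fields-then-range ordering (field names taken from the sample query; verify with explain() on your own data before relying on it):",
"code": "// Equality fields first, then the range/sort field.
db.getCollection('<collection name>').createIndex(
  { Type: 1, app: 1, event: 1, createdAt: -1 }
)

// Inspect the plan for the update's filter:
db.getCollection('<collection name>').find({
  Type: \"<Type>\", app: \"<App>\", event: \"<testEvent>\",
  createdAt: { $gte: ISODate(\"2020-10-26\"), $lte: ISODate(\"2020-12-26\") }
}).explain(\"executionStats\")"
}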
] | Index Usage for Update Query Performance | 2021-02-08T17:33:05.801Z | Index Usage for Update Query Performance | 4,944 |
null | [
"queries",
"upgrading"
] | [
{
"code": "",
"text": "how to upgrade 3.4 to 3.6 version MongoDB",
"username": "Hemanth_perepi"
},
{
"code": "",
"text": "Hi @Hemanth_perepi\nWelcome to the MongoDB Developer Community Forum!\nPlease read and follow these documentations & guides\nI hope these resources will help you to solve your issue with upgrade.",
"username": "Soumyadeep_Mandal"
},
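{
"text": "In sketch form, the key server-side steps are these (replace the binaries in between; see the upgrade guides for the full replica-set rolling procedure):",
"code": "// Before upgrading: confirm the current compatibility version is \"3.4\".
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// After all members run 3.6 binaries and the deployment is healthy:
db.adminCommand({ setFeatureCompatibilityVersion: \"3.6\" })"
},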
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to upgrade 3.4.1 to 3.6 version mongodb in windows | 2021-02-08T18:20:25.222Z | How to upgrade 3.4.1 to 3.6 version mongodb in windows | 2,237 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 3.6.22 is out and is ready for production deployment. This release contains only fixes since 3.6.21, and is a recommended upgrade for all 3.6 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "Hi @Jon_Streets,\nThanks for sharing the new release update!",
"username": "Soumyadeep_Mandal"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 3.6.22 is released | 2021-02-08T16:16:07.232Z | MongoDB 3.6.22 is released | 1,544 |
null | [] | [
{
"code": "",
"text": "Hii m new to mongo db i am doing a poc for mongo db with redis is there a in built in way that i can do sync between mongo and redis",
"username": "anubhav_tarar"
},
{
"code": "",
"text": "MongoDB does not have a built-in way to sync with Redis, however there are a number of ways to accomplish this. You could write your own service/application that does this, use Kafka as an intermediate layer and configure both the Redis and MongoDB Connectors, or use third party ETL tools.",
"username": "Robert_Walters"
}
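{
"text": "As a rough sketch of the Kafka route mentioned above, a minimal MongoDB source-connector configuration could look like this (names and URI are placeholders; a matching Redis sink connector would consume the same topic):",
"code": "# Kafka Connect properties for the MongoDB source side.
name=mongo-source
connector.class=com.mongodb.kafka.connect.MongoSourceConnector
connection.uri=mongodb://localhost:27017
database=mydb
collection=mycoll"
}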
] | How to keep redis in synchronization with mongo db | 2021-02-08T06:18:41.788Z | How to keep redis in synchronization with mongo db | 4,726 |
null | [
"queries",
"golang"
] | [
{
"code": "",
"text": "Hi there\nWe would like to use the new driver for golang, and one of our needs is to have apply a default filter to a query before the actual find\\etc functions are being used.\nfor instance if a developer would like to create some custom query, once the query is evaluated, the driver will add the default query (i.e. $and:[{email:“x”}, userQuery]) to it.Is this something possible to do? or for this to happen we must use a wrapper code that enforces it?\nThanks ",
"username": "Oded_Valtzer"
},
{
"code": "",
"text": "Hello @Oded_Valtzer, welcome to the MongoDB Community forum.You can create a View on a collection - and on this view you can apply other queries.While creating the view is easy, it has some restrictions. The documentation I had linked has all the details, but from the definition:",
"username": "Prasad_Saya"
}
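{
"text": "A small sketch of the view approach, using the email filter from the question (the collection and view names are made up for illustration):",
"code": "// Every query against \"filteredOrders\" sees only documents matching the default filter.
db.createView(\"filteredOrders\", \"orders\", [
  { $match: { email: \"x\" } }
])

// A developer's custom query is then implicitly AND-ed with the filter:
db.filteredOrders.find({ status: \"open\" })"
}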
] | Default Query filter in the driver | 2021-02-07T10:07:11.389Z | Default Query filter in the driver | 2,129 |
null | [
"xamarin"
] | [
{
"code": " try\n {\n\n var user = App.realmApp.CurrentUser;\n\n //if user is not logged on yet log on the user and sync\n if (user == null)\n {\n\n\n var CurrentUser = await App.realmApp.LogInAsync(Credentials.Anonymous());\n var config = new SyncConfiguration(\"Hirschs\", CurrentUser);\n _realm = await Realm.GetInstanceAsync(config);\n\n return _realm;\n\n }\n else\n {\n \n return _realm = Realm.GetInstance();\n\n }\n\n\n }\n catch (Exception ex)\n {\n await UserDialogs.Instance.AlertAsync(new AlertConfig\n {\n Title = \"An error has occurred\",\n Message = $\"An error occurred while trying to open the Realm: {ex.Message}\"\n });\n\n // Try again\n return await OpenRealm();\n }\n\n}",
"text": "I’m reading data from remote mongodb realm which sync’s to my local realm, but it seems I can’t read from my local realm after sync.This is the message I get when I try to read from my local realm:“Unable to open a realm at path ‘/data/user/0/com.companyname.appname/files/default.realm’: Incompatible histories. Expected a Realm with no or in-realm history, but found history type 3 Path:Exception backtrace:\\n.”here is my code:private async Task OpenRealm()\n{",
"username": "Govern_Goodbuy"
},
{
"code": "private async Task<Realm> OpenRealm()\n{\n try\n {\n var currentUser = App.realmApp.CurrentUser;\n\n if (currentUser == null)\n {\n var currentUser = await App.realmApp.LogInAsync(Credentials.Anonymous());\n var config = new SyncConfiguration(\"Hirschs\", currentUser);\n _realm = await Realm.GetInstanceAsync(config);\n\n return _realm;\n }\n else\n {\n var config = new SyncConfiguration(\"Hirschs\", currentUser);\n _realm = Realm.GetInstance(config);\n\n return _realm;\n }\n }\n catch (Exception ex)\n {\n await UserDialogs.Instance.AlertAsync(new AlertConfig\n {\n Title = \"An error has occurred\",\n Message = $\"An error occurred while trying to open the Realm: {ex.Message}\"\n });\n }\n}\n",
"text": "The problem here is that you are trying to create a new local realm in the same path where the synced realm already is.I suppose that you would like to open the same realm synchronously (that is necessary if the device is offline). In this case you would just need to use the same configuration for both the sync and async calls, as reported in the documentation here.You could do something like:",
"username": "papafe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to open a realm at path in Xamarin forms | 2021-02-07T11:14:24.195Z | Unable to open a realm at path in Xamarin forms | 4,106 |
null | [
"aggregation",
"php"
] | [
{
"code": "",
"text": "I am using mongodb with ODM & PHP symfony,Retrieving a set of data with aggregate query builder and group function, using ->first operator , i’m retrieving first elements of grouped data wit DESC of ceated date, i need sum of one of the fields in this grouped data in a single query.",
"username": "Irshad_Muhammed"
},
{
"code": "$first$sum$sum$group$group$group$sumfirst()group()sum()",
"text": "It would be helpful to share a few example documents from the collection (feel free to redact irrelevant fields) and another example of the output you’re trying to produce.I’m not sure what “using ->first operator” refers to. Are you referring to the Doctrine ODM builder method that corresponds to a $first accumulator in the MongoDB query language?The $sum documentation in the MongoDB server manual has some examples of using $sum with $group. See: Use in $group Stage.If you’re using Doctrine MongoDB ODM, there is also an example of using $group and $sum in the Aggregation Builder documentation. The examples don’t happen to show first(), but it can be used within a group() context similar to sum() and other accumulator methods.Before you attempt to create this query via the ODM’s aggregation builder I think it would be more helpful/educational to try and construct the raw aggregation pipeline and execute it via the MongoDB shell. This will allow you to work alongside examples in the MongoDB server manual. Doctrine ODM’s documentation is not a replacement for server manual, especially when it comes to MongoDB’s query language.Alternatively, MongoDB Compass includes a visual Aggregation Pipeline Builder that may be helpful. That should allow you to iteratively construct an aggregation pipeline while connected to your database, which you can then port over to ODM’s builder methods once you’re satisfied.",
"username": "jmikola"
}
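{
"text": "Before wiring it into the ODM builder, the raw pipeline might look like this in the shell (a sketch with assumed field names, since the collection was not shared):",
"code": "db.items.aggregate([
  { $sort: { createdAt: -1 } },
  { $group: {
      _id: \"$groupKey\",
      latest: { $first: \"$$ROOT\" },  // first element per group, newest createdAt first
      total:  { $sum: \"$amount\" }    // sum of one field across the group
  } }
])"
}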
] | How to get sum of results got from aggegate group query with first operator | 2021-02-03T11:25:20.332Z | How to get sum of results got from aggegate group query with first operator | 3,757 |
null | [
"java"
] | [
{
"code": "07-10-2020 16:26:52.739 [main] INFO org.mongodb.driver.cluster.info - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}\n07-10-2020 16:26:52.740 [main] INFO org.mongodb.driver.cluster.info - Cluster description not yet available. Waiting for 30000 ms before timing out\n07-10-2020 16:26:52.777 [cluster-ClusterId{value='5f7d9ef436efac69854c9165', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster.info - Exception in monitor thread while connecting to server localhost:27017\ncom.mongodb.MongoSocketOpenException: Exception opening socket\n\tat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70)\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:127)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect\n\tat java.net.DualStackPlainSocketImpl.connect0(Native Method)\n\tat java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:83)\n\tat java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)\n\tat java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)\n",
"text": "Hi We are trying to insert around 10000 Documents to a MongoDB’s Collection. The data is basically read from the CSV file and stored into MongoDB local instance. Near to 8300 records got inserted and beyond this, we get the following error",
"username": "Sriram_Muralidharan"
},
{
"code": "",
"text": "Was this the only thing connecting to your database at the time ?",
"username": "chris"
},
{
"code": "",
"text": "Thanks for the reply Chris, Actually trying to read an Excel which has around 40000 records or rows, we wanted to insert these rows into MongoDB as a document and while doing so I am getting this error after storing 8100 records…",
"username": "Sriram_Muralidharan"
},
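{
"text": "One common cause of “No buffer space available” on Windows is exhausting ephemeral ports by opening a new client or connection per record instead of reusing one. A sketch of the reuse pattern with batched inserts (rowsFromCsv() and the other names here are illustrative, not from the original code):",
"code": "// Reuse a single MongoClient for the whole import and insert in batches.
try (MongoClient client = MongoClients.create(\"mongodb://localhost:27017\")) {
    MongoCollection<Document> coll =
        client.getDatabase(\"test\").getCollection(\"rows\");
    List<Document> batch = new ArrayList<>();
    for (Document row : rowsFromCsv()) { // placeholder for the CSV/Excel reader
        batch.add(row);
        if (batch.size() == 1000) {
            coll.insertMany(batch);
            batch.clear();
        }
    }
    if (!batch.isEmpty()) {
        coll.insertMany(batch);
    }
}"
},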
{
"code": "",
"text": "Was this the only thing connecting to your database at the time ?",
"username": "chris"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | No buffer space available (maximum connections reached?): connect | 2020-10-07T12:07:25.546Z | No buffer space available (maximum connections reached?): connect | 4,012 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "The documentation talks about uploading external dependencies (https://docs.mongodb.com/realm/functions/upload-external-dependencies) but does not seem to address removing all external dependencies from a Realm app. What is the correct approach for that either using Realm UI and/or CLI?",
"username": "Ilya_Sytchev"
},
{
"code": "",
"text": "Hi Ilya,Unfortunately there is no delete option per se.\nCan I ask why you would like to remove all dependencies?I will mention that there is no cost/performance impact to yourself for keeping around dependencies.If you’re looking to replace them with other dependencies, you can simply upload a new tar file with only the dependencies you want to use and it will replace what is already there effectively removing the packages that were not included in the new upload.Should you wish to provide feedback to our team regarding the value of being able to delete dependencies explicitly, please visit our feedback forum for Realm and enter this as an idea that you would like to see developed.Regards\nManny",
"username": "Mansoor_Omar"
},
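{
"text": "In sketch form, the replace-by-upload workflow would be (the package name is a placeholder; the archive is then uploaded through the Realm UI as before):",
"code": "# Install only the dependencies the app still needs...
npm install some-still-needed-package

# ...then archive node_modules and upload it, replacing the previous upload.
tar -czf node_modules.tar.gz node_modules/"
},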
{
"code": "",
"text": "Thanks for the update. It would be great to add the info about replacing dependencies to the docs.I think that if an application no longer relies on dependencies it should be possible to remove them. Deleting unused code is a good idea in general.",
"username": "Ilya_Sytchev"
},
{
"code": "",
"text": "Hi Ilya,I have requested our documentation team to update the article with this information.Kind Regards\nManny",
"username": "Mansoor_Omar"
}
] | Remove all uploaded external dependencies | 2021-02-03T22:35:05.297Z | Remove all uploaded external dependencies | 2,128 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "I am experiencing a problem with Realm.asyncOpen this morning. I usually call this function at startupRealm.asyncOpen(configuration: user.configuration(partitionValue: “user_id=(uid)”),on my iOS program. About half the time this function just never comes back. I used work fine up until this morning. I tried this on a new app and new cluster - same problem.Richard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "Can you share server or client Logs please?",
"username": "Ian_Ward"
},
{
"code": "func initRealms(onCompletion completion: @escaping (Error?) -> Void) {\n if let user = self.app.currentUser,\n let uid = self.currentUserId {\n \n // open user realm\n Realm.asyncOpen(configuration: user.configuration(partitionValue: \"user_id=\\(uid)\")) { result in\n \n switch result {\n case .success(let realm):\n self.userRealm = realm\n \n Realm.asyncOpen(configuration: user.configuration(partitionValue: \"public\")) { result in\n \n switch result {\n case .success(let realm):\n self.publicRealm = realm\n completion(nil)\n\n case .failure(let error):\n fatalError(\"Failed to open public realm: \\(error)\")\n }\n \n }\n\n case .failure(let error):\n fatalError(\"Failed to open user realm: \\(error)\")\n }\n \n }\n }\n}\n2021-02-05 13:45:38.003134-0500 CosyncStorageSample[8391:315745] [] nw_protocol_get_quic_image_block_invoke dlopen libquic failed\n",
"text": "Ian,My Realm Id is cosyncstoragetest-agmjuThe login works fine and I get this logOKFeb 05 13:45:38-05:0057msAuthenticationArrow Right Iconloginlocal-userpass60141f2266abe05ca0f416c5601d92529daad1506b74f461The function to open the Realms is thisThe output logs from Xcode is this2021-02-05 13:45:40.709796-0500 CosyncStorageSample[8391:315899] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false\n2021-02-05 13:45:40.754705-0500 CosyncStorageSample[8391:315899] Sync: Connection[1]: Connected to endpoint ‘3.210.32.164:443’ (from ‘192.168.86.22:63744’)\n2021-02-05 13:47:52.670704-0500 CosyncStorageSample[8391:315899] Sync: Connection[1]: Connection closed due to error\n2021-02-05 13:47:52.751124-0500 CosyncStorageSample[8391:315899] Sync: Connection[1]: Connected to endpoint ‘3.210.32.164:443’ (from ‘192.168.86.22:63835’)Up until this morning, this code worked absolutely fine. It now hangs on the first asyncOpen() and never comes back intermittently, but about 80% of the time.I have bundled Swift Package Dependencies Realm 10.5.1 and RealmDataBase 10.3.3I have also tried this on a brand new cluster and application - same problem.Richard Krueger",
"username": "Richard_Krueger"
},
{
"code": "if let user = self.app.currentUser,\n let uid = self.currentUserId {\n",
"text": "Not a critique or even related to the issue but this section of code can fail silently and you’ll never know about it or why.We were using similar code and spent a day trying to figure out why our app would never log in ‘out of the blue’ and it was a uid issue.",
"username": "Jay"
},
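{
"text": "A sketch of the explicit else-handling Jay describes (assuming the Swift SDK's User.isLoggedIn property):",
"code": "if let user = app.currentUser, user.isLoggedIn {
    // Safe to open the user's realms.
} else {
    // currentUser was nil, or populated but stale after a crash:
    // surface an error / force re-authentication instead of failing silently.
}"
},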
{
"code": "",
"text": "Yes it could fail if you can’t unwrap user, but right now it’s failing on the asyncOpen() - it just does not come back. This only started happening this week, and it’s intermittent to boot.",
"username": "Richard_Krueger"
},
{
"code": "if let user = self.app.currentUser,\n let uid = self.currentUserId {\n var currentUserId: String? {\n if let user = self.app.currentUser {\n let uid = user.id\n return uid\n }\n return nil\n }\n",
"text": "My issue with asyncOpen() has magically stopped happening - maybe the server gods have decided to smile on me again. It might have something to do with the fact that it is 2AM here on East Coast - in the pandemic time is but a mere illusion. Speaking of your topic Jay, what was your issue with?Now you have peaked my curiosity. In my code, I define currentUserId asRichard",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Richard_KruegerThe issue we had was due to another coding error, which was caused by the app crashing somewhere ‘in the middle’ and currentUser was still populated but was not authenticated.It would fail silently and we had to figure out why - just adding some else statements with error handling took care of that - no biggie.The comment was more for troubleshooting than anything.",
"username": "Jay"
},
{
"code": "Realm.GetInstanceAsync",
"text": "I’m having the same issue. Without any modifications on the client or the server, Realm.GetInstanceAsync (C#) is hanging (it actually returns after 5 minutes or so if I remove the timeout).This happened last Friday, yesterday and now today.",
"username": "Luccas_Clezar"
},
{
"code": "",
"text": "I spoke too soon, it happening to me again tonight at 7PM EST. I will wait 5 minutes to see if it comes back. I am going to build a super simple Realm app, where this happens, and post it to the forum for further examination.Richard",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Richard_KruegerIt was the same for me, yesterday I couldn’t connect, then for some hours it was normal again, then today about 2PM UTC the issue reappeared.I already opened a support ticket about this issue but there’s no information about it for now.",
"username": "Luccas_Clezar"
},
{
"code": "REALM_APP_ID = \"hangingapp-lxnaw\"\n",
"text": "So I have essentially condensed this down a 10 line Realm Swift program on a fresh MongoDB Realm app and Atlas cluster. It hangs every time on asyncOpen. I published a public GitHub repo for anyone to try this out.Contribute to rckrueger/HangingApp development by creating an account on GitHub.Currently this is built against a MongoDB app with a Realm Id ofThere is one user called [email protected] with a password “mongodb”. The app is configured with a simple Email/Password authentication.This definitely seems like a server problem from the latest deploy.Richard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "I tested the project a few times now and I see the same issue. Sometimes it actually works as it should returning in 4 or 5 seconds, but most of the time it takes more than 3 minutes.",
"username": "Luccas_Clezar"
},
{
"code": "",
"text": "@Luccas_Clezar there is definitely something going on that’s new. This was not happening until Friday consistently - then again I wasn’t programming Wednesday or Thursday - doing documentation work instead. The example that I published is as basic as it gets.",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "It is 19:49 EST - the problem seems to be fixed. I will try again in the morning.Richard",
"username": "Richard_Krueger"
}
] | Problem with asyncOpen | 2021-02-05T15:52:52.051Z | Problem with asyncOpen | 3,432 |
null | [
"replication",
"monitoring"
] | [
{
"code": "",
"text": "Hi Team,Can we capture state changes in MongoDB replica set.\nFor Ex.\nNode1 was primary yesterday.\nToday it became secondary due to weekly maintenance on Node1.Is there a way If we would like to track these changes such as previous state of Node1 and current state of Node1 and when the state was changed.Thanks",
"username": "venkata_reddy"
},
{
"code": "",
"text": "Hi @venkata_reddy,Replica set member history is typically tracked by way of a monitoring solution which will generally also include proactive alerting for changes of interest (for example, increasing replication lag or election of a new primary).See Monitoring for MongoDB for a list of some common tools, services, and server commands used.You can also derive replication history like replica member state changes and reconfiguration from the MongoDB server logs, but I would usually reserve that effort for diagnostic investigation when changes cannot be adequately explained via your monitoring solution.Regards,\nStennie",
"username": "Stennie_X"
}
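{
"text": "For a quick ad-hoc check (not a substitute for a monitoring solution), the current member states can be pulled from the shell; logging this periodically gives a crude state history:",
"code": "rs.status().members.map(function (m) {
  return { name: m.name, state: m.stateStr };
})"
}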
] | Is there a way to track Node state change in MongoDB Replica set | 2021-02-07T17:54:45.107Z | Is there a way to track Node state change in MongoDB Replica set | 2,667 |
[
"security"
] | [
{
"code": "db.getSiblingDB(\"$external\").runCommand({createUser:\"CN=192.168.31.100,OU=KernelUser,O=MongoDB,ST=New York,C=US\",roles:[{role:'root',db:'admin'}]})",
"text": "when I send\ndb.getSiblingDB(\"$external\").runCommand({createUser:\"CN=192.168.31.100,OU=KernelUser,O=MongoDB,ST=New York,C=US\",roles:[{role:'root',db:'admin'}]}),return the error:Cannot create an x.509 user with a subjectname that would be recognized as an internal cluster member.why?\nI do all follow “Cloud: MongoDB Cloud”\n\nc145966×224 75.5 KB\n\n\nc145899×238 34 KB\n",
"username": "join_mic"
},
{
"code": "",
"text": "May be the server and client certificates are using same values for O,OU fields\nDN(distinguished name) from client certificate subject should differ from that of server",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Yeah! Is solved! I change the ‘CN=127.0.0.1’ to ‘CN=192.168.31.100’,\nand ready another visualserver for client ,Server.pem(192.168.31.100):\nsubject= CN=192.168.31.100,OU=KernelUser,O=MongoDB,ST=New York,C=USClient.pem(192.168.31.110):\nsubject= CN=customer,OU=customer,O=MongoDB,ST=New York,C=USthank you very much!\n ",
"username": "join_mic"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cannot create an x.509 user with a subjectname that would be recognized as an internal cluster member | 2021-02-06T12:35:50.187Z | Cannot create an x.509 user with a subjectname that would be recognized as an internal cluster member | 3,198 |
|
null | [
"queries"
] | [
{
"code": "_id:601d08fcffc1a9cb3c104b5d\ncanIpForward:true\ndisks:0\ndiskSizeGb:\"125\"\n",
"text": "HiHow do I find all documents where “diskSizeGb” > 200 (integer)Document:Thanks\nFelix",
"username": "Felix_Nielsen"
},
{
"code": "{\"$match\" : {\"$expr\" : {\"$gt\" : [{\"$toInt\" :\"$diskSizeGb\"} , 200]}}}\nfind({\"$expr\" : {\"$gt\" : [{\"$toInt\" :\"$diskSizeGb\"} , 200]}})\n",
"text": "HelloThere is an aggregation operator https://docs.mongodb.com/manual/reference/operator/aggregation/toInt/You can convert the string field to int and then compare.If aggregation add a match stageIf find*$expr allows us to use aggregate operators to query",
"username": "Takis"
},
{
"code": "",
"text": "Thanks a lot, mongo is really cool - my first real work with itdiskSizeGb is part of an array, so the find query don’t quite work, see screenshot",
"username": "Felix_Nielsen"
},
{
"code": "{\"$expr\" :\n {\"$allElementsTrue\" :\n {\"$map\" :\n {\"input\" : \"$disks\",\n \"as\" : \"disk\",\n \"in\" : {\"$gt\" : [{\"$toInt\" : \"$$disk.diskSizeGb\"} , 200]}}}}}\n{\"$expr\" :\n {\"$anyElementsTrue\" :\n {\"$map\" :\n {\"input\" : \"$disks\",\n \"as\" : \"disk\",\n \"in\" : {\"$gt\" : [{\"$toInt\" : \"$$disk.diskSizeGb\"} , 200]}}}}}\n",
"text": "You can use map and anyTrue or allTrue,put the expression in match or find.This keeps a document,if all disks >200 GB sizeThis keeps a document,if at least one disk >200 GB sizeYou can do it with reduce,also,reduce in true or false.",
"username": "Takis"
},
{
"code": "diskSizeGb$toIntdiskSizeGb",
"text": "Welcome to the MongoDB Community @Felix_Nielsen!If this is going to be a common query, the most efficient approach would be to either change diskSizeGb to an integer (recommended) or to add another field which has the string value converted to an integer. The integer field should also be indexed.You can convert from string to integer via aggregation processing with $toInt (MongoDB 4.0+) as @Takis suggested, but that approach will result in evaluating matches for every document in your collection.An indexed range query on diskSizeGb without type conversion will only have to examine matching index keys. For more insight into how queries are processed, have a look at the Explain Results from the query planner. MongoDB Compass (a free desktop UI) has a helpful Explain Plan feature with a visual summary of the raw explain metrics.For some great tips & tricks on understanding query performance, I also recommend watching Let’s .explain() Performance from last year’s MongoDB.live conference.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "anyElementsTrue",
"text": "anyElementsTrueThanks @Takis works great!I did have some regex changing the json when I imported data, but wanted to see how it could be done without any pre-conversion.Is there an easy way to change the string to int on the data schema? - so I can run it ones after a fresh importThanks again!\nFelix",
"username": "Felix_Nielsen"
}
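{
"text": "For the one-off conversion after import: on MongoDB 4.2+ an update can take an aggregation pipeline, so the nested strings can be rewritten in place (a sketch against the structure shown in the screenshot):",
"code": "db.coll.updateMany({}, [
  { $set: { disks: { $map: {
      input: \"$disks\",
      as: \"d\",
      in: { $mergeObjects: [ \"$$d\",
            { diskSizeGb: { $toInt: \"$$d.diskSizeGb\" } } ] }
  } } } }
])"
}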
] | Find with string greater than | 2021-02-05T20:47:28.329Z | Find with string greater than | 23,733 |
null | [
"licensing"
] | [
{
"code": "",
"text": "Hi,I am interested in deploying MongoDB on target workstations for our software (possible licence issues and best practices)I am working on a goverment funded project for an educational facility. We are planning to ship some kind of installer that sets up the software and its dependencies on the target workstations.Would it be possible to distribute some kind of minimal MongoDB-backend with our software. I don’t see any problem in setting up the base dataset, once there is some kind of mongodb server installed. I do worry about licence issues and I wonder what would be the best practice to provide the users of our software with a lean and unobstusive db backend.I admit that I did not have much time to search the documention and this forum for answers to my question. If this topic is already covered, please provide a link.Best regards",
"username": "bugblatterbeast"
},
{
"code": "",
"text": "I’ve covered all the deployment issues.I am still worrying a bit about license issues because sadly we can’t avoid to include a closed-source shared library in our project.",
"username": "bugblatterbeast"
},
{
"code": "",
"text": "Hello , This is an issue i am facing also - have you got an answer on this?",
"username": "Eyal_Zamir"
},
{
"code": "systemLog:\n destination: file\n logAppend: false # we want to keep the logs short\n path: /PORTABLE_MONGODB_FOLDER/mongod.log\n\nstorage:\n dbPath: /PORTABLE_MONGODB_FOLDER/data\n journal:\n enabled: true\n\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /PORTABLE_MONGODB_FOLDER/mongod.pid\n\nnet:\n port: CUSTOM_PORT\n bindIp: 127.0.0.1\n/PORTABLE_MONGODB_FOLDER/bin/mongod -f /PORTABLE_MONGODB_FOLDER/mongod.confkill $(cat /PORTABLE_MONGODB_FOLDER/mongod.pid)",
"text": "Hi Eyal, I am still a beginner regarding MongoDB. But deployment was surprisingly easy.We are still in development/testing and for our test workstations we are using this (I don’t know if it’s best practice, but it’s working perfectly fine):We are shipping the tgz/zip versions of the MongoDB software with a small setup script, that generates a minimal configuration file with absolute paths for system-log, storage and pid-file.Additionally some kind of init-file is created, that provides the database path to our software. When our software launches we start the mongodb daemon binary with the -f option so that it uses our custom configuration and not searches for a global one./PORTABLE_MONGODB_FOLDER/bin/mongod -f /PORTABLE_MONGODB_FOLDER/mongod.confAnd killing it with the process ID provided in the pid file.kill $(cat /PORTABLE_MONGODB_FOLDER/mongod.pid)This is working like a charm on linux and windows workstations (with small adjustments for windows). It is very lean and totally unubstrusive. We don’t need any installation. We can also access the db at a custom port and prevent interference with another mongodb that the customer might have already installed. All mongodb-related files are in one directory.I don’t know, if it would even be possible to use relative paths in the configuration. Then we wouldn’t even need the setup script.I haven’t figured out all the licensing issues. Seems like it wouldn’t be any problem at all if our software was completely open source. Unfortunately, we’re depending on at least one closed source static library. We couldn’t find out, if it would be OK to ship the mongodb binaries together with this closed source library. Therefore we plan to make the install script download the mongodb binaries instead of shipping them once we deliver to customers.",
"username": "bugblatterbeast"
}
] | Shipping MongoDB with our project | 2020-11-18T18:35:30.176Z | Shipping MongoDB with our project | 3,863 |
null | [] | [
{
"code": "",
"text": "Hi all,I dropped an entire collections and db (sample_training.inspections) by mistake and need this data set now for Logic Operators Quiz: chapter 4 question of :How many businesses in the sample_training.inspections dataset have the inspection result “Out of Business” and belong to the “Home Improvement Contractor - 100” sector?Is there a way to get this data set back?",
"username": "fiopwk_N_A"
},
{
"code": "",
"text": "Check this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi I am sarath",
"username": "Sarath_babu_1"
},
{
"code": "",
"text": "Hi @fiopwk_N_A,Hope your issues are resolved now. Let me know if you are still facing any issue.",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Hi @Sarath_babu,Welcome to MongoDB University discussion forum. Hope you are liking the M001 course. Let me know if you have any questions regarding this course.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "@Sarath_babu yes, my issue is resolved. I found a thread in the forum that had a link to repo with the clusters I needed to restore. Should I close this as resolved?",
"username": "fiopwk_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Regenerate sample_training.inspections | 2021-01-17T11:33:57.858Z | Regenerate sample_training.inspections | 2,357 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "Is it possible to have some realm objects that are not synced and other realm objects that are synced in the same app? Or do they all have to be either synced or not? Thanks for any help that you can offer.",
"username": "Deji_Apps"
},
{
"code": "",
"text": "Hello Deji_Apps! Welcome to the community!So short answer, yes.Long answer, you need to have two separate realms, one realm that is synced which is going to have the objects linked that you want to have synced, and another realm that’s local only and not synced, which will hold the objects that are not intended to be synced to the cloud.Does that make sense?Regards,Brock.",
"username": "Brock_GL"
},
{
"code": "let realm = try! Realm()let app = App(id: YOUR_REALM_APP_ID)\n// Log in...\nlet user = app.currentUser\nlet partitionValue = \"some partition value\"\nvar configuration = user!.configuration(partitionValue: partitionValue)\nRealm.asyncOpen(configuration: configuration) { result in...\nlet realm = try! Realm(configuration: configuration(partitionValue: partitionValue))",
"text": "Yes, easy. here’s some high level code.A non-sync’d realm will be accessed like thislet realm = try! Realm()That line will talk to the local realm file called default.realm. You can add a config to point to a file other than default.realm; like this Realm(config: some config) where config contains a fileUrl you want to use as your local storage.For sync’d realms, accessing Realm is a combination of a couple of steps, including defining the partition key of the Sync’d Realm you want to access, and a user var containing the user you authenticating with… At a high level it will look more like this for the initial connection - .async open gets the ball rollingthen later on in your app, you’ll access the realm via a config containing the partition (instead of the fileUrl for local files)let realm = try! Realm(configuration: configuration(partitionValue: partitionValue))Partition vs Realm is a little confusing to start with but with Sync’ing a Realm is defined by its Partition value whereas with local files a Realm file is a Realm file and does not necessarily contain a partition.",
"username": "Jay"
},
{
"code": "",
"text": "Thank you for the reply!!!",
"username": "Deji_Apps"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Non-synced objects alongside synced objects | 2021-02-05T20:00:43.040Z | Non-synced objects alongside synced objects | 1,927 |
null | [
"node-js"
] | [
{
"code": "doc.save().then(savedDoc => { \n\nconsole.log(savedDoc) // THE UPDATES ARE SHOWING HERE!! All good till here!\n\nsavedDoc === doc; // true });\n",
"text": "i have a document with one of the fields is an array of objects.i updated one of the objects and called .save() on the document.the promise is resolved and the returned document shows the updates (code below)now here is the strange part - the updates are not written down into the database!what am I missing??",
"username": "Sangram_Chahal"
},
{
"code": "client.db(\"testdb\").collection(\"testcoll\")\n .save(doc).then(writeResult => {\n console.log(writeResult.result, writeResult.ops);\n });\n{ n: 1, ok: 1 } [ <the inserted document> ]insertOne",
"text": "Hello @Sangram_Chahal, welcome to the MongoDB Community forum.You need to pass the document to be inserted to the collection.save method. For example:The output, in addition to inserting the document into the collection, prints:{ n: 1, ok: 1 } [ <the inserted document> ]Also, note the documentation says:save is - Deprecated: use insertOne, insertMany, updateOne or updateManyIn your case, use insertOne.Also see these topics from the NodeJS Driver documentation:",
"username": "Prasad_Saya"
}
] | Peculiar behavior when using collection.save() | 2021-02-06T06:53:50.387Z | Peculiar behavior when using collection.save() | 1,652 |
[] | [
{
"code": "",
"text": "Hi everyone!I’m new with mongodb and charts and I can’t figure out somethings that seem to be basic stuff. The main thing on this question is that I to sort the “priceToday” as shown in the picture Screenshot from 2020-12-01 11-14-311438×737 387 KB. Also I’d like to put more than one value on the Tooltip Detail. Can anyone help with these questions???",
"username": "Guilherme_de_Carvalh"
},
{
"code": "priceToday",
"text": "Hi @Guilherme_de_Carvalh -Regarding the sort order: are your prices stored as strings or numbers? Strings are supposed to sort in alphabetical order (which will work for numbers only if they all have the same number of digits). However it looks like there may be a bug with the geo scatter chart where the values are not sorted correctly. We’ll look to get this fixed, however it should work today if your prices are numbers. If you are storing them as strings for some reason, you can change the type in Charts by clicking the … button on the priceToday field in the left pane and choosing Convert Type.Regarding multiple Tooltip Detail fields - this is on our backlog, but unfortunately not supported today.HTH\nTom",
"username": "tomhollander"
},
{
"code": "$toDoublepriceToday",
"text": "Hello @tomhollander. You’re right, I wrongly saved the prices as strings. However, I converted them on the aggregation (with $toDouble) and on the Convert Type and the problem persists. It’s a little weird, because it’s able to bin, and when it does, it sorts correctly, but if I toogle it off, it becomes random. Moreover, in the Fields column priceToday is shown as Number, but on the color it’s shown as String. I’m sending another print to show it.Screenshot from 2020-12-01 18-15-38492×504 32.6 KB .Also, is there a roadmap I can follow to see when the multiple Tootip Detail Fields will be ready? Finally, are the plans to implement tooltip on the heat maps too?",
"username": "Guilherme_de_Carvalh"
},
{
"code": "$toDecimal$sort",
"text": "Thanks @Guilherme_de_Carvalh.You’re right, they don’t seem to sort correctly as doubles either. However (for whatever reason) they do seem to sort as decimals (convert using $toDecimal) which is the best type to use for prices anyway. Can you try that and see if it works? Failing that you could also try adding an explicit $sort stage to the pipeline. We’ll also work to fix all of these sorting issues on geo scatter charts.The use of the “A” symbol on the encoding panel was a deliberate design choice, but one that we’ve seen has confused people so we’re planning on changing it. Basically the icon shows “A” for strings since Color is a category channel - similarly if you put a string in an aggregation channel it will show a “#” since the count of the string is numeric - but we’re planning to show the original type icon to avoid the confusion.Unfortunately we haven’t scheduled the work for the multiple tooltip detail fields, but I’ll see if we can sneak it in when we have some spare capacity.Tom",
"username": "tomhollander"
},
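If it's easier to keep the conversion in the chart itself, a small pipeline like the following sketch, pasted into the chart's query bar, should handle both the type conversion and the explicit sort Tom suggests (field name taken from the thread):

```js
[
  { $addFields: { priceToday: { $toDecimal: "$priceToday" } } },
  { $sort: { priceToday: 1 } }
]
```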
{
"code": "",
"text": "Hi @tomhollander, unfortunatly it didn’t work… I’m sending a full screenshot, maybe it’ll help you to see what’s going on.Screenshot from 2020-12-01 19-01-001691×861 483 KB",
"username": "Guilherme_de_Carvalh"
},
{
"code": "",
"text": "Interesting. A similar scenario with decimal types is working for me, but maybe it’s just luck / data dependent. We’ve basically had a regression for this chart type where sorting isn’t being applied to the series, so the behaviour is likely non-deterministic. We’ll look at the bug ASAP and should be able to get it in our release in ~3 weeks time.Tom",
"username": "tomhollander"
},
{
"code": "docker run quay.io/mongodb/charts:19.12.2docker stack deploy -c charts-docker-swarm-19.12.2.yml mongodb-charts",
"text": "Hi, @tomhollander\nI dont know how to create new post, so I replay you here .\nI try to install MongoDB Charts, but got errors!\nSo I run docker run quay.io/mongodb/charts:19.12.2 and got these outputs:\n parsedArgs\n installDir (’/mongodb-charts’)\n log\n salt\n productNameAndVersion ({ productName: ‘MongoDB Charts Frontend’, version: ‘1.9.1’ })\n gitHash (undefined)\n supportWidgetAndMetrics (undefined)\n tileServer (undefined)\n tileAttributionMessage (undefined)\n rawFeatureFlags (undefined)\n stitchMigrationsLog ({ completedStitchMigrations: })\n featureFlags ({})\n chartsMongoDBUri failure: ENOENT: no such file or directory, open ‘/run/secrets/charts-mongodb-uri’\n tokens failure: ENOENT: no such file or directory, open ‘/mongodb-charts/volumes/keys/charts-tokens.json’\n encryptionKeyPath failure: ENOENT: no such file or directory, open ‘/mongodb-charts/volumes/keys/mongodb-charts.key’\n lastAppJson ({})\n stitchConfigTemplate\n libMongoIsInPath (true)The error only happends on my Centos 7 system\ndocker stack deploy -c charts-docker-swarm-19.12.2.yml mongodb-charts works fine on Ubuntu18.04LTS.\nmy docker version is 19.03.6Thanks you very much for helping!",
"username": "He_Qingwei"
}
] | Customise on scatter chart | 2020-12-01T19:33:03.674Z | Customise on scatter chart | 2,881 |
|
null | [] | [
{
"code": "",
"text": "Hello,I recently got a notification about too many open connection to the primary in my Atlas hosted DB. Upon checking the logs, I found a repeating connection in this pattern:[listener] connection accepted from 192.168.xxx.4:50372 #502401 (2694 connections now open)\n2021-01-28T11:18:15.735+0000 I NETWORK [conn502401] received client metadata from 192.168.248.4:50372 conn502401: { driver: { name: “mongo-go-driver”, version: “v1.3.4” }, os: { type: “linux”, architecture: “amd64” }, platform: “go1.14.7”, application: { name: “MongoDB Automation Agent v10.23.3.6702 (git: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)” } }Why is the MongoDB Automation Agent making so many connections. How can I resolve this issue. Out of the 3000 max connections, it seems like almost 2500 are being used by the agent. Kindly help.Thanks!",
"username": "Sagar_Setu"
},
{
"code": "",
"text": "Hi Sagar,This definitely sounds unexpected: please open a case with the MongoDB support team (or in-UI chat help desk) – the team should investigate and ensure there isn’t something unexpected here.-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Thank you for the reply Andrew! I did contact the support team and was suggested to do a failover test. I decided to wait instead. Since then, the number of connections have been stable. Will post updates here in case anything new happens.",
"username": "Sagar_Setu"
},
{
"code": "",
"text": "Hi,\nI am facing the issue again. This time I noticed that the logs downloaded from atlas show a different number of open connections (around 1800) vs what is shown in atlas console (around 3000). When it it close to 3000 the connections begin to drop. Any clues as to what could be the reason for these extra 1200 connections not shown in the log?Thanks",
"username": "Sagar_Setu"
},
{
"code": "",
"text": "Hi Sagar have you engaged with the support team in the UI?",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hello,Yes I contacted them. And they helped me understand how to analyze the logs using mtools. I was able to track the application that was causing the issue and resolve it.\nNewbie mistakes but here is the summary:\nFor anyone else reading this, make sure to download the correct logs for primary replica set. I wasted a lot of time because I was downloading logs for secondary replica set which was not reflecting the real cause of the problem. mloginfo command in mtools allows you to see the connections opened/closed. That will help you see the IP address of the source of the connection and you can track the source of problem from there.Thanks!",
"username": "Sagar_Setu"
},
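For reference, a sketch of the mtools invocation described above (the log file name is an example); run it against the log downloaded from the primary:

```sh
# Summarise connections opened/closed per client IP:
mloginfo mongod.log --connections
```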
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Too many connections from MongoDB agent | 2021-01-28T18:36:04.262Z | Too many connections from MongoDB agent | 3,641 |
null | [
"queries",
"dot-net"
] | [
{
"code": "",
"text": "I have 20 fields that users can put any information in them. They can specify if any of those fields are dates. This allows in the user interface to use a date picker and filter and apply date filters.Is there anyway with the C# driver to a LINQ date filter query on these fields?\nIdeally, I would ant to simply do:\nresult.Where(x => x.A1 != null && searchMin <= DateTime.Parse(x.A1) && DateTime.Parse(x.A1) <= searchMax);I saw that there are the $toDate and $dateFromString functions, but I have not found any implementation of this with the driver.Any help on this and what is possible would be greatly appreciated.Thanks!",
"username": "Christian_Longtin"
},
{
"code": "[BsonDateTimeOptions(Kind = DateTimeKind.Local)]_collection.Find(ltr => ltr.MailDetails[0].ReceivedDate >= strtDte && ltr.MailDetails[0].ReceivedDate <= endDte).ToList();",
"text": "I was dealing with something like this yesterday. Tried a whole range of things, like converting the DateTime to Ticks, new Dates and other things. I ended up just using a combination of[BsonDateTimeOptions(Kind = DateTimeKind.Local)]above the property in the class and it worked like normal in the Linq._collection.Find(ltr => ltr.MailDetails[0].ReceivedDate >= strtDte && ltr.MailDetails[0].ReceivedDate <= endDte).ToList();strtDte and endDte are C# DateTime objects.Hope this helps.",
"username": "Jeff_Kling"
},
{
"code": "",
"text": "I can’t change my fields to DateTime fields, because they are used to store any type of information. It can be a date, a number, a list of options. These are custom fields that can be used anyway needed, so I need to store them as strings.I really need to be able to Parse/Convert the field to a date to run my query.\nThe driver does not support everything Linq does, but it seems like this should be implemented. Particularly, since there is the $dateFromString function.The ticks is the only viable option I have found so far, but that requires a lot of overhead to manage this.\nWhat I am doing now is doing the date range filters in memory. This is not clean at all, but I do not see any other option.Any other ideas?",
"username": "Christian_Longtin"
}
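Since the LINQ provider cannot translate DateTime.Parse, one workaround is to do the conversion server-side with $dateFromString and pass the stages to IMongoCollection&lt;T&gt;.Aggregate as BsonDocument stages. A hedged sketch of the pipeline in shell syntax (field name from the question; collection name and date bounds are examples):

```js
db.items.aggregate([
  { $match: { A1: { $ne: null } } },
  // onError: null skips values that are not parseable as dates:
  { $addFields: { A1asDate: { $dateFromString: { dateString: "$A1", onError: null } } } },
  { $match: { A1asDate: { $gte: ISODate("2021-01-01"), $lte: ISODate("2021-01-31") } } }
])
```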
] | C# LINQ date filter on string field | 2021-01-26T19:54:55.533Z | C# LINQ date filter on string field | 9,637 |
null | [
"compass"
] | [
{
"code": "connect ECONNREFUSED 127.0.0.1:27017\n",
"text": "Since the recent update to MongoDB Compass (now 1.21.0, macOS Catalina 10.15.4), it is not able to connect to my remote server using an SSH tunnel.I use simple username/password authentication for both ssh and mongoDB. The very same parameters worked (and still work) with older version of Compass. Also using pymongo with SSH tunnel forwarder works without any issues - so this seems to be limited to the most recent version of compass in my case. I tried both community edition and the “normal” one.After clicking on connect and waiting for ca. a minute, I receive the message:Is it a bug, or does the new version requires any special settings?",
"username": "Peter_Hevesi"
},
{
"code": "",
"text": "I also have the same problem here, can’t login anymore after last update.",
"username": "William_Rufino"
},
{
"code": "",
"text": "I am facing this issue too.\nBoth with a connection with user/pass SSH Tunnel and other with SSH Identity File.It’s astonishing frustrating how Compass can manage to get something broken on every update…",
"username": "hmaesta"
},
{
"code": "",
"text": "Hi @Peter_Hevesi, @William_Rufino, @hmaesta,Welcome to the MongoDB Community.I see that Peter has already raised a bug report for this: COMPASS-4268: Not able to connect over SSH Tunnel any more. The report mentions macOS Catalina 10.15.4; if you are using a different O/S it would be helpful to comment on the Jira issue to help the team identify if this affects ofter platforms.Please upvote & watch COMPASS-4268 for updates.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi,\nI’m experimenting the same issue and I was wondering if we could rollback to the previous compass version while waiting for a fix.\nIs it downloadable somewhere?",
"username": "Marco_Mocca"
},
{
"code": "https://downloads.mongodb.com/compass/",
"text": "Hi @Marco_Mocca, if you know the version that you are looking for you can put that at the end of https://downloads.mongodb.com/compass/ to get an older version.Let’s say you wanted Compass 1.16.0 for Mac the correct location would be https://downloads.mongodb.com/compass/mongodb-compass-1.16.0-darwin-x64.dmg.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "You can find all the releases of Compass on Github: Releases · mongodb-js/compass · GitHub",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Issue seems to be resolved since 1.21.2 - I tried and on my end and it was working.\nThanks for the quick fix.",
"username": "Peter_Hevesi"
},
{
"code": "",
"text": "I’m still getting this on Windows 10. After 1.21.0 I can’t connect using ssh from Compass, but if I make a ssh tunnel using putty or something else and connect to localhost 27017 for example it works.In Compass the error is connect ECONNREFUSED 127.0.0.1:29468 I don’t specify that port anywhere I’m trying to connect to a default 27017 port but it looks like when using ssh Compass ignores the provided port.Or am I missing something here?",
"username": "Lucian_Izvor"
},
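As a stopgap while the bug is open, the manual tunnel described above can also be built with plain OpenSSH (host and user are placeholders); Compass is then pointed at localhost:27017 with its SSH options left empty:

```sh
# Forward local port 27017 to the remote mongod through SSH:
ssh -N -L 27017:localhost:27017 user@remote-host
```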
{
"code": "",
"text": "All of you should switch to NoSQLBooster,\nat least that’s what I did (SSH TUNNEL connection with SSH KEYS is still broken for all debian distros)",
"username": "Naresh_Bansal"
}
] | SSH Tunnel not working with Compass 1.21.0 | 2020-05-04T11:08:26.536Z | SSH Tunnel not working with Compass 1.21.0 | 10,032 |
[
"react-native"
] | [
{
"code": "Schema validation failed due to the following erro…hronized Realm but none was found for type 'Item'\"Schema validation failed due to the following errors: -There must be a primary key property named '_id' on a synchronized Realm but none was found for type 'Item'\"export const ItemSchema = { name: \"Item\", properties: { _id: \"objectId\", itemCode: \"string\", itemDescription: \"string\", itemPrice: \"string\", partition: \"string\", }, }; ",
"text": "I’m getting this error when trying to save items in the database\nSchema validation failed due to the following erro…hronized Realm but none was found for type 'Item'and\n\"Schema validation failed due to the following errors: -There must be a primary key property named '_id' on a synchronized Realm but none was found for type 'Item'\"this is how I wrote my schema:\nexport const ItemSchema = { name: \"Item\", properties: { _id: \"objectId\", itemCode: \"string\", itemDescription: \"string\", itemPrice: \"string\", partition: \"string\", }, }; \nand the function I’m expecting everything from:\n",
"username": "Tony_Ngomana"
},
{
"code": "",
"text": "Maybe nothing but… Item vs Items?",
"username": "Jay"
},
{
"code": "",
"text": "oh sorry I managed to fix everything",
"username": "Tony_Ngomana"
},
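For anyone hitting the same error: with a synced Realm the fix is usually declaring the primary key at the schema level. A minimal sketch based on the schema in the question:

```js
export const ItemSchema = {
  name: "Item",
  primaryKey: "_id", // required on a synchronized Realm
  properties: {
    _id: "objectId",
    itemCode: "string",
    itemDescription: "string",
    itemPrice: "string",
    partition: "string",
  },
};
```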
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Schema Validation failed | 2021-02-04T21:16:03.500Z | Schema Validation failed | 4,926 |
|
null | [
"mongoose-odm",
"connecting"
] | [
{
"code": "var db = \"mongodb://localhost:27017/example\";\nmongoose.connect(db, { useNewUrlParser: true, useUnifiedTopology: true });\n\nconst conSuccess = mongoose.connection\nconSuccess.once('open', _ => {\n console.log('Database connected:', db)\n})\n",
"text": "Hi - I am trying to view mongodb collections (just to view) in browser URL by accessing localhost:27017/example.The mongod server started in cmg prompt.this is the code writtem:in the terminal hen I ran it says\n“database connected”when I access the browser localhost as\nlocalhost:27017/exampleIt looks like you are trying to access MongoDB over HTTP on the native driver port.please help me how I can view the db collections in browser. Thanks",
"username": "PP_SS"
},
{
"code": "mongo ...\n",
"text": "Hi @PP_SS,There used to be a brief management http interface but it was removed:https://docs.mongodb.com/manual/core/security-hardening/#http-status-interface-and-rest-apiTo access the database via UI we recommend connecting via compass product:MongoDB Compass, the GUI for MongoDB, is the easiest way to explore and manipulate your data. Download for free for dev environments.You can also connect this instance to our Cloud Manager service and use data explorer via the automation agent:Otherwise a driver or a mongo shell can show collections and query them.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi - Thanks for the quick reply. MongoDb compass is up and running. but not the UI, i am able to access the localhost on example db to view it.",
"username": "PP_SS"
},
{
"code": "",
"text": "@PP_SS, if I understand correctly you are good to go right?",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I am stuck why is the localhost:27017/example in UI saying still error asIt looks like you are trying to access MongoDB over HTTP on the native driver port.mongodb compass inside i am able to see the database and its collections",
"username": "PP_SS"
},
{
"code": " localhost:27017/example",
"text": "@PP_SS,You can’t use localhost:27017/example as url.It only works via a MongoDB native driver.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I see that Udemy course that accessed localhost:4000/books but not the localhosst:27017/exampleand I accessed like\nhttp://localhost:4000/books it says errorReferenceError: exec is not defined.FYI Example is a database and books collection is created in it. in mongodb compas, i am able to access example database and i see books collection in it. not syre why is this error comes. do you have any idea?",
"username": "PP_SS"
},
{
"code": "",
"text": "They definitely have an application and web server on 4000 port showing web pages.Its not something coming built-in with MongoDB.",
"username": "Pavel_Duchovny"
},
{
"code": "var express = require(\"express\");\nvar app = express();\nvar bodyParser = require(\"body-parser\");\nvar mongoose = require(\"mongoose\");\nvar Book = require(\"./book.model\");\n\n\nvar db = \"mongodb://localhost:27017/example\";\nmongoose.connect(db, { useNewUrlParser: true, useUnifiedTopology: true });\n\nconst conSuccess = mongoose.connection\nconSuccess.once('open', _ => {\n console.log('Database connected:', db)\n})\n\nconSuccess.on('error', err => {\n console.error('connection error:', err)\n})\n\nvar port = 4000;\n//REST get\napp.get('/', function(req,res){\n res.send(\"happy to be here\");\n});\n\n\n//get all \"books\" collections\napp.get('/books', function(req,res) {\n console.log(\"get all books\");\n Book.find({})\n exec(function(err, books){\n if(err){\n res.send(\"error has occured\");\n } else {\n console.log(books);\n res.json(books);\n }\n }); \n});\n\n\napp.listen(port, function () {\n console.log(\"app listening on port \" + port);\n});\nvar mongoose = require(\"mongoose\");\n\nvar Schema = mongoose.Schema;\n\nvar BookSchema = new Schema({\n\n title: String,\n\n author: String,\n\n category: String,\n\n});\n\nmodule.exports = mongoose.model(\"book\", BookSchema);",
"text": "I have created that application. after that only i am running the server.Here is the code:and book.model.js code",
"username": "PP_SS"
},
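Judging from the snippet above, the likely culprit is the missing dot before exec: it is being called as a bare function instead of being chained onto the Mongoose query, which is exactly what raises “ReferenceError: exec is not defined”. A corrected sketch of that route:

```js
// Get all documents from the "books" collection:
app.get('/books', function (req, res) {
  Book.find({}).exec(function (err, books) { // note the '.' before exec
    if (err) {
      res.send("error has occurred");
    } else {
      console.log(books);
      res.json(books);
    }
  });
});
```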
{
"code": "",
"text": "can anyone please help me?",
"username": "PP_SS"
},
{
"code": "",
"text": "Could you share a screenshot of the error and status of the MongoDB server on this local host?",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Sure, I have attached three screenshots onfile saved folders, mongodb server start and url entered port.VSCode_censored1139×602 55.9 KB url_censored (1)1359×340 56.6 KB Uploading: mongodb-server.PNG…Uploading: mongodb-server.PNG(1)…",
"username": "PP_SS"
},
{
"code": "",
"text": "Sorry that screenshot uploading takes always reading and not able to sent thrid one.",
"username": "PP_SS"
},
{
"code": "",
"text": "mongodb-server1345×459 40.1 KBthis screenshot is server started by entering mongod in cmd propmpt. i have no idea why it is not working in URL.",
"username": "PP_SS"
},
{
"code": "",
"text": "@PP_SS,Error seems to be a js error and not MongoDB related.Are you sure this mongoose version is compatible with 4.4 MongoDB server?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for your reply, Pavel. I will check with javascript code again.I am using mongoose version 5.11.13 is that compatible with 4.4 mongodb server? I will google on it as well. thanks",
"username": "PP_SS"
},
{
"code": "",
"text": "Hmmm might be A node bug:I have in my gulpfile\n\n``` javascript\n/*globals require*/\n'use strict';\n\nvar gul…p = require('gulp'),\n gutil = require('gulp-util'),\n minifyCss = require('gulp-minify-css'),\n autoPrefixer = require('gulp-autoprefixer'),\n notify = require('gulp-notify'),\n less = require('gulp-less'),\n browserSync = require('browser-sync'),\n rename = require('gulp-rename'),\n concat = require('gulp-concat'),\n lessAssetDir = 'app/assets/less',\n cssDir = 'public/css';\n\ngulp.src(lessAssetDir + '/styles.less')\n .pipe(notify(\"Found file: <%= file.relative %>!\"));\n```\n\nand when I run gulp I get\n\n```\n/Users/xxx/node_modules/gulp-notify/node_modules/node-notifier/lib/utils.js:14\n var notifyApp = exec(shellwords.escape(notifier) + ' ' + options.join(' '),\n ^\nReferenceError: exec is not defined\n at Object.module.exports.command (/Users/xxx/node_modules/gulp-notify/node_modules/node-notifier/lib/utils.js:14:19)\n at /Users/xxx/node_modules/gulp-notify/node_modules/node-notifier/lib/notifiers/terminal-notifier.js:38:13\n at /Users/xxx/node_modules/gulp-notify/node_modules/node-notifier/lib/utils.js:63:14\n at ChildProcess.exithandler (child_process.js:645:7)\n at ChildProcess.EventEmitter.emit (events.js:98:17)\n at maybeClose (child_process.js:753:16)\n at Socket.<anonymous> (child_process.js:966:11)\n at Socket.EventEmitter.emit (events.js:95:17)\n at Pipe.close (net.js:465:12)\n```\n\nPackage versions\n\n```\n$ npm -v\n1.4.14\n\n$ node -v\nv0.10.28\n\n$ gulp -v\n[23:01:33] CLI version 3.7.0\n[23:01:33] Local version 3.7.0\n\[email protected]\n\nOSX 10.9.3\n```\n\nWhat am I doing wrong?Please try to update node and npm versions…Thanks,\nPavel",
"username": "Pavel_Duchovny"
}
] | Mongodb localhost connection | 2021-02-03T16:58:35.883Z | Mongodb localhost connection | 115,132 |
[
"atlas-functions",
"react-js"
] | [
{
"code": "",
"text": "Hi\nI’ve been following this guide (created by a Mongodb developer advocate) to upload files from my to s3 from ReactAn Instagram-like application created using MongoDB Stitch and React.js - GitHub - aydrian/stitchcraft-picstream: An Instagram-like application created using MongoDB Stitch and React.jsSince this uses stitch and I’m using Realm, I found the guide below from the Realm SDK to upload to s3 (which uses a Realm function as opposed to directly using the service from the client side as in the stitch example above):This works fine for small file but I hit a limit when uploading larger files (>100 Mb) as the Realm function has a limit in size of arguments passed.I could not find any example of using the Realm SDK to access the AWS service directly (like in the stitch example), which I’m thinking might have a larger file limitAny ideas/suggestions on how to do this with Realm?Thanks",
"username": "Keanen_McCarthy"
},
{
"code": "",
"text": "Hey @Keanen_McCarthy -At the moment you would have to upload directly from the client(React) to S3 if your file is too large for Realm - this feature is actually not available in the new Realm SDKs.",
"username": "Sumedha_Mehta1"
}
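One common shape for that client-side upload, sketched here with hypothetical names and assuming the aws-sdk v2 package is available to the Realm function as a dependency: have a function return a presigned S3 URL (a small string, well under the argument-size limit), then PUT the file from the browser:

```js
// Realm function (server side): returns a short-lived upload URL.
exports = function (fileName, contentType) {
  const AWS = require("aws-sdk");
  const s3 = new AWS.S3({ region: "us-east-1" });
  return s3.getSignedUrl("putObject", {
    Bucket: "my-bucket", // hypothetical bucket name
    Key: fileName,
    ContentType: contentType,
    Expires: 300, // URL validity in seconds
  });
};

// React client: upload the (possibly large) file directly to S3.
async function uploadFile(realmUser, file) {
  const url = await realmUser.functions.getUploadUrl(file.name, file.type);
  await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file,
  });
}
```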
] | Uploading file using AWS s3 service | 2020-11-18T19:19:24.394Z | Uploading file using AWS s3 service | 5,246 |
|
null | [
"backup"
] | [
{
"code": "",
"text": "Is it possible to restore a single database from MongoDB Atlas Cloud backup? I found documentation only for the entire cluster restore: https://docs.atlas.mongodb.com/backup/cloud-backup/restoreIt seems that this is possible with legacy backups: https://docs.atlas.mongodb.com/backup/legacy-backup/restore-by-query",
"username": "Anshu_Avinash"
},
{
"code": "",
"text": "Hi @Anshu_Avinash,Welcome to MongoDB community.Cloud backup does not allow a queryable backup like legacy could.BUT you can easily restore your backup to a temp cluster and use mongodump to get the collection you need.Let me know if that works for you.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
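A sketch of that workaround (URIs, database and collection names are placeholders; exact flag combinations vary a little between tools versions): after restoring the snapshot to a temporary cluster, dump just what you need and load it into the target:

```sh
# Dump one database from the temporary restore cluster:
mongodump --uri "mongodb+srv://user:pass@temp-cluster.example.mongodb.net/mydb" --out ./dump

# Restore only that namespace into the target cluster:
mongorestore --uri "mongodb+srv://user:pass@target-cluster.example.mongodb.net" \
  --nsInclude "mydb.*" ./dump
```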
{
"code": "",
"text": "Thanks @Pavel_Duchovny for your prompt reply and for the workaround.\nI was also curious that if there is a technical reason for this being missing in Atlas, since the same feature is present in Cloud Manager: Restore a Single Database or Collection — MongoDB Cloud Manager.",
"username": "Anshu_Avinash"
},
{
"code": "",
"text": "Yes @Anshu_Avinash, this feature can be implemented by having the mechanism of old backup which was slower and had many potential break points. Cloud Manager still uses this method.Now we use cloud provider snapshots which allow you to restore faster and cheaper storage. So in case of need having a provisioned cluster to dump and restore based on your need and size makes much more sense than using this queryable backup…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks @Pavel_Duchovny for the detailed explanation.",
"username": "Anshu_Avinash"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Restore a single DB from MongoDB Atlas Cloud backup | 2021-02-05T11:05:52.083Z | Restore a single DB from MongoDB Atlas Cloud backup | 4,458 |
null | [] | [
{
"code": "",
"text": "Hi Everyone,This is Arda, founder, and CEO of Sertifier Inc.At Sertifier Inc., we have two different products being certificate automation app Sertifier - www.sertifier.com and skills-based microlearning app Verified - verified.cvYes, we are based in Istanbul, Turkey, and have this marvelous team of 13 individuals. Recently joined the Mongo DB for Startups program and willing the acquire a great network where we can create with collaboration.Best",
"username": "Arda_Helvacilar"
},
{
"code": "",
"text": "Hello @Arda_Helvacilar, welcome to the MongoDB Community forum!This page has a Menu at the top where you will find useful resources like documentation, videos, articles, and learning. Cheers!",
"username": "Prasad_Saya"
},
{
"code": "",
"text": " Glad to have you and your team as part of our community, Arda!",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "@Prasad_Saya thank you so much for your kind answer!",
"username": "Arda_Helvacilar"
},
{
"code": "",
"text": "@Lauren_Schaefer on behalf of my team, we are honored ",
"username": "Arda_Helvacilar"
}
] | Greetings from Turkey! | 2021-02-04T21:43:19.197Z | Greetings from Turkey! | 2,952 |
null | [] | [
{
"code": "",
"text": "Hi Team,We are having ebook reading application developed in IOS platform. We want to use swiftLint tool to check whether accessibility enabled for all UIControls which are in storyboard as well as programmatically initialised controls.Our requirement: We are supporting voice over for visually disabled people. Accessibility should be enabled for all UI controls (eg : UIButton, UILabel… etc) in application to support voice over.Use case : When we make a build, if the controls doesn’t have accessibility enabled true property, then the build should fail in Xcode. SwiftLint tool should fail builds if accessibility tags are not present for UIControls in the application.Kindly guide us how to get all UIElements in the application and how to check the property values of the UIElement instance.For example, get all the UILabels and check whether the UILabels are set as IsAccessibilityElement is True. Likewise, we need to get all UIElements and check the IsAccessibilityElement property value for each UIElement instance.Could you please suggest on the above so that, we can try our best to implement it in SwiftLint.",
"username": "karthika_beulah"
},
{
"code": "",
"text": "Hi @karthika_beulah. SwiftLint is a community-run open-source project. It would be best to repost your message as an issue at Issues · realm/SwiftLint · GitHub to get attention from the community.",
"username": "Sergey_Gerasimenko"
},
{
"code": "",
"text": "Thanks for your response. I posted in Issues · realm/SwiftLint · GitHub too. But i didn’t get any response. Is there any SwiftLint person to contact. it will be helpful if i get ant contact mail id to clarify it.",
"username": "karthika_beulah"
}
] | SwiftLint tool should fail builds if accessibility tags are not present for UIControls in iOS | 2021-02-02T05:32:36.808Z | SwiftLint tool should fail builds if accessibility tags are not present for UIControls in iOS | 2,105 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "HI,i want to create create a custom user defined aggregation function in mongo db how to do it??",
"username": "anubhav_tarar"
},
{
"code": "",
"text": "Hi @anubhav_tarar,Welcome to MongoDB community.I think you are looking for $function or $accumulator:https://docs.mongodb.com/manual/reference/operator/aggregation/function/Those are available from 4.4 .Otherwise there is a mapReduce commandThanks\nPavel",
"username": "Pavel_Duchovny"
},
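For illustration, a minimal $accumulator sketch, runnable in the shell on 4.4+ (collection and field names are made up), that re-implements a per-group sum with user-defined JavaScript:

```js
db.sales.aggregate([
  { $group: {
      _id: "$storeId",
      total: { $accumulator: {
        init: function () { return 0; },                      // starting state
        accumulate: function (state, amount) { return state + amount; },
        accumulateArgs: ["$amount"],                          // values fed into accumulate
        merge: function (s1, s2) { return s1 + s2; },         // combine partial states
        lang: "js"
      } }
  } }
])
```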
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to create a custom udaf in mongo db? | 2021-02-05T09:12:46.017Z | How to create a custom udaf in mongo db? | 1,503 |
null | [
"crud",
"schema-validation"
] | [
{
"code": "shopCollection.updateOne({_id:ObjectId(\"601c12ae89e49906305488b5\")},\n {$push:{products:{$ref: \"products\",$id: ObjectId(\"601c0a532da6b110a06d4b0a\")}}},function(err,result){\n if(err){\n console.log(err);\n }\n })\nproducts:{\n bsonType:\"array\",\n items:{\n bsonType:\"object\",\n properties:{\n $ref:{\n bsonType:\"products\",\n description: \"required\"\n },\n $id:ObjectId\n }\n }\n }\n",
"text": "Getting the following error while updating an array of objects with $ref field.\nThe DBRef $ref field must be followed by a $id fieldUsing the following query:products array validation schema:",
"username": "Gurneet_Singh"
},
{
"code": "",
"text": "Hi @Gurneet_Singh,Welcome to MongoDB community.Not sure what driver you are using but not certain that what you are performing is valid outside of the shell.In mongoose I see a different syntax:\nhttps://mongoosejs.com/docs/guide.html#selectPopulatedPathsIn node js as well:\nhttp://mongodb.github.io/node-mongodb-native/3.6/api/DBRef.htmlThanks,\nPavel",
"username": "Pavel_Duchovny"
}
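If this is the plain Node.js driver, a hedged sketch (ObjectIds from the question; collection names assumed) using the driver's DBRef class, which serialises $ref and $id in the order the server requires:

```js
const { MongoClient, ObjectId, DBRef } = require("mongodb");

// Inside an async function, with `db` a connected Db instance:
await db.collection("shops").updateOne(
  { _id: new ObjectId("601c12ae89e49906305488b5") },
  { $push: { products: new DBRef("products", new ObjectId("601c0a532da6b110a06d4b0a")) } }
);
```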
] | Reference fields not updating | 2021-02-04T19:00:30.329Z | Reference fields not updating | 4,958 |
[
"mongoose-odm",
"indexes"
] | [
{
"code": "const mongoose = require('mongoose');\nconst Schema = mongoose.Schema;\n\nconst userSchema = new Schema({\n username: { type: String, unique: true},\n allTimeStats: [{\n amountOfQuestions: Number,\n amountOfCorrectAnswers: Number,\n category: String\n }],\n ongoingGames: [{\n opponent: String,\n correctAnswers: Number,\n opponentCorrectAnswers: Number,\n dateOfOpponentsLastTurn: String,\n img: String,\n language: String\n }],\n searchingForGame: Boolean\n});\n\nconst User = mongoose.model('User', userSchema);\nmodule.exports = User;\nconst express = require('express');\nconst router = express.Router();\nconst User = require('../models/user');\n\n\n// Add user\nrouter.post('/', async (req, res) => {\n const user = new User({\n username: req.body.username,\n allTimeStats: req.body.allTimeStats,\n ongoingGames: req.body.ongoingGames,\n searchingForGame: req.body.searchingForGame\n });\n\n try {\n const savedUser = await user.save();\n res.json(savedUser);\n } catch(err){\n res.json({message: err})\n }\n});\n",
"text": "Hi, I am struggling to understand why Mongoose is allowing me to create multiple documents with the same username, despite having set the “unique: true” on my username field. Here is my model:And here is the call to the database:And here is a screenshot of multiple users having the same name I just dropped the collection and tried again, and when I go to MongoDB Atlas and the collection and into Indexes, I don’t see an index for unique:true for username. Any help is appreciated.",
"username": "atk3"
},
{
"code": "uniqueunique{ name: { type: String, unique: true } }namenameunique",
"text": "Hello atk3,Welcome to the MongoDB forums.Has the unique index been created?Can you run the db.collection.getIndexes() and send a screenshot of the results? Overall Mongoose isn’t really much of a solution for index management.Due to this you may want to consider using the MongoDB Shell to handle these things, but that said indexes only have to be built if they are fresh, or you ran db.dropDatabase().Mongoose v7.0.0: FAQ has an explanation on the unique section: Q . I declared a schema property as unique but I can still save duplicates. What gives?A . Mongoose doesn’t handle unique on its own: { name: { type: String, unique: true } } is just a shorthand for creating a MongoDB unique index on name . For example, if MongoDB doesn’t already have a unique index on name , the below code will not error despite the fact that unique is true.Regards,Brock_GL",
"username": "Brock_GL"
},
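For reference, a quick shell sketch for creating the index by hand and verifying it exists (the Mongoose model 'User' maps to the users collection):

```js
// Create the unique index that MongoDB actually enforces:
db.users.createIndex({ username: 1 }, { unique: true })

// Verify it was built:
db.users.getIndexes()
```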
{
"code": "",
"text": "Hi Brock,Thank you for taking time out of your day to help me. The problem was that the index hadn’t been created, which is what I suspected, as it wasn’t showing up in MongoDB Atlas. The problem here was sleep deprivation, and not MongoDB or Mongoose.",
"username": "atk3"
},
{
"code": "",
"text": "Hello atk3,No worries at all! Should you need any further assistance, just let us know!Regards,Brock_GL",
"username": "Brock_GL"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why is Mongoose letting me create several documents with same username? | 2021-02-05T01:17:34.800Z | Why is Mongoose letting me create several documents with same username? | 7,730 |
|
null | [
"security"
] | [
{
"code": "",
"text": "I’d like to limit my Atlas cluster to just my Kubernetes cluster instead of the default 0.0.0.0/0 range.Should I add access entries for the External IPs of every node in the cluster, or is there something more to it?",
"username": "Michael_Pratt"
},
{
"code": "",
"text": "default 0.0.0.0/0 rangeThere is no default allowlist, someone added this. And it is contrary to good practice.Should I add access entries for the External IPs of every node in the cluster, or is there something more to it?That is it. Unless you’re doing something fancy with your outbound traffic like nat-ing it to one particular ip.",
"username": "chris"
},
{
"code": "",
"text": "Thanks. Adding the ExternalIP for all my nodes worked, and removing the 0.0.0.0/0 entry didn’t cause any adverse effects.",
"username": "Michael_Pratt"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Limit network access to Kubernetes Cluster | 2021-02-03T18:29:04.289Z | Limit network access to Kubernetes Cluster | 1,538 |
null | [
"realm-studio"
] | [
{
"code": "",
"text": "With other databases there are plenty of ways to quickly look at the data, such as a GUI or command line tool. So far though, I have no clue how to do this with a local Realm database. Anyone know the best way to quickly poke around in the existing data?Thanks!",
"username": "Annie_Sexton"
},
{
"code": "",
"text": "Hello Annie,The appropriate guide to do this can be found here:\nRealm Studio is the official developer tool for working with local & synced realms.Regards,Brock.",
"username": "Brock_GL"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to view all data in Realm DB? (via GUI, console, CLI, client etc) | 2021-02-05T01:52:04.585Z | How to view all data in Realm DB? (via GUI, console, CLI, client etc) | 5,682 |
null | [
"replication",
"security"
] | [
{
"code": "",
"text": "Is it possible to set a mongodb replica set, where an authentication is going to be required from any coming user (client), but without a key-file and authentication between the members of the set?When I try to set this up, I am having some problems.\nBut I am not sure if it is because I did something wrong or because it is just not possible.",
"username": "Michel_Bouchet"
},
{
"code": "",
"text": "When I try to set this up, I am having some problems.It will be easier for us to help if you describe your problems in greater details. What are the steps taken and where it fails? Screenshot and log files are all helpful.",
"username": "steevej"
},
{
"code": "",
"text": "Hello @Michel_Bouchet, according to the documentation, enabling authorization to access replia-set requires enabling internal security between the members of the replica-set. Authorization allows creation of users and assign roles to them.See Update Replica Set to Keyfile Authentication:Enforcing access control on an existing replica set requires configuring:Also, see Enable Access Control - Replica Sets and Sharded clusters:Replica sets and sharded clusters require internal authentication between members when access control is enabled.",
"username": "Prasad_Saya"
},
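For completeness, the standard keyfile setup from the docs (paths are examples):

```sh
# Generate a keyfile and distribute it to every member of the replica set:
openssl rand -base64 756 > /etc/mongod-keyfile
chmod 400 /etc/mongod-keyfile
```

Each member's mongod then points at it via security.keyFile in its configuration, which also turns on client authentication.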
{
"code": "",
"text": "@steevej. Thanks for the reply. You’re almost right, except that if the answer to my question is NO, there is nothing of interest to describe and share.\nIf the answer is YES, I could indeed share a few things to help anyone spot an issue in what I did.\nIn other words I first need a YES or NO in order to move to the next step.",
"username": "Michel_Bouchet"
},
{
"code": "",
"text": "@Prasad_Saya. Your answer seems to mean NO. But it is not clear, “authorization” is not part of my question, I am only concerned about “authentication”.",
"username": "Michel_Bouchet"
},
{
"code": "",
"text": "@Prasad_Saya May have switched AuthN and AuthZ but the links and quotes they posted clearly state that enabling access control requires configuring the security between replica set members.",
"username": "chris"
},
{
"code": "",
"text": "OK, thanks. So to put it short and simple your answer is NO.\nThat seems to match what I have also found by searching on my side and by trying out.",
"username": "Michel_Bouchet"
}
] | Replica set authentication question | 2021-02-03T15:02:14.424Z | Replica set authentication question | 2,043 |
null | [] | [
{
"code": "Error: error adding MongoDB Private Service Endpoint Connection(AWS) to a Private Endpoint (XXXXXXXXXXXXXXXXXXXXXXXX): POST https://cloud.mongodb.com/api/atlas/v1.0/groups/XXXXXXXXXXXXXXXXXXXXXXXX/privateEndpoint/AWS/endpointService/vpce-XXXXXXXXXXXXXXXXX/endpoint: 400 (request \"PATH_PARAM_PARSE_ERROR\") One or more path parameter in the request URI /api/atlas/v1.0/groups/XXXXXXXXXXXXXXXXXXXXXXXX/privateEndpoint/AWS/endpointService/vpce-XXXXXXXXXXXXXXXXX/endpoint is malformed.",
"text": "I’m probably missing something obvious, but when trying to create a mongodbatlas_privatelink_endpoint_service (Terraform Registry) resource with Terraform, I get the following:Error: error adding MongoDB Private Service Endpoint Connection(AWS) to a Private Endpoint (XXXXXXXXXXXXXXXXXXXXXXXX): POST https://cloud.mongodb.com/api/atlas/v1.0/groups/XXXXXXXXXXXXXXXXXXXXXXXX/privateEndpoint/AWS/endpointService/vpce-XXXXXXXXXXXXXXXXX/endpoint: 400 (request \"PATH_PARAM_PARSE_ERROR\") One or more path parameter in the request URI /api/atlas/v1.0/groups/XXXXXXXXXXXXXXXXXXXXXXXX/privateEndpoint/AWS/endpointService/vpce-XXXXXXXXXXXXXXXXX/endpoint is malformed.Comparing to the API docs (https://docs.atlas.mongodb.com/reference/api/private-endpoints-endpoint-create-one), I see nothing wrong. The debugging output from Terraform (TF_LOG=TRACE) shows that the request includes the correct endpoint ID in the payload.What am I missing?",
"username": "Aaron_Santo"
},
{
"code": "",
"text": "Hi Aaron_Santo, I’m the Product Manager for our Terraform provider. We had a bug we discovered after launching v0.8.0 that I think you may have hit. We have a fix completed (INTMDB-163: Wrong order for PrivateLink Endpoint Service and detects unnecessary changes by coderGo93 · Pull Request #388 · mongodb/terraform-provider-mongodbatlas · GitHub) and will release soon once a few other fixes are completed. Without more information I can’t say for certain but this looks the same. You can also file an issue (Issues · mongodb/terraform-provider-mongodbatlas · GitHub) or reach out to support for additional assistance. Thanks!",
"username": "Melissa_Plunkett"
},
{
"code": "",
"text": "Thanks Melissa for the update, Initially i had the same error when i was trying to use mongodbatlas_privatelink_endpoint_service then i temporarily used\nmongodbatlas_private_endpoint_interface_link the old one and keep following the page and now I switched back to mongodbatlas_privatelink_endpoint_service after the fix and now its completely fine.",
"username": "jai_kumar_kunduru"
}
] | Private Endpoint API parameter malformed | 2021-01-21T20:26:34.432Z | Private Endpoint API parameter malformed | 2,287 |
null | [
"kafka-connector"
] | [
{
"code": "kafka_topickafka_keykafka_valuekafka_header_keyskafka_header_valueskafka_topic",
"text": "I want to implement Outbox Pattern in our microservices using Mongo-Kafka connector, in my outbox(a MongoDB collection) I store topic data using these fields:\nkafka_topic, kafka_key, kafka_value, kafka_header_keys, kafka_header_values.But how should I config mongo-kafka connector to dynamically choose the topic from the kafka_topic field and also its values and headers from other outbox fields?\nI can not find settings to do this from its config reference.I’m also thinking about getting a fork from mongo-kafka connector repository and extend it with my custom implementation of source class or any other way.I would greatly appreciate it if you could help me.",
"username": "Mehran_prs"
},
{
"code": "",
"text": "Hi @Mehran_prs,Sounds interesting and perhaps should be something we add to the roadmap.Could I ask you to add a ticket to the Kafka connector jira project outlining your requirements?I’m still a little unclear if MongoDB is the Sink or the Source in this scenario? So could you also add some more detail, so I can ensure I have the correct mental model of the outbox pattern.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Hello,I implemented this pattern through Atlas Mongo DB Source Connector with Kafka. Because the Mongo-Kafka Connector on Confluent Cloud does not support mapping specific documents to different topics, I ended up creating a collection for each event that I have.So, collection “event1” has a corresponding topic “event1” in Kafka. I use the Change Stream functionality to relay information to the topics. Once a new document is inserted into “event1” collection (through a transaction, hence the Outbox Pattern), the same document is relay to the “event1” Kafka topic, thanks to the Change Stream. This ensures an at-least-once delivery of messages.I have to say, this approach works when you have a limited set of events. Managing them is also not that easy. But at least this is a workaround.",
"username": "Brusk_Awat"
},
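A rough sketch of such a relay for a single outbox collection (connection strings and names are placeholders, and kafkajs is assumed as the Kafka client; a production version would persist resume tokens to keep the at-least-once guarantee across restarts):

```js
const { MongoClient } = require("mongodb"); // driver 3.6+ for async iteration
const { Kafka } = require("kafkajs");

async function relay() {
  const mongo = await MongoClient.connect("mongodb+srv://user:pass@cluster.example.net");
  const producer = new Kafka({ brokers: ["localhost:9092"] }).producer();
  await producer.connect();

  // Watch only inserts on the outbox collection:
  const stream = mongo.db("app").collection("outbox")
    .watch([{ $match: { operationType: "insert" } }]);

  for await (const change of stream) {
    const doc = change.fullDocument;
    await producer.send({
      topic: doc.topic, // topic chosen per outbox document
      messages: [{ key: doc.key, value: JSON.stringify(doc.value) }],
    });
  }
}
```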
{
"code": "",
"text": "Hi @Brusk_Awat,Glad you were able to find a work around. Just to let you know in the upcoming 1.4.0 release you will be able to map SinkRecords to collections and change stream documents to topics based on the content of the data. I hope that should help out more.Also just to remind users please post feature requests to our Jira project. We always welcome feedback and want to ensure the connector meets our users needs Ross",
"username": "Ross_Lawley"
},
{
"code": "outbox# outbox collection fields:\nid string \ntopic string \nkey string # The kafka partitioning key\nvalue string # The event value\nheaders []Header # event headers\nemitted_at time.Time\n",
"text": "Hi @Ross_Lawley and @Brusk_Awat,I got a hard fork from the Mongo-kafka-connector project and implemented the outbox pattern.We have a collection named outbox in our DB with these fields:Now we have full control over our event’s topic, value, partitioning key, and its headers.I’ll push it to our public git repo and post its link at this topic.I would be really glad if you could help me to maintain it as outbox pattern implementation for MongoDB and Kafka or maybe we could merge it into the Kafka connector.",
"username": "Mehran_prs"
}
] | Outbox Pattern using mongo-kafka connector | 2020-12-25T19:04:05.831Z | Outbox Pattern using mongo-kafka connector | 4,743 |
null | [] | [
{
"code": "",
"text": "Hi,In Windows environment can I run MongoDB under a Windows Managed Service Account? the main reason is there is a company policy about PWD expiration each 60 days, and we want to use Managed Service Account as this allow password updates transparent for the services using this type of accountBest regards",
"username": "mohamed_bouarroudj"
},
{
"code": "",
"text": "Yes you can, just give that service account the rigth permissions to manage resources that it will need",
"username": "Oscar_Cervantes"
},
{
"code": "",
"text": "Hi,Can you guide the basic steps required to access MongoDB using a service/domain account. Currently I am accessing it using my local user/pwd.Is there any config file changes required. What would be the connection string format to use, lets say connecting using compass UI.",
"username": "vibhuti_sharan"
}
] | Using Managed Service Accounts with MongoDB | 2020-07-08T14:18:09.497Z | Using Managed Service Accounts with MongoDB | 2,723 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hi, i’d like to know if there is a way to use something similar to $geonear and $text:$search in the same stage.The alternative that i’m using is a regex, but it’s not the same.",
"username": "Frank_Pena"
},
{
"code": "",
"text": "@Frank_Pena,Welcome to MongoDB community.You can’t use both of those stages in the same aggregation as they use different index types (geo and text).What we do offer is the Atlas search product if you willing to run your database on atlas, and you should:https://docs.atlas.mongodb.com/reference/atlas-search/near#compound-exampleHere you can run near and text search in a compound expression.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
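For illustration, a hedged sketch of such a compound Atlas Search query (index, collection and field names are made up), combining full-text relevance with geo proximity in one $search stage:

```js
db.listings.aggregate([
  { $search: {
      compound: {
        must: [
          { text: { query: "coffee", path: "description" } }
        ],
        should: [
          { near: {
              origin: { type: "Point", coordinates: [-73.98, 40.75] },
              pivot: 1000, // distance in metres at which the score halves
              path: "location"
          } }
        ]
      }
  } }
])
```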
] | How to use $geonear and $search in the same stage | 2021-02-03T18:29:12.124Z | How to use $geonear and $search in the same stage | 2,354 |
null | [
"performance",
"monitoring"
] | [
{
"code": "{\"t\":{\"$date\":\"2021-01-26T12:21:07.922+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn410437\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"zzzzz.$cmd\",\"command\":{\"update\":\"xxxx.yyyy\",... \"storage\":{\"data\":{\"bytesRead\":20458,\"bytesWritten\":2135043,\"timeReadingMicros\":270,\"timeWritingMicros\":12256},\"timeWaitingMicros\":{\"cache\":626588}},\"protocol\":\"op_msg\",\"durationMillis\":698}}",
"text": "Hi,I see a high value of the timeWaitingMicros.cache metric in log messages during periods of high load.For example:{\"t\":{\"$date\":\"2021-01-26T12:21:07.922+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn410437\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"zzzzz.$cmd\",\"command\":{\"update\":\"xxxx.yyyy\",... \"storage\":{\"data\":{\"bytesRead\":20458,\"bytesWritten\":2135043,\"timeReadingMicros\":270,\"timeWritingMicros\":12256},\"timeWaitingMicros\":{\"cache\":626588}},\"protocol\":\"op_msg\",\"durationMillis\":698}}What could be the reason for the high value of the timeWaitingMicros.cache metric?",
"username": "Konstantin"
},
{
"code": "",
"text": "Hi @Konstantin,What’s the state of your hardware during these “high load”? CPU / RAM / IOPS ?\nIs one of them saturated?Just based on the definition of this metric, I would suppose that your RAM is full and eventually swapping or can’t evict fast enough?Also this looks like a “slow query” - is it using an index correctly? Did you look at the explain plan?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "{\"t\":{\"$date\":\"2021-01-26T12:21:07.922+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn410437\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"zzz.$cmd\",\"command\":{\"update\":\"xxx.yyy\",\"ordered\":false,\"lsid\":{\"id\":{\"$uuid\":\"2fbe313b-2d87-4f18-8a1b-36321d373c32\"}},\"txnNumber\":156218,\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1611652867,\"i\":1835}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\"subType\":\"0\"}},\"keyId\":0}},\"$db\":\"irkkt\"},\"numYields\":15,\"reslen\":245,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":115}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":116}},\"Global\":{\"acquireCount\":{\"w\":115}},\"Database\":{\"acquireCount\":{\"w\":115}},\"Collection\":{\"acquireCount\":{\"w\":115}},\"Mutex\":{\"acquireCount\":{\"r\":100}}},\"flowControl\":{\"acquireCount\":65,\"timeAcquiringMicros\":65},\"storage\":{\"data\":{\"bytesRead\":20458,\"bytesWritten\":2135043,\"timeReadingMicros\":270,\"timeWritingMicros\":12256},\"timeWaitingMicros\":{\"cache\":626588}},\"protocol\":\"op_msg\",\"durationMillis\":698}}",
"text": "Hi @MaBeuLux88,I don’t observe saturation for CPU, RAM, IOPS.\nSwapping disabled.The developers claim they don’t use such queries .The full log message:{\"t\":{\"$date\":\"2021-01-26T12:21:07.922+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn410437\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"zzz.$cmd\",\"command\":{\"update\":\"xxx.yyy\",\"ordered\":false,\"lsid\":{\"id\":{\"$uuid\":\"2fbe313b-2d87-4f18-8a1b-36321d373c32\"}},\"txnNumber\":156218,\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1611652867,\"i\":1835}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\"subType\":\"0\"}},\"keyId\":0}},\"$db\":\"irkkt\"},\"numYields\":15,\"reslen\":245,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":115}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":116}},\"Global\":{\"acquireCount\":{\"w\":115}},\"Database\":{\"acquireCount\":{\"w\":115}},\"Collection\":{\"acquireCount\":{\"w\":115}},\"Mutex\":{\"acquireCount\":{\"r\":100}}},\"flowControl\":{\"acquireCount\":65,\"timeAcquiringMicros\":65},\"storage\":{\"data\":{\"bytesRead\":20458,\"bytesWritten\":2135043,\"timeReadingMicros\":270,\"timeWritingMicros\":12256},\"timeWaitingMicros\":{\"cache\":626588}},\"protocol\":\"op_msg\",\"durationMillis\":698}}In Studio 3T, I see a collection scan, but I’m not sure if I looked right.Best regards,\nKonstantin",
"username": "Konstantin"
},
{
"code": "db.yourCollection.explain(\"executionStats\").update(<your actual operation>)",
"text": "Looks like about 2MB of data was written in 698ms during this update operation. Could start to be a lot for your disks if you have many concurrent write operations at that time. But that depends a lot on your hardware here.You need to identify which update operation exactly is triggering this and run a db.yourCollection.explain(\"executionStats\").update(<your actual operation>) on it to see what is happening exactly. Double check that you are using an index for the query part of the operation, etc.Putting the explain right after the collection name will “fake” the operation and not affect your documents. Putting it after the update operation will actually do it.Please provide the explain plan for this operation so I can help more.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I can’t find the update operation. There is no operation in the log message.How can I find the update operation?Best regards,\nKonstantin",
"username": "Konstantin"
},
{
"code": "\"update\":\"xxx.yyy\"slowmssystem.profile",
"text": "I thought you redacted the output like \"update\":\"xxx.yyy\" and remove the part that contains the actual query.Else what you could do is activate the profiler at level 1 with slowms at 600. This will store all the slow operations (>600ms) in the system.profile collection. Keep that running for some time. Especially during a busy moment for your cluster would be great. But then don’t forget to turn it off when you are done.You can then analyse the content of this collection to find the worst queries to focus on first, identify missing indexes, etc.If you are running on MongoDB Atlas, this is fully automated, it’s called the Performance Advisor. It will recommend indexes automatically.",
"username": "MaBeuLux88"
},
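The shell commands for that, for reference (threshold from the post):

```js
// Log operations slower than 600 ms to system.profile, per database:
db.setProfilingLevel(1, 600)

// Later, inspect the worst offenders:
db.system.profile.find().sort({ millis: -1 }).limit(10)

// Turn the profiler back off when done:
db.setProfilingLevel(0)
```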
{
"code": "db.HLQ.state.explain(\"executionStats\").update(...){ \"op\" : \"update\", \"ns\" : \"dbname.HLQ.state\", \"command\" : { \"q\" : { \"hlq.sign\" : \"33333421035679142344444\" }, \"u\" : { \"$set\" : { \"hlq\" : { \"sign\" : \"33333421035679142344444\", \"sign_type\" : 0, ... }, \"multi\" : false, \"upsert\" : false }, \"keysExamined\" : 1, \"docsExamined\" : 1, \"nMatched\" : 1, \"nModified\" : 1, \"keysInserted\" : 7, \"keysDeleted\" : 7, \"numYield\" : 1, \"queryHash\" : \"7DA4FEF0\", \"planCacheKey\" : \"538C9C8D\", \"locks\" : { \"ParallelBatchWriterMode\" : { \"acquireCount\" : { \"r\" : NumberLong(3) } }, \"ReplicationStateTransition\" : { \"acquireCount\" : { \"w\" : NumberLong(4) } }, \"Global\" : { \"acquireCount\" : { \"r\" : NumberLong(1), \"w\" : NumberLong(3) } }, \"Database\" : { \"acquireCount\" : { \"w\" : NumberLong(3) } }, \"Collection\" : { \"acquireCount\" : { \"w\" : NumberLong(3) } }, \"Mutex\" : { \"acquireCount\" : { \"r\" : NumberLong(2) } } }, \"flowControl\" : { \"acquireCount\" : NumberLong(17), \"timeAcquiringMicros\" : NumberLong(21) }, \"storage\" : { \"data\" : { \"bytesRead\" : NumberLong(489844), \"bytesWritten\" : NumberLong(672696), \"timeReadingMicros\" : NumberLong(15626), \"timeWritingMicros\" : NumberLong(13037) }, \"timeWaitingMicros\" : { \"cache\" : NumberLong(915655) } }, \"millis\" : 1402, \"planSummary\" : \"IXSCAN { hlq.sign: 1 }\", \"execStats\" : { \"stage\" : \"UPDATE\", \"nReturned\" : 0, \"executionTimeMillisEstimate\" : 70, \"works\" : 2, \"advanced\" : 0, \"needTime\" : 1, \"needYield\" : 0, \"saveState\" : 1, \"restoreState\" : 1, \"isEOF\" : 1, \"nMatched\" : 1, \"nWouldModify\" : 1, \"wouldInsert\" : false, \"inputStage\" : { \"stage\" : \"FETCH\", \"nReturned\" : 1, \"executionTimeMillisEstimate\" : 11, \"works\" : 1, \"advanced\" : 1, \"needTime\" : 0, \"needYield\" : 0, \"saveState\" : 2, \"restoreState\" : 2, \"isEOF\" : 0, \"docsExamined\" : 1, \"alreadyHasObj\" : 0, \"inputStage\" : { \"stage\" : \"IXSCAN\", \"nReturned\" : 1, \"executionTimeMillisEstimate\" : 11, \"works\" : 1, \"advanced\" : 1, \"needTime\" : 0, \"needYield\" : 0, \"saveState\" : 2, \"restoreState\" : 2, \"isEOF\" : 0, \"keyPattern\" : { \"hlq.sign\" : 1 }, \"indexName\" : \"hlq.sign_1\", \"isMultiKey\" : false, \"multiKeyPaths\" : { \"hlq.sign\" : [ ] }, \"isUnique\" : false, \"isSparse\" : false, \"isPartial\" : false, \"indexVersion\" : 2, \"direction\" : \"forward\", \"indexBounds\" : { \"hlq.sign\" : [ \"[\\\"33333421035679142344444\\\", \\\"33333421035679142344444\\\"]\" ] }, \"keysExamined\" : 1, \"seeks\" : 1, \"dupsTested\" : 0, \"dupsDropped\" : 0 } } }, \"ts\" : ISODate(\"2021-02-01T11:29:51.591Z\"), \"client\" : \"10.0.0.146\", \"allUsers\" : [ ], \"user\" : \"\" }\n{\n\n \"queryPlanner\" : {\n\n \"plannerVersion\" : 1,\n\n \"namespace\" : \"dbname.HLQ.state\",\n\n \"indexFilterSet\" : false,\n\n \"parsedQuery\" : {\n\n \"hlq.sign\" : {\n\n \"$eq\" : \"33333421035679142344444\"\n\n }\n\n },\n\n \"winningPlan\" : {\n\n \"stage\" : \"UPDATE\",\n\n \"inputStage\" : {\n\n \"stage\" : \"FETCH\",\n\n \"inputStage\" : {\n\n \"stage\" : \"IXSCAN\",\n\n \"keyPattern\" : {\n\n \"hlq.sign\" : 1\n\n },\n\n \"indexName\" : \"hlq.sign_1\",\n\n \"isMultiKey\" : false,\n\n \"multiKeyPaths\" : {\n\n \"hlq.sign\" : [ ]\n\n },\n\n \"isUnique\" : false,\n\n \"isSparse\" : false,\n\n \"isPartial\" : false,\n\n \"indexVersion\" : 2,\n\n \"direction\" : \"forward\",\n\n \"indexBounds\" : {\n\n \"hlq.sign\" : [\n\n \"[\\\"33333421035679142344444\\\", 
\\\"33333421035679142344444\\\"]\"\n\n ]\n\n }\n\n }\n\n }\n\n },\n\n \"rejectedPlans\" : [ ]\n\n },\n\n \"executionStats\" : {\n\n \"executionSuccess\" : true,\n\n \"nReturned\" : 0,\n\n \"executionTimeMillis\" : 55,\n\n \"totalKeysExamined\" : 1,\n\n \"totalDocsExamined\" : 1,\n\n \"executionStages\" : {\n\n \"stage\" : \"UPDATE\",\n\n \"nReturned\" : 0,\n\n \"executionTimeMillisEstimate\" : 1,\n\n \"works\" : 2,\n\n \"advanced\" : 0,\n\n \"needTime\" : 1,\n\n \"needYield\" : 0,\n\n \"saveState\" : 0,\n\n \"restoreState\" : 0,\n\n \"isEOF\" : 1,\n\n \"nMatched\" : 1,\n\n \"nWouldModify\" : 1,\n\n \"wouldInsert\" : false,\n\n \"inputStage\" : {\n\n \"stage\" : \"FETCH\",\n\n \"nReturned\" : 1,\n\n \"executionTimeMillisEstimate\" : 1,\n\n \"works\" : 1,\n\n \"advanced\" : 1,\n\n \"needTime\" : 0,\n\n \"needYield\" : 0,\n\n \"saveState\" : 1,\n\n \"restoreState\" : 1,\n\n \"isEOF\" : 0,\n\n \"docsExamined\" : 1,\n\n \"alreadyHasObj\" : 0,\n\n \"inputStage\" : {\n\n \"stage\" : \"IXSCAN\",\n\n \"nReturned\" : 1,\n\n \"executionTimeMillisEstimate\" : 1,\n\n \"works\" : 1,\n\n \"advanced\" : 1,\n\n \"needTime\" : 0,\n\n \"needYield\" : 0,\n\n \"saveState\" : 1,\n\n \"restoreState\" : 1,\n\n \"isEOF\" : 0,\n\n \"keyPattern\" : {\n\n \"hlq.sign\" : 1\n\n },\n\n \"indexName\" : \"hlq.sign_1\",\n\n \"isMultiKey\" : false,\n\n \"multiKeyPaths\" : {\n\n \"hlq.sign\" : [ ]\n\n },\n\n \"isUnique\" : false,\n\n \"isSparse\" : false,\n\n \"isPartial\" : false,\n\n \"indexVersion\" : 2,\n\n \"direction\" : \"forward\",\n\n \"indexBounds\" : {\n\n \"hlq.sign\" : [\n\n \"[\\\"33333421035679142344444\\\", \\\"33333421035679142344444\\\"]\"\n\n ]\n\n },\n\n \"keysExamined\" : 1,\n\n \"seeks\" : 1,\n\n \"dupsTested\" : 0,\n\n \"dupsDropped\" : 0\n\n }\n\n }\n\n }\n\n },\n\n \"serverInfo\" : {\n\n \"host\" : \"mongo01\",\n\n \"port\" : 27017,\n\n \"version\" : \"4.4.2\",\n\n \"gitVersion\" : \"15e73dc5738d2278b688f8929aee605fe4279b0e\"\n\n },\n\n \"ok\" : 1,\n\n \"$clusterTime\" : {\n\n \"clusterTime\" : Timestamp(1612202001, 2215),\n\n \"signature\" : {\n\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\n \"keyId\" : NumberLong(0)\n\n }\n\n },\n\n \"operationTime\" : Timestamp(1612202001, 2121)\n\n}\nmongostat\ninsert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time\n 112 10089 2353 5 383 2303|0 20.8% 85.1% 0 283G 197G 0|786 61|128 7.41m 78.7m 16276 xxxxxx PRI Feb 1 16:00:04.057\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 133940, \"client\" : \"10.0.0.77\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 96181, \"client\" : \"10.0.0.76\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 73975, \"client\" : \"10.0.0.76\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 62515, \"client\" : \"10.0.0.77\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 61380, \"client\" : \"10.0.0.76\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 55350, \"client\" : \"10.0.0.77\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 52383, \"client\" : \"10.0.0.77\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 48277, \"client\" : \"10.0.0.78\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 47941, \"client\" : \"10.0.0.76\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 47846, \"client\" : \"10.0.0.76\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 47621, \"client\" : \"10.0.0.77\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 45234, 
\"client\" : \"10.0.0.76\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 43663, \"client\" : \"10.0.0.78\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 39867, \"client\" : \"10.0.0.78\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 39616, \"client\" : \"10.0.0.78\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 36962, \"client\" : \"10.0.0.77\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 36874, \"client\" : \"10.0.0.77\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 23735, \"client\" : \"10.0.0.76\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1784, \"client\" : \"10.0.0.152\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1739, \"client\" : \"10.0.0.126\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1714, \"client\" : \"10.0.0.77\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1687, \"client\" : \"10.0.0.125\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 1659, \"client\" : \"10.0.0.127\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1637, \"client\" : \"10.0.0.116\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1623, \"client\" : \"10.0.0.78\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1614, \"client\" : \"10.0.0.151\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1611, \"client\" : \"10.0.0.150\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1556, \"client\" : \"10.0.0.76\" }\n{ \"op\" : \"insert\", \"command\" : { }, \"millis\" : 1527, \"client\" : \"10.0.0.146\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1523, \"client\" : \"10.0.0.114\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1508, \"client\" : \"10.0.0.144\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1500, \"client\" : \"10.0.0.125\" }\n{ \"op\" : \"update\", \"command\" : { }, \"millis\" : 1472, \"client\" : \"10.0.0.109\" }\n\nmany updates\n",
"text": "Hi @MaBeuLux88,I turned on the profiler and ran explain db.HLQ.state.explain(\"executionStats\").update(...).The request completed quickly. He has a good access plan.We have a queue of write queries:Top queries:We also have a lot of global locks and no write tickets available.\nCan insert prevent update, or vice versa?Best regards,\nKonstantin",
"username": "Konstantin"
},
{
"code": "",
"text": "Hey @Konstantin,I have a few questions:Thanks,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hey again @Konstantin,From what I see from your mongostats, you are queuing a lot.You have 786 write operation queuing and 128 currently happening. I think your disk is saturating. Too many write operations probably and your disk can’t keep up with the load.If you have many indexes maybe you could offload the indexes to another SSD by using the wiredTigerDirectoryForIndexes option. You could also offload the logs.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | High value of timeWaitingMicros.cache | 2021-01-26T14:02:53.830Z | High value of timeWaitingMicros.cache | 4,884 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.4-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.3. The next stable release 4.4.4 will be a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.4.4-rc0 is released | 2021-02-04T15:50:33.052Z | MongoDB 4.4.4-rc0 is released | 2,256 |
null | [
"cxx"
] | [
{
"code": "CMake Error at src/bsoncxx/CMakeLists.txt:98 (find_package):\n By not providing \"Findlibbson-1.0.cmake\" in CMAKE_MODULE_PATH this project\n has asked CMake to find a package configuration file provided by\n \"libbson-1.0\", but CMake did not find one.\n\n Could not find a package configuration file provided by \"libbson-1.0\"\n (requested version 1.13.0) with any of the following names:\n\n libbson-1.0Config.cmake\n libbson-1.0-config.cmake\n\n Add the installation prefix of \"libbson-1.0\" to CMAKE_PREFIX_PATH or set\n \"libbson-1.0_DIR\" to a directory containing one of the above files. If\n \"libbson-1.0\" provides a separate development package or SDK, be sure it\n has been installed.\n\n\n-- Configuring incomplete, errors occurred!\nSee also \"C:/mongo-cxx-driver-r3.6.2/CMakeFiles/CMakeOutput.log\".\n.in",
"text": "Hey, I have been been attempting to install the mong0-cxx-r3.6.2 driver but ran into an issue during the installation process. Following the given windows installation guide I am stuck on step 4 where I have to configure the driver with the given dependencies. I have already built the mongo-c driver successfully but when I attempt to do the same for c++ I am given the following error:I do see a a file, libbson-1.0Config.cmake.in, in C:\\mongo-c-driver-1.17.3\\src\\libbson\\build\\cmake but is is has the .in extension. I am not to familiar with cmake so any tips would be apprecitated. Thanks!",
"username": "lee_pruissen"
},
{
"code": ".insrc",
"text": "Based on the fact that you see the file with a .in extension and that it is in a sub-directory of src, it seems that you have simply unpacked the C driver sources without actually building and installing them. You will need to follow the instructions for building and installing the C driver, then you will need to make sure to pass the location of the C driver installation to the C++ driver build (as detailed in the C++ driver installation instructions).",
"username": "Roberto_Sanchez"
}
] | CMake issue mongocxx driver on windows | 2021-01-30T07:47:12.483Z | CMake issue mongocxx driver on windows | 2,435 |
null | [
"aggregation"
] | [
{
"code": " db.OrderAttributes.aggregate([\n {\n $lookup:\n {\n from: \"Orders\",\n localField: \"OrderId\",\n foreignField: \"OrderId\",\n as: \"orderdata\"\n }\n },\n {\n $match:{\n RulesetInfluencedAttributes: {$elemMatch: {\n LexisAttributes:{$elemMatch: {\n \"myattributes.myvalue\":{$ne:null}\n }\n \n }\n }} \n } \n },\n { \n $addFields: { \n test: \"$RulesetInfluencedAttributes.0.LexisAttributes.0.myattributes.myvalue\" <-- this returns empty array \n }}\n } \n \n },\n",
"text": "In my pipeline I’m trying to connect documents that have a deeply nested values that I wanted to get on top by saving then in new added fields. But since they are stored inside multidimensional arrays I’m unable to get them thu numeric indexes.Not sure what am i doing wrong:",
"username": "Andrey_Smirnov"
},
{
"code": "{\"$addFields\" {\"test\" {\"$let\" {\"vars\" {\"v0\" {\"$let\" {\"vars\" {\"v0\" {\"$arrayElemAt\" [{\"$let\" {\"vars\" {\"v0\" {\"$arrayElemAt\" [\"$RulesetInfluencedAttributes\" 0]}}, \"in\" \"$$v0.LexisAttributes\"}} 0]}}, \"in\" \"$$v0.myattributes\"}}}, \"in\" \"$$v0.myvalue\"}}}}\n{\"$addFields\" {\"test\" {\"$let\" {\"vars\" {\"v0\" {\"$arrayElemAt\" [\"$RulesetInfluencedAttributes\" 0]}}, \"in\" {\"$let\" {\"vars\" {\"v1\" {\"$arrayElemAt\" [\"$$v0.LexisAttributes\" 0]}}, \"in\" \"$$v1.myattributes.myvalue\"}}}}}}\n",
"text": "It will be helpful to give the data,and the wanted final result.But if the only problem is that paht in $addFields , you can fix it.\nYou need to use arrayElemAt.fieldName is ok\n.index doesnt work in aggregation (use arrayElemAt)You can do it by hand using temporal variables.\nOr you can make a driver function that generates that MQL code.My generator function for your path gaveBy hand its a bit smaller",
"username": "Takis"
}
] | Add fields based on deeply nested values | 2021-02-04T05:29:57.316Z | Add fields based on deeply nested values | 3,124 |
null | [
"crud"
] | [
{
"code": "{\"id\": \"name0\", \"team\": \"engineer\", \"manager\": \"boss1\", ....}\n{\"id\": \"name1\", \"team\": \"engineer\", \"manager\": \"boss1\", ....}\n{\"id\": \"name2\", \"team\": \"engineer\", \"manager\": \"boss1\", ....}\ndb.testInsertion.update({\"id\": \"name0\"}, {\"$setOnInsert\": {\"team\": \"engineer\", \"manager\": \"boss1\"}}, {\"upsert\":true}) is adding \"id\" field even when it is not mentioned in $setOnInsert. But following does not work: \ndb.testInsertion.update({\"id\": {\"$in\": [\"name0\", \"name1\", \"name2\"]}, {\"$setOnInsert\": {\"team\": \"engineer\", \"manager\": \"boss1\"}}, {\"upsert\":true}, {\"$multiple\": true}) \n",
"text": "Hi there, I have a question on insert operation.My goal is to insert multiple documents. All of these documents are sharing the same default value for all fields except id. For example:Is there an operation that can help me achieve this insertion by not duplicate all default value multiple times? I can think of an approach with two steps, 1. inject id field only 2. update field to default value by filtering on “id” field. But is there a way to achieve this in single step?I find $setOnInsert function on update operation interesting, becauseAm I on the right direction?",
"username": "Sunan_Jiang"
},
{
"code": "",
"text": "Hi @Sunan_Jiang! Welcome to the community!I’m not exactly sure what you’re trying to accomplish, so my apologies if you’ve already considered the following:MongoDB will automatically create an _id field for you if you do not specify it. If you want a meaningful _id (perhaps you want to take advantage of the default index on _id so you want _id to be something unique like an employee id), you should set it. If you don’t care with the _id is, you don’t have to set it.My instinct based on what you have written would be to write a script in my favorite programming language to generate the documents to be inserted and then use insertMany() to insert them. This would allow you to programmatically set the default values.Is there a reason you’re only searching for those 3 documents in your second example? It’s unclear to me if you’re trying to set a permanent default or you’re trying to set defaults for an initial set of documents.",
"username": "Lauren_Schaefer"
},
{
"code": "mongovar insert_array = [ ]\nvar ids_to_insert = [ \"name0\", \"name1\", \"\"name2\" ]\n\nfor (let id of ids_to_insert) {\n let doc = { \"team\": \"engineer\", \"manager\": \"boss1\" } // shared default fields/values\n doc[\"_id\"] = id\n insert_array.push(doc)\n}\n\ndb.test.insertMany(insert_array)",
"text": "Hello @Sunan_Jiang, here is an approach you can try. This is from the mongo shell:",
"username": "Prasad_Saya"
}
] | Insert multiple documents with same default fields | 2021-02-04T07:38:33.734Z | Insert multiple documents with same default fields | 6,106 |
null | [
"replication"
] | [
{
"code": "mongodmongod.exemongod.cfgreplication:\n replSetName: rs0\n",
"text": "Hi, I am a Windows user and I am using transactions with mongodb and I have to stop the mongodb service by:\nnet stop mongodb\nand run as a replica set member like this:\nmongod --replSet rs0\nmongod is an alias for my file path to mongod.exeI was just wondering whether there is a more efficient way to start mongodb service as a replica set member by default without being required to stop and run again because I use transactions quite offenly.I tried to edit the mongod.cfg by adding the following lines which I found in the docs:but that did not work at all. When I edit this, I have to restart my PC because mongod also stops working after I edit this, and when I restart my PC, it gets fixed by itself.Really stuck!!",
"username": "Tayyab_Ferozi"
},
{
"code": "",
"text": "You will need to share the logs so that we can help. There is not enough information in your current post.",
"username": "steevej"
},
{
"code": "net stop mongodbmongod --replSet rs0",
"text": "Basically there is no issue in my current flow but I just want to make things more efficient. All I want to do is basically want to start the mongodb server as a replica set always so that I don’t have to net stop mongodb and mongod --replSet rs0 every time I turn my machine on!",
"username": "Tayyab_Ferozi"
},
{
"code": "<install directory>\\bin\\mongod.cfg",
"text": "@Tayyab_FeroziYou want to enable mongodb to run as a service when you install. Then the mongod.cfg in the install path will be used. If you did not do this you may be able to create your own with sc.exe.Starting in version 4.0, you can install and configure MongoDB as a Windows Service during the install, and the MongoDB service is started upon successful installation. MongoDB is configured using the configuration file <install directory>\\bin\\mongod.cfg .",
"username": "chris"
},
{
"code": "mongodb.cfg# mongod.conf\n\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: C:\\Program Files\\MongoDB\\Server\\4.4\\data\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: C:\\Program Files\\MongoDB\\Server\\4.4\\log\\mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n#processManagement:\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\nreplication:\n replSetName: rs0\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n",
"text": "Yes I enabled that option but still having issues. This is my mongodb.cfg:Please can you help me?? It doesn’t start correctly",
"username": "Tayyab_Ferozi"
},
{
"code": "",
"text": "but still having issues.What are the issues? We cannot help without more details.",
"username": "steevej"
},
{
"code": "sc.exe qc MongoDBPS C:\\Users\\admin\\Downloads> sc.exe qc MongoDB\n[SC] QueryServiceConfig SUCCESS\n\nSERVICE_NAME: MongoDB\n TYPE : 10 WIN32_OWN_PROCESS\n START_TYPE : 2 AUTO_START\n ERROR_CONTROL : 1 NORMAL\n BINARY_PATH_NAME : \"C:\\Program Files\\MongoDB\\Server\\4.4\\bin\\mongod.exe\" --config \"C:\\Program Files\\MongoDB\\Server\\4.4\\bin\\mongod.cfg\" --service\n LOAD_ORDER_GROUP :\n TAG : 0\n DISPLAY_NAME : MongoDB Server (MongoDB)\n DEPENDENCIES :\n SERVICE_START_NAME : NT AUTHORITY\\NetworkService\nnet:\n port: 27017\n bindIp: 127.0.0.1\n",
"text": "Sure. But if your mongod is not running as a service and/or referencing this configuration then no amount of changing this file is going to make a difference.Get the configuration of the service with sc.exe qc MongoDB :This setting will not be conducive to a working replicaset as other members will not be able to connect.",
"username": "chris"
}
] | Run mongodb as a replica set member by default when Window starts | 2021-01-12T02:28:47.917Z | Run mongodb as a replica set member by default when Window starts | 7,173 |
null | [] | [
{
"code": "",
"text": "I have 15-20 millions of records in a single collection and I want to delete about 90% of documents.\nI cant use deleteMany cuz I think my other code dependent on this db will crash. So is there any way to delete these records in chunks and faster?To add more to this, I dont have index on the field with which I want to filter results.",
"username": "Rebel"
},
{
"code": "",
"text": "Welcome to the community @Rebel.As per your description, Looks like the useful data for you in that collection is only 10%. So easiest way I would suggest is: (Considering your data is on standalone and your collection isn’t sharded)Apply index on the fields with which you want to filter the results. (If it’s one or two fields with which you want to filter the results, You can apply indexes on it. If it’s confusing, Please post the sample record and the query that you’re intending to use)Write your filtered result to the new collection (I believe that is the 10% of the data that is really useful for you)Drop the older collection.",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "My 2 cents.The idea of writing the kept documents in a new collection is very very nice. However, I would skip building the index. As all documents have to be read in RAM to build the index, I would read them only once in the filtering code. With building the index, the chances that the 10% are read twice are high unless it is the last 10%.The filtering idea of @viraj_thakrar has the added benifit that you reclaim the disk space of the old collection.",
"username": "steevej"
}
] | How can I bulk delete documents? | 2021-02-04T10:25:14.071Z | How can I bulk delete documents? | 6,029 |
null | [
"dot-net",
"replication",
"connecting"
] | [
{
"code": "No connection could be made because the target machine actively refused it. 11.85.70.129:27018\n at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)\n at System.Net.Sockets.Socket.<>c.<ConnectAsync>b__274_0(IAsyncResult iar)\n--- End of stack trace from previous location where exception was thrown ---\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2021-02-03T08:38:34.8749076Z\", LastUpdateTimestamp: \"2021-02-03T08:38:34.8749079Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"11.85.70.129:27019\" }\", EndPoint: \"11.85.70.129:27019\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (10061): No connection could be made because the target machine actively refused it. 11.85.70.129:27019\n at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)\n at System.Net.Sockets.Socket.<>c.<ConnectAsync>b__274_0(IAsyncResult iar)\n--- End of stack trace from previous location where exception was thrown ---\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2021-02-03T08:38:33.3437327Z\", LastUpdateTimestamp: \"2021-02-03T08:38:33.3437330Z\" }] }.\n at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChanged(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Clusters.Cluster.SelectServer(IServerSelector selector, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Clusters.IClusterExtensions.SelectServerAndPinIfNeeded(ICluster cluster, ICoreSessionHandle session, IServerSelector selector, CancellationToken cancellationToken)\n at 
MongoDB.Driver.Core.Bindings.WritableServerBinding.GetWriteChannelSource(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Bindings.ReadWriteBindingHandle.GetWriteChannelSource(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteContext.Create(IWriteBinding binding, Boolean retryRequested, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.Execute(IWriteBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteWriteOperation[TResult](IWriteBinding binding, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteWriteOperation[TResult](IClientSessionHandle session, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IClientSessionHandle session, IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.<>c__DisplayClass27_0.<BulkWrite>b__0(IClientSessionHandle session)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionBase`1.<>c__DisplayClass68_0.<InsertOne>b__0(IEnumerable`1 requests, BulkWriteOptions bulkWriteOptions)\n at MongoDB.Driver.MongoCollectionBase`1.InsertOne(TDocument document, InsertOneOptions options, Action`2 bulkWrite)\n at MongoDB.Driver.MongoCollectionBase`1.InsertOne(TDocument document, InsertOneOptions options, CancellationToken cancellationToken)\n",
"text": "Hello!\nMaybe .NET Driver Version 2.12 will fix this problem, but still it…\nI have a .net core project that uses mongodb with 3 replicaset. When there is some network problem the connection is fail, but the mongo csharp driver doesn’t reconnect also after that the network is fixed and broken with memory leak.here the log:There is any solution for this problem?Thanks a lot",
"username": "Zoltan_Kovacs"
},
{
"code": "",
"text": "Hi @Zoltan_Kovacs,Welcome to MongoDB community.Can you share your connection builder settings , pacifically if you set a replicaset name?Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I use this simple connection string:“mongodb://TestWR:[email protected]:27017,11.85.70.129:27018,11.85.70.129:27019/?replicaSet=FdApps_Dev&authSource=TestDb”All of replica members set as priority:1\nMaybe I need to add “readPreference=nearest” also?",
"username": "Zoltan_Kovacs"
},
{
"code": "",
"text": "@Zoltan_Kovacs,According to the the exception both ports refused it.I think that the machine is not available on all nodes as its the same. machine. This creates a condition there no host is available.I would suggest to use latest driver and increase socket timeout and server selection time.Additionally, add retry logic to your application.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | Mongo Csharp Driver doesn't reconnect to MongoDB ReplicaSet | 2021-02-03T11:24:54.851Z | Mongo Csharp Driver doesn’t reconnect to MongoDB ReplicaSet | 5,313 |
null | [] | [
{
"code": "",
"text": "Hi.Can I use MongoDB Realm without sync but instead use it just as a local/offline db only?",
"username": "Mooli_Morano"
},
{
"code": "",
"text": "Hi - and welcome to the community!\nYou sure can! It’s perfect for that. And should you later figure out you want to sync something, you can add a few lines and off you go \nCheers",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "That’s brilliant thanks.",
"username": "Mooli_Morano"
}
] | MongoDB Realm as local db only | 2021-02-04T10:07:12.382Z | MongoDB Realm as local db only | 1,773 |
null | [
"graphql"
] | [
{
"code": "query {\n users {\n documents {\n name\n }\n }\n}\nconst requesterOpportunitiesResolver = (_, source) => context.services.get('mongodb-atlas').db('xxxl').collection('documents').find({ user: source._id }).toArray()\ndocumentsquery {\n users {\n documents (query: {tags_in: [\"tag1\", \"tag2\"]}) {\n name\n }\n }\n}\ninput",
"text": "I have a simple query resolver that finds all Documents created by a User.\ne.g.Under the hood it does smth like this:Now the question is how do I support all the filtering options that the generated documents query supports?\nManually map everything from the input to some mongodb queries??E.g. I want smth like this:Basically the way I imagine it could work is if there was a function to generate mongodb query based on input argument.",
"username": "dimaip"
},
{
"code": " \"bsonType\": \"object\",\n \"title\": \"requesterOpportunitiesResolver\",\n \"properties\": {\n \"tags_in\": {\n \"bsonType\": \"array\",\n \"items\": \n { \n \"bsonType\": \"string\"\n }\n \n }\n }\n}\n",
"text": "Hi! You should be able to do this by defining a “custom input type” (ex here - https://docs.mongodb.com/realm/graphql/custom-resolvers#define-the-input-type)You would probably define the input like this:Is this what you’re looking for?",
"username": "Sumedha_Mehta1"
},
{
"code": "\"input_type_format\": \"UserQueryInput\",context.services.get('mongodb-atlas').db('xxxl').collection('documents').find(QueryHelper.build(input))",
"text": "Manually map everything from the input to some mongodb queries??Hi @Sumedha_Mehta1! Thanks for your reply!\nI understand that I can implement by hand all possible filter types. However it would be a lot of work to support all possible filter types consistently with how generated field filters work.How I imagined it could work: you mark an input with some existing type, e.g. \"input_type_format\": \"UserQueryInput\", and then in the resolver function you’d have some function exposed that would turn your input into a mongodb query object, e.g. a kind of query builder: context.services.get('mongodb-atlas').db('xxxl').collection('documents').find(QueryHelper.build(input))",
"username": "dimaip"
}
] | Custom query resolvers supporting the generated query input | 2021-01-13T12:33:20.210Z | Custom query resolvers supporting the generated query input | 3,120 |
null | [
"replication",
"backup"
] | [
{
"code": "2021-01-06T10:09:12.674+0000 I CONTROL [main] ***** SERVER RESTARTED *****\n2021-01-06T10:09:12.679+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2021-01-06T10:09:12.681+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] MongoDB starting : pid=14056 port=37017 dbpath=/avfm-fo/mongodb-linux-x86_64-amazon-4.2.10/data 64-bit host=ip-10-122-8-237.us-west-2.compute.internal\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] db version v4.2.10\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] git version: 88276238fa97b47c0ef14362b343c5317ecbd739\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] allocator: tcmalloc\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] modules: none\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] build environment:\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] distmod: amazon\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] distarch: x86_64\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] target_arch: x86_64\n2021-01-06T10:09:12.750+0000 I CONTROL [initandlisten] options: { config: \"/avfm-fo/mongodb-linux-x86_64-amazon-4.2.10/mongod.conf\", net: { bindIp: \"localhost,10.122.8.237\", port: 37017 }, processManagement: { fork: true }, replication: { enableMajorityReadConcern: false, oplogSizeMB: 100000, replSetName: \"avfmrs\" }, security: { authorization: \"enabled\", keyFile: \"/avfm-fo/mongodb-linux-x86_64-amazon-4.2.10/key/mongoKey\" }, storage: { dbPath: \"/avfm-fo/mongodb-linux-x86_64-amazon-4.2.10/data\", engine: \"wiredTiger\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, logRotate: \"reopen\", path: \"/avfm-fo/mongodb-linux-x86_64-amazon-4.2.10/logs/mongo.log\" } }\n2021-01-06T10:09:12.753+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3292M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],\n2021-01-06T10:09:13.318+0000 I STORAGE [initandlisten] WiredTiger message [1609927753:318114][14056:0x7fd076be1c40], txn-recover: Recovering log 8909 through 8910\n2021-01-06T10:09:13.377+0000 I STORAGE [initandlisten] WiredTiger message [1609927753:377076][14056:0x7fd076be1c40], txn-recover: Recovering log 8910 through 8910\n2021-01-06T10:09:13.513+0000 I STORAGE [initandlisten] WiredTiger message [1609927753:513533][14056:0x7fd076be1c40], txn-recover: Main recovery loop: starting at 8909/39040 to 8910/256\n2021-01-06T10:09:13.514+0000 I STORAGE [initandlisten] WiredTiger message [1609927753:514945][14056:0x7fd076be1c40], txn-recover: Recovering log 8909 through 8910\n2021-01-06T10:09:13.605+0000 I STORAGE [initandlisten] WiredTiger message [1609927753:605598][14056:0x7fd076be1c40], txn-recover: Recovering log 8910 through 8910\n2021-01-06T10:09:13.662+0000 I STORAGE [initandlisten] WiredTiger message [1609927753:662821][14056:0x7fd076be1c40], txn-recover: Set global recovery timestamp: (1609910372, 1)\n2021-01-06T10:09:13.676+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. 
Ts: Timestamp(1609910372, 1)\n2021-01-06T10:09:17.294+0000 I STORAGE [initandlisten] Starting OplogTruncaterThread local.oplog.rs\n2021-01-06T10:09:17.294+0000 I STORAGE [initandlisten] The size storer reports that the oplog contains 56 records totaling to 7124 bytes\n2021-01-06T10:09:17.294+0000 I STORAGE [initandlisten] Scanning the oplog to determine where to place markers for truncation\n2021-01-06T10:09:17.295+0000 I STORAGE [initandlisten] WiredTiger record store oplog processing took 0ms\n2021-01-06T10:09:17.305+0000 I STORAGE [initandlisten] Timestamp monitor starting\n2021-01-06T10:09:41.883+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>\n2021-01-06T10:09:41.886+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.\n2021-01-06T10:09:41.886+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>\n2021-01-06T10:09:41.887+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>\n2021-01-06T10:09:41.888+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>\n2021-01-06T10:09:41.889+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/avfm-fo/mongodb-linux-x86_64-amazon-4.2.10/data/diagnostic.data'\n2021-01-06T10:09:41.890+0000 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>\n2021-01-06T10:09:41.890+0000 I SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>\n2021-01-06T10:09:41.891+0000 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.\n2021-01-06T10:09:41.891+0000 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: fb814c0d-2acc-4b7f-b6c2-009b466da3f9 and options: {}\n2021-01-06T10:09:41.891+0000 I SHARDING [monitoring-keys-for-HMAC] Marking collection admin.system.keys as collection version: <unsharded>\n2021-01-06T10:09:42.033+0000 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id\n2021-01-06T10:09:42.033+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>\n2021-01-06T10:09:42.033+0000 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version: <unsharded>\n2021-01-06T10:09:42.033+0000 I REPL [initandlisten] Initialized the rollback ID to 1\n2021-01-06T10:09:42.034+0000 I REPL [initandlisten] Recovering from stable timestamp: Timestamp(1609910372, 1) (top of oplog: { ts: Timestamp(1609927746, 1), t: 1 }, appliedThrough: { ts: Timestamp(0, 0), t: -1 }, TruncateAfter: Timestamp(0, 0))\n2021-01-06T10:09:42.034+0000 I REPL [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1609910372, 1)\n2021-01-06T10:09:42.034+0000 I REPL [initandlisten] Replaying stored operations from Timestamp(1609910372, 1) (inclusive) to Timestamp(1609927746, 1) (inclusive).\n2021-01-06T10:09:42.034+0000 F REPL [initandlisten] Oplog entry at { : Timestamp(1609910372, 1) } is missing; actual entry found is { : Timestamp(1609927216, 1) }\n2021-01-06T10:09:42.034+0000 F - [initandlisten] Fatal Assertion 40292 at src/mongo/db/repl/replication_recovery.cpp 149\n2021-01-06T10:09:42.034+0000 F - [initandlisten]\n\n***aborting after fassert() failure\n",
"text": "Facing following error while configuring the replica set.",
"username": "Amit_G"
},
{
"code": "mongoddbPath",
"text": "Welcome to the MongoDB community @Amit_G!Can you provide some more information on the steps you have taken leading up to this error? For example, are you adding this replica set member using a backup of files from another member?The startup log messages indicate recovery from existing data files, so I would suspect an inconsistent file copy backup (perhaps from a running mongod?) or an unexpected shutdown.The most straightforward approach to resolve this would be to remove the contents of the dbPath from this member to automatically sync using the initial sync process.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello @Stennie_X,Thanks for looking into it.Issue/Scenario:\nI am using the filesystem backup/snapshot and configuring the mongodb relica set (PSA).steps:\n0. drop the local database from all three nodes (PSA)are you adding this replica set member using a backup of files from another member?\nYes,automatically sync process is too slow and takes longer time. Hence that is not the viable option for us.\nI use the same steps with 3.6.8 and it is all working fine since more than a year now.\nNow I am using the same steps for 4.2.10 and it is failing.Screenshot of the error:Kindly advise.Thanks in advance",
"username": "Amit_G"
},
{
"code": "dbPathmongodumpcprsyncdb.syncLock()mongodmongod",
"text": "Hi @Amit_G,This is likely where things are going astray: if you take a backup of files that are actively being written to, you may get an inconsistent copy of the dbPath which is not usable.Can you provide more detail on the exact actions you are taking between step 3 & 4, and in step 4?You mentioned using filesystem backup/snapshot, so I’m interpreting your description as taking a filesystem snapshot rather than a mongodump.If you are backing up with cp or rsync (i.e. a file copy rather than a filesystem snapshot), you need to stop any writes using db.syncLock() or by shutting mongod before copying the data files:If your storage system does not support snapshots, you can copy the files directly using cp, rsync, or a similar tool. Since copying multiple files is not an atomic operation, you must stop all writes to the mongod before copying the files. Otherwise, you will copy the files in an invalid state.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X ,Thanks for your response and apologise for late reply.\nThis is what I do on step3 & 4 -\nAfter configuring the Primary and Arbiter as PA cluster. i take the copy of the local DB from Primary instance and restore it on Secondary, so that secondary can inherit the cluster details and become part of the cluster.on secondary I am getting Oplog timestamp mismatch and because of that mongod service not getting started and failing with error shared above.Question how to setup PSA cluster without going for initial sync on replica set?Thanks in advance\nAmit",
"username": "Amit_G"
},
{
"code": "cprsyncmongodmongod",
"text": "This is what I do on step3 & 4 -\nAfter configuring the Primary and Arbiter as PA cluster. i take the copy of the local DB from Primary instance and restore it on Secondary, so that secondary can inherit the cluster details and become part of the cluster.Hi @Amit_G,It is definitely possible to seed a new replica set member by copying data files from another member, but you have to ensure you are taking a consistent backup of the primary by following the documentation advice referenced in my previous comment.For example, if you use cp or rsync while files are actively being used by mongod you will likely end up with an unusable backup.What approach are you using to take a copy of data from the primary? Is the mongod still running when you take your copy?Regards,\nStennie",
"username": "Stennie_X"
}
] | Fatal Assertion 40292 while configuring replica set (v4.2.10) | 2021-01-06T18:48:46.054Z | Fatal Assertion 40292 while configuring replica set (v4.2.10) | 4,498 |
null | [
"replication",
"connecting",
"security"
] | [
{
"code": "",
"text": "Hi,I have configured a mongo DB with replicaSet clustering 3 servers ( 1 primary and 2 secondary). This is working pretty much good and my only concern is after configure the replicaSet I am able to access / connect the mongo DB without replicaset. When pushing the data how can I restrict the access to the DB with mandatory replicaset name. If is not the exact same replicaSet name metioned, it should drop the connection. Currently without provide any replicaset also I am able to connect and push data.I am a beginner in replicaSet configuration and really dont know actually is that a security problem or not.",
"username": "Sreejith_G"
},
{
"code": "",
"text": "Hello @Sreejith_G, welcome to the MongoDB Community forum.Currently without provide any replicaset also I am able to connect and push dataYou can connect to your replica-set using the Connection String URI for replica-set. When you use this URI format, you will specify the replica-set members and the replica-set name in the URI. This will let you connect to the replica-set and perform the database operations like reading, writing, etc.MongoDB Security is the topic you should get familiar with as this has all the features to secure your deployment. The two main aspects of the security are the Authentication and the Authorization.When you Deploy a Replica Set it is without access control (restrictions to access the database resources). To deploy a replica set with access control enabled, Deploy Replica Set With Keyfile Authentication. Since you already have a replica set deployed, see Update Replica Set to Keyfile Authentication.",
"username": "Prasad_Saya"
}
] | How to restrict access to Replica Set Database | 2021-02-04T06:03:33.884Z | How to restrict access to Replica Set Database | 3,042 |
null | [] | [
{
"code": "",
"text": "Hi all,\nYesterday we pushed out a new menu on developer.mongodb.com (including forums). Now you can easily get to lots of different developer-focused places just from the nav bar!Let me know what you think…",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "When I’m in a topic in the Community Forum ─ like where we are right now ─ I don’t really have an easy way to get back to the default landing page: MongoDB Developer Community Forums - A place to discover, learn, and grow with MongoDB technologies.My workaround before this new update was to click directly on the “Community” link at the top - but know there is an extra step to get there. Not a big deal - but I think it should be a little easier to navigate in the Community Forum different places.I miss having this at the top at all time for example.\nimage954×192 17.8 KB\n",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "It is very helpful to have all the useful links at one place in a user-friendly way!",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This is a great point, there are breadcrumbs on other pages but not on specific topic pages (like this one)…cc @Jamie",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "What I do to stay on Latest of the forum is to always open a thread in a new tab.",
"username": "steevej"
},
{
"code": "",
"text": "By extra step, you mean hovering over the Community header, then clicking the Forums link?Yeah, I think the breadcrumbs idea is a good solution here. @Stennie_X thoughts?",
"username": "Jamie"
},
{
"code": "",
"text": "Yup that’s it. It’s buried now a little :-). Just to be clear ─ it’s not a big issue at all humm !",
"username": "MaBeuLux88"
},
{
"code": "?",
"text": "Hi @Sheeri_Cabral,Loving the embiggened menu, but I have a few suggestions for more consistent interaction:The mouse cursor should be different for clickable versus non-clickable items. The non-clickable category headings (Learn, Community, …) currently use the same “hand pointer” cursor as the MongoDB logo and the dropdown menu items.All of the category headings are non-clickable with a dropdown, except for Documentation which is currently clickable with no dropdown. I was “waiting for the drop” but it didn’t happen . I’d like to see a dropdown menu including Documentation Home and the high level Documentation groupings: MongoDB Server, MongoDB Drivers, MongoDB Cloud, and MongoDB Tools.I think the menu headings (eg “Programs”) should remain highlighted when the associated dropdown menu is visible. This may be a bug in the styling for the forum menu, since DevHub works like I’d expect.My workaround before this new update was to click directly on the “Community” link at the top - but know there is an extra step to get there. Not a big deal - but I think it should be a little easier to navigate in the Community Forum different places.I agree that including the Discourse top nav or something like an “All Categories” link before the current category would be a helpful UX improvement.We used to have a link via the MongoDB logo at the top left of the page, but that went away with the transition to a unified menu for Developer Hub & Community Forums last June (resulting in similar discussion about a quick way back to the forum front page). Our workaround at the time was to link “Community” in the header nav to the forums, but in this latest menu update we also wanted to highlight other community activities like events & conferences.there are breadcrumbs on other pages but not on specific topic pages (like this one)Unfortunately Discourse doesn’t include the top nav on some pages by default, but I will look into options for adding this into our theme.What I do to stay on Latest of the forum is to always open a thread in a new tab.I also prefer the Latest view, which is an expanded version of the content on the right side of the forum homepage. I find Discourse’s keyboard shortcuts super handy (but I appreciate that some will always prefer visual options).For example, I rely on jump shortcuts:To see a full list, choose “Keyboard Shortcuts” from the ☰ menu near the top right of the page, or hit ? without a text input field selected.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi I assume that @steevej does it like I do too:\nwhen ever I “branch” out to follow a post and potential sublinks I open a new tab and do everything in there.\nThan I go back to the point from where I “branched” -> I go back to the initial tab.\nSo latest would be in this notion not equivalent to g,l it would be the point from where I “branched”\nIn almost all cases the shortcut “U” does not help since it not always brings you back over several steps. Neither “g, …” is fine since this will jump you kind of back but not to the exact point from where I “branched”\nRegards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "u",
"text": "when ever I “branch” out to follow a post and potential sublinks I open a new tab and do everything in there.\nThan I go back to the point from where I “branched” → I go back to the initial tab.\nSo latest would be in this notion not equivalent to g,l it would be the point from where I “branched”Hi @michael_hoeller,I don’t think there is an obvious Discourse shortcut for the multi-tab workflow you are describing. The history for going “back” (with u) is associated with a browser tab, and other navigation options like Latest are references to preset views.I also open many many browser tabs (and windows with groups of tabs ). Discourse saves some state in tabs that is eventually consistent across sessions (for example, if you open the same draft post in two windows they may have conflicting versions) but generally tolerates this workflow.If I’m working my way through a starting page that has a lot of links, I usually move the starting page to a new browser window and then Cmd+Click links to open further links in new tabs. As I work through my tab reading list, I close each tab when done until the surviving tab is my starting page.I often end up going back to the Latest view so I can see recent discussion activity. I also use the “Bookmarks” feature to keep track of topics of interest to visit later (with an optional reminder).Hopefully we can work out how to make the top navbar appear consistently and add a Home link (or equivalent name) to return to the forum landing page. More advanced workflow might require some creative use of browser features.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | New Navigation menu! (27 Jan 2021) | 2021-01-27T17:19:12.361Z | New Navigation menu! (27 Jan 2021) | 4,655 |
null | [
"python",
"production",
"motor-driver"
] | [
{
"code": "",
"text": "We are pleased to announce the 2.3.1 release of Motor - MongoDB’s Asynchronous Python Driver. This release fixes a couple of bugs related to Change Streams.See the changelog for a high-level summary of what is in this release or see the Motor 2.3.1 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Motor 2.3.1 Released | 2021-02-04T01:23:58.481Z | Motor 2.3.1 Released | 2,894 |
null | [
"python",
"production"
] | [
{
"code": "writeConcernError",
"text": "We are pleased to announce the 3.11.3 release of PyMongo - MongoDB’s Python Driver. This release fixes a bug that prevented PyMongo from retrying writes after writeConcernErrors on MongoDB 4.4+.See the changelog for a high-level summary of what is in this release or see the PyMongo 3.11.3 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "",
"username": "system"
}
] | PyMongo 3.11.3 Released | 2021-02-04T01:17:39.930Z | PyMongo 3.11.3 Released | 2,230 |
null | [] | [
{
"code": "Task 1 - {Id: XXXX, person: AAA, cycle: 1, date: 2021-02-03T13: 57: 25.590 + 00: 00}\nTask 2 - {Id: XXXX, person: AAA, cycle: 1, date: 2021-02-03T13: 58: 25.590 + 00: 00}\nTask 3 - {Id: XXXX, person: AAA, cycle: 1, date: 2021-02-03T14: 58: 25.590 + 00: 00}\nTask 4 - {Id: XXXX, person: AAA, cycle: 2, date: 2021-02-03T13: 57: 25.590 + 00: 00}\nTask 5 - {Id: XXXX, person: AAA, cycle: 2, date: 2021-02-03T14: 57: 25.590 + 00: 00}\n",
"text": "I need to perform an aggregation with time, using mongodb go driver + golangI have the following document: Task {_id, person, cycle, date}.\nI need to know the number of tasks per cycle in the hour interval.\nEx:I need the result like this:\ncycle 1 - between 13 and 14 - two tasks\ncycle 1 - between 14 and 15 - a task\ncycle 2 - between 13 and 14 - a task\ncycle 2 - between 14 and 15 - a taskCan you help me?",
"username": "Diego_Lopes"
},
{
"code": "stageProject := bson.D{\n\n {\"$project\", bson.D{\n\n {\"projHour\", bson.D{\n\n {\"$hour\", \"$date\"},\n\n }},\n\n }},\n\n }\n\n groupStage := bson.D{\n\n {\"$group\", bson.D{\n\n {\"_id\", bson.D{\n\n {\"hour\", \"$projHour\"},\n\n {\"total_tasks\", bson.D{\n\n {\"$sum\", 1},\n\n }},\n\n }},\n\n \n\n }},\n\n }\n\n opts := options.Aggregate().SetMaxTime(2 * time.Second)\n\n cursor, err := taskCollection.Aggregate(context.TODO(), mongo.Pipeline{stageProject,groupStage}, opts)",
"text": "I managed to do the aggregation per hour.\nNow what is missing is to group by cyclefollow as it was",
"username": "Diego_Lopes"
}
] | Aggregation with time - Mongodb go driver + golang | 2021-02-03T20:04:12.864Z | Aggregation with time - Mongodb go driver + golang | 2,878 |
null | [
"atlas-device-sync",
"react-native",
"typescript"
] | [
{
"code": "addItemTask is not a function",
"text": "For the past 6 months i’ve being trying to implement Realm on my react native app with typescript. I followed tutorials and read the documentation multiple times, i even cloned there original tutorial to try it out and to my surprise the app didn’t work, it throw addItemTask is not a function.I’ve tried stack-overflow for help and no one came to the rescue. I tried github yet no one help me there. I’ve given up on trying to integrate my app using realm or mongo technologies as the community and support is scares.Now i don’t know what to do as my app requires an offline database and sync capabilities.if there’s anyone out the who can help understand realm with react native please help me, i’m out of options and my anxiety levels have went up 100%.",
"username": "Tony_Ngomana"
},
{
"code": "",
"text": "Sure, we’d be happy to help - what is your problem/error message that you’re seeing and what are you trying to do?",
"username": "nirinchev"
},
{
"code": " ERROR: Connection[1]: Websocket: Expected HTTP response 101 Switching Protocols, but received: HTTP/1.1 401 Unauthorized cache-control: no-cache, no-store, must-revalidate connection: close content-length: 190 content-type: application/json date: Mon, 01 Feb 2021 17:06:02 GMT server: envoy vary: Origin x-envoy-max-retries: 0 x-frame-options: DENY",
"text": "I’m trying to add/save inventory items on atlas whenever the user has an internet connection, however if the user is offline i want the data to be stored on the phone and synced to realm whenever the user goes online.i created my getRealmApp.ts as follows:\nand wrote my schema file as follows:\nand in my component i imported the realm app like this:const app = getRealmApp()and the fuction below was to test the login and to try and save those items on the database:\nScreenshot 2021-02-01 at 19.03.02520×629 47.1 KBwhen i run the save function i get:\n ERROR: Connection[1]: Websocket: Expected HTTP response 101 Switching Protocols, but received: HTTP/1.1 401 Unauthorized cache-control: no-cache, no-store, must-revalidate connection: close content-length: 190 content-type: application/json date: Mon, 01 Feb 2021 17:06:02 GMT server: envoy vary: Origin x-envoy-max-retries: 0 x-frame-options: DENYand the other error isError opening realm: {}and now when i anonymously i can not find the user createdplease feel free to let me know if i’m implement things wrongfully",
"username": "Tony_Ngomana"
},
{
"code": "",
"text": "Interesting… The code you’ve posted seems correct - can you check if there are any error logs on the server? Those might shed more light as to why the user is considered unauthorized. Additionally - do you have any access rules defined on the server? It could be that the user doesn’t have permission to access that partition.",
"username": "nirinchev"
},
{
"code": "Error opening realm: {} Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = falseERROR: Connection[1]: Websocket: Expected HTTP response 101 Switching Protocols, but received: HTTP/1.1 401 Unauthorized cache-control: no-cache, no-store, must-revalidate connection: close content-length: 190 content-type: application/json date: Wed, 03 Feb 2021 21:59:57 GMT server: envoy vary: Origin x-envoy-max-retries: 0 x-frame-options: DENY Connection[1]: Connection closed due to erroronSaveItem",
"text": "I’ve enable the this rules for sync\nScreenshot 2021-02-03 at 14.07.001112×503 19.4 KBI was trying to see what was happening when i opened realm, I got this:\nError opening realm: {}and\n Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = falseand finally:\nERROR: Connection[1]: Websocket: Expected HTTP response 101 Switching Protocols, but received: HTTP/1.1 401 Unauthorized cache-control: no-cache, no-store, must-revalidate connection: close content-length: 190 content-type: application/json date: Wed, 03 Feb 2021 21:59:57 GMT server: envoy vary: Origin x-envoy-max-retries: 0 x-frame-options: DENY Connection[1]: Connection closed due to errorand all of this is coming from the onSaveItem function:\n",
"username": "Tony_Ngomana"
}
] | Problems with mongo db realm | 2021-01-31T21:54:08.398Z | Problems with mongo db realm | 4,800 |
null | [
"java",
"spring-data-odm"
] | [
{
"code": "",
"text": "Hi. Couldn’t solve the issue with CodecConfigurationExcpetion throwing \nThe issue is described here org.bson.codecs.configuration.CodecConfigurationException: The uuidRepresentation has not been specified, so the UUID cannot be encoded. · Issue #3546 · spring-projects/spring-data-mongodb · GitHub\nIt seems like the uuidRepresentation setting is not picked up by the registry.Driver version 4.1.1.\nSpring Mongo Data version 3.1.3\nSpring Boot version 2.4.0",
"username": "Max_Doroshenko"
},
{
"code": " MongoClientSettings clientSettings = MongoClientSettings.builder().uuidRepresentation(UuidRepresentation.STANDARD).build();\n CodecRegistry codecRegistry = MongoClients.create(clientSettings).getDatabase(\"test\").getCodecRegistry();\n\n Document document = new Document(\"id\", UUID.randomUUID());\n\n System.out.println(document.toJson(codecRegistry.get(Document.class)));\n{\"id\": {\"$binary\": {\"base64\": \"Er5BaasHQaaYOdEyv9wphg==\", \"subType\": \"04\"}}}\n",
"text": "Hi there.I tried the workaround suggested by Christoph and it produced the expected result:The output is:Are you seeing something different, or are you looking for a more elegant solution?Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Hi, Jeffrey! Thanks for the quick response \nThe problem is that I want to customize UuidCodec to not use default UuidRepresentation.UNSPECIFIED value but use UuidRepresentation.STANDARD or UuidRepresentation.JAVA_LEGACY…\nI found that the reason is that my application is using Document#toJson method while trying to convert the value to POJO. And this method invocation causes this error \nIf there is a default value for default DocumentCodec() which is used by toJson(jsonWritter) method I would be grateful if there are some ways to customize it.",
"username": "Max_Doroshenko"
},
{
"code": "",
"text": "I’m not sure I understand. The code I included above is the most straightforward way to customize the behavior to use STANDARD UuidRepresentation.Perhaps if you provide a code sample indicating how you would like it to work, we can go from there.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "protected <T> List<T> executeAggregation( final Aggregation aggregation, final Class collectionType, final Class<T> clazz ) {\n\n def collectionName = MongoCollectionUtils.getPreferredCollectionName( collectionType )\n def options = new AggregationOptions( true, false, null )\n def updatedAggregation = aggregation.withOptions( options )\n def results = theMongoOperations.aggregate( updatedAggregation, collectionName, clazz )\n results.mappedResults\n}\ngroup( USER_ID, KNOWN, LEARNING ).sum( TIME ).as( TIME )",
"text": "Thanks, Jeff. I appreciate this. Your solution is good for me, I will use it in unit tests. Also, I have the next code to aggregate results:When I tried to execute aggregation with default grouping like group( USER_ID, KNOWN, LEARNING ).sum( TIME ).as( TIME ) my integration tests is blowing up with the next exception:org.springframework.core.convert.ConversionFailedException: Failed to convert from type [org.bson.Document] to type [java.lang.String] for value ‘Document{{user-id=d340a0eb-647d-4187-9a69-134579ec2539, known=SPANISH, learning=ENGLISH}}’; nested exception is org.bson.codecs.configuration.CodecConfigurationException: The uuidRepresentation has not been specified, so the UUID cannot be encoded.\nCaused by: org.bson.codecs.configuration.CodecConfigurationException: The uuidRepresentation has not been specified, so the UUID cannot be encoded.\nat org.bson.codecs.UuidCodec.encode(UuidCodec.java:72)\nat org.bson.codecs.UuidCodec.encode(UuidCodec.java:37)\nat org.bson.codecs.EncoderContext.encodeWithChildContext(EncoderContext.java:91)\nat org.bson.codecs.DocumentCodec.writeValue(DocumentCodec.java:203)\nat org.bson.codecs.DocumentCodec.writeMap(DocumentCodec.java:217)\nat org.bson.codecs.DocumentCodec.encode(DocumentCodec.java:159)\nat org.bson.codecs.DocumentCodec.encode(DocumentCodec.java:46)\nat org.bson.Document.toJson(Document.java:440)\nat org.bson.Document.toJson(Document.java:413)\nat org.bson.Document.toJson(Document.java:400)\nat org.springframework.data.mongodb.core.convert.MongoConverters$DocumentToStringConverter.convert(MongoConverters.java:241)\nat org.springframework.data.mongodb.core.convert.MongoConverters$DocumentToStringConverter.convert(MongoConverters.java:234)\nat org.springframework.core.convert.support.GenericConversionService$ConverterAdapter.convert(GenericConversionService.java:386)\nat org.springframework.core.convert.support.ConversionUtils.invokeConverter(ConversionUtils.java:41)\n… 41 moreIt seems like when document is trying to be converted from string using the next lines inside:Codec codec = registry.get(value.getClass());\nencoderContext.encodeWithChildContext(codec, writer, value);The UuidCodec returned with UNSPECIFIED UuidRepresentation value…",
"username": "Max_Doroshenko"
},
{
"code": "",
"text": "Eventually, I found the cause… So the problem is that in Spring Data 3.x when I use group like:Aggregation.group( FieldOne, FieldTwo, FieldThree )I will have as result the next aggregation:Document { _id: Document { [FieldOne: X, FieldTwo: Y, FieldThree: Z] }, FieldOne: X, FieldTwo: Y, FieldThree: Z }I have CodecConfigurationExcpetion because _id: [FieldOne: X, FieldTwo: Y, FieldThree: Z] can not be cast to UUID…Composite ids from Aggregation.Group() are no longer mapped to their respective fields. Jeff, could you please provide some info on how to deal with such a case, and is there any way to reverse/configure this?",
"username": "Max_Doroshenko"
}
] | The uuidRepresentation has not been specified, so the UUID cannot be encoded | 2021-02-01T19:28:26.554Z | The uuidRepresentation has not been specified, so the UUID cannot be encoded | 20,617 |
null | [] | [
{
"code": "createWindow()app.whenReady()Failed to open realm: Error: illegal operation on a directory\n at run (renderer.js:26)\n at renderer.js:39\nconst realm = new Realm(config);\nmain.jspath.join(app.getPath(\"home\"),\"my.realm\")\nrenderer.js",
"text": "This is crossposted from this StackOverflow post.I’ve started the Realm-provided realm-electron-advanced-quickstart found here.Everything works fine when running in dev, but when I package this with electron-builder, the app does not work. I moved createWindow() to the top of the app.whenReady() function so I could see the debugger, and it showed the following message:Line 26 isI suspect this is some sort of path/permissions issue with the .realm file.In main.js , I changed the path in the realm config toBut I could not do so on renderer.js .I’m running Node v14.15.4.I would appreciate any ideas or examples of how this has been implemented before. I could not find any official guidance on the support docs about how .realm files should be handled in packaged apps. It would be great if the readme on realm-electron-advanced-quickstart explained best practices for packaging the app.Thanks!",
"username": "samaa"
},
{
"code": "userDatamongodb-realmrealm-object-server",
"text": "Hi Sam. Welcome to the forum! Just wanted to make sure you know that Realm Studio is an open source electron app using the Realm JS SDK: GitHub - realm/realm-studio: Realm Studio … perhaps you can find some inspiration in that.More specifically you might be interested in the code that changes the current working directory of the renderer process to be a sub-directory of the userData directory: realm-studio/process-directories.ts at channel/major-13 · realm/realm-studio · GitHub … Realm Studio does this, since opening a synced Realm will create a directory named mongodb-realm (or realm-object-server pre v10 of the SDK) relative to the current working directory, containing metadata about the currently authenticated users.",
"username": "kraenhansen"
},
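A minimal sketch of the approach described above (directory names are illustrative, not from the Realm Studio source): point the process's working directory at a writable userData sub-folder before any Realm is opened, so relative paths such as the mongodb-realm metadata directory no longer land in the read-only app bundle.

```js
const fs = require("fs");
const path = require("path");
const { app } = require("electron");

// Call this early, before opening any Realm. Use a different sub-folder
// (e.g. "renderer") when running in the renderer process.
function useWritableWorkingDirectory(subdir) {
  const target = path.join(app.getPath("userData"), subdir);
  fs.mkdirSync(target, { recursive: true });
  process.chdir(target);
}

app.whenReady().then(() => useWritableWorkingDirectory("main"));
```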
{
"code": "",
"text": "Hi Kræn,Thanks a bunch for that resource. I changed the working directory to userData, and that solved the problem.I noticed that in the Realm Studio example, the main process and renderer processes each use a separate directory, and thus use separate .realm files. Does this approach have any advantages over having the main and renderer share a working directory and .realm file? It seems like it would be inefficient to have the same data stored in two places.",
"username": "samaa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Not Working with Packaged Electron App | 2021-02-02T19:10:54.632Z | Realm Not Working with Packaged Electron App | 4,351 |
null | [
"node-js"
] | [
{
"code": "const url = process.env.MONGO_URL;\nconst mongoClient = await MongoClient.connect(url);\nMONGO_URL=\"mongodb://_:<apiKey>@realm.mongodb.com:27020/?authMechanism=PLAIN&authSource=%24external&ssl=true&appName=<realm-app-id>:mongodb-atlas:api-key\"\nMongoServerSelectionError: failed to handle command request \"ismaster\": error processing client metadata: expected app name to be composed of: <appID>:<svc>:<authProvider>, got [object Object]appNamemongo",
"text": "The JS code:The .env file:These codes cause the following error:MongoServerSelectionError: failed to handle command request \"ismaster\": error processing client metadata: expected app name to be composed of: <appID>:<svc>:<authProvider>, got [object Object]I think the error shows something wrong with the appName parameter.I first thought it was a Realm problem, but the connection string properly works in CLI (i.e. the mongo command), so I’m now thinking this is related to JS Driver. (FYI: I’m using ver 3.6.2)Is this an expected behavior or a bug?",
"username": "Toshi"
},
{
"code": "",
"text": "Hi @Toshi,The Node JS driver with latest versions should work with Realm wire protocols:https://docs.mongodb.com/realm/mongodb/connect-over-the-wire-protocol/#compatible-clientsDo you see any server side logging for this attempts?Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "# in empty directory\nnpm init -y\nnpm install mongodb\ntouch index.js\nconst MONGO_URL = \"URL COMES HERE\"\nconst MongoClient = require('mongodb');\n\nconst main = async () => {\n await MongoClient.connect(MONGO_URL);\n console.log('connected!');\n}\n\nmain().catch(err => {\n console.error(err);\n});\nmongodb+srv://<username>:<password>@cluster0.xxxx.mongodb.net/....expected <appID>:<svc>:<authProvider>, got [object Object]",
"text": "Hi Pavel, thanks for the response. Firstly, I can’t find any logs recorded in the Realm UI unfortunately.Actually, you can reproduce the problem in a couple of minutes:index.js:If you use a normal Atlas url, it works properly. (i.e. mongodb+srv://<username>:<password>@cluster0.xxxx.mongodb.net/....)On the other hand, if you use a Realm url, you’ll see the\n“expected <appID>:<svc>:<authProvider>, got [object Object]” error.I’d appreciate it if you could check.",
"username": "Toshi"
},
{
"code": "appName=application-0-xxxxx:mongodb-atlas:custom-token\nexpected ... <appID>:<svc>:<authProvider>, got [object Object]appName=application-0-xxxxx%3Amongodb-atlas%3Acustom-token\nexpected ... <appID>:<svc>:<authProvider>, got [object Object]appName=application-0-xxxxx\n[object Object]expected ... <appID>:<svc>:<authProvider>, got application-0-xxxxxmongo",
"text": "UPDATE:\nI tried several more attempts with some guesswork (still not working).\nI guess there is possibly something wrong with dealing with colons.The connection string which haswill produce this error:\nexpected ... <appID>:<svc>:<authProvider>, got [object Object]will produce the same error:\nexpected ... <appID>:<svc>:<authProvider>, got [object Object]This time the param was not converted to an [object Object].\nThe error message says it got the string instead:\nexpected ... <appID>:<svc>:<authProvider>, got application-0-xxxxxAgain, the connection string correctly worked with CLI (the mongo command). It seems it is not working only with Node.js SDK.",
"username": "Toshi"
},
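A hedged workaround sketch (untested against Realm's wire protocol; placeholders throughout): keep the colon-separated value out of the query string and pass it through the 3.x driver's appname option instead. Whether that path avoids the server-side parsing shown above is an assumption to verify.

```js
const { MongoClient } = require('mongodb');

const url = 'mongodb://_:<apiKey>@realm.mongodb.com:27020/' +
  '?authMechanism=PLAIN&authSource=%24external&ssl=true';

async function main() {
  const client = await MongoClient.connect(url, {
    appname: '<realm-app-id>:mongodb-atlas:api-key', // driver option, not a URI param
    useNewUrlParser: true,
    useUnifiedTopology: true,
  });
  console.log('connected!');
  await client.close();
}

main().catch(console.error);
```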
{
"code": "",
"text": "Is there any solution to this? We are working on migration process that will happen on our servers using old Realm GraphQL and new MongoDB using mongoose.The problem is that when we connect directly to the cluster, changes are not propagated to the realm so I guess we need to connect as this special mongodb realm url? However, it does not work in Node ",
"username": "Bartosz_Hernas"
},
{
"code": "",
"text": "Has there been any resolution to this. I am encountering this error now.",
"username": "Sean_Ainsley"
}
] | Does JS Driver Support `appName` in Connection String? | 2020-10-01T01:38:55.500Z | Does JS Driver Support `appName` in Connection String? | 4,536 |
null | [
"app-services-user-auth",
"realm-web"
] | [
{
"code": "realm-webrealm-webapiKey// client\nconst app = new Realm.App({ id: '<ID>' });\nawait app.logIn(Realm.Credentials.emailPassword('[email protected]', 'password'));\nawait app.currentUser?.apiKeys.create('testKey');\nawait app.currentUser?.apiKeys.enable('testKey');\nconst apiKey = await app.currentUser?.apiKeys.fetch('testKey');\n// pass apiKey to the server?\n// server\nconst clientKey = getKeyFromClient();\nawait app.logIn(Realm.Credentials.apiKey(clientKey));\n // invalid API key (status 401)\n",
"text": "I would like to use realm-web in the browser to simplify authentication. I also would like to use realm-web on the server to retrieve mongodb data and hide connection and query details.\nWhat is the best way to pass Realm App User credentials from browser to server?\nI am trying to experiment with apiKey, but I am not sure how to create an apiKey on the client:and then pass it to the server:What is the right way to do that?",
"username": "rkazakov"
},
{
"code": "keykeyapiKeys.create(...)key",
"text": "Hi again Ruslan,Taken from the (Realm JS) documentation:Store the API Key ValueThe SDK only returns the value of the user API key when you create it. Make sure to store the key value securely so that you can use it to log in.If you lose or do not store the key value there is no way to recover it. You will need to create a new user API key.So, you need to store (and send to the server) the result of call to apiKeys.create(...) (more specifically the key property on the object returned).With this change it looks like your code and approach would work. Please let us know how that goes.",
"username": "kraenhansen"
},
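A condensed sketch of that advice (credential values are placeholders; run inside an async function). The key secret is only returned at creation time, which is grounded in the documentation quoted above.

```js
// Browser: create the key once and transfer `key` to the server securely.
const app = new Realm.App({ id: '<ID>' });
await app.logIn(Realm.Credentials.emailPassword('user@example.com', 'password'));
const { key } = await app.currentUser.apiKeys.create('serverKey'); // only returned here

// Server: log in with the transferred secret.
await app.logIn(Realm.Credentials.apiKey(key));
```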
{
"code": "",
"text": "Thank you, Kræn!Going to try this approach.Regards,Ruslan",
"username": "rkazakov"
},
{
"code": "UsermockedServerStorageserverSideApp",
"text": "Hi again. I’m sorry that I didn’t realise this before posting my reply, but it turns out the User constructor actually takes the arguments needed to construct and use a user from credentials transferred from a client to a server.I’ve put together a small CodeSandbox to show this: realm-web-transfer-user-credentials - CodeSandbox (admittedly it gets a little harry around creation of the mockedServerStorage and serverSideApp, but I need this because two pieces of code is running the the same browser and will be simpler in your implementation)Hope this helps.",
"username": "kraenhansen"
},
{
"code": "",
"text": "This is amazing, @kraenhansen!It’s just what we wanted to achieve!Thanks for your help!Ruslan",
"username": "rkazakov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is the best way to pass Realm App User credentials from browser to server? | 2021-01-28T13:33:24.885Z | What is the best way to pass Realm App User credentials from browser to server? | 2,901 |
null | [
"data-modeling",
"java",
"performance"
] | [
{
"code": "",
"text": "Hey, I don’t want to deal with duplicated/not updated data. Therefore, I’d like to lock the document if there are any update query. Thus, if we try to get document while on updating, we have to wait until the update has done.How can I solve this problem?(I’ve done some search but couldn’t find any useful information.)",
"username": "Duck"
},
{
"code": "",
"text": "@Duck,See my comment here: Update embedded document - #2 by Pavel_DuchovnyLet me know if that helps.Thanks.\nPavel",
"username": "Pavel_Duchovny"
}
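For reference, a minimal optimistic-locking sketch in Java (an illustration under stated assumptions, not code from the linked comment): it presumes a MongoCollection<Document> named collection, an id in scope, and a numeric version field on each document. Readers never block; a writer only succeeds if the document is unchanged since it was read.

```java
import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.combine;
import static com.mongodb.client.model.Updates.inc;
import static com.mongodb.client.model.Updates.set;

import com.mongodb.client.result.UpdateResult;
import org.bson.Document;

// Read the current state, remembering its version.
Document current = collection.find(eq("_id", id)).first();
long version = current.getLong("version");

// The update only matches if nobody has bumped the version in between.
UpdateResult result = collection.updateOne(
        and(eq("_id", id), eq("version", version)),
        combine(set("status", "UPDATED"), inc("version", 1L)));

if (result.getModifiedCount() == 0) {
    // Another writer won the race: re-read the document and retry.
}
```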
] | Locking Document | 2021-01-29T17:26:44.036Z | Locking Document | 2,115 |
null | [] | [
{
"code": "",
"text": "Hi,\nI have written a timed program that requires Adverts in between screens in the program.There are a few parts to setup the scenario.Then when the user requests a set of adverts (To be done in ms and up to 35 adverts) the DB needs to check for adverts that suit the criteria from multiple advertisers (We do not want repeats from he same advertiser) and send the data of the adverts to be displayed to my system.Is this something that can be done with MongoDB and is MongoDB the most suitable ?Thank You",
"username": "Peter_Lewis"
},
{
"code": "",
"text": "Yes, MongoDB is suitable for this. You need to define your data models appropriately and maybe have a rank algorithm to see the best suited ads for each user. If you need more ranks, try Atlas Search also.",
"username": "shrey_batra"
},
{
"code": "",
"text": "Hi @Peter_Lewis,Welcome to MongoDB community.This is more than possible with MongoDB.MongoDB allows you to store your flexible data in documents and also allow:MongoDB is definitely a database for such modern apps.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
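A hedged sketch of the advert-selection query (collection and field names are assumptions for illustration): return at most one advert per advertiser, up to 35 in total.

```js
// Inside an async function, with `db` already connected:
const ads = await db.collection('adverts').aggregate([
  { $match: { active: true /* plus the requester's targeting criteria */ } },
  // keep one advert per advertiser, so no repeats from the same advertiser
  { $group: { _id: '$advertiserId', ad: { $first: '$$ROOT' } } },
  { $replaceRoot: { newRoot: '$ad' } },
  { $sample: { size: 35 } },
]).toArray();
```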
] | Newie - Before I learn MongoDB Need to know if it is suitable for my application | 2021-02-02T00:37:32.800Z | Newie - Before I learn MongoDB Need to know if it is suitable for my application | 1,832 |
null | [
"aggregation",
"node-js"
] | [
{
"code": "Aggregate {\n _pipeline: [ { '$group': [Object] } ],\n _model: Model { Kids },\n options: {} }\nconst provinces = Kids.aggregate([\n\t\t{\n\t\t\t$group: {\n\t\t\t\t_id: {\n\t\t\t\t\tprovince: '$province',\n\t\t\t\t},\n\t\t\t\tcount: {\n\t\t\t\t\t$sum: 1,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t]);\n",
"text": "Hi there,Im hoping for some assitance, I have been trying to run and aggregate on my data but im not sure what to do with the result as it returns like so:I’ve tried to toArray() and forEach it but it says the variable I stored it in is not a function. I’m trying to graph the results. My code below:",
"username": "Emil_C"
},
{
"code": "const groupStage = \n { \n $group: {\n _id: {\n province: '$province',\n },\n count: {\n $sum: 1,\n },\n }\n };\n\ncollectionl.aggregate( [groupStage] ).toArray( ( err, docs ) => {\n assert.equal(err, null);\n console.log(\"Aggregation output:\");\n console.log( JSON.stringify( docs ) );\n callback(docs);\n});\ndocs",
"text": "Hello @Emil_C, welcome to the MongoDB Community forum.With NodeJS driver you can try this aggregation query:The variable docs is an array with the aggregation output.Also, refer the:",
"username": "Prasad_Saya"
}
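Since the original snippet uses a Mongoose model, the likely root cause is worth a sketch of its own: Model.aggregate() returns a lazy Aggregate object, so it must be awaited (or have .exec() called) before the results can be iterated.

```js
// Inside an async function:
const provinces = await Kids.aggregate([
  { $group: { _id: { province: '$province' }, count: { $sum: 1 } } },
]);
provinces.forEach((p) => console.log(p._id.province, p.count));
```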
] | Aggregate giving undefined object | 2021-01-28T20:46:18.146Z | Aggregate giving undefined object | 4,281 |
null | [] | [
{
"code": "",
"text": "There are millions of documents in json, some of which are over 16 MB.\nAnd I want to import this json.However, large documents(over 16MB) will not be imported.Is there a way to import those documents?\nIs there only way to use “mongofiles”?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "If you are importing JSONs/documents over 16Mb, the only option is to use GridFS and mongofiles. If you have these big JSONs because of media content like images and all, you can reshape your data to store these media on some other service like S3/ object based storage, and then put the reference on those media in these JSONs, bringing it very low in size.If you have large JSONs because of too much array data in your JSONs, you need to model your documents properly as arrays can grow out of bound indefinite and that is not a good thing.Can you please mention why these JSONs are so big for you? What does it contain?",
"username": "shrey_batra"
},
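A minimal Node.js sketch of the GridFS route (database, bucket, and file names are illustrative assumptions):

```js
const fs = require('fs');
const { MongoClient, GridFSBucket } = require('mongodb');

async function storeBigDoc(uri) {
  const client = await MongoClient.connect(uri);
  const bucket = new GridFSBucket(client.db('metadata'), { bucketName: 'bigDocs' });
  // Stream the oversized JSON file into GridFS, which chunks it for us.
  fs.createReadStream('factory-metadata.json')
    .pipe(bucket.openUploadStream('factory-metadata.json'))
    .on('finish', () => console.log('stored'))
    .on('error', console.error);
}
```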
{
"code": "",
"text": "OMG… I’ll have to model.\nOK, Thank you. They are just factory generated metadata.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Importing JSON with few large documents | 2021-02-03T04:15:31.096Z | Importing JSON with few large documents | 3,609 |
null | [
"aggregation",
"queries"
] | [
{
"code": "> db.datapoints.aggregate([\n { \"$geoNear\": {\n \"near\": {\n \"type\": \"Point\",\n \"coordinates\": [-76.79491878920412, 39.101543051442995]\n },\n \"distanceField\": \"distance\",\n \"query\": { \"locationType\": \"circle\" }\n }},\n { \"$redact\": {\n \"$cond\": {\n \"if\": { \"$gte\": [ \"$distance\", \"$radius\" ] },\n \"then\": \"$$PRUNE\",\n \"else\": \"$$KEEP\"\n }\n }}\n])\n> db.datapoints.aggregate([\n {$match : {\n \"geometry\": {\n $geoIntersects: {\n $geometry:{\n \"type\" : \"Point\",\n \"coordinates\" : [ 6.1460945755351775, 49.42089640678415]\n }\n }\n }}\n },\n {$project : {\n name : \"$name\"}\n ])\n",
"text": "Hello everyone,I am working with geolocation data and I need to determine if these data are located in a geofence defined in a collection or not.To explain, there can be two types of geofence:In order to be able to see if a point was located in a geofence I didI am now thinking about combining those 2 aggregations in order to have with only one aggregation for a given point if it is located in the circular or polygon geofence or in unknown geofence if no result returned.Do you have any idea how can I do that? Because for now I am using both queries but I am wondering if it can be done with one aggregation pipeline.Kind regards,Elisa",
"username": "Elisa_Scheer"
},
{
"code": "",
"text": "I guess you can use “similar” aggregation, not exactly the same. You can use $geoWithin which might be much better aggregations than the ones you have used.With this stage, you can define your fence either by using $center or $polygonSo you only need to have the difference on polygon and it’s parameters between the two queries. Hope this helps you.",
"username": "shrey_batra"
}
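One hedged way to combine both checks in a single pipeline, following the field names in the question (requires $unionWith, i.e. MongoDB 4.4+; the 'polygon' locationType value is an assumption, since the original only shows 'circle'):

```js
const point = { type: 'Point', coordinates: [6.146, 49.4209] };

db.datapoints.aggregate([
  // circle fences: distance from the centre must be within the stored radius
  { $geoNear: { near: point, distanceField: 'distance', query: { locationType: 'circle' } } },
  { $match: { $expr: { $lte: ['$distance', '$radius'] } } },
  // polygon fences: the stored geometry must contain the point
  { $unionWith: {
      coll: 'datapoints',
      pipeline: [
        { $match: { locationType: 'polygon', geometry: { $geoIntersects: { $geometry: point } } } },
      ],
  } },
]); // an empty result means the point is in no known geofence
```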
] | Working with Geofence | 2021-02-01T13:28:28.541Z | Working with Geofence | 2,596 |
null | [
"aggregation",
"crud"
] | [
{
"code": "",
"text": "I want to run an aggregation pipeline, use the result as filter to find a document and update it.My schema has three fields : name, shiftWorked, eligible.Please pardon my ignorance, i am super new to mongoDB, but I think I’ll need two aggregations, first will filter the employees based number of shifts worked and second will filter it further to makes sure employee has not worked more than 5 shifts in last three days.\nIf i can get these name in a query, I can use findOneAndUpdate to pick one such employee and mark him as eligible.Just wanted to know what is the best way to go about this?Cheers!",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "{\n \"name\": \"user 1\",\n \"shiftWorked\": ISODate(\"2020-01-01\"), // some date field.\n \"eligible\": false\n}\n[\n {\n \"$group\": {\n \"_id\": \"name\", // grouping condition.\n \"counts\": {\n \"total\": {\n \"$sum\": 1\n },\n \"last5days\": {\n \"$sum\": {\n \"$cond\": [\n {\"$gte\": [\"$shiftWorked\", \"{{3 day's before date, pre calculate this in code and insert.}}\"]}, // shift worked should be greater than equal to 3 day's old date.\n 1, // true case.\n 0 // false case.\n ]\n }\n }\n }\n }\n },\n {\n \"$match\": {\n \"counts.total\": {\n \"$gte\": 10\n },\n \"counts.last5days\": {\n \"$gte\": 5\n }\n }\n },\n {\n // limiting to first document. (no sort order)\n \"$limit\": 1\n },\n { //updating it back into collection. (MongoDB v4.4 minimum if same collection.)\n \"$merge\": {\n \"into\": \"collectionName\",\n \"on\": \"_id\",\n \"whenMatched\": \"merge\",\n // customize this stage.\n }\n }\n]\n",
"text": "You can get the results of employees in a single aggregation. To find count, you can checkout $count or $group with $sum:1 (will be given in examples of $group). The first stage is to find the counts of shifts of each user, (1 for total count, 1 for last 5 days count). You can use conditions in count when doing $group. Checkout $sum and $cond and use them together in $group.Using this, you can find the employees matching point 1 and 2. Then the next stage in the same aggregation you can limit the result to first document matching the count. use $match to filter the count threshold, then $limit to limit to 1 document.At last you can use $out to update that particular employee in the collection to mark him eligible.\nIn total you have 1 aggregation only. Sample -Assuming the data model to be -Aggregation query could look like -",
"username": "shrey_batra"
},
{
"code": "",
"text": "Thanks maestro!! ",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "Hey Shrey,Are aggregation pipelines atomic? In the sense that, while we are executing this pipeline can there be an update in the background which makes the selected employee ineligible? Reason I am asking this is because, merge with same collection cannot be used with transactions (which I just checked after reading your answer).",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "Yes, an aggregation using $merge cannot be used inside a transaction. To do that, you might skip the $merge operation, get the result back in application code and do an update operation.",
"username": "shrey_batra"
}
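A sketch of that transactional alternative in Node.js (the pipeline, client, and collection names are placeholders): drop the $merge stage, bring the candidate back to the application, and apply the update in the same transaction.

```js
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    const [candidate] = await employees
      .aggregate(eligibilityPipeline, { session }) // the pipeline without $merge
      .toArray();
    if (candidate) {
      await employees.updateOne(
        { _id: candidate._id },
        { $set: { eligible: true } },
        { session },
      );
    }
  });
} finally {
  await session.endSession();
}
```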
] | Using result of aggregation pipeline to update | 2021-02-03T07:48:46.849Z | Using result of aggregation pipeline to update | 4,203 |
null | [
"node-js",
"connecting"
] | [
{
"code": " var db;\n MongoClient.connect(connectionString, (err, database) => {\n if (err) return console.log(\"Connect error: \", err);\n db = database;\n app.listen(port, () => {\n console.log(\"listening on \" + port);\n });\n });\n",
"text": "Hi,I’m having problems connecting from AppEngine Free Tier to Atlas Free Tier, using node.js. I’m setting access to 0.0.0.0/0 as this is just a test project.First up, is it even possible? It works fine on my local node server, but uploading to GAE I get the following error:MongoNetworkError: failed to connect to server [cluster0-shard-00-02.pntaf.mongodb.net:27017] on first connect [MongoNetworkError: connection 3 to cluster0-shard-00-02.pntaf.mongodb.net:27017 closedCode:Thanks for any help!",
"username": "Yezzer"
},
{
"code": "MongoServerSelectionError: connection <monitor> to 35.224.187.131:27017 closed\n at Timeout._onTimeout (/app/node_modules/mongodb/lib/core/sdam/topology.js:438:30)\n at listOnTimeout (internal/timers.js:554:17)\n at processTimers (internal/timers.js:497:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n setName: 'atlas-loe0la-shard-0',\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map {\n 'cluster0-shard-00-00.pntaf.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-01.pntaf.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-02.pntaf.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: 30,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: 9\n }\n}",
"text": "Trying various things, but always unable to connect. I’m now using App Engine Flex, as i’ve read Standard isn’t capable (but no other explanation as to why).",
"username": "Yezzer"
},
{
"code": "",
"text": "Bit of an update: As far as I can tell, it’s not possible to allow traffic on port 27017 on AppEngine?Instead, I set up a Bitnami node.js stack on Compute Engine, set firewall rules for 27017 on Google VPC network, opened the port on the server, and set MongoDB Atlas network access to the server’s IP. This now works.I’d really like to know if I’m wrong about AppEngine and MongoDB Atlas, as I’d much prefer to use it.",
"username": "Yezzer"
}
] | Can't connect from AppEngine to Atlas free | 2021-02-01T21:23:35.584Z | Can’t connect from AppEngine to Atlas free | 2,717 |
[] | [
{
"code": "",
"text": "I got an error, the host was not available when I enter the Mongo cluster URL to the Sciences Cloud Basic settings.Connecting to MongoDB: Basic Settings\nIn the MongoDB settings area, enter the following information:\nServer: Enter IP address of the host where your MongoDB instance is running.image2348×1244 131 KB",
"username": "Ramtharan_Chandrakum"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Ramtharan_Chandrakum!Were you able to find a solution to this issue?Questions about the Sisense platform are probably better handled via Sisense Support Resources, but I would start with their Connecting to MongoDB documentation.If you are connecting to a MongoDB Atlas cluster, there’s a specific section with examples of expected values for settings at the end of the Sisense doc page.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sisense cloud ElastiCube Connecting to MongoDB | 2021-01-28T21:48:29.033Z | Sisense cloud ElastiCube Connecting to MongoDB | 2,303 |
null | [
"change-streams"
] | [
{
"code": "",
"text": "Hello Community,How do you account for data not captured by ChangeStream. May be for whatever reason before the Change Stream is unable captures the oplog data before it gets overwritten. Is there way to validate whatever DML operations currently captured by ChangeStream reflects the true positions the DML operations in its entirety.Your response is greatly appreciated.",
"username": "Samson_Eromonsei"
},
{
"code": "",
"text": "Hi @Samson_Eromonsei,Welcome to MongoDB Community.The change streams events for majority acknowledge data is streamed consistently and in case of a failure or an invalidation may be resumed based on the resume token provided or the last successful cluster time known:All the above is sufficient if the resume point can be located in the Oplog window available on the Primary. Therefore make sure your oplog has a sufficient time to avoid a possibility of unrecoverable gap of events and a hard restart of the change stream.Please let me know if you have any additional questions.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
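A minimal Node.js sketch of resuming from a stored token (loadSavedToken and handle are hypothetical helpers; how you persist the token is up to you):

```js
let lastToken = await loadSavedToken(); // undefined on the first run

const stream = collection.watch([], lastToken ? { resumeAfter: lastToken } : {});
stream.on('change', (change) => {
  lastToken = change._id; // the resume token: persist it alongside your processing
  handle(change);
});
```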
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Accounting for data not capture by ChangeStream | 2021-02-01T23:44:55.675Z | Accounting for data not capture by ChangeStream | 1,814 |
null | [] | [
{
"code": " const aboutSchema = new Schema({\n content:{\n type: String,\n required: true\n }\n }, {timestamps: true})\n",
"text": "Hi everybody,how can I insert data like schema:I can not create timestamp. At the web view in the https://cloud.mongodb.com/ I dont se “createAt” and “updateAt”Thank you",
"username": "Vojtech_Janousek"
},
{
"code": "",
"text": "Hi @Vojtech_Janousek,Are you talking about inserting data via a driver or Data Explorer Web UI?Also are you interested in Schema validation ability or restrictions?Best regards,\nPavel[",
"username": "Pavel_Duchovny"
}
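For reference, a sketch of the Mongoose side: the timestamps come from Mongoose middleware, so the fields only appear when documents are written through the model (not when inserted directly in the Atlas UI).

```js
const mongoose = require('mongoose');

const aboutSchema = new mongoose.Schema(
  { content: { type: String, required: true } },
  { timestamps: true }
);
const About = mongoose.model('About', aboutSchema);

// Inside an async function, after mongoose.connect(...):
const doc = await About.create({ content: 'hello' });
console.log(doc.createdAt, doc.updatedAt); // both set automatically
```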
] | How to insert data like schema | 2021-02-02T19:11:32.302Z | How to insert data like schema | 1,993 |
null | [] | [
{
"code": "",
"text": "HiI’m Bostjan, founder of Space Invoices, we use MongoDB to drive our developer-oriented invoicing API (https://spaceinvoices.com).Looking forward to leveling up our setup and meeting the community!Cheers from Switzerland",
"username": "Bostjan_Pisler"
},
{
"code": "",
"text": "Hello @Bostjan_Pisler, welcome to the MongoDB Community forum!Great to know you are already a user of the MongoDB database. I am just browsing your website (I am curious as I have some background working with Accounts Receivables software).Right at the top of this page there is a menu with links to knowledge and learning resources - documentation, university, posts, videos, etc., and you will find some useful things there.Cheers to you too!",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi Bostjan! Welcome to the forums ",
"username": "Jamie"
}
] | Hi from Slovenian in Switzerland | 2021-02-02T05:01:46.731Z | Hi from Slovenian in Switzerland | 2,182 |
null | [
"queries",
"python"
] | [
{
"code": "db = client.GettingStarted\n\nx = db.warnings.find({\"user\":user.id})\n\nfor info in x:\n await ctx.send(info)\nprint (x)\n",
"text": "I just need some help here because I am so lost…\ni have my code here\nclient = pymongo.MongoClient(“mongodb+srv://[email protected]/Database?retryWrites=true&w=majority”)and whenever i try to find my data stored it gets turned into a cursor object which is fine. But whenever i do the for info in x: no information comes out of it, and I know the data is forsure in there… Is there anything visibally wrong?",
"username": "starlord_N_A"
},
{
"code": "for info in x:\n print(info)\nfor-in",
"text": "Hello @starlord_N_A, welcome to the MongoDB Community forum!In case you are trying to print the find query’s output from the cursor, the following will work:Note that Python programming has indentation rules, and the for-in syntax is as shown in the above code. Also, see this example: Querying for more than one document using PyMongo .",
"username": "Prasad_Saya"
}
] | Issue with reading from the find query cursor | 2021-02-02T19:11:48.344Z | Issue with reading from the find query cursor | 2,010 |
null | [
"python",
"atlas-data-lake"
] | [
{
"code": "{\n\"$out\": {\n \"atlas\": {\n \"clusterName\": \"Cluster\",\n \"db\": \"source\",\n \"coll\": \"collection\"\n}\ndef extract_new_movers_population():\npipeline = extract_transform\nres= collection.aggregate(pipeline=pipeline, allowDiskUse=True)\nprint(res)\npymongo.errors.OperationFailure: If an object is passed to $out it must have exactly 2 fields: \n'db' and 'coll', full error: {'operationTime': Timestamp(1612190585, 1), 'ok': 0.0, \n'errmsg': \"If an object is passed to $out it must have exactly 2 fields: \n'db' and 'coll'\", 'code': 16994, 'codeName': 'Location16994', '$clusterTime': {'clusterTime': \nTimestamp(1612190585, 1), 'signature': {'hash': b')\\xb6\\x15\\xfa\\x03\\x1cv\\x0b\\xa1\\xef\\xa5\\x0c\\x0c\\x0c^\\xe7\\x9d/\\xa2\\x1f', 'keyId': 6922836924319137794}}}\n",
"text": "Hello Everyone,So today I ran into a road block in my development bc I cant seem to find anyone having this issue online.I’m attempting to use mongo datalake to weekly import a csv file to my cluster , about 1MM records.for that i have set up a aggregate pipeline with some transformations and and $out statement at the end, like this:according to documentation herehttps://docs.mongodb.com/datalake/reference/pipeline/outas the last step. the pipeline works great on a mongo client , etc . but by doing it so on python:I get this output:with motor driver gets even worst because the driver swallows the output and never tellsHope I can find a captain",
"username": "luis_carvajal"
},
{
"code": "async for",
"text": "Hi @luis_carvajal.From the error message it appears that you are not connected to an Atlas Data Lake instance via the Python driver, but instead connected directly to a MongoDB cluster (mongod). How are you obtaining the connection string for connecting to the Data Lake? I believe you need to follow the instructions detailed here https://docs.mongodb.com/datalake/tutorial/connect and choose the ‘Connect Your Application’ option to obtain the correct string. After that, please retry your command!On the Motor front - it is possible that you are not seeing an error because you are not iterating the cursor returned by aggregate(). Motor’s cursors are instantiated lazily - i.e. the ‘aggregate’ command is only sent when the application attempts to iterate the cursor. Try iterating the Motor cursor in an async for loop and you should see error.Please let me know if this works!\n-Prashant",
"username": "Prashant_Mital"
}
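A small Motor sketch of that advice (the URI and pipeline are placeholders from the thread, not verified values): iterating the cursor is what actually sends the command and surfaces any server error.

```python
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def run(data_lake_uri: str, pipeline: list) -> None:
    client = AsyncIOMotorClient(data_lake_uri)  # must be the Data Lake connection string
    coll = client["source"]["collection"]
    async for doc in coll.aggregate(pipeline):  # the aggregate (and any error) happens here
        print(doc)

# asyncio.run(run(uri, extract_transform))
```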
] | $out from Datalake to Attlas Cluster with pymongo | 2021-02-01T19:27:16.257Z | $out from Datalake to Attlas Cluster with pymongo | 3,798 |
null | [
"rust",
"alpha"
] | [
{
"code": "tokiotokiotokiomongodb",
"text": "The MongoDB Rust driver team is pleased to announce the v2.0.0-alpha release of the driver, with support for tokio 1.x.This release is an unstable alpha which has the primary goal of introducing official support for tokio 1.x. It currently contains features that are not yet complete, but we wanted to get this release out as soon as possible to enable users to start working with the driver and tokio 1.x. The API is unstable and subject to breakages as we work towards a stable 2.0 release.You can find the source code for the release on Github, and the release is published on https://crates.io under the package name mongodb . If you run into any issues, please file an issue on JIRA.",
"username": "Patrick_Freed"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Rust Driver 2.0.0-alpha released with support for tokio 1.0+ | 2021-02-02T21:22:48.694Z | MongoDB Rust Driver 2.0.0-alpha released with support for tokio 1.0+ | 4,031 |
null | [
"connector-for-bi",
"licensing"
] | [
{
"code": "",
"text": "HelloI am running MongoDB community and want to load data for analysis via Connector.\nLooks like it works with a community, but there no straight answer about license or price for it.\nCould I use it based on community license? Is there an option for purchasing just this tool?\nOr it’s absolutely necessary to purchase Enterprise version?Regards\nRic",
"username": "Ricardas_Kunevicius"
},
{
"code": "",
"text": "Hi,Yes the BI Connector is part of the Enterprise version, however as a community user you can use the BI Connector as an evaluation. Details are in the LICENSE file when you install the BI Connector. Alternatively you can use the BI Connector through MongoDB Atlas. This might be more cost effective way to leverage the connector. With Atlas you also don’t need to worry about spinning up the infrastructure and configuring the connector it is just a checkbox in Atlas.image1612×286 42 KBThanks,\nRob",
"username": "Robert_Walters"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Connector for BI license | 2021-02-02T19:07:42.323Z | MongoDB Connector for BI license | 5,467 |
null | [
"python",
"connecting",
"atlas"
] | [
{
"code": "",
"text": "Hello\nI have an issue when trying connect to Atlas cluster on MacOS. On linux all work fine. Here is how i installed pymongo:pip3 install pymongoafter I got the error, i additionally executed:pip3 install dnspython\npip3 install pymongo[tls]Here is error:s = MongoClient(“mongodb+srv://xxx:[email protected]”)\nprint(s[‘data’][‘test’].find_one())\nTraceback (most recent call last):\nFile “”, line 1, in \nFile “venv/lib/python3.8/site-packages/pymongo/collection.py”, line 1319, in find_one\nfor result in cursor.limit(-1):\nFile “venv/lib/python3.8/site-packages/pymongo/cursor.py”, line 1207, in next\nif len(self.__data) or self._refresh():\nFile “venv/lib/python3.8/site-packages/pymongo/cursor.py”, line 1100, in _refresh\nself.__session = self.__collection.database.client._ensure_session()\nFile “venv/lib/python3.8/site-packages/pymongo/mongo_client.py”, line 1816, in _ensure_session\nreturn self.__start_session(True, causal_consistency=False)\nFile “venv/lib/python3.8/site-packages/pymongo/mongo_client.py”, line 1766, in __start_session\nserver_session = self._get_server_session()\nFile “venv/lib/python3.8/site-packages/pymongo/mongo_client.py”, line 1802, in _get_server_session\nreturn self._topology.get_server_session()\nFile “venv/lib/python3.8/site-packages/pymongo/topology.py”, line 490, in get_server_session\nself._select_servers_loop(\nFile “venv/lib/python3.8/site-packages/pymongo/topology.py”, line 215, in _select_servers_loop\nraise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-00.myfdu.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123),cluster0-shard-00-02.myfdu.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123),cluster0-shard-00-01.myfdu.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123), Timeout: 30s, Topology Description: <TopologyDescription id: 6012d175aa7e4741cc6ef1a0, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (‘cluster0-shard-00-00.myfdu.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘cluster0-shard-00-00.myfdu.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)’)>, <ServerDescription (‘cluster0-shard-00-01.myfdu.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘cluster0-shard-00-01.myfdu.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)’)>, <ServerDescription (‘cluster0-shard-00-02.myfdu.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘cluster0-shard-00-02.myfdu.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)’)>]>Query work fine if add to connection tlsAllowInvalidCertificates=True. But that is bad solution.\nIs it any issues with pymongo and MacOS, or may by i missed some dependency?Thank you in advance",
"username": "Max_Stepanov"
},
{
"code": "$ cd '/Applications/Python 3.8'\n$ ./Install\\ Certificates.command\n -- pip install --upgrade certifi\nRequirement already up-to-date: certifi in /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages (2020.12.5)\nWARNING: You are using pip version 20.2.3; however, version 21.0 is available.\nYou should consider upgrading via the '/Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8 -m pip install --upgrade pip' command.\n -- removing any existing file or link\n -- creating symlink to certifi certificate bundle\n -- setting permissions\n -- update complete\npip3 install certifi\n",
"text": "Run the following script which comes with Python when installed from python.org:If that does not resolve the issue then try installing certifi:",
"username": "Shane"
},
{
"code": "",
"text": "I still have the issue after updating certifi. Current version is 2020.12.05.\nIs it anything else that i can try to do?",
"username": "Max_Stepanov"
},
{
"code": "s = MongoClient(“mongodb+srv://xxx:[email protected]”, tlsCAFile=certifi.where())\n",
"text": "Try this:https://pymongo.readthedocs.io/en/stable/examples/tls.html#specifying-a-ca-file",
"username": "Shane"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't connect to Atlas on MacOS | 2021-01-28T18:35:58.238Z | Can’t connect to Atlas on MacOS | 9,647 |
null | [
"node-js",
"mongoose-odm",
"connecting"
] | [
{
"code": " { MongoError: bad auth : Authentication failed.\n at MessageStream.messageHandler (/home/azimut/work/node/mvc-t/node_modules/mongodb/lib/cmap/connection.js:268:20)\n at MessageStream.emit (events.js:198:13)\n at processIncomingData (/home/azimut/work/node/mvc-t/node_modules/mongodb/lib/cmap/message_stream.js:144:12)\n at MessageStream._write (/home/azimut/work/node/mvc-t/node_modules/mongodb/lib/cmap/message_stream.js:42:5)\n at doWrite (_stream_writable.js:415:12)\n at writeOrBuffer (_stream_writable.js:399:5)\n at MessageStream.Writable.write (_stream_writable.js:299:11)\n at TLSSocket.ondata (_stream_readable.js:710:20)\n at TLSSocket.emit (events.js:198:13)\n at addChunk (_stream_readable.js:288:12)\n at readableAddChunk (_stream_readable.js:269:11)\n at TLSSocket.Readable.push (_stream_readable.js:224:10)\n at TLSWrap.onStreamRead [as onread] (internal/stream_base_commons.js:94:17) ok: 0, code: 8000, codeName: 'AtlasError', name: 'MongoError' }\n",
"text": "Hi, I have problem with connection string. I use node and Mongoose. My connection string is:\nconst dbURI = ‘mongodb+srv://testUser:[email protected]/node-test?retryWrites=true&w=majority’it is testign database - is empty. I get error message and I dont know, where is errorThank you",
"username": "Vojtech_Janousek"
},
{
"code": "",
"text": "bad auth : Authentication failedPlease check your userid/password.Bad auth means wrong combination of userid/pwd\nCan you connect by shell to your db?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you for your advice. I created new user account and now it is running.",
"username": "Vojtech_Janousek"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Bad auth : Authentication failed | 2021-02-01T19:27:45.967Z | Bad auth : Authentication failed | 42,103 |