image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"indexes"
] | [
{
"code": "> db.microsoft.createIndex({geojson: \"2dsphere\"})\n{\n \"operationTime\" : Timestamp(1590202707, 2),\n \"ok\" : 0,\n \"errmsg\" : \"Can't extract geo keys: { _id: ObjectId('5ec875653962b8635f792c0c'), state: \\\"DistrictofColumbia\\\", id: 9303, geojson: { type: \\\"Polygon\\\", coordinates: [ [ [ -76.985101, 38.927718 ], [ -76.985095, 38.927589 ], [ -76.98518, 38.927586 ], [ -76.985178, 38.927538 ], [ -76.985096, 38.927541 ], [ -76.98509900000001, 38.927589 ], [ -76.984953, 38.927593 ], [ -76.984959, 38.927722 ], [ -76.985101, 38.927718 ] ] ], crs: { type: \\\"name\\\", properties: { name: \\\"EPSG:4326\\\" } } } } Loop is not valid: [ [ -76.985101, 38.927718 ], [ -76.985095, 38.927589 ], [ -76.98518, 38.927586 ], [ -76.985178, 38.927538 ], [ -76.985096, 38.927541 ], [ -76.98509900000001, 38.927589 ], [ -76.984953, 38.927593 ], [ -76.984959, 38.927722 ], [ -76.985101, 38.927718 ] ] Edges 0 and 5 cross. Edge locations in degrees: [-76.9851010, 38.9277180]-[-76.9850950, 38.9275890] and [-76.9850990, 38.9275890]-[-76.9849530, 38.9275930]\",\n \"code\" : 16755,\n \"codeName\" : \"Location16755\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1590202707, 2),\n \"signature\" : {\n \"hash\" : BinData(0,\"hNRuE+AA5NSpSsamZRHT1/hS7AE=\"),\n \"keyId\" : NumberLong(\"6828646414617149444\")\n }\n }\n}\nimport com.mongodb.spark._\nimport com.mongodb.spark.config.{ReadConfig, WriteConfig}\nimport org.bson.Document\nimport com.google.gson.Gson\nimport org.locationtech.jts.geom.Geometry\nimport org.locationtech.jts.io.geojson.GeoJsonReader\nimport org.locationtech.jts.io.geojson.GeoJsonWriter\n\ncase class FootprintSchema(state: String, id: Int, geojson: Document)\n\nval connectionString = \"...\"\nval writeConfig = WriteConfig(Map(\"uri\" -> connectionString))\nval inputPath = \"...\"\n\nval data = spark\n .read\n .format(\"csv\")\n .option(\"delimiter\", \"|\")\n .load(inputPath)\n .as[(String, String, String)]\n .rdd\n .mapPartitions { it =>\n val gson = new Gson()\n val reader = new GeoJsonReader()\n val writer = new GeoJsonWriter()\n it.map { case (state, id, geojson) =>\n val fp = FootprintSchema(state, id.toInt, Document.parse(geojson))\n //val fp = FootprintSchema(state, id.toInt, Document.parse(writer.write(reader.read(geojson))))\n Document.parse(gson.toJson(fp))\n }\n }\ndata.saveToMongoDB(writeConfig)\n",
"text": "Hello,I’m trying to create a collection that has polygon geometries and an 2dsphere index but I’m getting the following error.I’ve successfully created geometries and indexes from this same data in postgres and I’ve converted it into jts geometries with org.locationtech.jts.io.geojson.GeoJsonReader and not had any problems on those fronts.I don’t think this matters, because I’ve used this method for writing just points and the geometry field and been able to create indexes fine, but I’m writing the data to mongo via spark with the following code.Is there something different I need to do for mongo to understand the geojson?Thanks!",
"username": "Jeffrey_Picard"
},
{
"code": "Location16755",
"text": "Hi,I believe the error Location16755 points that your polygon crosses the meridian (date line), or you have a polygon with coordinates >180 degrees or < -180 degrees.This is discussed in SERVER-34673: Coordinates are considered out of bounds if the longitude falls outside of [-180, 180] or the latitude falls outside of [-90, 90].SERVER-9948 shows a similar situation, especially this comment.Unfortunately this is a tricky situation, and in this case MongoDB opts to be safe and tries to avoid any situation where it’s describing the wrong shape. Other tools may not be as conservative, so they will try to interpret what you mean using their own logic.If you require further help with this, please post the document in question, and please describe how the polygon should be shaped.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thanks for getting back to me Kevin. I figured out the problem actually. The polygon was self-intersecting, which you’re correct all the other tools are much more lenient about. My solution ended up being to use jts to correct these polygons, following this code java - Is there a way to convert a self intersecting polygon to a multipolygon in JTS? - Stack Overflow, additionally I had an issue with mongo not liking the crs field that the jts GeoJsonWriter outputs, and was able to turn that off with writer.setEncodeCRS(false).",
"username": "Jeffrey_Picard"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Creating spatial index for polygons | 2020-05-23T07:38:28.833Z | Creating spatial index for polygons | 3,385 |
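A minimal mongo-shell sketch of the 2dsphere requirement at issue in the thread above (the JTS repair itself was done in Scala); the collection name and coordinates are illustrative, and the polygon is a simple ring whose edges do not cross, since 2dsphere indexing rejects self-intersecting loops:

```js
// Hypothetical collection; 2dsphere indexing requires valid GeoJSON:
// each polygon ring must be closed and must not self-intersect.
db.footprints.insertOne({
  state: "DistrictofColumbia",
  geojson: {
    type: "Polygon",
    coordinates: [[
      [-76.9851, 38.9277],
      [-76.9850, 38.9275],
      [-76.9849, 38.9276],
      [-76.9851, 38.9277]  // last position repeats the first to close the ring
    ]]
  }
})
db.footprints.createIndex({ geojson: "2dsphere" })
```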
null | [
"charts"
] | [
{
"code": "",
"text": "Hi, have a question on using Mongo Charts and filters. I had originally implemented the iframe deprecated solution and saw some of my charts were not loading correctly. So I switched to the jwt method with authenticated users, only to find in the last stage that the problem persisted… The last stage was adding filters.I have a few aggregation queries that basically do this:When I run this chart on the dashboard with a date filter everything works fine. The problem is that I cannot add the createdDate field to the list of filters allowed in embedding, since it is not there. I have found with other queries that I actually need to project the date field through all stages to make it selectable as a filterable parameter, which seems odd as I believe to have read somewhere that the filters are a special $match fase at the beginning of the query (which also seems the way it works on the Charts dashboard).TLDR:Thanks.",
"username": "Arnold_Ligtvoet"
},
{
"code": "",
"text": "Tried to rewrite the query:The problem of course is that I also loose the createdDate here. So at current it seems to be that as soon as I use some type of grouping I loose the ability to filter on dates.",
"username": "Arnold_Ligtvoet"
},
{
"code": "",
"text": "HI @Arnold_Ligtvoet -Even though the autocomplete for fields in the Embed Chart dialog only shows fields from the raw collection, you can still add your own fields by choosing the “Create option…” command. This should work in your case where you want to filter on a field created during the custom agg stages.image733×321 18.4 KBTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi Tom,tested this by adding the filter value I use from embedding as an option. There is a difference, instead of displaying an error on these graphs I now get the title of the graph, but no graph. Tested it with two graphs that have the date field present by default, added the date as user specified filter on the third.When I load my portal I see the first two graphs with data and the third graph as just the title. When I remove the filter field from my portal and reload, I see all three graphs with data, so I’m pretty sure the filter is what’s killing it.",
"username": "Arnold_Ligtvoet"
},
{
"code": "",
"text": "Thanks Arnold. I think your assessment is correct - assuming the filter is correct, it may be being applied at a different part of the pipeline you what you expected.Are you able to mail me either a URL of an embedded chart, or if it’s sensitive then just the base URL for the project to let me check the logs? You can reach me at tom.hollander at MongoDB.com.",
"username": "tomhollander"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Charts and filters | 2020-06-04T07:42:00.345Z | Charts and filters | 4,385 |
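For readers embedding charts, a sketch of how a filter is passed with the Charts Embedding SDK; the base URL and chart ID are placeholders, and (as Tom notes above) a field produced by custom aggregation stages must first be added via “Create option…” in the Embed Chart dialog:

```js
// Placeholder base URL and chart ID; assumes @mongodb-js/charts-embed-dom.
const sdk = new ChartsEmbedSDK({
  baseUrl: "https://charts.mongodb.com/charts-example-abcde"
});
const chart = sdk.createChart({
  chartId: "00000000-0000-0000-0000-000000000000",
  // Filter on the pipeline-created field discussed in the thread.
  filter: { createdDate: { $gte: new Date("2020-01-01") } }
});
chart.render(document.getElementById("chart"));  // returns a promise
```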
null | [] | [
{
"code": "",
"text": "I’ve asked the following in the MongoDB University and was referred here:Hi, playing with Compass I’ve noticed that if you use _id field in Find()\nthe indexStats accesses.ops field increases by 2, what is the explanation for this behavior? is it the same for all unique indexes matches?Another thing, when using INSERT on a unique index field, I’d expect the accesses.ops to increase by 2 as the index uniqueness must be verified and only then inserted.\n(and if the index is found to be duplicated an increase by 1 only)I’d appreciate a clarification.\nThanks,\nGal",
"username": "Gal_Itach"
},
{
"code": "",
"text": "Bump. still couldnt find answer.\nTested on a different environment 3.6 Mongo and the index ops increased by 1.The environments are not the same so I will try to reproduce it on the exact same environment.\nThanks",
"username": "Gal_Itach"
},
{
"code": "",
"text": "Hi @Gal_Itach,Can you confirm the exact version of MongoDB server and Compass used in your original test?It sounds like your later test returns the expected outcome.Regards,\nStennie",
"username": "Stennie_X"
}
] | indexStats - Behavior on UNIQUE indexes | 2020-06-02T13:31:20.660Z | indexStats - Behavior on UNIQUE indexes | 1,238 |
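The counters discussed in this thread can be inspected with the $indexStats aggregation stage; the collection name is illustrative:

```js
// Per-index usage counters; accesses.ops increments whenever an
// operation uses the index.
db.mycollection.aggregate([
  { $indexStats: {} },
  { $project: { name: 1, "accesses.ops": 1, "accesses.since": 1 } }
])
```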
null | [] | [
{
"code": "db.t1.update({ _id: ObjectId(\"5ec26497deb9782b501b3b7b\") }, \n[\n\n { $addFields : { NextValue: { $sum: ['$seq',1] }} },\n { $addFields : { Obj: { name: \"hi2\", val: \"$NextValue\" } } }\n])\n",
"text": "Hi i want do update request like:I use lastest mongodb and .net mongodb driver.Compatibility:In changelog:Update specification using an aggregation framework pipelineBut i miss set\\addFields method, what am I doing wrong ? ",
"username": "alexov_inbox"
},
{
"code": "BsonDocumentvar pipeline = new BsonDocumentStagePipelineDefinition<BsonDocument, BsonDocument>(\n new[] { \n new BsonDocument{{\"$addFields\", \n new BsonDocument{{\"NextValue\", \n new BsonDocument{{ \"$sum\", new BsonArray().Add(\"$seq\").Add(1) } } \n }}\n }},\n new BsonDocument{{\"$addFields\", \n new BsonDocument{{\"Obj\", \n new BsonDocument(\"name\", \"hi2\").Add(\"val\", \"$NextValue\")\n }}\n }} \n } \n);\nvar updateDefinition = new PipelineUpdateDefinition<BsonDocument>(pipeline);\nvar result = collection.UpdateOne(new BsonDocument{}, updateDefinition); \n$sum",
"text": "Hi @alexov_inbox,As of MongoDB C# driver version 2.10, there is no strong typed definition for $addFields aggregation pipeline stage. However you can still construct BsonDocument to build a pipeline definition stage. For example:What you see in the release change is likely related to CSHARP-2570, which is to support aggregation pipeline definition on an update operation.In addition, without knowing more the context of you aggregation pipeline, you may be able to replace $sum with $inc. As it looks like you’re just incrementing the value by one.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C# MongoDB missing update aggregations pipline in strong type | 2020-06-05T19:02:36.967Z | C# MongoDB missing update aggregations pipline in strong type | 6,917 |
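The original question's update, written as the pipeline-style update the mongo shell accepts on MongoDB 4.2+ (the _id value is taken from the question):

```js
db.t1.updateOne(
  { _id: ObjectId("5ec26497deb9782b501b3b7b") },
  [
    // Each $addFields stage can reference fields computed by earlier stages.
    { $addFields: { NextValue: { $sum: ["$seq", 1] } } },
    { $addFields: { Obj: { name: "hi2", val: "$NextValue" } } }
  ]
)
```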
null | [] | [
{
"code": "",
"text": "Hi team,First post here, so be kind I have two queries.To clarify, I want to understand how much memory(and disk, if any is consumed) by the mongod process when I launch the compact command. This is to plan in advance if I need to stop other applications and ensure there’s enough disk space available.Thanks for your inputs,\nMurali",
"username": "Murali_Rao"
},
{
"code": "mongo",
"text": "Hi Murali,Unfortunately we can’t really tell beforehand how much resources in terms of time and space will be consumed by the compact command since it’s highly dependent on the state of the data files. It is, however, an extensive operation that preferably not done while the server is live in production as either a primary or a secondary.If you want to do this on a secondary, it’s recommended that you do a rolling maintenance instead, where you take one secondary offline, do compaction on it, and rejoin it to the replica set. However, the time the compact command takes cannot be longer than the oplog window, or the secondary will fall off the oplog and not be able to rejoin the set later. To determine the oplog window, you can run rs.printReplicationInfo() on the mongo shell.Having said all that, most of the time it’s not necessary to run the compact command, unless you have deleted a large part of your database and not planning to insert that much data anymore in the future (i.e. you’re downsizing your data). WiredTiger will reuse those empty spaces eventually. That is, returning space to the OS that will be reallocated again by WiredTiger in the near future results in zero net gain for you.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,Thanks for the response. I think the point right at the end you make is one of utmost importance, that needs to go into public documentation. As you rightly mention, compaction is a maintenance procedure, and as such requires scheduled downtime in production environments.If customers are running out of disk space, what they need to do is just remove unused/large documents/collections and not necessarily a compaction(unless they’re sure of the scenario you mention).Once again, thanks for some valuable inputs!Cheers,\nMurali",
"username": "Murali_Rao"
}
] | Compaction requirements | 2020-05-05T12:12:19.714Z | Compaction requirements | 2,560 |
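A sketch of the rolling-maintenance pattern Kevin describes; the collection name is a placeholder, and the node should be taken out of rotation first, since compact blocks operations on it while it runs:

```js
// Check the oplog window before starting, so the node can rejoin
// the replica set after compaction finishes.
rs.printReplicationInfo()

// Run on the offline secondary, once per collection to be compacted.
db.runCommand({ compact: "mycollection" })
```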
null | [
"upgrading"
] | [
{
"code": "",
"text": "Hello all,Apologies if this is the wrong area. Anyway we’ve been upgrading our MongoDB hosts and I’m seeing items for which I would like to know if they’re answers. I’ll break it apart into three sections for ease.Correct terminology?MongoDB WT storage and compression within a replica set 3.2.22 on Ubuntu Xenial.When is a replica set secondary really a secondary considering it’s status and optime?Correct terminologyWhen trying to describe the space freed up by Mongo that is not reallocated, I use the term “holes” often. I don’t know what the correct term is tto convey to someone that basically there’s empty data taking up space which will not be used until a repairDatabase is ran for MMAPv1 and or correct me if I’m wrong a compaction job ran for WT.I know this is an old version but we’ve recently begun updating and will be making our way to the most recent version of MongoDB but until then, we just recently switched over to WT as 3.2 was considered a good stable point to start using WT.First - we noticed during an initial sync despite having a decent amount of RAM, mongodb would invoke OOM. We noticed this in our non-prod environment which is kept rather bare (i.e. PSA architecture). The primary has all the resources it needs, secondary has half the RAM (i.e. 8 GB), and the arbiter is a baby host. We first converted our primary to WT, this went well AFAICT. But when we needed to do our secondary, we’d run into memory issues. At this point I bumped thee RAM up to the same as the primary (i.e. 16 GB), then proceeded to sync. We ran into the same issue again. I saw a bug report but it was for an earlier version of Mongo (i.e. 3.0) but it felt very similar. The bug report is SERVER-20159. We do have no swap on our servers however when adding more RAM it didn’t seem to help. It did take a bit longer to invoke OOM but it still happened. Surprisingly aside from altering any DB parameters I just kept flushing the buffers/cache in linux (i.e. echo 3 > /proc/sys/vm/drop_caches) and it finally completed. I understand MongoDB uses cache rather efficiently but I couldn’t understand why if it’s cache and not RAM allocation it grew high enough to invoke OOM. We were on kernel version 4.15.0-72 BTW. After the initial sync the RAM usage also dropped to about 50%. I didn’t want to risk OOM’s while working on a higher environment so I created a task to flush buffers cache when memory usage as reported by free got to 80% so I can’t report further on this, but I am curious what may be going on. This higher environment has a PSSSA architecture. One of the secondaries is prohibited from elections and is used as our backup server. I’ll call it PSSHA for ease of conversation. The higher environment is similar but runs 4.15.0-99 and has a lot more RAM.So w/ the PSSHA environment, after migrating one host to WT (type S), we then did a snap copy of the data to switch over another host (type S). This went well however upon starting up the host, the sync started and it started processing the oplog. Our disk usage grew quite a bit past the already stood up WT host, about 36 GB more by the time rs.status showed the same optimes. This was concerning but then we noticed it shrunk pretty quickly and now the difference between these two hosts is about 4 GB (i.e. existing WT was at 896 GB, new WT host is at 900 GB). BTW our oplog size is about 300 GB. I was curious as to why that happened. 
I have a couple theories and they may be wrong but feel free to correct me:I’m asking this because when I did a seeded sync, the member switched to secondary practically immediately. I couldn’t find anything standing out in the logs with regards to storage and or replication events but initially I did see messages regarding the oplog. I used rs.status to monitor the primary and this hosts optimes. Although the secondary showed it’s state was SECONDARY, the optimes were still a couple tens of thousands behind the primary. Eventually the optimes synced up +/- 5 but I didn’t see any specifics for optimes or completed oplog sync in the logs on that secondary.I ask because the switch to SECONDARY was pretty fast. I did see txn-recover, WT local.oplog.rs threads, oplog markers, etc which started at 17:08:23. This same instance then went to STARTUP2 3 seconds later (i.e. 17:08:26), then started replication applier threads and switched to RECOVERING about 0.02 seconds later (i.e. 17:08:26.010), then switched to SECONDARY about 0.02 seconds after that (i.e. 17:08:26.012). However the hosts optime via rs.status() wasn’t within +/- 5 optimes of the primary until about 12/13 minutes later.So I’m curious why there’s comfort in switching to SECONDARY despite not being within +/- 5 optimes of the primary? I imagine there’s definitely good reason, I just don’t know it and I would like warm fuzzies in the off chance the primary gets obliterated, we can safely switch to a SECONDARY that is quite a bit away with regards to optimes. During this point our storage also grew as mentioned earlier. It came down shortly after.Thank you very much in advance. Apologies if these questions have been asked already!Zahid Bukhari",
"username": "Zahid_Bukhari"
},
{
"code": "",
"text": "Hi Zahid,I believe some of what you experienced could be solved by upgrading to a supported version of MongoDB. The 3.2 series was released in Dec 2015, and the whole series is out of support in Sept 2018. I would recommend you to upgrade to at least MongoDB 3.6 (which will be supported until April 2021), or better yet, to the newest 4.2 series.Regarding the OOM issues, some operations in older out-of-support MongoDB versions are known to use excessive memory, such as SERVER-25318, SERVER-26488, SERVER-25075, among others. Since you mentioned that you setup no swap, your deployments will be prone to OOMkill. If you can, please allow some swap space to alleviate this.Regarding provisioning, a secondary node’s function is to provide high availability. It’s supposed to take over as the new primary in case something happened to the current primary, so it’s recommended to provision them with the same hardware as the primary.Regarding auto-compaction, WiredTiger never does this. This is because WiredTiger operates under the assumption that your data will always grow in size, and not shrinking. Thus, if you delete anything, those space will be marked unused by WT but not released to the OS. This is because if your data size keeps growing, releasing space to the OS and regaining them later provide net negative gain for you, since it’s basically useless work that cancels each other out.Regarding secondary status, once it reached SECONDARY in the output of rs.status(), it is ready to take over from the primary at any time. In older MongoDB, it is possible for you to see their optime to be way behind the primary if your replica set receive no write for an extended period. This is changed by SERVER-23892 in MongoDB versions 3.4 and newer.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB upgrade questions | 2020-05-28T20:24:36.778Z | MongoDB upgrade questions | 2,005 |
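A quick way to watch the state/optime relationship discussed in this thread from the shell (field names as returned by rs.status()):

```js
// Print each member's replication state and its last applied optime.
rs.status().members.forEach(function (m) {
  print(m.name, m.stateStr, m.optimeDate);
});
```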
null | [] | [
{
"code": "",
"text": "Hello,\nMy main usage in the DB is to store and use geo-spatial data. When I move (and therefore have new coordinates) I would like the DB to “know” that it should move its focal point with me. Since I’m using an embedded system with limited resources (RAM) I need the DB to “know” which data to fetch from its local FS (file system) so when I perform a query (e.g. $geoWithin) the most relevant data will be accessable in minimum time. I need to configure the local database server to know to have the relevant data in RAM and update itself in accordance with my geo-position changes (my queries will reflect my position).\nIs that possible?\nif so, please guide me to the solution.\nthanks,",
"username": "Arieh_Salomon"
},
{
"code": "",
"text": "Hi,I think what you’re describing (a server knowing its position in real time and adjust its cache accordingly) is beyond the functionality provided by databases. A database should be able to retrieve your queries as quickly as possible. Knowing its location is tangential to its purpose.Having said that, unless you have a very strict location-based SLA, the database should be able to cope. It may need to warm up its cache for the first query using the new location, but subsequent queries should be faster.One possible solution off the top of my head is to have a regular cron job querying the database using the current location, so the cache will stay warm using that particular location (at least for the time in-between the cron job running). How often the cron job runs will depend on how often you envision location change will take place. There’s also a balancing act required to ensure that the cron job is small enough to affect the cache, but not too large that it interferes with the database operation.Best regards,\nKevin",
"username": "kevinadi"
}
] | DB configuration for geo-spatial usage | 2020-05-21T07:48:22.173Z | DB configuration for geo-spatial usage | 1,248 |
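A sketch of the cron-job idea Kevin suggests; the collection name, coordinates, and radius are placeholders:

```js
// Touch the documents around the current position so they stay in the
// WiredTiger cache; $centerSphere takes a radius in radians
// (miles divided by the Earth's radius of ~3963.2 miles).
db.places.find({
  location: {
    $geoWithin: {
      $centerSphere: [[-76.98, 38.92], 10 / 3963.2]  // ~10 mile radius
    }
  }
}).itcount()
```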
null | [] | [
{
"code": "start_time = \"10:30\"\nEnd_Time = null\nprocess != \"coding\" \n@Query(value = \"{End_Time: null}, , start_time: {$in: ?1}\" )\n",
"text": "I am a newbie to Mongo DB. I wanted to write a custom queryI need to filter records based on start time is not null and end time is null and other data do not match with the collection",
"username": "Sreekarthikeyan_K"
},
{
"code": "db.collection.find({start_time: {$ne: null}, end_time: {$eq: null}})",
"text": "Hi,Quick answer by translating your question literally: db.collection.find({start_time: {$ne: null}, end_time: {$eq: null}})However this is not the most efficient query. You’ll find that as your document count grows, this query will be slower and slower.If you’re new to MongoDB, please note that building a query in MongoDB is very different from building a query in a regular tabular (e.g. SQL) databases, because how the system works under the hood is very different.Instead, I would recommend you to check out free courses available in the MongoDB University, which are designed to get you up to speed with MongoDB as quickly as possible.Best regards,\nKevin",
"username": "kevinadi"
}
] | How to write custom query | 2020-06-02T10:54:53.972Z | How to write custom query | 1,498 |
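Kevin's query with a supporting index sketched out; the collection and field names follow the question. Note this is only a sketch: $ne predicates are not very index-friendly, but the equality-on-null match on end_time can use the index (and equality-on-null also matches documents where the field is missing):

```js
// Index the filtered fields so the query scales with matching
// documents rather than with the whole collection.
db.records.createIndex({ end_time: 1, start_time: 1 })
db.records.find({
  start_time: { $ne: null },
  end_time: { $eq: null },
  process: { $ne: "coding" }
})
```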
null | [
"backup"
] | [
{
"code": "",
"text": "Hi,I am working a file system based snapshot solution. While copying the snapshot, few files like WiredTigerPrepLog.* files are NOT found and the copy fails. Can someone confirm if we really need these pre-allocated journal files to be copied ? can it be skipped ? are they needed for bringing up the mongods later from the restored content.Thanks\nSaamaja",
"username": "saamaja_vupputuri"
},
{
"code": "",
"text": "Hi Saamaja,Basically when taking a backup using snapshot, you should treat the whole dbPath as a single unit, including all the preplog files, and any file that seems to have no bearing on your data. This is to ensure that WiredTiger is starting from a known state, and would lower the risk of things going wrong with the restore.Best regards,\nKevin",
"username": "kevinadi"
}
] | WiredTIger pre allocated journal files behaviour | 2020-06-02T20:59:14.719Z | WiredTIger pre allocated journal files behaviour | 1,768 |
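One common way to get the consistent dbPath snapshot Kevin describes is to quiesce writes around it with fsyncLock; a sketch (the snapshot step itself depends on your filesystem tooling):

```js
// Flush pending writes and block new ones for the duration of the snapshot.
db.fsyncLock()
// ... take the filesystem snapshot of the entire dbPath here,
//     including journal and pre-allocated prep-log files ...
db.fsyncUnlock()  // release the lock from the same session
```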
[] | [
{
"code": ".getCollection(\"mycollection\").find(\n{\n $and:[{\"myDateTime\":{$gte:\"2020-03-01T00:00:00.000\"}},\n {\"myDateTime\":{$lte:\"2020-03-15T23:59:59.999\"}},\n {\"code\":\"zzzz\"},\n {\"protocol\":\"X\"}]\n\t\t }\n ).sort({\"myDateTime\":-1}).limit(200)\n.getCollection(\"mycollection\").find(\n{\n $and:[{\"myDateTime\":{$gte:\"2020-03-13T00:00:00.000\"}},\n {\"myDateTime\":{$lte:\"2020-03-13T23:59:59.999\"}},\n {\"code\":\"zzzz\"},\n {\"protocol\":\"X\"}]\n\t\t }\n ).sort({\"myDateTime\":-1}).limit(200)\n",
"text": "Helloi read the documentation but that is absolute not clear when search in the detail of working queryi have query on periode with index on date 15 daysfirst question very important, how work internaly the query with sort and limitIs the query with sort on index is fulled completed internaly before it execute the limit of 200 on the resultset, or it stop at 200 lines of index reading.Second questionif i use same query on 15 days period and i execute again on 24h the same query in case of period, is the cache is used if the data is already loaded by the previous query ?when i execute i got more than 2s for 15 days, and less 1s for 24h, on 30 million operations with same index sort readingedit: for this one, i get answer, the response is noThanks for answer",
"username": "Jp_B"
},
{
"code": "",
"text": "Hi,Is the query with sort on index is fulled completed internaly before it execute the limit of 200 on the resultset, or it stop at 200 lines of index reading.If you have an index that can answer the query, it will traverse the index and stop at result # 200. Since an index is always sorted, it is relatively cheap to do this operation. Especially if your query is a covered query.if i use same query on 15 days period and i execute again on 24h the same query in case of period, is the cache is used if the data is already loaded by the previous query ?Assuming the index/documents are still in the WiredTiger cache, yes it will be reused. The link you posted specifically mentions that MongoDB does not cache query results, but also mentions that it keeps the most recently used data in RAM.For query indexing in general, you might want to check out my answer on StackOverflow about this topic.Best regards,\nKevin",
"username": "kevinadi"
}
] | Cache work and limit | 2020-06-05T07:25:46.659Z | Cache work and limit | 1,594 |
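A sketch of an index that lets the sort-plus-limit in this thread stop early, following the equality / sort / range ordering; the collection, values, and string-typed dates come from the question, and the explain output should show the scan ending after 200 keys:

```js
db.mycollection.createIndex({ code: 1, protocol: 1, myDateTime: -1 })
db.mycollection.find({
  code: "zzzz",
  protocol: "X",
  myDateTime: { $gte: "2020-03-01T00:00:00.000",
                $lte: "2020-03-15T23:59:59.999" }
}).sort({ myDateTime: -1 }).limit(200).explain("executionStats")
```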
|
null | [] | [
{
"code": "npm list mongoose\n/Users/Wilson\n└─┬ [email protected]\n └── [email protected] extraneous\n\nnpm list -g mongoose\n/Users/Wilson/.nvm/versions/node/v10.15.3/lib\n└── [email protected]\n",
"text": "How to uninstall “[email protected]”?Should I also uninstall “[email protected]”,since I have globally installed [email protected]?Thank you.",
"username": "Wilson"
},
{
"code": "npm listmongoose-global",
"text": "Hi,mongoose-global looks like an unmaintained project. It was last updated 3 years ago, and there is no repo listed in the npm page so I can’t check what’s the module is about. I suggest to remove it since it’s likely to be very outdated by now, or contain security holes.From your npm list output, you should be able to remove mongoose-global using the standard npm uninstall command. You might also want to check out the npm prune command to remove extraneous packages.Best regards,\nKevin",
"username": "kevinadi"
}
] | How to uninstall “[email protected]”? Should I also uninstall “[email protected]”? | 2020-04-26T08:13:46.997Z | How to uninstall “[email protected]”? Should I also uninstall “[email protected]”? | 3,246 |
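The cleanup Kevin recommends, spelled out as shell commands (package names follow the npm list output in the question):

```sh
# Remove the outdated local package, then prune anything extraneous.
npm uninstall mongoose-global
npm prune
# The globally installed mongoose is managed separately; remove it only
# if you no longer need it.
npm uninstall -g mongoose
```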
null | [
"server"
] | [
{
"code": "",
"text": "Is it possible to set timeout for fsynclock or does it have any default value?\nI want my operation to fail after 5 minutes if lock fails to get acquired, just wanted to know of there s some value to configure it directly in MongoDB.",
"username": "Akshaya_Srinivasan"
},
{
"code": "mongodmongodmongod",
"text": "Hi Akshaya,No db.fsyncLock() is a very basic command that does:So if after a while the command doesn’t return, it means that there are either a lot of dirty data to write to disk, or the disk is too slow to process the writes in a timeframe that you deem reasonable.Please note that this command is quite risky to run, as it is possible to not be able to unlock the mongod process if the original client that executes the lock command is disconnected from the server for any reason. At that point, you would need to restart the mongod process, which may or may not be feasible depending on your use case.Best regards,\nKevin",
"username": "kevinadi"
}
] | Does FsyncLock have timeout in MongoDB? | 2020-03-23T05:22:04.573Z | Does FsyncLock have timeout in MongoDB? | 1,886 |
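Since, as the thread explains, there is no server-side timeout to configure, any time limit has to be enforced by the client; the lock state itself can be inspected from the shell:

```js
db.fsyncLock()
// currentOp() reports the lock while it is held.
db.currentOp().fsyncLock   // true while the server is locked
db.fsyncUnlock()           // must be issued to release it
```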
null | [
"charts",
"on-premises"
] | [
{
"code": "2020-06-08T16:07:13.169+00:00 INFO called charts-cli startup with arguments {\"_\":[\"startup\"],\"debug\":false,\"help\":false,\"version\":false,\"with-test-facilities\":false,\"withTestFacilities\":false,\"d\":\"/mongodb-charts\",\"directory\":\"/mongodb-charts\",\"$0\":\"mongodb-charts/bin/charts-cli.js\"} \n2020-06-08T16:07:13.175+00:00 INFO parsedArgs task success \n2020-06-08T16:07:13.178+00:00 INFO installDir task success ('/mongodb-charts') \n2020-06-08T16:07:13.266+00:00 INFO log task success \n2020-06-08T16:07:13.266+00:00 INFO salt task success \n2020-06-08T16:07:13.269+00:00 INFO productNameAndVersion task success ({ productName: 'MongoDB Charts Frontend', version: '1.9.1' }) \n2020-06-08T16:07:13.270+00:00 INFO gitHash task success ('1a46f17f') \n2020-06-08T16:07:13.270+00:00 INFO supportWidgetAndMetrics task success (undefined) \n2020-06-08T16:07:13.270+00:00 INFO tileServer task success (undefined) \n2020-06-08T16:07:13.270+00:00 INFO tileAttributionMessage task success (undefined) \n2020-06-08T16:07:13.270+00:00 INFO rawFeatureFlags task success (undefined) \n2020-06-08T16:07:13.272+00:00 INFO chartsMongoDBUri task success \n2020-06-08T16:07:13.273+00:00 INFO encryptionKeyPath task success \n2020-06-08T16:07:13.275+00:00 INFO featureFlags task success ({}) \n2020-06-08T16:07:13.470+00:00 INFO waiting for MongoDB, attempt #1 to connect to MongoDB at mongodb://mongo-svc.dldorg-ns:27017/admin?replicaSet=rs0. \n2020-06-08T16:07:13.773+00:00 INFO lastAppJson task success ({}) \n2020-06-08T16:07:13.773+00:00 INFO existingInstallation task success (false) \n2020-06-08T16:07:13.775+00:00 INFO tenantId task success ('5cb34aef-07c0-4413-8090-c75823681978') \n2020-06-08T16:07:13.866+00:00 INFO tokens task success \n2020-06-08T16:07:13.867+00:00 INFO stitchMigrationsLog task success ({ completedStitchMigrations: [ 'stitch-1332', 'stitch-1897', 'stitch-2041', 'migrateStitchProductFlag', 'stitch-2041-local', 'stitch-2046-local', 'stitch-2055', 'multiregion', 'dropStitchLogLogIndexStarted' ] }) \n2020-06-08T16:07:13.868+00:00 INFO stitchConfigTemplate task success \n2020-06-08T16:07:13.870+00:00 INFO libMongoIsInPath task success (true) \n2020-06-08T16:07:14.567+00:00 INFO waiting for MongoDB, successfully connected to MongoDB at mongodb://mongo-svc.mongo-ns:27017/admin?replicaSet=rs0 after 1 attempts. \n2020-06-08T16:07:14.570+00:00 INFO mongoDBReachable task success (true) \n2020-06-08T16:07:14.675+00:00 INFO stitchMigrationsExecuted task success ([ 'stitch-1332', 'stitch-1897', 'stitch-2041', 'migrateStitchProductFlag', 'stitch-2041-local', 'stitch-2046-local', 'stitch-2055', 'multiregion', 'dropStitchLogLogIndexStarted' ]) \n2020-06-08T16:07:14.783+00:00 INFO minimumVersionRequirement task success (true) \n2020-06-08T16:07:14.785+00:00 INFO stitchConfig task success \n2020-06-08T16:07:14.868+00:00 INFO stitchConfigWritten task success (true) \n2020-06-08T16:07:15.071+00:00 INFO stitchChildProcess task success \n2020-06-08T16:07:15.269+00:00 INFO waiting for Stitch to start, attempt #1 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:15.468+00:00 WARN waiting for Stitch to start, attempt #1 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:15.472+00:00 INFO indexesCreated task success (true) \n2020-06-08T16:07:15.569+00:00 INFO waiting for Stitch to start, attempt #2 to connect to Stitch server at http://localhost:8080. 
\n2020-06-08T16:07:15.571+00:00 WARN waiting for Stitch to start, attempt #2 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:15.772+00:00 INFO waiting for Stitch to start, attempt #3 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:15.773+00:00 WARN waiting for Stitch to start, attempt #3 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:16.074+00:00 INFO waiting for Stitch to start, attempt #4 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:16.167+00:00 WARN waiting for Stitch to start, attempt #4 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:16.667+00:00 INFO waiting for Stitch to start, attempt #5 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:16.668+00:00 WARN waiting for Stitch to start, attempt #5 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:17.469+00:00 INFO waiting for Stitch to start, attempt #6 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:17.567+00:00 WARN waiting for Stitch to start, attempt #6 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:18.868+00:00 INFO waiting for Stitch to start, attempt #7 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:18.871+00:00 WARN waiting for Stitch to start, attempt #7 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:21.066+00:00 INFO waiting for Stitch to start, attempt #8 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:21.068+00:00 WARN waiting for Stitch to start, attempt #8 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:23.169+00:00 INFO waiting for Stitch to start, attempt #9 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:23.267+00:00 WARN waiting for Stitch to start, attempt #9 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:25.367+00:00 INFO waiting for Stitch to start, attempt #10 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:25.467+00:00 WARN waiting for Stitch to start, attempt #10 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:27.568+00:00 INFO waiting for Stitch to start, attempt #11 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:27.666+00:00 WARN waiting for Stitch to start, attempt #11 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:29.766+00:00 INFO waiting for Stitch to start, attempt #12 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:29.768+00:00 WARN waiting for Stitch to start, attempt #12 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:31.869+00:00 INFO waiting for Stitch to start, attempt #13 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:31.870+00:00 WARN waiting for Stitch to start, attempt #13 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:33.972+00:00 INFO waiting for Stitch to start, attempt #14 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:33.973+00:00 WARN waiting for Stitch to start, attempt #14 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:36.166+00:00 INFO waiting for Stitch to start, attempt #15 to connect to Stitch server at http://localhost:8080. \n2020-06-08T16:07:36.168+00:00 WARN waiting for Stitch to start, attempt #15 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:36.168+00:00 ERROR stitchServerRunning task failure: Can't connect to Stitch Server at http://localhost:8080. 
Too many failed attempts. Last error: connect ECONNREFUSED 127.0.0.1:8080 \n2020-06-08T16:07:36.267+00:00 ERROR startup failed \n",
"text": "I am running mongo(mongo:4.0.4) and trying to connect mongo charts (Quay)\non Kubernetes cluster\nthis is the log for charts-cli.logthe file stitch-startup.log is emptyany help is much appreciated.",
"username": "Amit_Patel"
},
{
"code": "",
"text": "Hi @Amit_Patel -We don’t support Charts on Kubernetes - while it can be made to work, we don’t have experience with this setup so I’m not sure what’s wrong. You could look at this sample template to see if it’s of any help.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Charts: Can't connect to Stitch Server at http://localhost:8080 | 2020-06-08T16:27:08.421Z | Charts: Can’t connect to Stitch Server at http://localhost:8080 | 4,194 |
null | [] | [
{
"code": "mongod.wt",
"text": "Starting mongod is taking several hours due to the number of file pointers being opened.I have approx 133k .wt files on disk and on my small test server the startup process seems to manage about 5 per second. (~7 hours). Even in a test environment this is completely unworkable.I’m aware that under normal operation idle pointers are closed. I can verify that my live servers have between 5k-10k files open at once under normal load. This fine, but the startup time is a big problem.Am I doing something wrong?Is there any way I can speed this up to a sane boot time?Even just for testing or data crunching a backup, is there a way I can skip these initial file openings?MongoDB 4.0.9 on CentOS 7.\nDisk is formatted as XFS.",
"username": "timw"
},
{
"code": "",
"text": "Just to follow up on my own issue…I incrementally upgraded my test server from Linode’s 1G “Nanode”. I gave up on the 2G instance after 30 mins. Then I tried the 4G instance (lowest 2 core option) and mongod startup plummeted to 5 minutes! Out of curiosity I then tried the 8G (4 core) instance and the result was 2 minutes.My live servers are the 4G instances, so a 5 minute start is far from ideal in emergency situations, but at least I know not to bother with single core machines for testing. I’d still like to know if there are any tips for faster starts in general.",
"username": "timw"
},
{
"code": "",
"text": "It is possible that your issue isn’t just MongoDB related, more about your cloud provider related. In your first message you didn’t mention that you use Linode’s virtual servers, but that could be the clue. Many VM providers limit IOPS on their products based on tier (which is often memory based). What you experienced with your testing kind of shows that. Those Nanode’s and lowest 2 core computers, they seem to be shared hosting. Dedicated VM’s could have more IOPS, and startup times could be a lot faster.I haven’t tried out MongoDB on real hardware, or big databases, so I don’t have exact tips for you. But if you have hardware where you could try out starting up your MongoDB, it could verify those usage restrictions. Or you could find it from some more detailed specs sheets of Linode. I know that Google Cloud & AWS at least have such resource restrictions in their VMs, and I/O intence products need more juice than they would otherwise require CPU/RAM.That could potentially be selling point of MongoDB Atlas, if wanting to use hosting provider anyway, why not use theirs, and get their support making sure things are working as it should. ",
"username": "kerbe"
}
] | Mongod taking several hours to start | 2020-06-06T13:29:55.915Z | Mongod taking several hours to start | 3,365 |
null | [
"production",
"rust"
] | [
{
"code": "mongodb",
"text": "The MongoDB Rust driver team is pleased to announce the first stable release of the driver, v1.0.0. This release marks the general availability of the driver. Additionally, per semver requirements, no breaking API changes will be made to the 1.x branch of the driver.You can read more about the release on Github, and the release is published on https://crates.io under the package name mongodb . If you run into any issues, please file an issue on JIRA.Thank you, and we hope you enjoy using the driver!",
"username": "Samuel_Rossi"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Announcing the first stable release of the Rust driver, v1.0.0 | 2020-06-08T17:11:54.786Z | Announcing the first stable release of the Rust driver, v1.0.0 | 2,635 |
null | [] | [
{
"code": "",
"text": "",
"username": "mrinal_srivastava"
},
{
"code": "",
"text": "Have you updated mongodb into your PATH?\nShow us the output of echo $PATH\nCan you run mongo --nodb command by giving full path of mongo.exe?",
"username": "Ramachandra_Tummala"
},
{
"code": "$PATHecho %PATH%",
"text": "Hi @mrinal_srivastava,Have you updated mongodb into your PATH?Any update on this ?If $PATH doesn’t work then try echo %PATH% and share the output with us.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Thankyou soo much for. Your concern …its solved…now if I Type mongo --db i can see my version of my mongo db 4.2.7…If i ever need any further assistance i will contact you people❤️",
"username": "mrinal_srivastava"
},
{
"code": "",
"text": "Hi i am having similiar problems.\ni have updated my path and it gives the following error\n“zsh: command not found: mongo”",
"username": "Faith_Rambire"
},
{
"code": "",
"text": "Show us the output of echo $PATH\nCan you run the command using full path of mongo.exe and see",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "same here with bash, I have installed 4.2.7\n-bash: mongo: command not found",
"username": "Avinash_DV"
},
{
"code": "",
"text": "Hi, I found solution for this\njust use => export PATH=\"$PATH:x\"\nx => path to bin like (/Users/avinash/mdb/mongodb-x86_64-enterprise-4.2.7/bin)",
"username": "Avinash_DV"
},
{
"code": "",
"text": "Hi @Faith_Rambire,Did you switch to the bash shell before setting up the path ?Screenshot 2020-06-08 at 1.27.40 PM1958×848 90.7 KBAlso, this Show us the output of echo $PATH\nCan you run the command using full path of mongo.exe and see~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "hi i ended up installing using brew",
"username": "Faith_Rambire"
},
{
"code": "",
"text": "I’m glad your issue got resolved. Please feel free to get back to us if you have any other query.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | I am having problem in installing mongo db..can anyone please help me? | 2020-06-01T16:46:40.570Z | I am having problem in installing mongo db..can anyone please help me? | 1,502 |
null | [
"queries",
"performance"
] | [
{
"code": "io1 {\n \"_id\" : ObjectId(\"5daff2657ca2680c8ace2d2e\"),\n \"cron\" : \"peer_comparison_profit_loss\",\n \"co_code\" : \"1\",\n \"co_name\" : \"Company A\",\n \"status\" : \"Pass\",\n \"date\" : ISODate(\"2019-10-23T00:00:00.000Z\")\n }\n /* 2 */\n {\n \"_id\" : ObjectId(\"5daff2657ca2680c8ace2d2f\"),\n \"cron\" : \"peer_comparison_financial_ratio\",\n \"co_code\" : \"1\",\n \"co_name\" : \"Company A\",\n \"status\" : \"Pass\",\n \"date\" : ISODate(\"2019-10-23T00:00:00.000Z\")\n }\n /* 3 */\n {\n \"_id\" : ObjectId(\"5daff2657ca2680c8ace2d30\"),\n \"cron\" : \"price_nse\",\n \"co_code\" : \"1\",\n \"co_name\" : \"Company A\",\n \"status\" : \"Pass\",\n \"date\" : ISODate(\"2019-10-23T00:00:00.000Z\")\n }\n /* 4 */\n {\n \"_id\" : ObjectId(\"5daff2657ca2680c8ace2d31\"),\n \"cron\" : \"peer_comparison_profit_loss\",\n \"co_code\" : \"2\",\n \"co_name\" : \"Company B\",\n \"status\" : \"Pass\",\n \"date\" : ISODate(\"2019-10-23T00:00:00.000Z\")\n }\n /* 5 */\n {\n \"_id\" : ObjectId(\"5daff2657ca2680c8ace2d32\"),\n \"cron\" : \"peer_comparison_financial_ratio\",\n \"co_code\" : \"2\",\n \"co_name\" : \"Company B\",\n \"status\" : \"Pass\",\n \"date\" : ISODate(\"2019-10-23T00:00:00.000Z\")\n }\n /* 6 */\n {\n \"_id\" : ObjectId(\"5daff2657ca2680c8ace2d33\"),\n \"cron\" : \"price_nse\",\n \"co_code\" : \"2\",\n \"co_name\" : \"Company B\",\n \"status\" : \"Fail\",\n \"date\" : ISODate(\"2019-10-23T00:00:00.000Z\")\n }\n {\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"sma.cron_status\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"date\" : {\n \"$eq\" : ISODate(\"2020-06-05T00:00:00.000Z\")\n }\n }, \n {\n \"status\" : {\n \"$eq\" : \"Pass\"\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"date\" : {\n \"$eq\" : ISODate(\"2020-06-05T00:00:00.000Z\")\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"status\" : 1.0\n },\n \"indexName\" : \"status_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"status\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"status\" : [ \n \"[\\\"Pass\\\", \\\"Pass\\\"]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : []\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 33288,\n \"executionTimeMillis\" : 10819,\n \"totalKeysExamined\" : 8784061,\n \"totalDocsExamined\" : 8784061,\n \"executionStages\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"date\" : {\n \"$eq\" : ISODate(\"2020-06-05T00:00:00.000Z\")\n }\n },\n \"nReturned\" : 33288,\n \"executionTimeMillisEstimate\" : 683,\n \"works\" : 8784062,\n \"advanced\" : 33288,\n \"needTime\" : 8750773,\n \"needYield\" : 0,\n \"saveState\" : 68625,\n \"restoreState\" : 68625,\n \"isEOF\" : 1,\n \"invalidates\" : 0,\n \"docsExamined\" : 8784061,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 8784061,\n \"executionTimeMillisEstimate\" : 191,\n \"works\" : 8784062,\n \"advanced\" : 8784061,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 68625,\n \"restoreState\" : 68625,\n \"isEOF\" : 1,\n \"invalidates\" : 0,\n \"keyPattern\" : {\n \"status\" : 1.0\n },\n \"indexName\" : \"status_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"status\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n 
\"indexBounds\" : {\n \"status\" : [ \n \"[\\\"Pass\\\", \\\"Pass\\\"]\"\n ]\n },\n \"keysExamined\" : 8784061,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0,\n \"seenInvalidated\" : 0\n }\n }\n },\n \"serverInfo\" : {\n \"host\" : \"localhost\",\n \"port\" : 27017,\n \"version\" : \"4.0.10\",\n \"gitVersion\" : \"c389e7f69f637f7a1ac3cc9fae843b635f20b766\"\n },\n \"ok\" : 1.0\n }\n",
"text": "Hi fellow community members,We have been using MongoDB for over a year and a half. Recently, in the past week, we have observed that the query which took about milliseconds to complete is taking more than 10 seconds to return data.After observing slow performance we switched to c5.large instance type which comes with:\n**2 Dedicated Cores of Intel® Xeon® Platinum 8124M CPU @ 3.00GHz **\n4GB RAMAlso tried switching the EBS volume from General Purpose (gp2) to Provisioned IOPS SSD ( io1 ) with value of 600, but to no luck.Please find more information below:Collection with data like:Total Records: 13798861Query:db.cron_status.find({“status” : “Pass”,\n“date” : ISODate(“2020-06-05T00:00:00.000Z”)}).explain(“executionStats”)Execution Stats:This is just an example of a single collection we are facing with, there are multiple collections having slow query performance.Let me know if you require further information.Can someone please help us or guide us in the right direction?",
"username": "Deep_Shah"
},
{
"code": "",
"text": "8784061Hello @Deep_Shah,Here, the ratio of nReturned documents to totalDocsExamined is too high.\n“nReturned” : 33288,\n“executionTimeMillis” : 10819,\n“totalKeysExamined” : 8784061,\n“totalDocsExamined” : 8784061,Index on the status field is being used, I would prefer if you can perform one test with Index field “status and date” and then check how query is performing.\nAlso if the index on date is already created, then filter date first then status.",
"username": "Aayushi_Mangal"
}
] | MongoDB find query taking more than 10 seconds | 2020-06-08T09:33:12.065Z | MongoDB find query taking more than 10 seconds | 5,292 |
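Aayushi's suggestion as a concrete sketch: a compound index covering both equality predicates, so the number of keys examined tracks the documents returned instead of scanning ~8.7M index entries:

```js
db.cron_status.createIndex({ date: 1, status: 1 })
// Re-check the plan: totalKeysExamined should now be close to nReturned.
db.cron_status.find({
  status: "Pass",
  date: ISODate("2020-06-05T00:00:00.000Z")
}).explain("executionStats")
```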
null | [] | [
{
"code": "",
"text": "Hi everyone,I am happy to share that Qubitro, a fully-managed SaaS IoT platform is now accepting registration for the private beta. We are a long-time MongoDB user and built the platform on top of that.We make developers/companies able to develop IoT projects with zero infrastructure set up by offering both user and API interfaces.If you are interested in or you know someone who might be interested, please help us to spread the word.Website: https://www.qubitro.comRegistration: Qubitro Portal",
"username": "Beray_Bentesen"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Qubitro Private Beta Launch | 2020-06-08T13:32:28.449Z | Qubitro Private Beta Launch | 4,659 |
null | [
"swift",
"production"
] | [
{
"code": "MongoErrorMongoErrorMongoErrorProtocolClientOptionsMongoClientOptionsDatabaseOptionsMongoDatabaseOptionsCollectionOptionsMongoCollectionOptionsHintIndexHintAddressServerAddressCursorTypeMongoCursorTypeWriteConcern.W.tagWriteConcern.W.customlocalhost:27017localhost:27018localhost:27019mongodb://localhost:27017directConnection=trueMongoClientOptions.directConnectiontruefalsebson_tDate(msSinceEpoch:)IdIDBSON.objectIDMongoCollection.drop()ReadConcernWriteConcern.otherWriteConcern.WtagcustomMongoClientOptions",
"text": "Today I’m very pleased to announce our 1.0 release.Our API is now stable, and from this point forward we’ll follow semantic versioning.For more details on the driver, please check out this blog post.We’d like to thank the following people for helping us get to this release out:This release was preceded by 2 release candidates (rc0, rc1); if you are upgrading from an earlier version of the driver, please see their respective release notes for details on what has changed since v0.3.0.Below are some changes of note we’ve made, as well as a list of all tickets we’ve closed since 1.0.0-rc1.The minimum macOS version the driver now supports is 10.14.To improve the discoverability of driver error types, their definitions have now all been nested in an enumeration MongoError. The old protocol MongoError has been renamed MongoErrorProtocol. Additionally, a separate namespace and set of errors have been introduced for use within the BSON library. Please see our error handling guide for more details.We’ve made some naming changes to the BSON library to prevent collisions with other libraries as well as to provide more consistency within the library as a whole. Please see the migration guide section of our BSON guide for details on upgrading from 1.0.0-rc1’s API.In addition to prefixing some types in the BSON library, we’ve also made the following renames in the driver:The driver’s behavior around initial discovery of replica set members has changed as of SWIFT-742.Consider the following scenario: you have a three-node replica set with hosts running at localhost:27017, localhost:27018, and localhost:27019.\nPreviously, given a connection string containing a single one of those hosts (e.g. mongodb://localhost:27017) the driver would make a direct connection to that host only, and would not attempt to discover or monitor other members of the replica set.The driver’s default behavior is now to automatically attempt discovery of the entire replica set when given a single host.If you need to establish a direction connection, you can use the new connection string option directConnection=true, or set MongoClientOptions.directConnection to true. Omitting the option is equivalent to setting it to false.We’ve added a new Vapor example, demonstrating how to use the driver within the context of a CRUD application. If you have any suggestions for improvement or other example code you’d like to see added, please let us know!",
"username": "kmahar"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Swift Driver 1.0 Released | 2020-06-08T12:59:39.885Z | MongoDB Swift Driver 1.0 Released | 1,559 |
null | [] | [
{
"code": "",
"text": "Hi,I am Raghavender from Sweden. I am new to Mongo DB. started learning mongo DB now.Br\nRaghavender",
"username": "Kodumuri_Raghavender"
},
{
"code": "",
"text": "Hello Raghavender,Good to know you are learning MongoDB database programming. How are you learning and what are you planning to do with the new skills?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi @Kodumuri_Raghavender and welcome to the MongoDB Community Forums!started learning mongo DB nowHave you by chance seen the free MongoDB University Courses yet? They are a great place to learn about MongoDB, with courses for both developers and DBAs.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Kodumuri_Raghavender! Welcome to the community! We’re happy to have you here ",
"username": "Jamie"
},
{
"code": "",
"text": "Hi,i am learning through pluralsight.",
"username": "Kodumuri_Raghavender"
}
] | Hi I am from Sweden | 2020-06-03T09:51:38.990Z | Hi I am from Sweden | 1,961 |
null | [] | [
{
"code": "",
"text": "From where do I need to download the json file in order to understand the sessions and complete the quiz sections",
"username": "Gautu_Pinkyar"
},
{
"code": "",
"text": "In chapter 1, if I recall, you are not requested to manipulate any json file. All examples are in the class shared cluster.",
"username": "steevej"
},
{
"code": "",
"text": "How and where can I get the JSON file for quizzes? Could you help me with it. In short where Can i get the shared cluster where I can get the data on which the quiz questions can be solved",
"username": "Gautu_Pinkyar"
},
{
"code": "",
"text": "This is not related to Chapter 1 but related to the quiz asked in the next chapter",
"username": "Gautu_Pinkyar"
},
{
"code": "",
"text": "Sorry but the thread was posted in Chapter 1.Which quiz exactly?",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Gautu_Pinkyar,Please provide the information requested by @steevej-1495.Which quiz exactly?By the way, here is the connection string for the class atlas cluster :mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student -password m001-mongodb-basics~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Thanks,\nThe problem has been resolved",
"username": "Gautu_Pinkyar"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | Not able to find the JSON File due to which the quiz tests are failing | 2020-06-02T14:05:41.077Z | Not able to find the JSON File due to which the quiz tests are failing | 1,672 |
null | [] | [
{
"code": "",
"text": "when does mongo plan to support?",
"username": "Richard_Braman"
},
{
"code": "",
"text": "Welcome to the community @Richard_Braman!This was previously discussed in Validation using latest JSON Schema version? - #2 by Stennie_X, but there was no follow-up in terms of what features from were of interest for the user asking about newer JSON Schema drafts.If you have more specific feedback, please comment on the existing topic.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Json schema draft 06 | 2020-06-07T06:07:23.669Z | Json schema draft 06 | 2,454 |
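For context, MongoDB's $jsonSchema validator implements a subset of JSON Schema draft 4; a minimal example of the supported syntax (collection and field names are illustrative):

```js
db.createCollection("people", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["name"],
      properties: {
        name: { bsonType: "string", description: "must be a string" }
      }
    }
  }
})
```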
null | [
"stitch"
] | [
{
"code": "",
"text": "This is regrading MongoDB Stitch External service. How can I configure Mongodb Stitch webhook URL to be called with a jwt token? ( if I store a secret key in Stitch, it should authorize the request before calling function ).",
"username": "Roshan_Prabashana"
},
{
"code": "",
"text": "Hi Roshan – You should be able to configure this by following these two steps –Hope that helps!",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hi,Because webhooks only support JWT Tokens and not google auth.Can I set the JWK URI → https://www.googleapis.com/oauth2/v3/certsAnd validate the token through jwktokenstring in the header?It keeps returning invalid session on Stitch",
"username": "Chi_Tran"
}
] | Stitch Webhook URL with JWT | 2020-05-22T21:03:54.414Z | Stitch Webhook URL with JWT | 3,262 |
null | [] | [
{
"code": "",
"text": "I like how community has been integrated to forums (or forums integrated to community part of site). However, in this process there has been lost easy way to return to front page of forums, once you have dived to some topic reading it. Before MongoDB logo brought one to forum frontpage, now it takes you to community page.Maybe add forums next to Learn and Community on header? Without direct link to forum start, I think it is three clicks to get there via current navigation options. Or I didn’t find any easier way, which is also a bit bad. ",
"username": "kerbe"
},
{
"code": "",
"text": "Before MongoDB logo brought one to forum frontpage, now it takes you to community page.Yes. I miss that navigation convenience - now I have to use the browser’s go back button (feels not very comfortable for some reason).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi @kerbe and @Prasad_Sayayou can use a keyboard shortcut:\ng followed by l (small L) this will bring you back to the “latest” sectionThere are many more keyboard shortcuts which you can find when you press “?” in a non editor field.Hope that helps\nMichaelPS I miss the leaf-button, too",
"username": "michael_hoeller"
},
{
"code": "u",
"text": "you can use a keyboard shortcut:\ng followed by l (small L) this will bring you back to the “latest” sectionThank you. Actually I can press the u - this took me back to the main screen from this post (and thats what I wanted) ",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hello @Prasad_Saya,\nyes the u shortcut is fairly handy, I didn’t mention this as example since @kerbe asked for the start page.The use of keyboard shortcuts is anyway a very personal thing. E.g. I like to use:\ng,n – to go to the new posts\no – to open a post\nu – to go back\nj – to move to the next postSince this “back link” has been asked multiple times: To add a top link to the start page as @kerbe suggested can be helpful. Would that fit to the future plans @Stennie_X, @Jamie?Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Keyboard shortcuts are neat, but they don’t work for me. On my own computers I always use some Vim -keybinding plugins (Vimperator, Pentadactyl, Vimium or similar), so sites own keyboard shortcuts doesn’t work (or well, need to escape from browser extension to pass them through to site, which makes it more complicated )I have mouse with more buttons than I have figured sensible bindings for, and I use it’s back/forward quite often in browsing. Still, there are times when I prefer to get into very beginning of forum, and not backtrack there ",
"username": "kerbe"
},
{
"code": "Community Forums",
"text": "Still, there are times when I prefer to get into very beginning of forum, and not backtrack there @kerbe you can scroll to the bottom of the page and click on the Community Forums link to get back to your home page.",
"username": "Doug_Duncan"
},
{
"code": "Community Forums",
"text": "you can scroll to the bottom of the page and click on the Community Forums link to get back to your home page.Thank you @Doug_Duncan, I hadn’t noticed link down there. I was simply looking it from topside of page. This helps me to move around a bit more easier. ",
"username": "kerbe"
},
{
"code": "",
"text": "Quick update, y’all. We have changed the top Community nav link to point directly to the forums homepage, both here and on DevHub until the nav menus get built out more. Cheers ",
"username": "Jamie"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Quick way back to forum front page | 2020-06-04T07:24:39.265Z | Quick way back to forum front page | 4,263 |
null | [
"react-native"
] | [
{
"code": "",
"text": "I have a production RN mobile app that currently uses AsyncStorage which no longer meets our needs. Are there any known guides to refactoring apps to use Realm? What would be an expected level of effort for integrating Realm into a rather large, professional-grade app?Any links to such information would be most appreciated. Thanks.",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "@Michael_Stelly We do not have any guides for AsyncStorage to Realm specifically but we do have a couple of examples of using Realm in a RN app, I hope this helps -main/rnMongoDB Realm Tutorials. Contribute to mongodb-university/realm-tutorial development by creating an account on GitHub.We also have this RN helper framework -A higher-order component for listening to Realm data in React Native components.. Latest version: 0.1.2, last published: 6 years ago. Start using react-native-realm in your project by running `npm i react-native-realm`. There are no other projects in...",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks. I will check them out.",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Convert existing React Native app to Realm | 2020-05-18T17:21:34.840Z | Convert existing React Native app to Realm | 2,218 |
null | [
"data-modeling",
"react-native"
] | [
{
"code": " insertIdentity = [{\n name: 'Identity',\n properties:{\n city: City,\n }\n }]\n",
"text": "Hi,\nI would like to create a realtionship between a newly created object (lets say Identity) and a already existing object (let’s say City), also many identites should have a link to one City. I’ve tried on-to realationships, to-many realtionships and even inverse relationships but nothing seems to work. Every time I get the same error that I am trying to create an object with the same primary key.Identity object is:Where both identity and city have primary keys.Is what I need possible with realm?",
"username": "Milan_Nikolic"
},
{
"code": "",
"text": "@Milan_Nikolic You can create a many-to-many relationship with Realm - is that what you are wanting to do? You should be able to create a List in Identity for City and a List in City for IdentityDoes this work for you?",
"username": "Ian_Ward"
}
] | Relationships between new and existing object Realm react-native | 2020-04-20T16:56:26.938Z | Relationships between new and existing object Realm react-native | 1,679 |
null | [] | [
{
"code": "",
"text": "hi,\nI am facing below error while connecting atlas cluster from mongo shell. please help me\nMongoDB shell version v4.2.7mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student -password m001-mongodb- mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student -password m001-mongodb-basics\n2020-06-02T14:42:52.947+0100 E QUERY [js] uncaught exception: SyntaxError: unexpected token: string literal :\n@(shell):1:6\nmongo “mongodb://cluster0-shard-00-00-jxeqq.mongodb.net:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?replicaSet=Cluster0-shard-0” --authenticationDatabase admin --ssl --username m001-student --password m001-mongodb-basics\n2020-06-02T14:43:15.147+0100 E QUERY [js] uncaught exception: SyntaxError: unexpected token: string literal :\n@(shell):1:6",
"username": "pavankumar_24237"
},
{
"code": "",
"text": "Please provide a screenshot of what you are doing. Because what you are doing depends of where you are doing it. I suspect that you are doing that at the wrong place. A lot of people had the same syntax error while doing what you were doing and they were all doing it at the wrong place. You should always search the forum to see if your issue has already been discussed and resolved.",
"username": "steevej"
},
{
"code": "mongo shellexitmongo",
"text": "Hi @pavankumar_24237,mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student -password m001-mongodb- mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student -password m001-mongodb-basicsIt looks like that you have typed the same connection string twice. Also you are running this command inside the mongo shell.Please exit out of the mongo shell by running exit command and then run the mongo command at the OS prompt.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Error while Connecting to Our Class Atlas Cluster from the mongo Shell | 2020-06-02T13:48:05.527Z | Error while Connecting to Our Class Atlas Cluster from the mongo Shell | 1,171 |
null | [] | [
{
"code": "{ \n name: 'book',\n mobile:'7777777777',\n email:'[email protected]'\n},\n{ \n name: 'pen',\n mobile:'8888888888',\n email:'[email protected]'\n}\n{ \n name: 'book',\n mobile:'7777777777'\n},\n{ \n name: 'pen',\n mobile:'8888888888'\n}\n",
"text": "For example: I have a collection with fields as below.Here in this collection, I don’t want the email field anymore.I want the collection without “email” field as below.Is it possible?",
"username": "lalitha_devi"
},
{
"code": "$unset> db.collection.find()\n{ \"_id\" : ObjectId(\"5edaf777cdb02689803b0b8b\"), \"name\" : \"book\", \"mobile\" : \"7777777777\", \"email\" : \"[email protected]\" }\n{ \"_id\" : ObjectId(\"5edaf78dcdb02689803b0b8c\"), \"name\" : \"pen\", \"mobile\" : \"8888888888\", \"email\" : \"[email protected]\" }\n> db.collection.updateMany({}, {\"$unset\": {\"email\": 1}})\n{ \"acknowledged\" : true, \"matchedCount\" : 2, \"modifiedCount\" : 2 }\n> db.collection.find()\n{ \"_id\" : ObjectId(\"5edaf777cdb02689803b0b8b\"), \"name\" : \"book\", \"mobile\" : \"7777777777\" }\n{ \"_id\" : ObjectId(\"5edaf78dcdb02689803b0b8c\"), \"name\" : \"pen\", \"mobile\" : \"8888888888\" }\n>\nemail{}",
"text": "Hi @lalitha_devi and welcome to the community forums.You want to use the $unset operator.Note the above will remove the email field from all documents which is what is sounds like what you want. If you only want to do this on select documents, then you would need to change the {} portion to contain the proper match for the documents you want to update.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to remove a column from a mongo db collection? | 2020-06-05T19:02:52.641Z | How to remove a column from a mongo db collection? | 8,279 |
null | [
"golang"
] | [
{
"code": "",
"text": "In your documentation of the Go driver there’s always context.TODO() when using a base context.The description of context.TODO() isTODO returns a non-nil, empty Context. Code should use context.TODO when it’s unclear which Context to use or it is not yet available (because the surrounding function has not yet been extended to accept a Context parameter).Are you using context.TODO() for documentation purposes only?The description of context.Background()Background returns a non-nil, empty Context. It is never canceled, has no values, and has no deadline. It is typically used by the main function, initialization, and tests, and as the top-level Context for incoming requests.I’ve been working with Go for the last 7 years and maybe I’ve missed something obvious, but shouldn’t this be mentioned in the documentation?\nOr is context.TODO actually the preferred context?",
"username": "dalu"
},
{
"code": "var (\n\n\tbackground = new(emptyCtx)\n\n\ttodo = new(emptyCtx)\n\n)\n\n\n// Background returns a non-nil, empty Context. It is never canceled, has no\n\n// values, and has no deadline. It is typically used by the main function,\n\n// initialization, and tests, and as the top-level Context for incoming\n\n// requests.\n\nfunc Background() Context {\n\n\treturn background\n\n}\n\n\n// TODO returns a non-nil, empty Context. Code should use context.TODO when\n\n// it's unclear which Context to use or it is not yet available (because the\n\n// surrounding function has not yet been extended to accept a Context\n\n// parameter).\n\nfunc TODO() Context {\n\n\treturn todo\n\n}\n",
"text": "Oh well, not it’s at least official \nBoth context.Background() and context.TODO() return the same empty context.Source: - The Go Programming Language",
"username": "dalu"
},
{
"code": "context.Backgroundcontext.TODOcontext.TODO()",
"text": "Hi @dalu,As you mentioned in your response, they are the same context (no timeout/deadline and no associated values). The difference is semantic. I think of context.Background as “I’m deliberately passing in an empty context” and context.TODO as “there should be some other context here, which could be empty, but I’m not sure what the right value is yet so here’s a placeholder”.We use context.TODO() in the examples embedded in our documentation because the correct value would depend on the calling function, which is usually code from the user’s application. In this case, it’s up to the user to decide what the context should be, which usually depends on things like the maximum time they want the operation to take and whether or not the calling function has a context that it can propagate.– Divjot",
"username": "Divjot_Arora"
}
] | Go: why context.TODO() and not context.Background()? | 2020-06-05T20:32:38.470Z | Go: why context.TODO() and not context.Background()? | 29,241 |
null | [
"database-tools"
] | [
{
"code": "{\n type:\"xyz\"\n isAdmin:\"true\"\n}\n{\n name:\"abc\"\n age:25\n email:\"[email protected]\"\n}\n{\n name:\"efg\"\n age:24\n email:\"[email protected]\"\n}\n{\n name:\"abc\"\n age:25\n email:\"[email protected]\"\n type:\"xyz\"\n isAdmin:\"true\"\n},\n{\n name:\"efg\"\n age:24\n email:\"[email protected]\"\n type:\"xyz\"\n isAdmin:\"true\"\n}\n",
"text": "Hello,\nI need to restore a bson file which contains one or more fields likeNow i need to add these two fields which are present in user.bson file into user collection which already has some data like thisI want final collection like this",
"username": "Rakshith_HR"
},
{
"code": "mongorestoremongoimport",
"text": "Hi,I don’t think there’s a straightforward method to do what you described.As I understand it, you have a collection with two documents. Now you want to “restore” a BSON file that contains two additional fields. You wanted each document to be updated with those two fields.I don’t believe this is possible to do using standard tools like mongorestore or mongoimport, since you’re basically trying to perform an update to each document in the collection. The best way to approach this is to write a script that will:If you need further help with this, please post more details, e.g. the content of the BSON file with the additional fields, and some example documents from the collection.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "I am not too sure if that could work but I would try.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to restore a bson file without affecting the existing data in a collection | 2020-06-01T10:05:36.086Z | How to restore a bson file without affecting the existing data in a collection | 2,045 |
null | [
"performance"
] | [
{
"code": "",
"text": "Dear Members,\nWe are using the compact command in order to reduce the storage space of data and index.As we have upgraded MongoDB version from MongoDB-3.4.0 to MongoDB-3.6.17.We observed that the command compact execution time is more with MongoDB-3.6 and it took very less time with MongoDB-3.4.We need your Support to confirm that the execution time for compact command is more with MongoDB-3.6.Below are the environment detailsThanks & Regards,\nRam",
"username": "mohan_mohan"
},
{
"code": "",
"text": "Hi Mohan,I believe you have also opened a JIRA ticket about this: SERVER-48349. We should keep the discussion in that ticket so all information is available in a single place.One thing I would note is that although you’re talking about the running time of the compact command, I don’t believe you mentioned whether any reduction in file size was actually visible in both 3.4.0 and 3.6.17. It might be worth putting that detail into the SERVER ticket.Note that the 3.4 series is out of support since January 2020, and so it’s possible that you’re seeing the effect of an old bug or similar.Best regards,\nKevin",
"username": "kevinadi"
}
] | Compact command execution time is more with MongoDB-3.6 | 2020-05-21T11:07:33.280Z | Compact command execution time is more with MongoDB-3.6 | 1,824 |
null | [] | [
{
"code": "[initandlisten] connection accepted from 48.29.16.208:5307 #48 (2 connections now open)it capture \npublic ip in log\n\nanyways to capture private ip in log\neg: [initandlisten] connection accepted from 192.168.0.76:5307 #48 (2 connections now open)\n\n",
"text": "",
"username": "sathish_n"
},
{
"code": "mongodmongod192.168.0.xxx",
"text": "Hi,The mongod server only reports the client’s IP address depending on which interface it binds to in the server. It cannot know anything about any client that the client doesn’t present to the server.If you want to capture the private IP of the client, the mongod server first needs to bind to a local IP address (e.g. 192.168.0.xxx). However this would also make the server only accessible from that private network, which is something that you may or may not want.Best regards,\nKevin",
"username": "kevinadi"
}
] | MongoDB captures public IP in log, any way to capture private IP in log? | 2020-05-26T09:37:09.405Z | MongoDB captures public IP in log, any way to capture private IP in log? | 1,760 |
null | [] | [
{
"code": "",
"text": "Hi,Can anyone tell me how to restore a replica set using filesystem snapshot (copy of dbpath) without downtime ?I have simulated by following below process.It’s obvious that even if we will replace secondaries data files with the snapshot, it will again re-issue drop command from Primary’s oplog.I have found out below process but it will need completed downtime.Is there any other workaround to get back the original data without downtime using only the snapshot ?Also, is it possible to recover specific DB from snapshots ?",
"username": "Divyanshu_Soni"
},
{
"code": "mongodump",
"text": "Hi,Unfortunately I agree with you that I don’t see a method to do a “rolling restore”. A replica set can be used to perform a rolling maintenance, but a restore is a different matter entirely since it is a contradiction of what a replica set is.Let me explain: all nodes in a replica set should contain the same data, since the main function of a secondary is to be able to take over the primary’s function at a moment’s notice. Therefore, a secondary that has a different data than the primary is by definition not a secondary anymore, so not really part of the replica set anymore.So it’s not possible to do a rolling restore without downtime (at least when using filesystem snapshot). It is entirely possible if you have a backup of that specific database, though, as all you need to do to restore it is to insert that database into the primary again, so a likely scenario is:Since filesystem snapshot takes a snapshot of the volume, I don’t believe it’s possible to recover a specific content of the filesystem (database or otherwise) unless the snapshot was restored first.Best regards,\nKevin",
"username": "kevinadi"
}
] | Mongo restore using filesystem snapshot | 2020-05-30T22:08:47.974Z | Mongo restore using filesystem snapshot | 1,787 |
null | [] | [
{
"code": "",
"text": "how can i pass mongostat data in nodejs backend or is there some function to get mongostat data in nodejs?",
"username": "J_Ej"
},
{
"code": "mongostatmongostat> db.serverStatus().opcounters\n{\n\t\"insert\" : NumberLong(27975),\n\t\"query\" : NumberLong(5636),\n\t\"update\" : NumberLong(3991),\n\t\"delete\" : NumberLong(76),\n\t\"getmore\" : NumberLong(262),\n\t\"command\" : NumberLong(137933)\n}\nmongostat0stat_headers.godirty let res = await conn.db('admin').command({serverStatus: 1})\n console.log(res.opcounters)\n let cache_dirty = res.wiredTiger.cache['tracked dirty bytes in the cache']\n let cache_max = res.wiredTiger.cache['maximum bytes configured']\n console.log(100 * cache_dirty / cache_max)\n{\n insert: 27975,\n query: 5635,\n update: 3989,\n delete: 73,\n getmore: 262,\n command: 137876\n}\n0.7547448389232159\n0.75...dirtymongostat",
"text": "Hi,The output of mongostat are just post-processed output of db.serverStatus() command.You can determine which part of serverStatus forms which part of mongostat output from the source (which is in go, but should be quite readable): stat_headers.goThe source also contains which part of serverStatus is used: server_status.goNote that the raw output of serverStatus is showing absolute numbers. For example, the insert, query, update, delete, getmore, command output:However mongostat takes the delta of those numbers each seconds, that’s why it’s showing 0 if there’s no server activity. Which numbers are deltas and which are not should be explained in stat_headers.go.Quick example using node for getting the opcounters number and the dirty number:This code in my PC outputs:where the 0.75... number corresponds with the dirty number shown in mongostat.Best regards,\nKevin",
"username": "kevinadi"
}
] | mongostat data passing in nodejs | 2020-05-31T21:36:27.119Z | mongostat data passing in nodejs | 1,604 |
null | [
"aggregation"
] | [
{
"code": "{\n \"convert_NID\": { \n \"$let\": {\n \"vars\" : { \n \"id\": \"$NID\" \n },\n \"in\" : ObjectId(\"$$id\")\n }\n }\n}\n",
"text": "Hello !I’m trying to convert a string field (NID) to an ObjectIdAll NID field for each documents looks something like this “58c59c6a99d4ee0af9e0c325”\nSo 24 character.I tried this aggregation :But come up with error :Illegal ObjectId : argument must be a 24 character hexadecimalI tried to look at my variable by changing ObjectId(\"$$id\") with “$$id” and everything looks good. I get my field “convert_NID” with my 24 character string for each document.So what’s the problem ?",
"username": "Henry"
},
{
"code": "mongo$$id",
"text": "Hi @Henry, welcome!So what’s the problem ?The problem here is that ObjectId() is a client side method (i.e. mongo shell) and not an aggregation operator/method that is able to resolve the $$id reference on the aggregation pipeline execution.I’m trying to convert a string field (NID) to an ObjectIdIf you’re MongoDB server version is v4.0+, you could utilise $toObjectId aggregation pipeline operator to convert 24-long hexadecimal string to an ObjectId.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hi @wan !Thank you for your answer, it’s great.\nI was about to select your reply as the solution but perhaps I can ask for more help if I may.If you’re MongoDB server version is v4.0+, you could utilise $toObjectId aggregation pipeline operator to convert 24-long hexadecimal string to an ObjectId.The company for which I work for is working on MongoDB 3.6, so I don’t have access to your operator, and our database won’t be upgraded soon. For the meantime, is there another solution ?",
"username": "Henry"
},
{
"code": "ObjectId()ObjectId()",
"text": "Hi @Henry,The company for which I work for is working on MongoDB 3.6, so I don’t have access to your operator, and our database won’t be upgraded soon. For the meantime, is there another solution ?Depending on the use case, you could utilise any of MongoDB drivers to write a client side script to either update the field to ObjectId() format or to add a new field that contains the ObjectId() value.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Create an ObjectId with a variable | 2020-06-03T00:24:54.981Z | Create an ObjectId with a variable | 3,234 |
null | [
"indexes"
] | [
{
"code": "db.documents.aggregate([\n {\n $match: {\n \"_id\": { $in: [ObjectId(\"5ed78b0e7ae41374c00ca1b1\")] },\n \"deleted\":false,\n \"companyId\":\"5beb0743ad228201803435c0\",\n \"isExport\":{\"$ne\":true},\n \"folders\":{\"$in\": [\"5beb0743ad228201803435c0\"] },\n }\n },\n {\n \"$sort\": {\n \"createdAt\": -1\n }\n },\n {\n \"$limit\": 20\n },\n]).explain('allPlansExecution')\n{\n \"createdAt\" : 1,\n \"deleted\" : 1,\n \"folders\" : 1\n}\n$sort_id$match_id{\n \"companyId\" : 1,\n \"folders\" : 1,\n \"deleted\" : 1,\n \"createdAt\" : -1\n}\n{ _id: { $in: [...] } }{\n \"createdAt\" : 1,\n \"_id\" : 1,\n \"companyId\" : 1,\n \"folders\" : 1,\n \"deleted\" : 1,\n \"isExport\" : 1\n}\n",
"text": "hey i have a slight problem with my aggregation that looks like this:with the chosen index it uses that looks like thisbut that will result in my DB in a 800+ms query execution time.\nif i remove either the $sort or the _id in $match it uses either just the standard _id index or one of my indexes i defined:and both result in a 1-5ms query exeution time.\nFor the question why i query for { _id: { $in: [...] } }, its because i get a list of ids from another collection and then just wanna filter out any document that isn’t e.g in the current companyId and then sort it to send them in the correct order to the client.\nSo my question is how should my index look like to get the most performant response and why is it that index?i tried already indexes that look like e.g.and similar variations of that where the keys are in a different order or some keys are missing but they just get not used even if i use the hint option.thanks already for the help ",
"username": "Steffen_Meyer"
},
{
"code": "",
"text": "So my question is how should my index look like to get the most performant response and why is it that index?Hello Steffen,To start with please take a look at the following post, and it has somewhat similar issue about using a compound index with matching and sorting: Index scan not filtering as expectedEDIT Add: Also, please include the query plan results in the post.",
"username": "Prasad_Saya"
}
] | What is the right index for an Aggregation which matches with _id´s and sorts by date? | 2020-06-04T18:31:50.872Z | What is the right index for an Aggregation which matches with _id´s and sorts by date? | 2,151 |
null | [
"dot-net",
"replication"
] | [
{
"code": "IMongoCollection<User> collection = MongoDatabase.GetCollection<User>(\"UserCollection\")\n\t.WithReadConcern(ReadConcern.Majority)\n\t.WithWriteConcern(WriteConcern.WMajority)\n\t.WithReadPreference(ReadPreference.Secondary);\n\nRandom rnd = new Random();\n\nwhile (true)\n{\n\tUser newUser = new User\n\t{\n\t\tEmail = $\"{rnd.Next(int.MinValue, int.MaxValue)}@gg.com\"\n\t};\n\n\tcollection.InsertOne(newUser);\n\n\tif (newUser.Id == ObjectId.Empty)\n\t{\n\t\tthrow new Exception(\"Id is empty\");\n\t}\n\n\tvar findFluent = collection.Find(Builders<User>.Filter.Eq(x => x.Id, newUser.Id));\n\tUser foundUser = findFluent.FirstOrDefault();\n\n\tif (foundUser == null)\n\t{\n\t\tthrow new Exception(\"User not found\");\n\t}\n}\n",
"text": "I am using mongodb in a 3-member replicaset, trying to read my own writes. However, I seem to be getting stale data back from my reads. According to the documentation, by doing read/write with “majority” concerns, it should guarantee that:“Read operations reflect the results of write operations that precede them.”The same is stated in this post from 2018:The causal read R1 with read concern majority waits to see T1 majority committed before returning success.However, I can’t seem to be so lucky. The code below inserts a user and instantly tries to find that same user by the ID. This is done in a loop and only takes 1-3 iterations before it fails with “User not found”.I have specified “Majority” concerns for both read/write. And I specify “Secondary” as read preference for the sake of testing this. If I specify “Primary” as the read preference, this will never fail (obviously).What am I doing wrong?",
"username": "Jan_Philip_Tsanas"
},
{
"code": "",
"text": "The problem I was having, was that I was not doing the operations in a session. By doing so, I am no longer able to reproduce the problem.",
"username": "Jan_Philip_Tsanas"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Read own writes in a MongoDB replicaset. Casual consistency not working? | 2020-06-04T12:14:05.171Z | Read own writes in a MongoDB replicaset. Casual consistency not working? | 1,724 |
null | [] | [
{
"code": "db.collection.find({}).limit(1).sort({ createdAt: 1 })\ndb.collection.aggregation([\n { $sort: { createdAt: 1 }},\n { $limit: 1 }\n])\n",
"text": "What is the difference between using chaining and aggregation for data transformation. ExampleUsing chainingUsing aggregationPlease some one should help demystify this for me. Thanks",
"username": "Ndifreke_Essien"
},
{
"code": "",
"text": " Hi @Ndifreke_Essien, welcome to the community!Chaining aggregates documents from a single collection. While these operations provide simple access to common aggregation processes, they lack the flexibility and capabilities of the aggregation pipeline and map-reduce. You will find further details here in the MongoDB documentationHope this helps as a starter\nMichael",
"username": "michael_hoeller"
},
{
"code": "db.collection.find()mongofindsort()limit()findaggregateallowDiskUse$out$merge$exprfindallowDiskUsefind",
"text": "Hi @Ndifreke_Essien,The db.collection.find() helper in the mongo shell happens to use method chaining (aka a fluent API), but it would be possible to implement similar syntactic sugar for aggregation pipelines.The general distinction between these two commands is that find (historically, at least) does not perform any data transformation and has fewer options. Chained methods like sort() and limit() set some of the options used to construct a query cursor, and a subset of document fields can be specified in the query projection. If you need to return results without any data transformation, find is the straightforward choice.The aggregate command includes a large variety of pipeline stages and expression operators to allow you to reshape and transform documents. Since new aggregation stages and expressions continue to be added in successive major MongoDB releases, a fluent API will become outdated more quickly than constructing pipelines directly. Aggregation pipelines are designed to process larger result sets, so also have options like allowDiskUse to enable writing data to temporary files if needed as well as output stages like $out (output results to new collection) and $merge (merge results into a specified collection). If you are doing any significant data transformation, aggregation pipeline is the best approach.However, there has been some convergence in features of these two commands over time. MongoDB 3.6 introduced the $expr query operator which allows the use of aggregation expressions within a find query, and MongoDB 4.4 adds an allowDiskUse cursor option to allow find queries to write temporary data to disk if needed for in-memory sorts.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Alright, Thanks so much, i get it now",
"username": "Ndifreke_Essien"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Aggregation Vs. Chaining | 2020-05-25T20:49:40.252Z | Aggregation Vs. Chaining | 6,796 |
null | [
"node-js"
] | [
{
"code": "",
"text": "This function has one parameter “returnNewDocument”. we set this parameter to get the updated document.\ni want bring into attention that its not working. It gives the old document.please do the needful!!!thanks\nsuraj",
"username": "Suraj_wasan"
},
{
"code": "",
"text": "Could you provide example code and data that illustrate your claim?",
"username": "steevej"
},
{
"code": "mongo --version",
"text": "Please add also the output of\nmongo --version",
"username": "michael_hoeller"
},
{
"code": "",
"text": "At the time for preparing data for you guys, i though to directly try it in RoboMongo.so i test create test collection and use findOneAndUpdate and found its working there.so i think this is the problem in node.js which i am using.Thanks to both of you and sorry for long delay reply. ",
"username": "Suraj_wasan"
},
{
"code": "",
"text": "Thanks for the update and good luck!\nMichael",
"username": "michael_hoeller"
}
] | findOneAndUpdate isn't returning the new document | 2020-05-27T16:15:27.176Z | findOneAndUpdate isn’t returning the new document | 1,676 |
null | [] | [
{
"code": "",
"text": "Hi everyone,We’ve rolled out a new header & footer to align with the DevHub!Important note: clicking the header icon no longer brings you back to the forums home page.Instead, you’ll be directed to the DevHub homepage. This will make sense in the longer term as we bring more of the developer resources/properties together as a single experience, but it can be a bit jarring at first. If you need a direct link back to the forums homepage, please check the footer for now. We expect the menus in the header to also develop more, as DevHub develops more, going into the future too. Thanks for your patience & feedback!Best,Jamie",
"username": "Jamie"
},
{
"code": "g, h?",
"text": "Hello @Jamie,\nthe footer is always far away, I’d like to go with the keyboard shortcut\ng, h (g followed by an h) this will jump to the home screenMore keyboard short cuts can be found by pressing ? in a non editor field.Michael",
"username": "michael_hoeller"
},
{
"code": "?",
"text": "Thanks for the tip. This will help me.More keyboard short cuts can be found by pressing ?",
"username": "steevej"
},
{
"code": "",
"text": "Hi everyone,We’ve updated the link in the top nav to point directly to the forums homepage until the top nav gets built out more to deal with this UX problem.Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Community Header & Footer Update | 2020-05-28T20:33:11.798Z | Community Header & Footer Update | 3,117 |
null | [] | [
{
"code": "db.inventory.insertMany([\n { _id: 1, item: null },\n { _id: 2 },\n { _id: 3, item: 3 },\n { _id: 4 items: [1, 2, 3] },\n { _id: 5, items: [] }\n ])\ndb.inventory.find({ 'item': {$ne: null} })\n{ _id: 3, item: 3 }\ndb.inventory.find({ 'items.0': {$ne: null} })\n{ _id: 3, items: [1, 2, 3] },\n{ _id: 4, items: [] }\n",
"text": "Data:Query 1:Result 1:Query 2:Result 2:Why mongoDB finds this document: { _id: 4, items: }?\nIf use $exists then everything is ok. But why does not work with $ne (like https://docs.mongodb.com/manual/tutorial/query-for-null-fields/)",
"username": "Serg_Kash"
},
{
"code": "null> db.inventory.aggregate([\n {$match: {items: {$type:'array'}}},\n {$project: {first: {$arrayElemAt: ['$items',0]}, items: '$items'}},\n {$match:{first:null}}\n])\n{ \"_id\" : 5, \"items\" : [ ] }\n",
"text": "Hi Serg,I think the behaviour you’re seeing is described in SERVER-27442. It is a known ambiguity if you’re combining null and array notation with equality/inequality.A quick workaround I can think of is using aggregation to determine the field type (array), and project the first element of the array. Something like:Unfortunately it’s not an elegant solution, but off the top of my head, this aggregation could work.Best regards,\nKevin",
"username": "kevinadi"
}
] | Why mongodb finds this document ($ne: null for array) | 2020-05-21T04:59:28.334Z | Why mongodb finds this document ($ne: null for array) | 4,741 |
null | [] | [
{
"code": "db.getCollection('rateReachData').find({\"parentProfileType\" : \"pcclassDLS Without Video\",\"serviceProfileName\":\"S2_Max_Fast_16352_1024\",serviceProfileDetails: { $elemMatch: { attributeName: \"omsMaxSpeed\", attributeValue: \"Internet 18\" } }})\n",
"text": "HI Experts,Am using this query:need java script to update attributeValue: “Internet 18” to attributeValue: “Internet 12”",
"username": "bp_sin"
},
{
"code": "",
"text": "Hi experts,By using db.collection.update how can I update the above function and requirement.",
"username": "bp_sin"
},
{
"code": "",
"text": "Hi experts,can you please update on it.thanks",
"username": "bp_sin"
}
] | Reg: mongo query | 2020-06-03T20:34:25.337Z | Reg: mongo query | 1,487 |
null | [] | [
{
"code": "name: \"John Doe\"\nclass: 5\nMisChief: \"Bullying\"\nMisChiefmischief: \"bullying\"$regex/mischief/i : /bullying/i",
"text": "Suppose I have multiple fields in a document whose key value I don’t know(only the user who has added the keys know that)Now suppose user wants to finds a document specifying the key value, but he forgot what his key value was EXACTLYSo say I have this document:where the MisChief key was custom created.Suppose user wants to search for mischief: \"bullying\"I can use $regex operator(or pattern; regex object) to search for the value, but how can I specify the key as a regex expression ?I want to do something like this:/mischief/i : /bullying/iBut the field value isn’t being accepted as a regex expression(I used compass to test it)Another way to do this is to return all documents, and doing a case insensitive search for the keys from all those documents which is very inefficient.",
"username": "Susnigdha_Bharati"
},
{
"code": "$regex{ \"name\" : \"John Doe\", \"class\" : 5, \"mischief\" : \"Bullying\" }\n{ \"name\" : \"Jane\", \"class\" : 2, \"plays\" : \"Hop scotch\" }\ndb.test.aggregate([ \n { \n $addFields: { \n doc: { $objectToArray: \"$$ROOT\" } \n } \n }, \n { \n $match: { \n \"doc.k\": /^mis/i, \n \"doc.v\": /bull/i \n } \n } \n])\n",
"text": "I can use $regex operator(or pattern; regex object) to search for the value, but how can I specify the key as a regex expression ?You can search the keys of a document in MongoDB. This is using an aggregation query and using the $objectToArray operator. Here’s how:Consider input documents:The query:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for the link to $objectToArray. I did not know this operator yet.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I specify a document key's value as Regex expression to find a document in MongoDB? | 2020-06-03T20:32:25.779Z | How do I specify a document key’s value as Regex expression to find a document in MongoDB? | 6,988 |
null | [] | [
{
"code": "",
"text": "I am trying to make an update request to the server and my page automatically refreshes.\nIs there a way to update things in the database without causing the entire page to refresh in the browser?",
"username": "Ryan_Branco"
},
{
"code": "",
"text": "We need more information.What is you page? If you could provide a screenshot that would be helpful.",
"username": "steevej"
}
] | CRUD operation without page refreshing? | 2020-06-03T22:34:02.636Z | CRUD operation without page refreshing? | 1,449 |
null | [] | [
{
"code": "db.stories.find({ 'if': {$ne: true}, 'sa': 2, 'dd': {$ne : null}, 'ca': 11}).skip(3990).limit(30).sort({'vw':-1}).explain('executionStats'){\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"lushstories.stories\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"ca\" : {\n \"$eq\" : 11\n }\n },\n {\n \"sa\" : {\n \"$eq\" : 2\n }\n },\n {\n \"dd\" : {\n \"$not\" : {\n \"$eq\" : null\n }\n }\n },\n {\n \"if\" : {\n \"$not\" : {\n \"$eq\" : true\n }\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"LIMIT\",\n \"limitAmount\" : 30,\n \"inputStage\" : {\n \"stage\" : \"SKIP\",\n \"skipAmount\" : 0,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [\n {\n \"ca\" : {\n \"$eq\" : 11\n }\n },\n {\n \"sa\" : {\n \"$eq\" : 2\n }\n },\n {\n \"dd\" : {\n \"$not\" : {\n \"$eq\" : null\n }\n }\n },\n {\n \"if\" : {\n \"$not\" : {\n \"$eq\" : true\n }\n }\n }\n ]\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"vw\" : -1,\n \"if\" : 1,\n \"sa\" : 1,\n \"dd\" : -1,\n \"ca\" : 1\n },\n \"indexName\" : \"Viewed_By_Category\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"vw\" : [ ],\n \"if\" : [ ],\n \"sa\" : [ ],\n \"dd\" : [ ],\n \"ca\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"vw\" : [\n \"[MaxKey, MinKey]\"\n ],\n \"if\" : [\n \"[MinKey, MaxKey]\"\n ],\n \"sa\" : [\n \"[MinKey, MaxKey]\"\n ],\n \"dd\" : [\n \"[MaxKey, MinKey]\"\n ],\n \"ca\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n }\n },\n \"rejectedPlans\" : [\n {\n \"stage\" : \"SKIP\",\n \"skipAmount\" : 3990,\n \"inputStage\" : {\n \"stage\" : \"SORT\",\n \"sortPattern\" : {\n \"vw\" : -1\n },\n \"limitAmount\" : 4020,\n \"inputStage\" : {\n \"stage\" : \"SORT_KEY_GENERATOR\",\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"dd\" : {\n \"$not\" : {\n \"$eq\" : null\n }\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"dd\" : -1,\n \"if\" : 1,\n \"sa\" : 1,\n \"ca\" : 1,\n \"ha\" : 1\n },\n \"indexName\" : \"Story_Visible_With_Audio\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"dd\" : [ ],\n \"if\" : [ ],\n \"sa\" : [ ],\n \"ca\" : [ ],\n \"ha\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"dd\" : [\n \"[MaxKey, null)\",\n \"(null, MinKey]\"\n ],\n \"if\" : [\n \"[MinKey, true)\",\n \"(true, MaxKey]\"\n ],\n \"sa\" : [\n \"[2.0, 2.0]\"\n ],\n \"ca\" : [\n \"[11.0, 11.0]\"\n ],\n \"ha\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n }\n }\n }\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 30,\n \"executionTimeMillis\" : 5500,\n \"totalKeysExamined\" : 55743,\n \"totalDocsExamined\" : 55743,\n \"executionStages\" : {\n \"stage\" : \"LIMIT\",\n \"nReturned\" : 30,\n \"executionTimeMillisEstimate\" : 5372,\n \"works\" : 55744,\n \"advanced\" : 30,\n \"needTime\" : 55713,\n \"needYield\" : 0,\n \"saveState\" : 565,\n \"restoreState\" : 565,\n \"isEOF\" : 1,\n \"invalidates\" : 0,\n \"limitAmount\" : 30,\n \"inputStage\" : {\n \"stage\" : \"SKIP\",\n \"nReturned\" : 30,\n \"executionTimeMillisEstimate\" : 5372,\n \"works\" : 55743,\n \"advanced\" : 30,\n \"needTime\" : 55713,\n \"needYield\" : 0,\n \"saveState\" : 565,\n \"restoreState\" : 565,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"skipAmount\" : 
0,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [\n {\n \"ca\" : {\n \"$eq\" : 11\n }\n },\n {\n \"sa\" : {\n \"$eq\" : 2\n }\n },\n {\n \"dd\" : {\n \"$not\" : {\n \"$eq\" : null\n }\n }\n },\n {\n \"if\" : {\n \"$not\" : {\n \"$eq\" : true\n }\n }\n }\n ]\n },\n \"nReturned\" : 4020,\n \"executionTimeMillisEstimate\" : 5372,\n \"works\" : 55743,\n \"advanced\" : 4020,\n \"needTime\" : 51723,\n \"needYield\" : 0,\n \"saveState\" : 565,\n \"restoreState\" : 565,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"docsExamined\" : 55743,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 55743,\n \"executionTimeMillisEstimate\" : 80,\n \"works\" : 55743,\n \"advanced\" : 55743,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 565,\n \"restoreState\" : 565,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"keyPattern\" : {\n \"vw\" : -1,\n \"if\" : 1,\n \"sa\" : 1,\n \"dd\" : -1,\n \"ca\" : 1\n },\n \"indexName\" : \"Viewed_By_Category\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"vw\" : [ ],\n \"if\" : [ ],\n \"sa\" : [ ],\n \"dd\" : [ ],\n \"ca\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"vw\" : [\n \"[MaxKey, MinKey]\"\n ],\n \"if\" : [\n \"[MinKey, MaxKey]\"\n ],\n \"sa\" : [\n \"[MinKey, MaxKey]\"\n ],\n \"dd\" : [\n \"[MaxKey, MinKey]\"\n ],\n \"ca\" : [\n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\" : 55743,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0,\n \"seenInvalidated\" : 0\n }\n }\n }\n }\n },\n \"serverInfo\" : {\n \"host\" : \"redacted\",\n \"port\" : 27017,\n \"version\" : \"4.0.9\",\n \"gitVersion\" : \"fc525e2d9b0e4bceff5c2201457e564362909765\"\n },\n \"ok\" : 1\n}\n",
"text": "I have a query that uses an index but during the fetch is looking up too many documents.The index in question is:{\n“v” : 2,\n“key” : {\n“vw” : -1,\n“if” : 1,\n“sa” : 1,\n“dd” : -1,\n“ca” : 1\n},\n“name” : “Viewed_By_Category”,\n“ns” : “redacted”,\n“background” : false\n}the query in question:db.stories.find({ 'if': {$ne: true}, 'sa': 2, 'dd': {$ne : null}, 'ca': 11}).skip(3990).limit(30).sort({'vw':-1}).explain('executionStats')and this is the explain output:So why is the IXSCAN scan stage not using any of the predicates to filter, the indexBounds are all using [MaxKey, MinKey]This is returning the full number of records 55743 which is being fed into the fetch.Is there something I dont understand about these indexes?Thanks",
"username": "Gavin_Sansom"
},
{
"code": "",
"text": "So why is the IXSCAN scan stage not using any of the predicates to filter, the indexBounds are all using [MaxKey, MinKey]This is returning the full number of records 55743 which is being fed into the fetch.Is there something I dont understand about these indexes?Hello Gavin,Here are some details about using indexes with query filter and the sort operations: Use Indexes to Sort Query Results. Specifically see the sub-topics Sort and Index Prefix and Sort and Non-prefix Subset of an Index.The sort operation of your query is using the index for sure (noted by the missing SORT stage in the plan’s “winningPlan”).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for the reply.So I can see the index is being used, its just that to me there are no filters being applied.The sort column (vw) is the first in the index. The remaining fields all appear in the covering index (if, sa, dd, ca) and i would expect these values in the predicate to be used, not:“sa” : [ “[MinKey, MaxKey]” ]but more like something like“sa” : [ “[1, 1]” ]Using the same filter returns a count of 4647, so i would expect at least this number passed to the fetch, not 55743Thanks",
"username": "Gavin_Sansom"
},
{
"code": "sa + ca + vwsacavwsa + caca + sa",
"text": "You may want to try using a compound index with the keys sa + ca + vw (in that order). This will likely result in a query filter using the index on the fields sa and ca. Then the sort on vw too uses the index. Whether the index prefix should be sa + ca or ca + sa, you have to figure based upon the number of documents returned on the first key (see Create Queries that Ensure Selectivity).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for that.\nEnded up with the following index that worked{“ca” : 1, “sa” : 1, “vw”:-1, “dd” : -1, “if” : 1}",
"username": "Gavin_Sansom"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Index scan not filtering as expected | 2020-06-04T04:49:15.182Z | Index scan not filtering as expected | 3,451 |
null | [
"atlas-device-sync",
"react-native"
] | [
{
"code": "",
"text": "Hi guys. I am currently working on a fully-synced react-native app and i was wondering how can i get a list of unsynced actions. I would have a back-up screen where the user knows if the data is updated. In case the data is not updated, what shoul i do to acces a list of unsynced write actions? Thanks in advance!",
"username": "JustSurrenderBTW_N_A"
},
{
"code": "",
"text": "@JustSurrenderBTW_N_A The Realm SDK does provide an API that tells you if data has yet to be uploaded to the server and whether it has fully synced, so you could provide a general banner that tells the user they have un-synced data - https://docs.realm.io/sync/using-synced-realms/syncing-data#progress-notificationsIt won’t tell you specifically which objects have not been synced, if you needed this level of granularity then you would need to add a flag to your object and then set the flag once it has synced to the other side.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm unsynced data list | 2020-05-22T21:03:33.836Z | Realm unsynced data list | 2,173 |
null | [] | [
{
"code": "",
"text": "I know that Percona MongoDB supports MongoDB enterprise, but I wonder if there is anything else.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon the Percona Server for MongoDB uses the community code and then adds features on top of it. Take a look at the comparison of features between MongoDB Community, MongoDB Enterprise and PSMDB.While Percona offers support for MongoDB Enterprise, PSMDB is not MongoDB Enterprise. Percona added enterprise features but they are not using MongoDB’s code for those features.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is there a functional and technical difference between MongoDB and Percona MongoDB? | 2020-06-04T00:32:39.788Z | Is there a functional and technical difference between MongoDB and Percona MongoDB? | 3,862 |
null | [] | [
{
"code": "",
"text": "Hi! I just got invited to the private beta of MongoDb Realm. As a long time Realm user there are a few concepts that are not clear to me after spending a day on it.What is the right forum to ask questions about MongoDb Realm? Can these be discussed here even though it is a private beta?",
"username": "Simon_Persson"
},
{
"code": "",
"text": "@Simon_Persson you can just email me directly. Once we have a public release feel free to post here",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is the right forum for discussing the MongoDb Realm Private Beta? | 2020-06-03T13:26:47.476Z | What is the right forum for discussing the MongoDb Realm Private Beta? | 1,776 |
null | [
"production",
"cxx"
] | [
{
"code": "",
"text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.5.1. This release provides bug fixes since r3.5.0.Please note that this version of mongocxx requires the MongoDB C driver 1.15.0 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.The mongocxx 3.5.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions to the MongoDB community forum in the Drivers, ODMs, and Connectors category with the cxx-driver tag. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C++11 Driver 3.5.1 Released | 2020-06-03T21:37:23.237Z | MongoDB C++11 Driver 3.5.1 Released | 1,557 |
null | [
"production",
"cxx"
] | [
{
"code": "",
"text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.4.2. This release provides bug fixes since r3.4.1.Please note that this version of mongocxx requires the MongoDB C driver 1.13.0 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.The mongocxx 3.4.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions to the MongoDB community forum in the Drivers, ODMs, and Connectors category with the cxx-driver tag. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C++11 Driver 3.4.2 Released | 2020-06-03T21:36:37.049Z | MongoDB C++11 Driver 3.4.2 Released | 1,698 |
null | [
"production",
"cxx"
] | [
{
"code": "",
"text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.3.2. This release provides bug fixes since r3.3.1.Please note that this version of mongocxx requires the MongoDB C driver 1.10.1 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.The mongocxx 3.3.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions to the MongoDB community forum in the Drivers, ODMs, and Connectors category with the cxx-driver tag. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C++11 Driver 3.3.2 Released | 2020-06-03T21:35:37.834Z | MongoDB C++11 Driver 3.3.2 Released | 1,311 |
null | [
"production",
"cxx"
] | [
{
"code": "",
"text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.2.1. This release provides bug fixes since r3.2.0.Please note that this version of mongocxx requires the MongoDB C driver 1.9.2 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.The mongocxx 3.2.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions to the MongoDB community forum in the Drivers, ODMs, and Connectors category with the cxx-driver tag. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C++11 Driver 3.2.1 Released | 2020-06-03T21:34:22.712Z | MongoDB C++11 Driver 3.2.1 Released | 1,461 |
null | [
"release-candidate",
"rust"
] | [
{
"code": "mongodb",
"text": "The MongoDB Rust driver team is pleased to announce the first release candidate of the driver, v0.11.0. This release readies the driver for API stabilization in anticipation of a generally available 1.0 release. You can read more about the release on Github, and the release is published on https://crates.io under the package name mongodb. If you run into any issues, please file an issue on JIRA.Thank you, and we hope you enjoy using the driver!",
"username": "Samuel_Rossi"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Announcing the first Rust driver release candidate, v0.11.0 | 2020-06-03T18:49:43.844Z | Announcing the first Rust driver release candidate, v0.11.0 | 2,550 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.0.19-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.0.18. The next stable release 4.0.19 will be a recommended upgrade for all 4.0 users.\nFixed in this release:4.0 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Kelsey_Schubert"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.0.19-rc0 is released | 2020-06-03T16:47:57.709Z | MongoDB 4.0.19-rc0 is released | 1,610 |
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of 1.3.4 of the MongoDB Go Driver.This release contains several bugfixes. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.3.4 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.3.4 Released | 2020-06-03T15:51:41.147Z | MongoDB Go Driver 1.3.4 Released | 1,692 |
null | [
"aggregation"
] | [
{
"code": "{\n \"_id\" : \"5e4a8e13b0b2ddcad37fa308\",\n \"EntryDate\" : ISODate(\"2020-02-17T12:59:00.023Z\"),\n \"Text\" : \"randomtext\"\n}\nvar top = col.Find(c.EntryDate < point).SortByDescending(c => c.EntryDate).Limit(5);\nvar down = col.Find(c.EntryDate > point).Sort(c => c.EntryDate).Limit(5);\nvar result = top.Concat(down);\n",
"text": "Hi i have sorted collection by EntryDate:I know EntryDate point. I need take 5 items before and 5 items after this point. Now i do 2 query, can in optimization to 1 query?",
"username": "alexov_inbox"
},
{
"code": "",
"text": "I know EntryDate point. I need take 5 items before and 5 items after this point. Now i do 2 query, can in optimization to 1 query?You can try using the $facet aggregation. This allows multiple sub-pipelines (two in this case - one for before and the other for after) and then combine as one result within the same aggregation.",
"username": "Prasad_Saya"
},
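A minimal sketch of the $facet approach from the mongo shell (the collection name, field name and pivot value are illustrative, not taken from the thread):

var point = ISODate("2020-02-17T12:59:00.023Z");
db.items.aggregate([
  { $facet: {
      before: [
        { $match: { EntryDate: { $lt: point } } },
        { $sort: { EntryDate: -1 } },
        { $limit: 5 }
      ],
      after: [
        { $match: { EntryDate: { $gt: point } } },
        { $sort: { EntryDate: 1 } },
        { $limit: 5 }
      ]
  } },
  // stitch the two windows back into chronological order
  { $project: { window: { $concatArrays: [ { $reverseArray: "$before" }, "$after" ] } } }
])

One caveat: stages inside $facet sub-pipelines cannot use indexes, so for a large collection the original two-query approach against an index on EntryDate may actually be faster.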
{
"code": "",
"text": "good variant . but…\nwhat other ideas?",
"username": "alexov_inbox"
},
{
"code": "",
"text": "good variant . but…What (but)? Any concerns or issues.? It helps to think further if you are a bit more specific.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "mb mongodb not for this case ?\nbecause for hiload such request looks strange",
"username": "alexov_inbox"
}
] | Window query (get before and after) | 2020-06-02T10:54:39.010Z | Window query (get before and after) | 1,537 |
null | [] | [
{
"code": "App running on port ${port}...",
"text": "When I connect and it tells me very little about failure or success. As it fails a lot connecting from my windows 10. (same code) It has nothing to do with whitelisting because the IP isn’t changing and is already whitelisted. What can I do to understand where the driver is spending time during connecting? Is this indicative of the free Atlas platform.On success, I generally get this: ( I added timings for benefit)Connecting to Mongoose: 10:26:01 AM\nApp running on port 3000… 22 ms 10:26:01 AM\nDB connection successful! 326 ms 10:26:02 AMConnecting to Mongoose: 11:32:31 AM\nApp running on port 3000… 11:32:31 AM: 18 ms : pass\nDB connection successful! 11:33:06 AM: 35111 ms : failConnecting to Mongoose: 11:36:08 AM\nApp running on port 3000… 11:36:08 AM: 20 ms : pass\nUNHANDLED REJECTION! 💥 Shutting down… 11:37:23 AM: 75120 ms : fail\nError queryTxt ETIMEOUT cluster0-vplwu.mongodb.net Error: queryTxt ETIMEOUT cluster0-vplwu.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (dns.js:202:19)My code below Mongoose 5.9.14 to Atlasconst DB = process.env.DATABASE.replace(\n‘’,\nprocess.env.DATABASE_PASSWORD\n);DB=mongodb+srv://username:@cluster0-vplwu.mongodb.net/natours?retryWrites=truemongoose\n.connect(DB, {\nuseNewUrlParser: true,\nuseUnifiedTopology: true,\nuseCreateIndex: true,\nuseFindAndModify: false,\nserverSelectionTimeoutMS: 5000,\nfamily: 4\n})\n.then(() => console.log(‘DB connection successful!’, tm(started)));const port = process.env.PORT || 3000;const server = app.listen(port, () => {\nconsole.log(App running on port ${port}...);\n});I’m seeing failure rates on connections over 20% and that is being kind. On rare occasions, I see sub-second response times. Generally connections are made in 15 or 35 seconds. This is pretty consistent. Any 35-second connection won’t run on Heroku. I’m not sure a 15-second connection will get in done on Heroku.",
"username": "Scott_Hopper"
},
{
"code": "",
"text": "Hi @Scott_Hopper, welcome!Error queryTxt ETIMEOUT cluster0-vplwu.mongodb.net Error: queryTxt ETIMEOUT cluster0-vplwu.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (dns.js:202:19)Based on the error message that’s thrown, it’s failing on networking level. The failure seems to be related to attempting to resolve the SRV record.Try connecting from a different internet network, and see whether you’re getting the same problem. For debugging purposes, you could also try changing the DNS settings to point to a public DNS.Regards,\nWan.",
"username": "wan"
},
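For debugging, one way to rule the ISP resolver in or out is to point Node's resolver (which handles the SRV/TXT lookups behind mongodb+srv) at a public DNS server; a sketch, with the server addresses as examples only:

const dns = require('dns');
dns.setServers(['8.8.8.8', '1.1.1.1']); // use public resolvers for this process

// Time the exact lookup that is failing in the error message:
console.time('txt');
dns.resolveTxt('cluster0-vplwu.mongodb.net', (err, records) => {
  console.timeEnd('txt');
  console.log(err || records);
});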
{
"code": "",
"text": "Hi Wan,Yes, definitely a DNS issue. I’m putting together a document on this topic, I measured 30 restarts and only twice did it respond under 15 seconds and once under 5 seconds. While my ISP generally very good response times via speed test generally getting 100+ Mbs. Their DNS seems to be awful. I’m also documenting my response time on public DNS’ where 1/2 second appears to be a long time.",
"username": "Scott_Hopper"
}
] | Debugging the mongoose Connection to atlas | 2020-05-20T18:35:51.304Z | Debugging the mongoose Connection to atlas | 3,449 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi Team,What would happen if",
"username": "Joanne"
},
{
"code": "",
"text": "Hi @Joanne,Oplog size of a DB is reduced from number A to B (A>B)?Reducing the size of the oplog will reduce the amount of data it can hold. You want your oplog to be able to hold at least as much data as your longest potential downtime would be. If you have some sort of monitoring on your system you will be able to trend how much of a window your oplog contains over time. Reducing the oplog size could cause problems with a resync if too small as the oldest data could be overwritten before the sync completes and you would be stuck in a loop of trying to sync data.Oplog size of a replica set Primary is higher than that of the secondaries. Will secondary nodes notice any difference in syncing due to oplog size difference? Is it advisable to have different oplog size of members of same replica set?My recommendation is to keep the oplogs the same size on all machines in a replica set. You never know when your primary will go down and if the secondary has a smaller oplog size t will hold less data which could cause problems. Also note that all the data in the Primary oplog is replicated down to the Secondary oplogs so your Secondary oplog window would be shorter.",
"username": "Doug_Duncan"
},
{
"code": "Primary",
"text": "Oplog size of a replica set Primary is higher than that of the secondaries. Will secondary nodes notice any difference in syncing due to oplog size difference? Is it advisable to have different oplog size of members of same replica set?Hi,To further clarify Doug’s comment: Primary should be considered a transient role so another secondary can be elected in the event of failover. If you have different replica set member configurations in terms of oplog size or system resources, failover may result in unanticipated consequences (for example, reducing the time you have to get a former primary back online before it becomes stale).Varying member configurations are supported, but you should have good reasons for doing so and should definitely try to model failover scenarios. You will encounter fewer operational challenges using the default deployment settings with identically configured replica set members.Some replica set configuration/failover considerations:Failover is not always a result of failure. For example, regular maintenance activities such as upgrading software versions may also require you to briefly restart services on a replica set member.If your deployment is distributed across multiple data centres, consider the effect of chained replication (which is enabled by default). With chained replication a secondary can choose to replicate from another secondary of the replica set which is closer (based on network ping time) than the current primary.The current duration of the oplog is estimated based on the timestamps between first and last entries. If data insert or update patterns change significantly in your workload, the oplog duration will also be affected. Note: the upcoming MongoDB 4.4 server release adds a new Minimum Oplog Retention Period to provide better assurance on oplog duration.If you want to better understand behaviour for a proposed deployment configuration, I suggest standing up a replica set in a test environment to simulate scenarios.Also note that all the data in the Primary oplog is replicated down to the Secondary oplogs so your Secondary oplog window would be shorter.If oplog sizes vary, members with a larger oplog will be able to store more history. The oplog sizes on each replica set member are not determined by the size of the source oplog.Regards,\nStennie",
"username": "Stennie_X"
}
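For reference, the oplog size and the current replication window can be inspected on each member from the mongo shell, and (on MongoDB 3.6+) resized at runtime; a sketch:

rs.printReplicationInfo()   // configured oplog size plus first/last entry times

// Resize the oplog of the current member to 16000 MB (run on each member):
db.adminCommand({ replSetResizeOplog: 1, size: 16000 })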
] | What is the impact of different oplog sizes? | 2020-06-02T17:38:41.590Z | What is the impact of different oplog sizes? | 7,077 |
null | [
"app-services-user-auth",
"stitch"
] | [
{
"code": "// 1) Anon user is created\nif (!client.auth.isLoggedIn) {\n const user = await this._auth.loginWithCredential(new AnonymousCredential())\n}\n\n// 2) When user decide to login, I try to link them like that\nthis._auth.user.linkUserWithRedirect(new GoogleRedirectCredential()).then(user => console.log(\"I AM BACK\", user)).catch(console.error)\n// ^ this fails with following error\n//{\n// error: \"error exchanging access code with OAuth2 provider\",\n// error_code: \"AuthError\",\n// link: \"https://stitch.mongodb.com/groups/5d61344a9ccf6410569421c3/apps/5d617ecc552b569688b8acf8/logs?co_id=5ebe9bec77b63e267d604a01\"\n// }\n\n// 3) I've found out on stack overflow that adding url to provider could solve that but I get CORS error if I try to do this\nthis._auth.user.linkUserWithRedirect(new GoogleRedirectCredential(window.location.href)).then(user => console.log(\"I AM BACK\", user)).catch(console.error)\n",
"text": "So, I have issue with linking my anonymous user accounts with Google provider. My process is following:Any idea what could still be wrong?",
"username": "Ondrej_Sevcik"
},
{
"code": "window.location.hrefhttps://eu-west-1.aws.stitch.mongodb.com/api/client/v2.0/auth/callback?....{\"error\":\"must start a new auth request\"}",
"text": "Hi Ondrej, I am currently trying to achieve the same. I am struggling as well.Passing window.location.href to the provider already helped me a step further. However, after logging in to my google account I currently get redirected to https://eu-west-1.aws.stitch.mongodb.com/api/client/v2.0/auth/callback?.... with an error message:{\"error\":\"must start a new auth request\"}To avoid a CORS error I had to set “Allowed Request Origins” under “Settings” as mentioned here",
"username": "Roy_Rutishauser1"
},
{
"code": "",
"text": "I’ve actually gave up on that and moved to developing other features but your post gave me a little extra guidance on how to do it. Thanks for the CORS help.",
"username": "Ondrej_Sevcik"
}
] | Google Auth linking with Anon account | 2020-05-15T20:53:54.041Z | Google Auth linking with Anon account | 3,288 |
null | [
"graphql",
"flutter"
] | [
{
"code": "",
"text": "Hi,i didn’t saw something about flutter in their websites. Is there a way to use realm with flutter ?\nAnd can we do authentication (anonymous, google, facebook), notifications and other stuff with realm ?",
"username": "Nabil_Ben"
},
{
"code": "",
"text": "@Nabil_Ben We do not currently have a SDK with Flutter support. We are exploring a possible implementation but are currently blocked by features we need added to the Dart language before we can continue - we are tracking the issue here: https://github.com/realm/realm-object-server/issues/55You can use MongoDB Realm for authentication and push notification services todayhttps://docs.mongodb.com/stitch/services/push-notifications/",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Do i need realm if i’m using the Stitch GraphQL Api?\nI hope i can work around with those doc’s links for auth and notifications.",
"username": "Nabil_Ben"
},
{
"code": "",
"text": "@Nabil_Ben You can call the Stitch GraphQL API from whatever platform or language you choose - it’s designed to be agnostic. You do not have to use realm for persistence when using GraphQL",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can we use Realm with flutter? | 2020-06-01T20:30:53.146Z | Can we use Realm with flutter? | 11,983 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi Team,If we say there is a network partition and we saw two primaries in set transiently as per this link , why only one replica set faced heartbeat issue or network failure? Will not all replica sets go down if they are on same VM?",
"username": "Joanne"
},
{
"code": "",
"text": "Hi,Can you describe your scenario in more detail:If you have multiple replica set members on the same host VM (which would not be advisable for members of the same replica set), a single host VM failing will cause multiple members of your deployment to be unavailable.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "a single host VM failing will cause multiple members of your deployment to beHi Stennie,\nWe have 1P 2S 1A sharded clusters members in a replica set and all members of a replica set on different hosts but there are 4 different replica sets and each set’s members are divided and distributed on different VMs.Secondly, if VM-1 has primary of replSet01 and replSet03 and the transient primary is logged in replSet01 logs. Now if concluded that transient primaries are observed due to network failure then isn’t it obvious that replSet03 member on VM-1 will also log some network related issue?Other question is if we say there was network issue, why mongo process gets affected why not other applications running on same VM complaint?",
"username": "Joanne"
},
{
"code": "",
"text": "Now if concluded that transient primaries are observed due to network failure then isn’t it obvious that replSet03 member on VM-1 will also log some network related issue?Hi,Outcomes really depend on where the problem lies in your deployment. If there was a network connectivity problem on host VM-1, it would be reasonable to expect all instances on that host to be similarly affected.However, each VM also has its own virtual network interfaces and resources. A perceived network issue from the point of view of replication could be the result of a specific VM being non-responsive to network heartbeat pings. The actual cause may be something administrative (for example, VM live migration or backup) or an issue with resource contention.Is your question about an actual incident or a hypothetical scenario?For an actual incident I suggest you try to create a timeline of activity based on the MongoDB and system log files from your deployment. Ideally you would have a monitoring/metrics system in place (for example, MongoDB Cloud Manager or Ops Manager) which would provide a starting point for your investigation.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Is your question about an actual incident or a hypothetical scenario?Just some edge cases to consider, thanks for the reply though. It helps!",
"username": "Joanne"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Query about network failure and behaviour of replica sets | 2020-05-29T13:49:10.058Z | Query about network failure and behaviour of replica sets | 2,463 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.0.4 MongoDB Java & JVM Drivers release is a patch to the 4.0.3 release and a recommended upgrade.The documentation hub includes extensive documentation of the 4.0 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.0/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.0.4 Released | 2020-06-02T15:46:27.614Z | MongoDB Java Driver 4.0.4 Released | 3,308 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 3.12.5 MongoDB Java Driver release is a patch to the 3.12.4 release and a recommended upgrade.The documentation hub includes extensive documentation of the 3.12 driver, includingand much more.You can find a full list of bug fixes here .http://mongodb.github.io/mongo-java-driver/3.12/javadoc/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 3.12.5 Released | 2020-06-02T15:45:17.335Z | MongoDB Java Driver 3.12.5 Released | 2,906 |
null | [
"react-native"
] | [
{
"code": "Require cycle: node_modules/realm/lib/browser/util.js -> node_modules/realm/lib/browser/rpc.js -> node_modules/realm/lib/browser/util.js\n\nRequire cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.",
"text": "Chrome debugger outputs the following warning that appears to cause circular references and possible memory leaks.",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "@Michael_Stelly If you have a reproduction case and steps which result in a memory leaks please file an issue on GitHub - realm/realm-js: Realm is a mobile database: an alternative to SQLite & key-value stores and we will take a look",
"username": "Ian_Ward"
}
] | Realm node package has "cycle" warnings | 2020-05-31T14:48:02.515Z | Realm node package has “cycle” warnings | 3,798 |
null | [
"queries"
] | [
{
"code": "",
"text": "Insert Query -\ndb.product1.insertOne({ “_id” : 24, “pno” : “P320”, “pname” : “Juicer”, “price” : 5000, “qty” : 44, “MRP” : 10000 });Expression query - db.product1.find({$expr:{$gte:[“price”,“MRP”]}});Im expecting above document to be returned since price is greater that 10000. But there is no output. Please suggest.",
"username": "Durga_Krishnamoorthi"
},
{
"code": "",
"text": "In general, when you want the value of a field you need to prefix its name with the dollar sign. Otherwise the field name is used as the value or your operator.In your find() the strings price and MRP are compared. You should use $price and $MRP to indicate that you want the value.",
"username": "steevej"
}
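Applied to the query above, that means prefixing both field names so the stored numbers are compared instead of the literal strings "price" and "MRP":

db.product1.find( { $expr: { $gte: [ "$price", "$MRP" ] } } )

Note that with the sample document this returns nothing, since 5000 is not >= 10000; flip the operands (or use $lte) if the intent is to match documents where MRP >= price.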
] | Query using $expr doesn't return expected result | 2020-06-02T12:57:57.390Z | Query using $expr doesn’t return expected result | 1,528 |
null | [
"atlas-functions",
"stitch"
] | [
{
"code": "",
"text": "I have a mongoDB instance which has a set of data uploaded every 24 hours. There is then a set of aggregation pipelines which need to run to clean and prepare the data. The restrictions of stitch have meant I need to split the pipeline into a number of functions each of which will complete in under 90 seconds. Now I’m at the stage where I need to run these 5 scripts one after the other. Triggering them from database updates is one option but I don’t want script 2 firing before script 1 has completed.Question: Is there a good way to fire multiple functions in order , one at a time? The entire set of scripts takes 7 minutes to run.",
"username": "Neil_Albiston"
},
{
"code": "",
"text": "Hi Neil –Currently, I would recommend running the functions one after another via Triggers with some sort of state stored on the document itself (such as a field storing the last run time/stage). We have an item tracking longer runtime on feedback.mongodb.com and if you have another suggestion on how we could better support this case you can add it there.Thanks,\nDrew",
"username": "Drew_DiPalma"
}
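A sketch of the state-document idea (all names are illustrative, and it assumes the database trigger is configured to include the full document):

// Trigger function: watches the pipeline_state collection; this one handles stage 2.
exports = async function(changeEvent) {
  const state = changeEvent.fullDocument;
  if (state.stage !== 2) return; // other functions own the other stages

  const coll = context.services.get("mongodb-atlas")
                      .db("etl").collection("pipeline_state");

  // ... run this stage's aggregation here (must fit within the 90-second limit) ...

  // Advancing the stage field fires the trigger for the stage-3 function.
  await coll.updateOne({ _id: state._id },
                       { $set: { stage: 3, updatedAt: new Date() } });
};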
] | Multiple stitch functions in a chain | 2020-05-29T15:49:31.181Z | Multiple stitch functions in a chain | 2,839 |
null | [
"app-services-user-auth",
"stitch"
] | [
{
"code": "",
"text": "Hi All, I am new to mongo… is there a mongo driver I can use in my c# app to do authentication using stitch … or has anyone written anything like this I could use or see a snippet of code ?\nOr does anyone have a snippet for doing authentication with a JWT for a web api.Much appreciated.",
"username": "Shane_Ansara"
},
{
"code": "",
"text": "Hi Shane – For this, we would currently recommend using Wire Protocol support in conjunction with MongoDB’s .NET driver. We are working on a .NET SDK currently, but it’s still a few months out.",
"username": "Drew_DiPalma"
}
] | C# Authentication - Stitch | 2020-05-19T07:34:12.394Z | C# Authentication - Stitch | 2,540 |
null | [
"graphql",
"stitch"
] | [
{
"code": "const {\n Stitch,\n RemoteMongoClient,\n AnonymousCredential\n} = require('mongodb-stitch-browser-sdk');\n\nconst client = Stitch.initializeDefaultAppClient('weblocu-axibm');\n\nconst db = client.getServiceClient(RemoteMongoClient.factory, 'mongodb-atlas').db('<DATABASE>');\n\nclient.auth.loginWithCredential(new AnonymousCredential()).then(user =>\n db.collection('<COLLECTION>').updateOne({owner_id: client.auth.user.id}, {$set:{number:42}}, {upsert:true})\n).then(() =>\n db.collection('<COLLECTION>').find({owner_id: client.auth.user.id}, { limit: 100}).asArray()\n).then(docs => {\n console.log(\"Found docs\", docs)\n console.log(\"[MongoDB Stitch] Connected to Stitch\")\n}).catch(err => {\n console.error(err)\n});\n",
"text": "The MongoDB Stitch Documentation is showing an example where we use also the RemoteMongoClient, but do we really need this client when working only with GrapohQL for querying and mutating data?Here the example:",
"username": "Ivan_Jeremic"
},
{
"code": "",
"text": "Hi Ivan – You are correct, you do not need the RemoteMongoClient if you are just working with GraphQL/Authentication.",
"username": "Drew_DiPalma"
}
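For illustration, a GraphQL call without the RemoteMongoClient is just an authenticated HTTP POST; the endpoint shape and the token placeholder below are assumptions to verify against your app's settings:

const resp = await fetch(
  'https://stitch.mongodb.com/api/client/v2.0/app/<APP_ID>/graphql',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer <ACCESS_TOKEN>', // token from any Stitch auth flow
    },
    body: JSON.stringify({ query: '{ movies { title } }' }),
  }
);
console.log(await resp.json());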
] | Do we need RemoteMongoClient, when working with GraphQL? | 2020-05-19T20:34:20.539Z | Do we need RemoteMongoClient, when working with GraphQL? | 2,067 |
null | [
"graphql",
"stitch"
] | [
{
"code": "",
"text": "Hello all, I would like to discuss some problems I see in how Stitch works compared to other similar tools.Here the problems I see currently.Problem 1. Stitch uses SDK’s but for what? SDK’s are pretty 90’s early 2000, Since the GraphQL release, there is no need since you interact with your database only through the API. I know not all people use GraphQL but why not give instant REST API the same way we have GraphQL?Problem 2. You need the SDK for authentication since Stitch is a ‘Serverless App’ I think the auth logic should be baked in in Stitch without me having to use an SDK, more modern solutions like Strapi give you GraphQL endpoints for Login/Register/LostPassword, so I can simply make a Login Mutation instead of installing the SDK or make a Registration form which triggers a Register Mutation without any packages.Problem 3 Because all the SDK does regarding auth is storing the token in localStorage is it very hard to keep track of the current state of the User and if he is really authenticated, I know it is easy to check but not so easy to keep track as for example with endpoints that give me more information.Any chance we will see improvements here?",
"username": "Ivan_Jeremic"
},
{
"code": "",
"text": "Hi Ivan – Great questions, let me tackle them in order –Overall, we try to cater to a wide variety of developers and different development styles and are always looking at ways of being more flexible.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Yeah I think I was a bit to hard going on SDK’s, they are a great tool for people that interact with the mongo database, however it would be nice to have an option in the future to not use them if you don’t need it a solution would be like described above to have mutations for authentication.",
"username": "Ivan_Jeremic"
}
] | Questions about Stitch design decisions | 2020-05-28T20:24:32.232Z | Questions about Stitch design decisions | 2,121 |
null | [
"compass"
] | [
{
"code": "",
"text": "Hello there,I think MongoDB Compass is an incredibly powerful software using which one can do easy queries and manipulations to data. I also like the UI - its clutter free and easy to use.I want to request an additional feature in Compass which isn’t currently there:Viewing sub collections as dropdowns (Tree structure)Generally, sub collections are namespaced using a period(.) and they make sense since the data becomes easy to organise and work with.The feature I want is something like this -For subcollections ‘2020.06.02’, ‘2020.06.01’, ‘2020.05.31’ and ‘2019.01.01’ under ‘log’ database, the collections would be shown in this format2020.06.02\n2020.06.012020.05.312019.01.01Say we have nx.ny.na.nb…(n - nodes, x y a and b are integers) type collectionsSo if n10.n20 is the only sub collection under n0.n1 and n0.n2 has n66, n55, n66.n95, n55.n85, n66., n66.n95.n54 and n55.n85.n86 as sub-collections the output would be this -n0.n1.n10.n20n0.n2.n66\nn0.n2.n66.n0.n2.n66.n95\nn0.n2.n66.n95.n54n0.n2.n55n0.n2.n55.n85\nn0.n2.n55.n85.n86Adding a search bar to search for collections would also be helpful for those who know exactly what they are looking for.Implementing this would greatly improve readability, and de-clutter the collections for databases.",
"username": "Susnigdha_Bharati"
},
{
"code": "",
"text": "Could you please post this feature request in our feedback engine? Here is the link: Compass: Top (247 ideas) – MongoDB Feedback Engine",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "I have done as you have instructed; thanks a lot for building Compass and improving it continuously!",
"username": "Susnigdha_Bharati"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Feature Request - View sub collections as dropdowns in Compass | 2020-06-02T01:09:27.098Z | Feature Request - View sub collections as dropdowns in Compass | 2,439 |
null | [
"golang"
] | [
{
"code": " {\"_id\":{\"$oid\":\"5ed0cb4cb8d5570916d1ee7e\"},\"rolecode\":\"DHBK1_ROLE_05\",\"productid\":\"XYZ_Platform\",\"functioncodelist\":[\"DHBK1_FUNC_1\",\"DHBK1_FUNC_2\",\"DHBK1_FUNC_3\",\"DHBK1_FUNC_4\"],\"comid\":\"DHBK1\"} {\"_id\":{\"$oid\":\"5ed0cc67b8d5570916d1ef86\"},\"rolecode\":\"DHBK1_ROLE_06\",\"productid\":\"LAM_Platform\",\"functioncodelist\":[\"DHBK1_FUNC_1\",\"DHBK1_FUNC_2\",\"DHBK1_FUNC_3\"],\"comid\":\"DHBK1\"} {\"_id\":{\"$oid\":\"5ed0d23cb8d5570916d1f4c8\"},\"rolecode\":\"DHBK1_ROLE_09\",\"productid\":\"LAM_Platform\",\"functioncodelist\":[\"DHBK1_FUNC_1\"],\"comid\":\"DHBK1\"}DHBK1_FUNC_1db.company_role_function.update(\n { },\n { $pull: { functioncodelist: { $in: ['DHBK1_FUNC_1'] }}},\n { multi: true }\n)\n package main\n import (\n \"context\"\n \"fmt\"\n \"strings\"\n \"time\"\n \"gopkg.in/mgo.v2\"\n )\n func main() {\n var functionCode []string\n functionCode = append(functionCode, \"DHBK1_FUNC_1\")\n fmt.Println(functionCode)\n deleteArray(functionCode)\n }\n func deleteArray(functionCode []string) {\n session, err := mgo.Dial(\"mongo_uri_connect\")\n if err != nil {\n\t panic(err)\n }\n c := session.DB(\"users\").C(\"company_role_function\")\n err = c.Update(bson.M{}, bson.M{\"$pull\": bson.M{\"functioncodelist\": bson.M{\"$in\": functionCode}}}, bson.M{\"multi\": true})\n if err != nil {\n\tfmt.Println(err)\n }\n}\n# command-line-arguments\n .\\main.go:86:16: too many arguments in call to c.Update\n have (primitive.M, primitive.M, primitive.M)\n want (interface {}, interface {})\nbson.M{\"multi\": true}err = c.Update(bson.M{}, bson.M{\"$pull\": bson.M{\"functioncodelist\": bson.M{\"$in\": functionCode}}}, bson.M{\"multi\": true})DHBK1_FUNC_1",
"text": "I have data in MongoDB like below: {\"_id\":{\"$oid\":\"5ed0cb4cb8d5570916d1ee7e\"},\"rolecode\":\"DHBK1_ROLE_05\",\"productid\":\"XYZ_Platform\",\"functioncodelist\":[\"DHBK1_FUNC_1\",\"DHBK1_FUNC_2\",\"DHBK1_FUNC_3\",\"DHBK1_FUNC_4\"],\"comid\":\"DHBK1\"} {\"_id\":{\"$oid\":\"5ed0cc67b8d5570916d1ef86\"},\"rolecode\":\"DHBK1_ROLE_06\",\"productid\":\"LAM_Platform\",\"functioncodelist\":[\"DHBK1_FUNC_1\",\"DHBK1_FUNC_2\",\"DHBK1_FUNC_3\"],\"comid\":\"DHBK1\"} {\"_id\":{\"$oid\":\"5ed0d23cb8d5570916d1f4c8\"},\"rolecode\":\"DHBK1_ROLE_09\",\"productid\":\"LAM_Platform\",\"functioncodelist\":[\"DHBK1_FUNC_1\"],\"comid\":\"DHBK1\"}And I have Mongo shell to remove DHBK1_FUNC_1 element from array.Here is my Mongo shell:Then I write Go code to implement my Mongo shell.Here is my code:When I run my code, it showed this error:When I remove bson.M{\"multi\": true} in line err = c.Update(bson.M{}, bson.M{\"$pull\": bson.M{\"functioncodelist\": bson.M{\"$in\": functionCode}}}, bson.M{\"multi\": true}), it worked but doesn’t remove any element DHBK1_FUNC_1.Thank you",
"username": "Napoleon_Ponaparte"
},
{
"code": "mgo/v2{multi: true}mongo-go-drivercollection := client.Database(\"users\").Collection(\"company_role_function\")\nfilter := bson.M{}\nstatement := bson.M{\"$pull\": bson.M{\"functioncodelist\": bson.M{\"$in\": bson.A{\"DHBK1_FUNC_1\"}}}}\nresult, err := collection.UpdateMany(ctx, filter, statement)\nmgomgomongo-go-driver",
"text": "Hi @Napoleon_Ponaparte,Based on your code snippet, looks like you’re using mgo/v2 instead of the MongoDB official mongo-go-driver. The problem is caused by the option to specify multiple updates i.e. {multi: true} option.Using mongo-go-driver you should be able to utilise Collection.UpdateMany(). For example:If you’re using mgo please see Collection.UpdateAll() instead. Please note that mgo only has full support up to MongoDB server v3.6. If you’re starting a new project, I’d recommend to use mongo-go-driver.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "thank you for your help",
"username": "Napoleon_Ponaparte"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Remove array element in MongoDB in Go | 2020-06-01T03:21:55.738Z | Remove array element in MongoDB in Go | 6,319 |
null | [
"node-js"
] | [
{
"code": "//Here is the schema model that I have exported\nvar atticThings = require(\"./atticModel.js\")\n\n\nrouter.put(\"/:id\", function(req, res) \n{ \n var updateObj = {};\n if (req.body.title) updateObj.title = req.body.title; \n if (req.body.label) updateObj.label = req.body.label; \n if (req.body.position) updateObj.position = req.body.position; \n if (req.body.wrapping) updateObj.wrapping = req.body.wrapping; \n if (req.body.text) updateObj.text = req.body.text; \n\n atticThings.findByIdAndUpdate({_id: new mongodb.ObjectID(req.params.id)}\n , {$set: { updateObj }}, \n function(err, thing) {\n if(err)\n res.send(err);\n\n console.log(\"Gör en exit\");\n process.exit();\n })\n}); \n{\n \"_id\": {\n \"$oid\": \"5ec68451dcacc085aca9927d\"\n },\n \"title\": \"skor\",\n \"label\": \"sommarskor\",\n \"position\": \"h3\",\n \"text\": \"gamla och skorsASASDASD\",\n \"wrapping\": \"Resegarderob\",\n \"__v\": 0\n}\n{\n \"_id\": {\n \"$oid\": \"5ec68b214e96687e44141440\"\n },\n \"title\": \"1111111\",\n \"label\": \"11111111111\",\n \"position\": \"h3\",\n \"text\": \"111111111111\",\n \"wrapping\": \"Löst\",\n \"__v\": 0\n}\n",
"text": "I use rest-api in Node and mongodb with a model. I run into problem when I update a document. The complete code looks like this\nI have put a process.exit() at the end just to keep the problem isolated to the server.\nWhen the statement process.exit() is reached I\ncan I see in mongodb(mlab) has made an update but in addition\nthe original document is still there.\nSo an update resulted in two documents one with the update and the other is the originalI have looked here to find a solution\nHow to use mongoose to find by id and update with an example | ObjectRocket\nI have google to find a solution to this but without luck\nI have also looked in mongodb documentation and I do as it say in the dokHere is a copy from mongodb(mlab) where I have two documents…\nI updated several field with 111.\nA funny thing is that the ObjectId that I really updated was this one 5ec68451dcacc085aca9927d but the one that was updated was 5ec68b214e96687e44141440//Alexandra",
"username": "Tony_Johansson"
},
{
"code": "req.params.idnew mongodb.ObjectID()",
"text": "Hi @Tony_Johansson, welcome!So an update resulted in two documents one with the update and the other is the originalWithout more context, it’d difficult to reproduce this issue. I would suggest to check what req.params.id value and also what new mongodb.ObjectID() returns, to make sure it is the value that you’re after.I’d also recommend to check anywhere in the code whether upsert option is set. See also Model.findByIdAndUpdate to learn more about the behaviour of the method.If you’re still encountering this issue, it would help others to assist if you could provide the following:Regards,\nWan.",
"username": "wan"
}
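One detail worth double-checking in the route above (an observation about the snippet, not a confirmed diagnosis): ES6 shorthand wraps the object under an extra key, so the two $set forms below behave very differently:

// Sets a single field literally named "updateObj":
atticThings.findByIdAndUpdate(req.params.id, { $set: { updateObj } }, cb);

// Sets title, label, position, wrapping and text directly:
atticThings.findByIdAndUpdate(req.params.id, { $set: updateObj }, cb);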
] | Update does not work as expected | 2020-05-21T17:18:39.176Z | Update does not work as expected | 2,023 |
[] | [
{
"code": "",
"text": "My Aggregation\nagg909×617 25.1 KB\nOutput from aggregation\nouput990×807 131 KBBut I need data in that format\nneed796×578 40.5 KBhow can I do that?",
"username": "Roytter_Staff"
},
{
"code": "[\n {\n \"$addFields\" : {\n \"feeds.author\" : \"$author\"\n }\n },\n {\n \"$replaceRoot\" : {\n \"newRoot\" : \"$feeds\"\n }\n }\n]",
"text": "You can use the present output as input to the following pipeline and get the desired output (add this pipeline to your present aggregation):",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Change output Data format by Aggregation | 2020-06-01T10:05:52.530Z | Change output Data format by Aggregation | 2,151 |
null | [
"compass"
] | [
{
"code": "",
"text": "I am new to MongoDB . I am using the free tier.I created a database called yelp and a collection called reviews using MongoCompass. I am trying to load the Yelp dataset ( Yelp Dataset) from a JSON file. I imported the JSON file successfully using compass. The import indicates that 8.2M documents completed successfully. But the collection only has 660K documents. Why is the import dropping so much data?The total size of the JSON file is 5.8 GB.\nThe total size of my database collection is only 325MB.Can someone help me figure out what has happened?",
"username": "Michelle_Levine"
},
{
"code": "",
"text": "The free tier limit on size cannot accept the 5.8 GB of data. The limit is 512MB of space. I do not know why it stopped at 325MB. May be you have some other database on the same cluster.",
"username": "steevej"
},
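To verify what is actually using the quota, a few shell commands help (run against the cluster):

db.adminCommand({ listDatabases: 1 })  // data size of every database
db.stats()                             // the current database
db.reviews.stats()                     // a single collection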
{
"code": "",
"text": "Thanks @steevej . I figured it was something like that. Well i guess i’ll have to do a trial of one of the better tiers to load my data. thanks.",
"username": "Michelle_Levine"
}
] | Documents disappear after successful import | 2020-05-28T20:24:27.976Z | Documents disappear after successful import | 2,538 |
null | [
"flutter"
] | [
{
"code": "",
"text": "I setup my first Atlas Cluster, but alas I don’t have a Dart/Flutter api option.\nOf course I have no problems with Dart/Flutter on our internal server(s), but my first Atlas project just threw me a curve.\nIdeas?\nThanks,\n-nat",
"username": "Nat_Gross"
},
{
"code": "",
"text": "@Nat_Gross Are you looking for a server side or client side Dart binding?",
"username": "Ian_Ward"
},
{
"code": "Db db = new Db(\"mongodb://localhost:27017/whatever\");",
"text": "In order not to confuse terminology, because Mongodb Dart pkg is a server side library that clients use to connect, here is a simple line I use to connect from Android devices:Db db = new Db(\"mongodb://localhost:27017/whatever\");So, if Atlas replaces my localhost, how do I connect to it?Reference: mongo_dart | Dart Package\nThanks,\n-nat",
"username": "Nat_Gross"
},
{
"code": "mongo-dartmongodb+srvmongo-dart",
"text": "Hi Nat,Welcome to the MongoDB Community!There is currently no Officially Supported Driver for Dart/Flutter.The mongo-dart driver is community-supported but unfortunately it does not appear to be actively maintained or overly feature complete. From a quick review of the driver documentation, I don’t see mention of support for TLS/SSL (which is required for Atlas) or SNI extension support in particular (which is required for connecting to Atlas free/shared tier deployments). It’s also not clear what versions of MongoDB server the mongo_dart driver supports.I found an open pull request for adding TLS/SSL support as well as discussion around supporting the mongodb+srv connection string format. My takeaway is that mongo-dart currently isn’t a recommendable solution for working with modern MongoDB deployments, as these are more basic connectivity requirements. Official drivers are tested against a more comprehensive set of MongoDB specifications so expected behaviour can be rationalised.I suggest looking into MongoDB Stitch and considering the GraphQL API as an alternative starting point with Dart/Flutter support.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I guess my Atlas aspirations will wait until an official Dart/Flutter driver is released.\nThanks.\n-nat",
"username": "Nat_Gross"
},
{
"code": "",
"text": "A post was split to a new topic: With the graphql Api will we need to have an external server for authentication or notifications?",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connect to Atlas with Dart/Flutter app | 2020-05-04T21:14:18.519Z | Connect to Atlas with Dart/Flutter app | 11,220 |
null | [
"aggregation"
] | [
{
"code": "{ \n \"_id\" : \"b299e2d2-4011-4968-a55c-638d9c37664b\", \n \"times\" : {\n \"2020-05-30\" : 1685, \n \"2020-05-29\" : 5470\n }\n},\n{ \n \"_id\" : \"0859e698-7e57-44ba-b84e-cf17ac9d0f77\", \n \"times\" : {\n \"2020-05-30\" : 1520, \n \"2020-05-29\" : 10085\n }\n}\ndb.getCollection(\"hours\").aggregate(\n { \n $project : {\n '_id': \"$_id\",\n 'sum': { $sum : [ \"$times.2020-05-30\", \"$times.2020-05-29\" ] }\n }\n },\n { $sort : { 'sum': -1 } }\n);\n2020-05-*db.hours.aggregate([\n { $project : { _id : \"$_id\", times : { $objectToArray: \"$times\" } } },\n { $project : { _id : \"$_id\", total : { $sum : \"$times.v\" } } },\n { $sort : { 'total': -1 } }\n]);\n$match{ $match : { \"times.k\" : { $regex : \"2020-05-.*\" } } },\n",
"text": "I have a MongoDB schema that looks like this:and I want a query that sums all the values of each date together. I got this:but I don’t want to have to list out every date that I want to include, i just want all of them (or even better, a way to regex match them, e.g. 2020-05-*; is there any shortcut to this besides switching to mapreduce, or changing my data structure?EDIT: I was able to solve the issue with the $objectToArray operator, like so:So I’m trying to filter using the $match operator now (between the two projects):And it’s filtering down to the records that have at least one key that pass that filter, but it’s still summing all the keys, rather than just the keys that match that filter. How can I fix this?",
"username": "Pugabyte"
},
{
"code": "{ $project : { _id : \"$_id\", times : { $objectToArray: \"$times\" } } }timesdb.hours.aggregate([\n { \n $project : { times : { $objectToArray: \"$times\" } } \n },\n { \n $unwind: \"$times\" \n },\n { \n $match : { \"times.k\" : { $regex : \"2020-05-.*\" } } \n },\n { \n $group: { _id: \"$_id\", total: { $sum: \"$times.v\" } } \n },\n { \n $sort : { total: -1 } \n }\n])\n\ndb.hours.aggregate([\n { \n $project : { times : { $objectToArray: \"$times\" } } \n },\n { \n $unwind: \"$times\" \n },\n { \n $group: {\n _id: \"$_id\", \n total: { \n $sum: { \n $cond: [ \n { $eq: [ { $regexMatch: { input: \"$times.k\" , regex: \"2020-05-.*\" } }, true ] }, \n \"$times.v\", 0 \n ] \n } \n } \n } \n },\n { \n $sort : { total: -1 } \n }\n])\n\ndb.hours.aggregate([\n { \n $project : { \n total: { \n $reduce: { \n input: { $objectToArray: \"$times\" },\n initialValue: 0,\n in: {\n $let: {\n vars: {\n matches: { $regexMatch: { input: \"$$this.k\" , regex: \"2020-05-.*\" } }\n },\n in: { \n $cond: [ { $eq: [ \"$$matches\", true ] }, { $add: [ \"$$value\", \"$$this.v\" ] }, \"$$value\" ]\n }\n }\n }\n }\n }\n }\n },\n {\n $sort: { total: -1 }\n }\n])",
"text": "Hello Pugabyte Griffin,The { $project : { _id : \"$_id\", times : { $objectToArray: \"$times\" } } } returns an array of of times objects. To filter the array elements and then grouping and summing you can use the any of the following aggregates:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Querying with dynamic keys | 2020-05-30T22:09:05.037Z | Querying with dynamic keys | 16,676 |
null | [
"containers",
"security"
] | [
{
"code": "",
"text": "Hello,We use a wildcard certificate for enabling tls encryption between our web app and our mongodb instance running inside of a docker container on a remote server. Until recently, it was working fine until it began to return “MongoServerSelectionError: certificate has expired”. However we use this same wildcard certificate across multiple services and they have been functioning normally. Is there another reason why mongodb would generate this response?",
"username": "Matthew_Piccinich"
},
{
"code": "",
"text": "Welcome to the community @Matthew_Piccinich!Until recently, it was working fine until it began to return “MongoServerSelectionError: certificate has expired”. However we use this same wildcard certificate across multiple services and they have been functioning normally.What specific MongoDB driver & version are you using and how recently did you start seeing the certificate expiry error?One possibility is that your wildcard certificate was signed with an intermediate or root certificate that has expired. If so, the solution would be updating the certificate trust store for any affected environments.For example, Sectigo (formerly known as Comodo) had a root certificate which expired on the weekend: Sectigo AddTrust External CA Root Expiring May 30, 2020. This would not be an issue for clients with updated trust stores, but could cause a scenario where clients with outdated trust stores would no longer be able to verify valid certificates.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello Stennie,What specific MongoDB driver & version are you using?We’re using MongoDB 4.2, I’ve started the mongodb instance via the latest mongodb docker image available.How recently did you start seeing the certificate expiry error?It started happening yesterday and we use a certificate issued by Sectigo.It looks like you’ve pointed me in the right direction Stennie.Thank You!",
"username": "Matthew_Piccinich"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "certificate has expired" Response from known working certificate | 2020-05-31T23:48:44.859Z | “certificate has expired” Response from known working certificate | 6,838 |
null | [] | [
{
"code": "performancesmatchesconst player = { ... }\nconst performances = await mongo.collection('performances').find({ \"player._id\": player._id }).toArray()\nmatchesconst matches= await mongo.collection('matches').find({ matchId: { $in: performances.map(p => p.matchId) } }).toArray()\n// performances\n// indexes: { player._id: 1 }\n_id: ObjectId\nmatchId: string\nplayer: {\n _id: ObjectId\n username: string\n clantag: string\n}\nstats: {\n kills: number\n deaths: number\n downs: number[] // max length 8\n gulagKills: number\n gulagDeaths: number\n revives: number\n contracts: number\n teamWipes: number\n lootCrates: number\n buyStations: number\n teamPlacement: number\n teamSurvivalTime: number\n xp: {\n score: number\n match: number\n bonus: number\n medal: number\n misc: number\n challenge: number\n }\n}\n\n\n// matches\n// indexes: { matchId: 1 }\n_id: ObjectId\nmatchId: string\nmapId: string\nmodeId: string\nutcSecStart: number\nutcSecEnd: number\nteams: []{ // average array length ~150\n {\n name:string\n placement: number\n players: []{\n username: string\n clantag: string\n platform: string\n stats: {\n kills\n deaths\n score\n assists\n headshots\n executions\n damageDone\n damageTaken\n longestStreak\n timePlayed\n distanceTraveled\n percentTimeMoving\n }\n }\n }\n\n",
"text": "I have a service that queries profile data for a given user. Right now there are only two users, myself and my brother, but the results are dreadfully slow. The profile data consists of two collections, performances and matches. I have an indexes on both collections that are listed below in the collection models but I could be misusing them.Performances are queried first, with an operation like this:This query takes about 2 seconds for 330 results out of ~2,500 documents of a few kb in size each. After getting an array of performances I query matches with an operation like this:This query takes about 1 second per match returned out of ~1,500 documents of approx 2mb in size each. As you can imagine, fetching a hundred or more matches takes several minutes.Sorry if this is an overload, but the document models are listed below. I’d like to know if I can optimize indexing or data structure to improve performance for this scenario. I’ve tried piping the results to a stream rather than returning the aggregate but did not see any performance increase.Thanks for looking!",
"username": "Dan_Lindsey"
},
{
"code": "findconst cursor = performancesColl.find( { \"player._id\": player._id } ).project( { matchId: 1 } );\n\nconst matchIds = [];\n\nwhile(await cursor.hasNext()) {\n const doc = await cursor.next();\n matchIds.push(doc.matchId);\n}\n\nconst matches = matchesColl.find( { matchId: { $in: matchIds } } )\n .project( { // .... include only the fields you need in the application // } )\n .toArray();\nmongofindperformancesdb.performances.find( { \"player._id\": player._id } ).explain()",
"text": "Hello Dan Lindsey,I have some suggestions.1. Using Projection:Projection used with a find query allows restrict the fields you need in the result. See about projection at db.collection.find.2. Aggregation Lookup:Another way of building the query is using an aggregation’s $lookup stage which lets you query both the collections together on a common field (in a “join” operation). This will get the result in a single query.3. Index Usage:You can verify if an index is being applied in a query correctly by using the explain method. The explain generates a query plan which has details about the index usage. For example from the mongo shell, get a query plan for the find method you are using with performances collection:db.performances.find( { \"player._id\": player._id } ).explain()",
"username": "Prasad_Saya"
}
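A sketch of the $lookup suggestion, using the collection and field names from the post (the projected fields are just examples):

db.performances.aggregate([
  { $match: { "player._id": playerId } },
  { $lookup: {
      from: "matches",
      localField: "matchId",
      foreignField: "matchId",
      as: "match"
  } },
  { $unwind: "$match" },
  // keep only what the application needs; the ~2 MB teams array is dropped here
  { $project: { matchId: 1, stats: 1, "match.mapId": 1, "match.modeId": 1, "match.utcSecStart": 1 } }
])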
] | Collection of 2k docs ~2mb each very slow to query | 2020-05-31T23:48:59.819Z | Collection of 2k docs ~2mb each very slow to query | 5,829 |
null | [] | [
{
"code": "String connectString = \"mongodb+srv://username:[email protected]/kchange?ssl=true&retryWrites=true&w=majority\";\nMongoClientSettings settings =\n MongoClientSettings.builder()\n .applyToConnectionPoolSettings(builder ->\n builder.maxSize(15).minSize(10).maxConnectionIdleTime(6000, TimeUnit.SECONDS))\n .applyToClusterSettings(builder ->\n builder.applyConnectionString(new ConnectionString(connectString)))\n .applyToSslSettings(builder -> builder.enabled(true))\n .applyToSocketSettings(builder -> {\n builder.connectTimeout(100, TimeUnit.SECONDS);\n builder.readTimeout(100, TimeUnit.SECONDS);\n }\n )\n .build();\n",
"text": "I am using a Java Driver 3.12.0 to connect to Mongo Atlas Free Tier. I get an error “Query failed with error code 8000 and error message 'user is not allowed to do action” when trying to run a simple queryWhen i connect without using the MongoClientSettings object then it seems to work fineAny pointers ?",
"username": "Chandra_B"
},
{
"code": "",
"text": "user is not allowed to do actionIt means user is not having enough privileges\nWhat command you were running?\nDoes it run with shell?\nAlso M0 cluster has some restrictions.Some admin commands cannot be run",
"username": "Ramachandra_Tummala"
}
] | Query errors when connecting to Atlas | 2020-05-31T23:50:38.586Z | Query errors when connecting to Atlas | 1,531 |
null | [
"react-native"
] | [
{
"code": "Realm.write()openDb()name: \"Brand\"\nproperties:\nbrandName: \"string\"\ncolor: \"string\"\nicon: \"string\"\ntextColor: \"string\"\nExceptionsManager.js:76 TypeError: _realm.default.write is not a functionwrite()isClosedundefined console.log('realm is closed?', Realm.isclosed);realm is closed? undefined",
"text": "I’ve been stymied why I cannot get my app to recognize my wrapper function for Realm.write().Here is a gist that covers what code is involved in the execution. My openDb() wrapper appears to work,But when I try to write to the db I get\nExceptionsManager.js:76 TypeError: _realm.default.write is not a function\nThis is my first attempt to actually incorporate Realm into our existing app. This Realm demo has a quick expected turnaround (next 24hrs). So any help is most appreciated.UPDATE: After messing around with the API calls, I imported realm directly into my App.js to test it. I found that not only was write() failing, but the isClosed boolean registered as undefined.When I logged the call, console.log('realm is closed?', Realm.isclosed);, it returned realm is closed? undefined. So, still don’t know what’s going on here.",
"username": "Michael_Stelly"
},
{
"code": "realmRealmrealmRealm",
"text": "I’ve updated the gist so, hopefully, others won’t have to struggle with the multiple vectors of 1. learning this API, 2. translating the current example in the doc to ES7+, and 3. scouring the Internet for guidance.A couple of points:It wasn’t apparent to me that creating a new Realm also opens it. It took me a long time to resolve that issue because it wasn’t documented.Using the symbol realm interchangeably is bad practice and confusing. Assuming folks understand that Realm is not realm, other than knowing that JS is case-sensitive is not helpful. Using unambiguous symbols is JS 101. The updated version uses Realm only twice: once to import and then to construct the instance.",
"username": "Michael_Stelly"
},
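A minimal working shape of that pattern (illustrative schema and values): constructing the Realm opens it, and all writes happen inside a write transaction:

import Realm from 'realm';

const BrandSchema = {
  name: 'Brand',
  properties: { brandName: 'string', color: 'string' },
};

const realm = new Realm({ schema: [BrandSchema] }); // opened on construction

realm.write(() => {
  realm.create('Brand', { brandName: 'Acme', color: '#ff0000' });
});

console.log('closed?', realm.isClosed); // note the capital C
realm.close();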
{
"code": "",
"text": "More “learn as you go” issues. I’ll close.",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [react native] realm.write is not a function | 2020-05-21T23:08:25.941Z | [react native] realm.write is not a function | 4,078 |
null | [
"node-js"
] | [
{
"code": " const ConfigSchema = {\n BrandSchema: {\n name: 'Brand',\n primaryKey: 'id',\n properties: {\n id: 'string',\n brandName: 'string',\n icon: 'string',\n partner: 'bool',\n color: {\n brand: 'string',\n brandCustomer: 'string?',\n brandPro: 'string?',\n },\n },\n },\n };\n import ConfigSchema from './models/Schemas';\n const {BrandSchema} = ConfigSchema;\n const realm = new Realm({\n schema: [{name: BrandSchema.name, properties: BrandSchema.properties}],\n deleteRealmIfMigrationNeeded: true,\n });\nUnhandled JS Exception: Error: type must be of type 'string', got (undefined)schema: [{BrandSchema}],",
"text": "I created a schema and imported it into App.js.When I create a realm and pass it the schema,I get the following error:Unhandled JS Exception: Error: type must be of type 'string', got (undefined) which tells me it’s not recognizing the schema object.What exactly is undefined about the schema object? Passing in the whole object, schema: [{BrandSchema}], fails as well. What am I doing wrong?",
"username": "Michael_Stelly"
},
{
"code": "const realm = new Realm({\n schema: [BrandSchema],\n deleteRealmIfMigrationNeeded: true,\n});",
"text": "A lot of the issues I encounter stem from a lack of adequate examples and misunderstanding those that do exist. I finally understand what caused this issue. The constructor was fine. I had to cast the schema ID to a string if I wanted to auto-increment it. ",
"username": "Michael_Stelly"
},
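A sketch of the working shape described above, with the primary key declared and supplied as a string (the names and the nextId counter are illustrative):

const BrandSchema = {
  name: 'Brand',
  primaryKey: 'id',
  properties: {
    id: 'string',
    brandName: 'string',
  },
};

realm.write(() => {
  realm.create('Brand', { id: String(nextId), brandName: 'Acme' });
});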
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | JS exception - undefined type for schema object | 2020-05-29T15:49:17.123Z | JS exception - undefined type for schema object | 4,688 |
null | [] | [
{
"code": " var u = Builders<Blog>.Update\n .SetOnInsert(f => f.BlogId, blogId)\n .SetOnInsert(f => f.VideoId, videoId)\n -- other fields...\n\nvar blog = Blog.FindOneAndUpdate<Blog>(c => c.BlogId == blogId && c.VideoId == VideoId, u, \n new FindOneAndUpdateOptions<Blog>{IsUpsert = true, ReturnDocument = ReturnDocument.After}\n);\n\nbool wasUpsert = ? \nreturn wasUpsert;\n",
"text": "I do query with logic “insert if not exists”. How i can check was upsert or return exists document ?",
"username": "alexov_inbox"
},
{
"code": "Blogblog == nullBlogIdVideoId",
"text": "Hi @alexov_inbox,You didn’t specify so I’m guessing your using C# from the syntax?The ReturnDocument option provides two options, Before and After (see also, properties). You’ve specified After, which means you’ll always get the document you inserted when performing an upsert. Using the value Before will return a null if the document did not previously exist. This does complicate getting the final document should an upsert occur, but I’m guessing you already have a Blog object hanging around somewhere and it should be fairly straight forward to check if blog == null and set the values for BlogId and VideoId.Thanks,Justin",
"username": "Justin"
},
{
"code": "",
"text": "its good use Before and check to null. But i need get After(final) document. In your variant i should use second querybool wasUpsert = blog == null;\nvar blog = finalBlog;",
"username": "alexov_inbox"
}
] | findAndModify return WriteResult | 2020-05-26T19:54:40.731Z | findAndModify return WriteResult | 1,374 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi everyone,I need some advice on how to store stock trades in MongoDB Atlas. I will potentially have millions of documents for each user depending on how much they actively trade.I am currently using the bucket pattern to store financial transactions with aggregate and it works well for syncing, but I can’t really filter and sort results properly.So I was wondering would I use the same pattern for when it comes time to store the trades?",
"username": "Jeffery_Vincent"
},
{
"code": "",
"text": "Hi Jeffery,The bucket pattern is a great solution for storing large quantities of data but you’re right that it does have a few drawbacks, specifically when it comes to things like sorting. Ultimately, your use case should drive the schema decisions you make.What are you doing with the stock trades you’re storing? Displaying them in pages (bucket pattern)? Performing aggregate calculations (computed pattern)? Different schema design patterns are useful for different use cases.I recommend checking out a blog entry by Daniel Coupal called “Building with Patterns: A Summary”: Building with Patterns: A Summary | MongoDB Blog .If you’re paging through trades, there’s also my blog post on this exact topic: Paging with the Bucket Pattern - Part 1 | MongoDB BlogI hope this helps get you started!",
"username": "Justin"
},
{
"code": "",
"text": "Hey Justin,Really nice articles. I read through the patterns and pagination with bucket pattern.\nIn your examples for the bucket pagination you give the id of the inserting document “customerId + timestampOfTrade” I understand this makes a unique entry and can still be queried to find by customer id by using regex but are there any other benefits? Could the timestamp be useful in any kind of query?",
"username": "Ihsan_Mujdeci"
},
{
"code": "_id$gt$lt_id",
"text": "The second concatenated part of _id can really be any positively increasing monotonic value.Logically, a positively increasing monotonic value only comes in two flavors: a positively increasing consecutive, or positively increasing non-consecutive value. Consecutive values are almost always a bad idea for use in databases for a variety of reasons (mostly because they’re hard to generate reliably in a distributed system). That leaves us with non-consecutive numbers.A timestamp is a readily available and easy to understand. Calling it a positively increasing non-consecutive monotonic value is also accurate but much less understandable. Timestamps also have the benefit of using $gt or $lt range queries on _id per customerId through a time range using a timestamp.",
"username": "Justin"
},
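For illustration, with _id values shaped like "<customerId>-<epochSeconds>" a single customer's buckets for a time window become a plain range scan on _id (the values below are made up, and this relies on the timestamp part having a fixed width so string order matches time order):

db.history.find({
  _id: {
    $gte: "7000000-1588291200",  // from 2020-05-01
    $lt:  "7000000-1590969600"   // to   2020-06-01
  }
})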
{
"code": "$gt$lt_id",
"text": "\" Timestamps also have the benefit of using $gt or $lt range queries on _id per customerId through a time range using a timestamp.\"\nThat’s very true, thanks for the insight mate.",
"username": "Ihsan_Mujdeci"
},
{
"code": "",
"text": "I am currently using the bucket pattern to store financial transactions with aggregate and it works well for syncing, but I can’t really filter and sort results properly.Can you be specific about the filtering and sorting aspects. What are the issues you are facing? Any use case you want to discuss (and may be find some solution).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Is bucket pattern applies to chat history? That could be updated. It looks like bucket pattern is hard to perform update operation.",
"username": "Yong_Wei_Lun"
}
] | Storing millions of potential documents | 2020-02-10T16:57:04.671Z | Storing millions of potential documents | 5,380 |
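
A quick, hedged sketch of the bucket shape discussed in the thread above, written for the mongo shell. The collection name (history), field names, and values are illustrative assumptions, not taken from the original posts.

// One bucket per customer per time window; _id concatenates customerId and
// the timestamp of the first trade in the bucket, as Justin describes.
db.history.insertOne({
  _id: "7000000_1590000000",
  customerId: 7000000,
  count: 2,
  trades: [
    { ticker: "MDB", qty: 100, ts: new Date("2020-05-20T14:00:00Z") },
    { ticker: "MDB", qty: -50, ts: new Date("2020-05-21T09:30:00Z") }
  ]
});

// Because the suffix is a positively increasing monotonic value, a
// per-customer time range becomes a simple string range query on _id:
db.history.find({ _id: { $gte: "7000000_1590000000", $lt: "7000000_1591000000" } });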
null | [
"queries",
"node-js"
] | [
{
"code": "",
"text": "i want to find 10 results from database and for that i am using this command\ndb.collections.find().limit(10)\nbut actually what i want is to also show next 10 results on every click of the client i.e. firstly i want to show 10 documents but if the client demands more documents what command should i use to show next 10 documents after the old 10 documents on every click.I am using javascript, nodejs, express",
"username": "Rishabh_Dhingra"
},
{
"code": "",
"text": "A very good source of information is Paging with the Bucket Pattern - Part 1 | MongoDB Blog",
"username": "steevej"
},
{
"code": "",
"text": "I think the bucket pattern only applies if you have huge number of list that doesn’t update. Like IoT logs",
"username": "Yong_Wei_Lun"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Infinite scrolling using express ejs mongodb and nodejs | 2020-05-01T19:37:37.847Z | Infinite scrolling using express ejs mongodb and nodejs | 7,315 |
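
A hedged sketch of range-based pagination for the infinite-scroll question above, using the Node.js driver. The collection handle and sort key are assumptions; the idea is to remember the last _id of the previous page instead of using skip(), which degrades as the offset grows.

// Returns the next page of 10 documents after lastId (or the first page if
// lastId is undefined). Sorting on _id keeps the order stable between clicks.
async function nextPage(coll, lastId) {
  const filter = lastId ? { _id: { $gt: lastId } } : {};
  return coll.find(filter).sort({ _id: 1 }).limit(10).toArray();
}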
null | [
"cxx"
] | [
{
"code": "#include <cstdlib>\n#include <iostream>\n#include <bsoncxx/builder/stream/document.hpp>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/client.hpp>\n#include <mongocxx/instance.hpp>\n#include <mongocxx/uri.hpp>\n\n\nusing bsoncxx::builder::stream::close_document;\nusing bsoncxx::builder::stream::document;\nusing bsoncxx::builder::stream::finalize;\nusing bsoncxx::builder::stream::open_document;\nmongocxx::instance instance{};// don't put inside main \n\nint main() {\n\t//mongocxx::instance inst;\n\tstd::string name = \"myname\";\n\tstd::string pw = \"mypassword\";\n\t\n\tmongocxx::client conn{ mongocxx::uri{ \"mongodb+srv://\"+name+pw+\"@cluster0-xokay.mongodb.net/test?retryWrites=true&w=majority\" } };\n\tauto coll = conn[\"spam\"];\n\tauto res = coll[\"db\"];\n\tbsoncxx::builder::stream::document document{};\n\tdocument << \"Data\" << \"hello\";\n\tres.insert_one(document.view());\n\n\t\n\treturn 0;\n}\n",
"text": "Hi,I have no idea why cannot connect MongoDB Atals test mongocxx program via Visual Studio:Environment :Visual Studio 2015 Community\nBoost 1.64\ncmake 3.17.2\nmongo-c-driver-1.16.2\nMongoDB C++ 11 Driver r3.5.0",
"username": "deve_ru"
},
{
"code": "",
"text": "Don’t you need a colon as separator between name and pw?",
"username": "steevej"
}
] | Types.hpp error | 2020-05-30T07:31:34.245Z | Types.hpp error | 1,735 |
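
To illustrate steevej's point about the missing separator: the SRV connection string expects the username and password to be separated by a colon. This is a hedged one-line sketch (the credentials are placeholders); the same shape applies when concatenating the URI in the C++ code above.

// name and pw are placeholders; note the ":" between them.
const uri = "mongodb+srv://" + name + ":" + pw + "@cluster0-xokay.mongodb.net/test?retryWrites=true&w=majority";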
null | [
"swift"
] | [
{
"code": "class AccountClass: Object {\n @objc dynamic var account_id = 0\n @objc dynamic var account_name = \"\"\n \n let transactionList = List<TransactionClass>()\n override static func primaryKey() -> String {\n return \"account_id\"\n }\n}\n\nclass TransactionClass: Object {\n @objc dynamic var transaction_id = UUID().uuidString\n @objc dynamic var transaction_name = \"\"\n @objc dynamic var account: AccountClass!\n @objc dynamic var _amount = 0.0\n\n override static func primaryKey() -> String {\n return \"transaction_id\"\n }\n}\n\nclass ItemClass: Object {\n @objc dynamic var item_id = UUID().uuidString\n @objc dynamic var item_name = \"\"\n\n let transactions = List<TransactionClass>()\n \n func getAverageValue() -> Double {\n self.transactions.forEach { trans in\n let id = trans.transaction_id\n let acct = trans.account.account_name\n let name = trans.transaction_name\n let amt = trans.amount\n print(id, name, acct, amt)\n }\n \n let avgValue: Double = self.transactions.filter(\"account.account_id == 1\").average(ofProperty: \"_amount\") ?? 0.0\n return avgValue\n }\n \n override static func primaryKey() -> String {\n return \"item_id\"\n }\n}\nfunc avgTest() {\n let realm = try! Realm\n let i0 = realm.objects(ItemClass.self).filter(\"item_name == 'Item 0'\").first!\n let account = realm.object(ofType: AccountClass.self, forPrimaryKey: 1)\n \n let t0 = TransactionClass()\n t0.transaction_name = \"Transaction 0\"\n t0.account = account\n t0.amount = 10.0\n \n let t1 = TransactionClass()\n t1.transaction_name = \"Transaction 1\"\n t1.account = account\n t1.amount = 20.0\n \n let t2 = TransactionClass()\n t2.transaction_name = \"Transaction 2\"\n t2.account = account\n t2.amount = 30.0\n \n let a = [t0, t1, t2] \n try! realm.write {\n i0.transactions.append(objectsIn: a)\n }\n \n let avg = i0.getAverageValue()\n print(avg)\n}\nlet avg = i0.getAverageValue()",
"text": "Need another set of eyes on this one.The issue is we are intermittently getting incorrect averages using the Swift .average function on a List. We believe the results returned from a filter are what’s causing the .average function to be incorrect based on the number of returned results from the filterThere are three objects in the items List, and then what Realm says is the average of the ‘value’ property. The properties are transaction_id, transaction_name, account, and value. There’s a simple loop in the ItemClass that prints it’s transactions when the GetAverageValue function is called which also prints the average calculated by Realm.As you can see, the three values are 10, 20 and 30 and the average is 20, but realm is showing 15.0AC8AABC6-C053-41B1-AD13-0C031B7766F2 Transaction 0 Account 0 10.0\n69E9CC4A-2A3A-4437-9A7A-C6EEAA5899BD Transaction 1 Account 0 20.0\nD13E6164-E346-4A39-A561-6A780185025B Transaction 2 Account 0 30.0\nAverage: 15.0There are three Realm objects, AccountClass, TransactionClass and ItemClass. There’s one account with an id of 1 and one item with a name Item 0.and then a simple function that creates three transactions and adds them to the existing itemnote that even if this let avg = i0.getAverageValue() is called outside that function, the results is the same; 15.0A bit more info.Note that within the getAverageValue function, it filters where “account.account_id == 1” and it appears that filter is failing on an intermittent basis which is causing the average function to not calculate correctly.I changed the code to print out the filtered results, and then print the average. Sometimes finds two results, sometimes 3 and sometimes just 1. I ran the test several times, deleting ALL the Realm files and creating the data fresh each time and then in a separate button in the UI, called the getAverageValue functionRun 1\nA1E1DC70-7CFB-46CA-8D93-71EE9B3600A1 Transaction 0 10.0\nD91F2AFE-3BEC-4B2E-BE5E-212AED0794F2 Transaction 2 30.0\n20.0Run 2\nAE8E587B-F137-4351-BF7F-27F130073E18 Transaction 0 10.0\n9D7A1C56-E47F-41E5-8881-A8CA7273AF25 Transaction 1 20.0\n15.0Run 3\n242A7F57-5546-45F5-9905-D5A759BB2F5A Transaction 0 10.0\n10.0",
"username": "Jay"
},
{
"code": "",
"text": "Hi Jay,I noticed you’ve posted this to a few channels including GitHub: realm-cocoa/issues/6540. To avoid duplicating effort, it would be best to keep the investigation focused on GitHub for this potential bug.Thanks for the detailed report!Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_XThanks for the input.Just FYI, we were told very specifically in the past to post on the forums first to get more eyes on it in case it was something we were doing (along with a potential SO post).If no response or resolution for a duplicatable issue, then to open a ticket on the support site, which I did.I was recently told by Ian the support forums are being depreciated and to open a ticket on Github, so now I’ve done that. I also closed the ticket on the support site.Sorry for the duplication but we are following directions from your team - I know there are changes so we’re just trying to adapt as we move forward. Perhaps a forum post that defines a clear escalation path for issues would be in order?Note that I closed the ticket on the support site #6213 and Wes responded. I’ve closed it three times now so not sure why it’s remained open.Would you like me to delete the forum post, or possibly shorten it and point to the github report?",
"username": "Jay"
},
{
"code": "",
"text": "@Jay The forums will always be open. I said in my email that Realm’s Freshdesk implementation will be closed and merged with MongoDB’s support system - which we are excited about. The support system is for paid support of Realm’s sync product - in the same way it is for MongoDB.But for any bug type issue that is for the local database without sync, such as this, I would always recommend filing an issue on Github.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi Jay,Apologies for confusion on any redirection between channels.As a quick guide:Community forums are the recommended starting point for discussion on development or product questions if you do not have a paid support plan. There is no SLA (or guarantee) around responses, but anyone in the community is encouraged to share suggestions or experience so you should get more eyes on your posts. Our engineering and product teams also look for community discussions where we can help, but have to balance availability with development and product priorities.If you are using Realm Cloud Standard ($30/month with Community Support), operational questions about Realm Cloud should go to Realm Support. Development and product questions should be posted in community forums.If you have a paid support plan, you will be onboarded to the MongoDB Support Portal. Our Support Portal is fully integrated with our global support team processes, SLAs, and 24x7 coverage. Paid support plans generally cover development, product, and operational questions about MongoDB products & services. The community forums can still be complementary for questions where you want to get input from a broader audience or perhaps on third party frameworks or tools that are not covered by MongoDB support.You can also raise bug reports directly in the relevant project in the Realm GitHub org. If you’re not sure if something is a bug (or your question is around usage), the community forums would be a better place to start.Raising an issue in multiple channels can split the discussion and duplicate effort. If you do raise an issue in another public channel, it would be excellent to mention the cross-post so other users can follow the discussion.Would you like me to delete the forum post, or possibly shorten it and point to the github report?It’s fine to leave the topic here, since it can act as a signpost to the GitHub issue you posted.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Ian_WardMy apologies… My above post stated you said the support forums were being depreciated and that was a typo on my part.For clarity, it’s the support portal that’s being depreciated and closed soonThat’s the site at support.realm.ioJay",
"username": "Jay"
}
] | Average function calculation is incorrect or filtering failure | 2020-05-24T16:10:36.080Z | Average function calculation is incorrect or filtering failure | 2,386 |
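
A hedged cross-check for the intermittent average above, sketched with the Realm JavaScript SDK rather than Swift (the bug report in the thread is Swift; this only illustrates computing the mean by hand over the filtered results so it can be compared against the built-in average). The item variable and property names are assumptions mirroring the thread's model.

// Filter the transactions, then average manually instead of relying on the
// collection's built-in average.
const txns = item.transactions.filtered("account.account_id == 1");
let sum = 0;
for (const t of txns) {
  sum += t._amount;
}
const avg = txns.length > 0 ? sum / txns.length : 0;
console.log("manual average:", avg);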
null | [
"compass"
] | [
{
"code": "",
"text": "Hi, I want to be able to edit / create and download data Mongo data in my remote Mongo instance running on Digital Ocean.\nOn my local machine I have installed Mongo DB Compass, is it possible to connect this to my remote Mongo Server ? I can access my server through SSH and I have Ubunto Server (not desktop) running.If not could you suggest what options I have please?Thanks",
"username": "Jon_C"
},
{
"code": "",
"text": "Hello @Jon_C welcome to the forum!You should be able to connect to the db when your database is reachable over the internet. It will not be enough when you can access only the machine via ssh.\nTo connect you need at least to know the Domain/ IP, port authentication method and credentials on which your DB is running. The standard port will be 27017. In case the database is reachable via the internet, you should be able to connect to it via the mongo shell and / or Compass.The mongodb documentation describes very good the connection options to a remote database or replication set.I also might be an option to look into MongoDB Atlas you will get a free account for (I think) 512 MB, including lots of infrastructure benefits. So if there is no strong urge to run MongoDB on your server, Atlas might be the less stressful option.Cheers,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @Jon_C!On my local machine I have installed Mongo DB Compass, is it possible to connect this to my remote Mongo Server ? I can access my server through SSH and I have Ubunto Server (not desktop) running.You can use ssh tunneling to connect to remote MongoDB instances. The Connect to MongoDB portion of the Compass documentation shows you how to do this, although you do have to click on a couple of tabs to find the information.In the connect section you will see two tabs labeled Paste Connection String and Fill in Individual Fields. You want to click on the Fill in Individual Fields tab. Follow those directions and in the optional step 3, you will see two more tabs labeled Connect Using TLS/SSL and Connect Using SSH. Click on the Connect Using SSH tab and that will explain the information you need to set up SSH Tunneling.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi , I created a user with a role of read. I connect to the server with Putty and then I can connect to DB with this user. When I try Compass, I get the following errorError creating SSH Tunnel: (SSH) Channel open failure: Connection refusedAny ideas on where I’m going wrong, thanks",
"username": "Jon_C"
},
{
"code": "bindIp: 127.0.0.1localhostssh/etc/mongod.conf",
"text": "Hi @Jon_C I can reproduce that error if I put in the host name of my server on the Hostname tab. My MongoDB instance is bound to only the localhost address bindIp: 127.0.0.1. Due to this I need to use a host name of localhost on the Hostname tab.In the More Options tab you will provide the actual machine that you would ssh into.Hopefully this takes care of your issue. If not please provide screen shots of the two tabs and your /etc/mongod.conf file as that might help to troubleshoot.",
"username": "Doug_Duncan"
},
{
"code": "# Where to store the data.\ndbpath=/var/lib/mongodb\n\n#where to log\nlogpath=/var/log/mongodb/mongodb.log\n\nlogappend=true\n\nbind_ip = 127.0.0.1\nport = 27017\n\n# Enable journaling, http://www.mongodb.org/display/DOCS/Journaling\njournal=true\n\n# Enables periodic logging of CPU utilization and I/O wait\n#cpu = true\n\n#-- uncommenting auth \n# Turn on/off security. Off is currently the default\n#noauth = true\n#auth = true\n\n# Verbose logging output.\n#verbose = true\n\n# Inspect all client data for validity on receipt (useful for\n# developing drivers)\n#objcheck = true\n\n# Enable db quota management\n#quota = true\n\n# Set diagnostic logging level where n is\n# 0=off (default)\n# 1=W\n# 2=R\n# 3=both\n# 7=W+some reads\n#diaglog = 0\n\n# Diagnostic/debugging option\n#nocursors = true\n\n# Ignore query hints\n#nohints = true\n\n# Disable the HTTP interface (Defaults to localhost:27018).\n#nohttpinterface = true\n\n# Turns off server-side scripting. This will result in greatly limited\n# functionality\n#noscripting = true\n\n# Turns off table scans. Any query that would do a table scan fails.\n#notablescan = true\n\n# Disable data file preallocation.\n#noprealloc = true\n\n# Specify .ns file size for new databases.\n# nssize = <size>\n\n# Accout token for Mongo monitoring server.\n#mms-token = <token>\n\n# Server name for Mongo monitoring server.\n#mms-name = <server-name>\n\n# Ping interval for Mongo monitoring server.\n#mms-interval = <seconds>\n\n# Replication Options\n\n# in replicated mongo databases, specify here whether this is a slave or master\n#slave = true\n#source = master.example.com\n# Slave only: specify a single database to replicate\n#only = master.example.com\n# or\n#master = true\n#source = slave.example.com\n\n# Address of a server to pair with.\n#pairwith = <server:port>\n# Address of arbiter server.\n#arbiter = <server:port>\n# Automatically resync if slave data is stale\n#autoresync\n# Custom size for replication operation log.\n#oplogSize = <MB>\n# Size limit for in-memory storage of op ids.\n#opIdMem = <bytes>\n\n# SSL options\n# Enable SSL on normal ports\n#sslOnNormalPorts = true\n# SSL Key file and password\n#sslPEMKeyFile = /etc/ssl/mongodb.pem\n#sslPEMKeyPassword = pass",
"text": "Hi, I changed bind_ip = 127.0.0.1 to bind_ip = 0.0.0.0, then in terminal, restarted mongoDB with commandsystemctl restart mongodbIt didn’t seem to make any difference to my web application nor could I work with Compass (locally), so then I tried command below after following a blog.service mongod restartWhich caused some errors, and my node application broke, at which point I rebooted the server and changed the config back to default values to reinstate the application :frowning, so back at square one.Also, as I have node running on the server also which pulls data from Mongo, if I change the binding as mentioned above, would it break the app?===========\n# mongodb.conf",
"username": "Jon_C"
},
{
"code": "mongodbind_ip127.0.0.10.0.0.0systemctlservicesystemctllocalhost",
"text": "Also, as I have node running on the server also which pulls data from Mongo, if I change the binding as mentioned above, would it break the app?If your app and the mongod process are both running on the same server then the app should be able to access the database with a bind_ip config set to either 127.0.0.1 (localhost loopback adapter) or 0.0.0.0 (listen on all adapters). The change shouldn’t have broken the application’s connection.The use of systemctl or service depends on the dstro you’re using. Most have gone to systemctl.As for the problem with Compass connecting to the remote server, on the host page did you use localhost on the Hostname tab with the MongoDB user credentials for auth? On the More Options tab all the information for the SSH Tunnel will be the Digital Ocean information you use to SSH to the machine.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connecting to remote MongoDB server | 2020-05-26T11:21:06.804Z | Connecting to remote MongoDB server | 37,879 |
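
A hedged sketch tying together Doug's point above: with an SSH tunnel forwarding a local port to the Digital Ocean box, the driver (or Compass) connects to localhost, which matches a server bound to 127.0.0.1. The credentials, auth database, and port below are placeholders.

// Node.js driver connection through a local SSH tunnel
// (for example, one opened with: ssh -L 27017:localhost:27017 user@droplet).
const { MongoClient } = require("mongodb");
const client = new MongoClient("mongodb://appUser:secret@localhost:27017/?authSource=admin");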
null | [
"database-tools",
"installation"
] | [
{
"code": "",
"text": "We initially were using DocumentDB and have switched to Mongo native. I am not havig much luck getting the shell to upgrade to 4.2, it still shows 3.6.3 after adding http://repo.mongodb.org/apt/ubuntuxenial/mongodb-org/4.2 multiverse as a repo and attempting a upgrade",
"username": "MARVIN_FRANCOIS"
},
{
"code": "apt updatemongo --version",
"text": "Hi @MARVIN_FRANCOIS and welcome to the community forums.Are you getting an error when you run your apt update command? When you run mongo --version is it showing 3.6.3 or 4.2.x?",
"username": "Doug_Duncan"
},
{
"code": "Reading package lists... Done\n",
"text": "after much gnashing of teeth, I think I got it to install, at least it shows as being version 4.2, but apt still shows that the install had an issue:Building dependency tree\nReading state information… Done\nCorrecting dependencies… Done\nThe following packages were automatically installed and are no longer required:\nlibboost-iostreams1.65.1 libgoogle-perftools4 libpcrecpp0v5 libsnappy1v5 libstemmer0d libtcmalloc-minimal4 mongo-tools mongodb-server-core\nUse ‘sudo apt autoremove’ to remove them.\nThe following additional packages will be installed:\nmongodb-org-mongos mongodb-org-server mongodb-org-tools\nThe following NEW packages will be installed:\nmongodb-org-mongos mongodb-org-server mongodb-org-tools\n0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.\n2 not fully installed or removed.\nNeed to get 0 B/85.7 MB of archives.\nAfter this operation, 248 MB of additional disk space will be used.\nDo you want to continue? [Y/n] Y\n(Reading database … 176638 files and directories currently installed.)\nPreparing to unpack …/mongodb-org-server_4.2.7_amd64.deb …\nUnpacking mongodb-org-server (4.2.7) …\ndpkg: error processing archive /var/cache/apt/archives/mongodb-org-server_4.2.7_amd64.deb (–unpack):\ntrying to overwrite ‘/usr/bin/mongod’, which is also in package mongodb-server-core 1:3.6.3-0ubuntu1.1\ndpkg-deb: error: paste subprocess was killed by signal (Broken pipe)\nPreparing to unpack …/mongodb-org-mongos_4.2.7_amd64.deb …\nUnpacking mongodb-org-mongos (4.2.7) …\ndpkg: error processing archive /var/cache/apt/archives/mongodb-org-mongos_4.2.7_amd64.deb (–unpack):\ntrying to overwrite ‘/usr/bin/mongos’, which is also in package mongodb-server-core 1:3.6.3-0ubuntu1.1\ndpkg-deb: error: paste subprocess was killed by signal (Broken pipe)\nPreparing to unpack …/mongodb-org-tools_4.2.7_amd64.deb …\nUnpacking mongodb-org-tools (4.2.7) …\ndpkg: error processing archive /var/cache/apt/archives/mongodb-org-tools_4.2.7_amd64.deb (–unpack):\ntrying to overwrite ‘/usr/bin/bsondump’, which is also in package mongo-tools 3.6.3-0ubuntu1\ndpkg-deb: error: paste subprocess was killed by signal (Broken pipe)\nErrors were encountered while processing:\n/var/cache/apt/archives/mongodb-org-server_4.2.7_amd64.deb\n/var/cache/apt/archives/mongodb-org-mongos_4.2.7_amd64.deb\n/var/cache/apt/archives/mongodb-org-tools_4.2.7_amd64.deb\nE: Sub-process /usr/bin/dpkg returned an error code (1)",
"username": "MARVIN_FRANCOIS"
},
{
"code": "sudo apt autoremovemongo --versionapt",
"text": "The following packages were automatically installed and are no longer required:\nlibboost-iostreams1.65.1 libgoogle-perftools4 libpcrecpp0v5 libsnappy1v5 libstemmer0d libtcmalloc-minimal4 mongo-tools mongodb-server-core\nUse ‘sudo apt autoremove’ to remove them.I’m not sure how the MongoDB tools were originally installed, but did you try running the sudo apt autoremove command as suggested?if mongo --version is reporting 4.2.7 then it seems that the upgrade happened in some form, but I would definitely look into the errors. I’m not sure why apt was having problems trying overwrite binaries. Maybe they’re owned by a different user?",
"username": "Doug_Duncan"
}
] | Upgrade Mongo shell 3.6.3 to 4.2 AWS Ubuntu 18.04 | 2020-05-27T16:18:44.028Z | Upgrade Mongo shell 3.6.3 to 4.2 AWS Ubuntu 18.04 | 5,202 |
null | [
"react-native",
"xamarin"
] | [
{
"code": "componentWill_blahwarn Package uuid has been ignored because it contains invalid configuration",
"text": "Hi,\nThe React Native app for Realm is a hot mess. It has a lot of code that calls deprecated lifecycle methods like componentWill_blah methods along with invalid configs like warn Package uuid has been ignored because it contains invalid configuration.Are there any plans to deliver an updated app so that folks like me that are evaluating the product have something to demo?",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "Hi Michael,Thanks for reaching out to us on the forums. Apologies for the experience you’re having with the React Native demo app. Lots has changed within Realm in the last year and our Docs and Education team are working on new and updated Demos to showcase Realm across all SDKs including React Native. These Demos should be available around the same time as our .Live event which is on the 9th & 10th of June - so it’s only a few weeks away.Please let us know in the forums if there’s any way we can help you further in your evaluation and we’ll try to assist.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "@Michael_Stelly - can you provide a link to the specific app that you’re referring to?",
"username": "kraenhansen"
},
{
"code": "",
"text": "Sure. Here you go. I believe this is one of the many I tried yesterday in my search. It’s the only one in the Realm doc.On a related note, in Realm Studio, the link labeled “Start with React Native” points here which leads to the Xamarin doc.",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "Yes, there is a way to help. Give me a workable React Native tutorial so that I can properly evaluate the tool. I have a decision to make long before your June target date.",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "Hi all,\nI created an updated solution to the outdated RN doc example. Feel free to use it when updating the documentation @Shane_McAllister.\nRealm example using Hooks.",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "Hi Michael,Wow - great! Thanks for that. I’ll try to get somebody to review that and get any feedback back to you.Appreciated",
"username": "Shane_McAllister"
},
{
"code": "useEffectcomponentDidMount",
"text": "What I’m not sure of is the realm.close() return block. If useEffect is executed on update and the db is open, the return code will close it. That doesn’t sound right. I think it should have a componentDidMount behavior, then a separate workflow behavior that closes to db.¯_(ツ)_/¯ I’ll have to think about this more. But it’s a start.",
"username": "Michael_Stelly"
},
{
"code": "mjstelly",
"text": "I’ve updated my example with this gist. Hope it helps. But this forum won’t let me post the link from Github. It’s public, so you can find my user mjstelly and pull it from there.",
"username": "Michael_Stelly"
},
{
"code": "import React from 'react';\nimport {View, Text, StyleSheet, Button, Alert} from 'react-native';\nimport Realm from 'realm';\nconst styles = StyleSheet.create({\n container: {\n flex: 1,\n justifyContent: 'center',\n alignItems: 'center',\n },\n welcome: {",
"text": "Here’s the link that @Michael_Stelly refers to above:\nNot sure why it didn’t let him post the link. ",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | React Native demo app needs an update | 2020-05-13T22:05:28.488Z | React Native demo app needs an update | 3,768 |
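
A hedged sketch of the useEffect lifecycle Michael weighs above: an empty dependency array gives componentDidMount-like behavior, and the cleanup function runs only on unmount, so the Realm is not closed on every re-render. The schema name (DogSchema) is an assumption.

// Open the Realm once on mount; close it on unmount.
useEffect(() => {
  let realm;
  Realm.open({ schema: [DogSchema] }).then(r => {
    realm = r;
  });
  return () => {
    // Cleanup runs only on unmount because of the [] dependency array.
    if (realm && !realm.isClosed) {
      realm.close();
    }
  };
}, []);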
null | [
"react-native"
] | [
{
"code": "",
"text": "Whether or not the realm has not been updated for too long, resulting in incompatibility with the latest version of react-native. When I package RN applications, I run the apk file and feedback to me ‘keeps stopping’ to force the exit of the program",
"username": "11142"
},
{
"code": "",
"text": "What debugging have you tried? Does it work in dev? Have you tried downgrading RN? Do you have the latest npm package? We’ll need way more info than you provided to help with this question.",
"username": "Michael_Stelly"
}
] | "react-native": "0.62.2" The production environment is not working | 2020-05-28T01:51:26.675Z | “react-native”: “0.62.2” The production environment is not working | 1,684 |
[] | [
{
"code": "",
"text": "\nlogoh-w-nobg4500×1200 43 KB\nHello everyone,Who has never needed help? In 2013, I was young and in love, so I moved to Helsinki (I’m Brazilian) to live with my ex girlfriend. The day I’ve arrived there she broke up with me after losing our baby, so I was very sad and left out in a cold faraway place without money for food, shelter or any sort of assistance. I needed help. I could be your friend, relative or someone you care about. Mostly we only pay attention to a necessity when it touches our skin, pockets, or homes.But I want to change that.Fast-forward a few years. I have a solid career in web development and after years planning Let’s Hero (former Karuna), it is finally coming true.Let’s Hero aims to be a platform for fostering a culture of mutual help among its users by gamifying the whole experience of compassion. It will do so by connecting friends and strangers, while facilitating the acknowledgement of each other’s needs for help, thus allowing its users to provide and find support in times of need in an either monetized or charitable exchange. The help provided will be accounted in an user’s profile in the form of statistics, badges, awards and medals, in a way that it can be later used as social proof of an user’s engagement and commitment to the benefit of others.Let’s Hero values compassion, proactivity, solidarity, and availability and will benefit its users by providing an easy to use, accessible, entertaining and fulfilling gamified marketplace for the exchange of help and support in cases of either urgent needs, as well as commonplace ordinary necessities.We have a production ready beta version and our db is running on Atlas. MongoDB has just welcomed us as a member of MongoDB for Startups and we couldn’t be more thankful. It was our first official support. You folks have become our very first heroes.Thank you.Angelo Reale\nLet’s Hero\nhttps://letshero.com",
"username": "Angelo_Reale"
},
{
"code": "",
"text": "Wow - great project Angelo! Great to have you in the Community. Let us know how we can help!",
"username": "Michael_Lynn"
}
] | Hello from Let's Hero! | 2020-05-29T04:57:11.764Z | Hello from Let’s Hero! | 5,128 |
null | [
"vscode"
] | [
{
"code": "",
"text": "Hello, I don’t see an option for SCRAM-SHA1\nIs there a workaround?",
"username": "siraj"
},
{
"code": "",
"text": "Can you connect with the connection string instead of with the form?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Thanks for that tip!\nI was able to connect using connection string\nThat option wasn’t visible when it starts",
"username": "siraj"
}
] | MongoDB for VS Code - SCRAM-SHA1 | 2020-05-28T18:42:22.108Z | MongoDB for VS Code - SCRAM-SHA1 | 3,512 |
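
For reference, the workaround above can also be made explicit: the authentication mechanism can be spelled out in the connection string itself. A hedged one-line example (host and credentials are placeholders):

const uri = "mongodb://user:pass@host:27017/?authSource=admin&authMechanism=SCRAM-SHA-1";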
null | [
"charts"
] | [
{
"code": "",
"text": "I have an aggregate query in Charts that uses a $facet to group values. The $match in the facet stage uses dates as it’s filter. Is there any way to insert these dates from a embedded chart implemention?I know I can set the &filter variable in the payload request, however I believe this only filters at the top level?My issue is that records contain two dates (one started and one finished). In my match statement (could be replaced by the filter value) I look at the startdate being between two dates or the finishdate between two dates. In the facet I do a match on the startdate being between the two dates (so count as started) and the finishdate being between two dates (so count as finished). How would I solve that in a single query in Charts (for use with embedding)?",
"username": "Arnold_Ligtvoet"
},
{
"code": "$match$facet",
"text": "Hi @Arnold_Ligtvoet -Unfortunately I can’t think of a way of doing this. The filter applied via the query string parameter is always used in a dedicated $match stage, after the query bar filter. There’s no way to inject this inside your $facet stage. It is possible to do basic date arithmetic in the query bar using Javascript, so if you wanted your embedded charts to use a dynamic but predictable date range like “within the last 30 days” you could do that without needing to use embedding filters.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi Tom, thanks for confirming the issue. Having a basic predictable date range won’t work in my use case, as we have integrated the charts in a portal where the user can select a date range. I think my main issue it trying to do counts based on two different variables with dates. In the initial $match fase provided by the charts filter option I have an or statement, so that will work. I need to split these later on in two different counts (hence the $facet). Perhaps using the inserted filters later on (like $$filter1 and $$filter2) would be a nice feature addition?Added a feature request: https://feedback.mongodb.com/forums/923524-charts/suggestions/40541110-possibility-to-use-filter-parameters-in-later-stag",
"username": "Arnold_Ligtvoet"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Charts with a dynamic filter in aggregate queries | 2020-05-28T11:44:47.500Z | Charts with a dynamic filter in aggregate queries | 4,679 |
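
A hedged sketch of the query-bar approach Tom describes above: plain JavaScript date arithmetic in the Charts query bar gives a rolling window without embedding filters. The field name startdate comes from the thread; adjust to your schema.

// "Within the last 30 days", evaluated when the chart renders.
{ startdate: { $gte: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) } }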
null | [] | [
{
"code": "",
"text": "After running mongo --nodb command, the following message coming up and can’t figure out how to solve this issue:“mongo” cannot be opened because the developer cannot be verified.\nmacOS cannot verify that this app is free from malware.Can you help me out here, please?",
"username": "Balazs_Hetenyi"
},
{
"code": "",
"text": "Hi @Balazs_Hetenyi,Please have a look at this post.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Hi Shoubham,\nThanks for your reply. I have solved the issue.",
"username": "Balazs_Hetenyi"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | Installation issue on Mac after downloading MongoDB enterprise | 2020-05-29T09:13:50.802Z | Installation issue on Mac after downloading MongoDB enterprise | 1,224 |
null | [
"realm-studio"
] | [
{
"code": "",
"text": "I’m having weird problems opening a Realm I created in my Android emulator\n“Unable to open a realm at path ‘…/adbfiles/files/default.realm’: Invalid top array (ref: 5416, size: 11)”Apparently it can be caused by compatibility issues.Where do I find a table or similar where I can check which Realm Studio version works for which Realm version?In this specific case I’m using Realm 7.0.0 beta and the latest Realm Studio 3.10.0Thanks.",
"username": "Ivan_Schuetz"
},
{
"code": "",
"text": "Historically Realm Studio has always had compatibility with all versions of the Realm file format released after its inception. So - there hasn’t been a real need for a table like this. This is not the case with Realm core 6 (which is used by Realm Java 7). There’s no official release of Realm Studio, which is compatible with Realm core 6 - but we’ve made a beta release Release 3.9.0-beta.0 · realm/realm-studio · GitHub which you can use until an official release is out.",
"username": "kraenhansen"
},
{
"code": "",
"text": "Just wanted to mention that the latest version of Realm Studio (v3.11.0) uses the latest database file format: Release 3.11.0 · realm/realm-studio · GitHub",
"username": "kraenhansen"
}
] | Compatibility table Realm Studio <-> Realm library? | 2020-03-28T20:02:49.305Z | Compatibility table Realm Studio <-> Realm library? | 3,890 |
null | [
"vscode"
] | [
{
"code": "Unable to list databases: not master and slaveOk=false",
"text": "Hit here ,I have been using the RoboMongo for exploring the diff Mongo server for work. I tried to configure the new Mongo extension for vscode. During the connection config thing went well and got a green. But when I try to list the db by double-clicking the connection profile got the following error.Unable to list databases: not master and slaveOk=falseI am not sure where I can run this. Since I don’t see a default console as I do in the roboMongo",
"username": "Karthikeyan_Annamala"
},
{
"code": "MongoDB: Launch MongoDB Shellrs.slaveOk()",
"text": "Hello @Khalifa_Ali_Al-Thani welcome to the forum!Based on the error message I assume that you try to connect to a secondary without having reads allowed.To fix this follow this steps (I write this assuming you are using VS Code)image2016×431 255 KBThis should fix the error message you have posted.Hope that helps\nMichael",
"username": "michael_hoeller"
}
] | VScode extension error: not master and slaveOk=false | 2020-05-27T09:40:20.799Z | VScode extension error: not master and slaveOk=false | 4,262 |
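
Two hedged ways to express "reads allowed on a secondary", matching the steps above: run the shell helper in the session, or state the read preference in the connection string (the host is a placeholder).

// In the mongo shell session:
rs.slaveOk();
// Or in the URI:
// mongodb://host:27017/?readPreference=secondaryPreferred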
null | [
"aggregation",
"indexes",
"performance"
] | [
{
"code": "db.getCollection('mycoll').getIndexes()\n[\n {\n \"v\":1,\n \"key\":{\n \"_id\":1\n },\n \"name\":\" *id* \",\n \"ns\":\"mydb.mycoll\"\n },\n {\n \"v\":1,\n \"key\":{\n \"networkId.$id\":1,\n \"status\":1,\n \"alarmType\":1\n },\n \"name\":\"networkId.$id_1_status_1_alarmType_1\",\n \"ns\":\"mydb.mycoll\"\n },\n {\n \"v\":2,\n \"key\":{\n \"status\":1\n },\n \"name\":\"status_1\",\n \"ns\":\"mydb.mycoll\",\n \"background\":true\n }\n]\n",
"text": "In load MongoDB log file, we can see a lot of slow queries. Most of them are not using the best index and consequently the best execution plan is not used. However, when I run the same queries by myself in mongo shell, the correct index is used. So why for same query, we don’t have same execution plan ? MongoDB V:3.4.23Kindly find the log below using wrong index :2020-05-25T04:55:51.624+0000 I COMMAND [conn301319] command mydb.mycoll command: aggregate { aggregate: “mycoll”, pipeline: [ { $match: { networkId.$id:{ $in: [ ObjectId(‘5e0ed9eb60b2533bda7a0fa8’) ] }, status: “0”, alarmType: “1” } }, { $group: { _id:\n{ networkId: “$networkId” }, alarmCount: { $sum: 1 } } } ] } planSummary: IXSCAN { status: 1 } keysExamined:35350 docsExamined:35350 numYields:280 nreturned:0 reslen:135 locks:{ Global: { acquireCount:{ r: 574 }, acquireWaitCount: { r: 92 }, timeAcquiringMicros: { r: 1590945 } }, Database: { acquireCount:{ r: 287 }}, Collection: { acquireCount:{ r: 286 }\n} } protocol:op_query 1980msIndex On collection:",
"username": "Anand_Singh"
},
{
"code": "",
"text": "Could you please repost your code using the code or pre html element so that we get proper indentation? That would make it easier to understand it.",
"username": "steevej"
},
{
"code": "",
"text": "can you check it again ? I have updated the logs .",
"username": "Anand_Singh"
},
{
"code": "",
"text": "I really do not know why it uses the index ‘status’. However I would ask myself a few questions.I know you can specify index hint to find() but I do not know if there is an equivalent for $match. I tried to find it but to no avail. But in this particular case since you only counting alarms, you could use find() and provide the hint.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for the reply.Find you answers below\n1-Yes it is need we are removing data from “mycoll” on the basis of status .\n2-No\n3-staus and alarmType both have the same number of possible values . for status we have 0,1,2 and for alarmType we have 1,2,3 .So that swapping the order of alarmType and status does not make any changes.\n4.We have both the key present for all the records .",
"username": "Anand_Singh"
}
] | MongoDB is choosing the wrong index / execution plan | 2020-05-27T16:15:23.104Z | MongoDB is choosing the wrong index / execution plan | 3,815 |
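
A hedged sketch of steevej's suggestion above: since the pipeline only counts matching alarms for a single network filter, an equivalent find() can carry an explicit hint for the compound index listed in the thread.

// Count with a forced index, referencing the index by name.
db.mycoll.find({
  "networkId.$id": { $in: [ObjectId("5e0ed9eb60b2533bda7a0fa8")] },
  status: "0",
  alarmType: "1"
}).hint("networkId.$id_1_status_1_alarmType_1").count();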
[
"ops-manager"
] | [
{
"code": "2020-05-27T05:34:02.586+0000 [JettyHttpPool-35] INFO com.xgen.svc.mms.res.user.UserResource [UserResource.java.authV1:1027] - Login attempt from addr=\"xxxxxxx\" username=\"xxxxxxx\" result=SUCCESS\nxxxxxxxxx - - [27/May/2020:05:34:02 +0000] \"POST /user/v1/auth HTTP/1.1\" 200 285 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:02 +0000] \"GET / HTTP/1.1\" 303 0 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:02 +0000] \"GET /user/login HTTP/1.1\" 303 0 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:03 +0000] \"GET /user HTTP/1.1\" 200 2388 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:04 +0000] \"GET /uiMsgs HTTP/1.1\" 200 2 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:04 +0000] \"GET /static/dist/registration_v2.min.js.map HTTP/1.1\" 404 2484 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:04 +0000] \"GET /static/assets/css/nprogress.min.css.map HTTP/1.1\" 404 2484 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:04 +0000] \"GET /static/assets/css/thirdparty.min.css.map HTTP/1.1\" 404 2484 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:04 +0000] \"GET /static/dist/styles.min.css.map HTTP/1.1\" 404 2484 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\nxxxxxxxxx - - [27/May/2020:05:34:04 +0000] \"GET /static/dist/bem-components.min.css.map HTTP/1.1\" 404 2484 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36\"\n",
"text": "I wanted to try Ops Manager so I installed it on an Ubuntu18 following the suggested test deployment by having the mongo application DB on the same serverOps Manager started successfully, I connected directly via the public IP address and went through the setup webpages, during which I also configured a FQDN for Ops Manager (routed via CloudFront with SSL offloading). I finished the configuration and the page displayed was “setup new project”. I then logged out.\nI have been unable to log back in. I used forgotten password to reset the password several times, but with the same result. I also tried bypassing cloudfront by opening the website using the host’s public IP address, but the login result was the same…After clicking login the page refreshes and shows the login page again.Taking a look at the logs…\nmms0.logI can see my authentication is successful.looking at mms0-access.log I can see the requests which look ok, except for some 404s on fetching map filesAny suggestions on how to resolve this?",
"username": "Blair_Anson"
},
{
"code": "",
"text": "Only way to resolve this was a complete reinstall of Ops Mgr.\nIt turns out that you must either create a Project (or add an additional users which also creates a Project) in Ops Mgr before your first logout.",
"username": "Blair_Anson"
}
] | Ops Manager new install, unable to login | 2020-05-27T05:57:58.380Z | Ops Manager new install, unable to login | 2,891 |
null | [
"morphia-odm"
] | [
{
"code": "",
"text": "This weekend I pushed two new releases to maven central: 1.6.0-RC1 and 2.0.0-RC1. These are two momentous releases that I’m personally terribly excited about. The 1.6 release, and indeed the branch, serves two main purpose:There are, of course, a few bug fixes included in the release as well.The 2.0 RC marks what I hope is the API complete update for the release. If you’ve been curious but have been holding back, I encourage you to give this release a spin. I’ve been using it for a while now and moving back to the 1.x API honestly makes me a little sad. The official document site can be found at https://morphia.dev/. Full release notes can be found at the follow locations:1.6.0: Release Version 1.6.0-RC1 · MorphiaOrg/morphia · GitHub\n2.0.0: Release Version 2.0.0-RC1 · MorphiaOrg/morphia · GitHubIssues can be filed on the github site. Happy coding.",
"username": "Justin_Lee"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Morphia 1.6.0-RC1 and 2.0.0-RC1 releases | 2020-05-28T21:18:26.619Z | Morphia 1.6.0-RC1 and 2.0.0-RC1 releases | 5,064 |
null | [] | [
{
"code": "",
"text": "Hello,I am looking for an app for discourse to make reading and getting push messages / notifications more easy while not at my lap/desktop.Discourse has an official app:Unfortunately both are more or less a wrapper for the website. Beside these two there are other apps but none really add value to the use of the native website unless you want a bookmark manager for multiple Discourse sites. I focused on iOS alternativea, @Stennie_X told me something similar for Android.The Discourse UI is generally fine on mobile without an app, some minor issues like narrow buttons are there in case your screen is small (e.g. 4,7\"). So using “Add to Homescreen” is an option to have an app icon to launch the site and fewer taps to get the information needed. Unfortunately this approach doesn’t deliver O/S level notifications on app icon badges or notification.Is there anyone around who found an app which provides these notifications or maybe someone already wrote a notification wrapper for discourse?I’d be happy to hear about your findings, or if you too would like to have such an app or wrapper?Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | App recommendation for the forum / discourse? | 2020-05-28T19:41:21.787Z | App recommendation for the forum / discourse? | 4,012 |