image_url: string (lengths 113 to 131)
tags: sequence
discussion: list
title: string (lengths 8 to 254)
created_at: string (length 24)
fancy_title: string (lengths 8 to 396)
views: int64 (73 to 422k)
null
[ "charts" ]
[ { "code": "", "text": "Hi,What is the probability of Charts being exposed through API’s to dynamically build the charts as per use case from the Application User’s perspective and not the Application Developer’s perspective. Something like API’s to connect to data source, chart type, their various attributes and configuration and what we get in response in the embedded html for a chart, automatically created?Is this in Charts roadmap forward? This would highly improve the Charts usability in application development.", "username": "shrey_batra" }, { "code": "", "text": "Hi @shrey_batra! The probability of this is very high - an API allowing you to create and modify charts is already on our longer term roadmap. However we have a few things higher on our list, so it may not happen for about a year.Could you share a bit more info on how you would use this feature? This will help us with the planning and design.Tom", "username": "tomhollander" }, { "code": "", "text": " Could you share a bit more info on how you would use this feature? This will help us with the planning and design.We want user to create and customize the charts, changing the query a simple way. right now i heavily utilize embedded charts and URL parameters. But this is really insecure way to do it.And also we need export chart data, I wrote a lambda function, and copy/paste query to generate chart so lambda send it as CSV to the browser… again, this are so horrible, insecure hacks.… etc.I have more requests about charts, anyway.", "username": "coderkid" }, { "code": "", "text": "Thanks @coderkid. For any more feature requests, please submit/vote on them at feedback.mongodb.com. While we’ve heard the request a bit, I don’t think there’s an existing open suggestion for an API to create/manipulate charts, so feel free to submit that for others to vote on.Tom", "username": "tomhollander" }, { "code": "", "text": "Hii @tomhollander,Sorry for the late reply. I mean to use Charts in a more generic framework. Imaging we have an application which ingests data / has a lot of data. We want to use that data to create charts/insights. Normally we use open source JS frameworks. I want to eliminate that dependency, and open up charts API, so that any user can come, ingest the data and create charts - without exposing the hidden data layer of MongoDb server connections through the Charts application. Also a lot more application permission control over who can query what data and create what type of charts / maintain dashboards, etc through a multi tenant application built on mongo and charts…", "username": "shrey_batra" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Exposing MongoDB Charts via APIs
2020-04-28T06:44:56.976Z
Exposing MongoDB Charts via APIs
3,239
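The embedding side of this request can already be scripted with the MongoDB Charts Embedding SDK, even though the full chart-management API discussed above was still on the roadmap at the time. A minimal sketch, assuming a chart configured for unauthenticated embedding; the baseUrl and chartId values are placeholders, not real identifiers:

```js
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({
  baseUrl: "https://charts.mongodb.com/charts-project-abcde" // hypothetical Charts tenant URL
});

// Render a chart into a container element and inject a per-tenant filter,
// rather than passing raw URL parameters as described in the thread.
const chart = sdk.createChart({
  chartId: "00000000-0000-0000-0000-000000000000", // placeholder chart ID
  filter: { tenantId: "customer-42" }
});
chart.render(document.getElementById("chart"));
```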
null
[]
[ { "code": "", "text": "hello guys, facing issue in launching mongo shell. please help me with this.\nI installed by following mongodb documentation “Mac OS installation” --MongoDB Enterprise edition\nI am getting following error\n", "username": "kiran_kumar_50167" }, { "code": "", "text": "You are trying to run mongod on your local hostYou have to use Vagrant", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I’ve highlighted the main points:\nimage1890×373 236 KB\n", "username": "007_jb" }, { "code": "/data/db/usr/local/var/mongodb", "text": "As mentioned in the official documentationStarting with macOS 10.15 Catalina, Apple restricts access to the MongoDB default data directory of /data/db . On macOS 10.15 Catalina, you must use a different data directory, such as /usr/local/var/mongodb .", "username": "kiran_kumar_50167" }, { "code": "", "text": "I installed by following mongodb documentation “Mac OS installation” --MongoDB Enterprise editionBut yet you said that you followed the documentation. It would also be useful to include the version of your OS in future threads otherwise we wouldn’t know.", "username": "007_jb" }, { "code": "", "text": "Yes my mistake I was not sure about that to mention… sorry", "username": "kiran_kumar_50167" }, { "code": "", "text": "Hi @kiran_kumar_50167,So, your issue is resolved, right ?~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "no I could not? can u help?", "username": "kiran_kumar_50167" }, { "code": "", "text": "Which step are you on and what is the difficulty now?\n", "username": "007_jb" }, { "code": "/data/db/usr/local/var/mongodb/data/db", "text": "Hi @kiran_kumar_50167In addition to @007_jb,Starting with macOS 10.15 Catalina, Apple restricts access to the MongoDB default data directory of /data/db . On macOS 10.15 Catalina, you must use a different data directory, such as /usr/local/var/mongodb .Did you try to start your mongod instance by specifying different datapath or did you create the /data/db directory manually ?~ Shubham", "username": "Shubham_Ranjan" }, { "code": "mongo --nodb", "text": "@Shubham_Ranjan I’m using masOS Catalina. I have same issue. Even though i changes my terminal shell to bash. I was unable to run - mongo --nodb. As soon as try to run, the modal pops ups and says -“mongo” cannot be opened because the developer cannot be verified.I have followed all the steps of installation but unable to move forward due to above error. Please bring updates for macOS Catalina.", "username": "Nirmal_Patel" }, { "code": "", "text": "Hi @Nirmal_Patel,Please refer this post created by @007_jb.Let us know if the issue still persists.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Can't launch mongo shell (In macOS 10.15 Catalina)
2020-04-27T15:26:36.549Z
Can’t launch mongo shell (In macOS 10.15 Catalina)
15,835
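The fix this thread converges on is pointing mongod at a data directory outside the Catalina-restricted /data/db, per the documentation quoted above. A minimal sketch of what that looks like in a terminal:

```sh
# Create a writable data directory outside the restricted default.
mkdir -p /usr/local/var/mongodb

# Start mongod against it explicitly.
mongod --dbpath /usr/local/var/mongodb
```

The separate “developer cannot be verified” modal raised later in the thread is macOS Gatekeeper, and is typically cleared by allowing the binary under System Preferences > Security & Privacy.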
null
[ "atlas-search", "stitch", "app-services-data-access" ]
[ { "code": "$searchBetaexports = function(arg){\n \n var collection = context.services.get(\"mongodb-atlas\").db(\"test\").collection(\"test\");\n var doc = collection.aggregate([\n {\n $searchBeta: \n { \n \"text\": \n {\n \"query\": arg, \n \"path\": [\"title\",\"location\"],\n \"fuzzy\": { \"maxEdits\": 1, \"maxExpansions\": 10}\n } \n } \n },\n ]);\n \n return doc;\n};\nLocation16436 Unrecognized pipeline stage name: '$searchBeta'\nApply When$searchBeta{\n \"roles\": [\n {\n \"name\": \"non-owner\",\n \"apply_when\": {},\n \"insert\": false,\n \"delete\": false,\n \"read\": true,\n \"write\": false,\n \"fields\": {},\n \"additional_fields\": {}\n }\n ],\n \"filters\": [],\n \"schema\": {}\n}\n{\n \"roles\": [\n {\n \"name\": \"non-owner\",\n \"apply_when\": {},\n \"insert\": false,\n \"delete\": false,\n \"write\": false,\n \"fields\": {\n \"title\": {}\n },\n \"additional_fields\": {\n \"read\": true\n }\n }\n ],\n \"filters\": [],\n \"schema\": {}\n}\n", "text": "First of all let me say that $searchBeta is really good.I’ve been using it in a function querying a test collection with great results.My problems began when I tried added Permission Rules to the collection, initially just to separate anonymous users from those with some metadata fields.All of a sudden my function above now only works for the System User. For all others it throws an error:After much tinkering with the permissions, I found that the bug happens both whenIn other words: $searchBeta only runs with a vanilla permission role: no Apply When rule and either read or r/w all fields.This is a minimal example to reproduce the bug:These rules workThese don’t work", "username": "Dalmo_Mendonca" }, { "code": "", "text": "Hi Dalmo – As you note, $searchBeta is only available in System functions currently. There are a few aggregation stages like this which are covered in our Aggregation API reference. Our current best practices are to use $searchBeta in a system function and make necessary access checks within the logic of the function.", "username": "Drew_DiPalma" }, { "code": "error: css failed to load", "text": "Thanks! I missed that table… So actually the bug is in the fact that it does work when there are no rules set? Because I was able to run it as users.On a side note I just had an UI bug error: css failed to load (or something like that) and it was telling me to contact you @Drew_DiPalma directly! It went away after refreshing the page. ", "username": "Dalmo_Mendonca" }, { "code": "", "text": "Hi Dalmo – It is actually a bit nuanced, when there are full R/W permissions open on a collection we treat the request the same as a System request so it was the addition of rules that changed the behavior. As a note we are also looking to improve the interplay of Search/Rules in the future.As for the CSS issue we’ll keep an eye out for that bug and feel free to drop me a message if you run into anything unexpected in the UX.", "username": "Drew_DiPalma" }, { "code": "", "text": "when there are full R/W permissions open on a collection we treat the request the same as a System requestAlright, but in the case I described, the default rule was readonly, not full R/W. It makes no difference in this case really, but just for the record.", "username": "Dalmo_Mendonca" } ]
$searchBeta is not a function if there are Permission Rules
2020-04-24T13:42:19.826Z
$searchBeta is not a function if there are Permission Rules
3,815
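To make the accepted answer concrete, here is a minimal sketch of a system function that runs $searchBeta but performs its own access check and trims the projection, since collection rules are bypassed for system requests. The check on context.user is illustrative only; how callers should be validated depends on the app:

```js
exports = function(arg) {
  // System functions skip collection rules, so gate access manually.
  // (Hypothetical check: reject anonymous/server callers.)
  if (!context.user || context.user.type !== "normal") {
    throw new Error("unauthorized");
  }
  const coll = context.services.get("mongodb-atlas").db("test").collection("test");
  return coll.aggregate([
    { $searchBeta: { text: { query: arg, path: ["title", "location"] } } },
    // Project only the fields the caller is allowed to read.
    { $project: { title: 1 } }
  ]);
};
```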
null
[ "stitch" ]
[ { "code": "", "text": "Hi everyone, I am trying to implement the sample function provided at https://docs.mongodb.com/stitch/logs/api/ to export my trigger execution logs. I am getting an error stating that the method “authenticate” is not defined, and I suspect the URL is actually wrong as it won’t resolve.Do you know what the value for this variable should be?const ADMIN_API_BASE_URL = “https://stitch.mongodb.com/api/admin/v3.0”;Thanks!", "username": "Carlos_Alvidrez" }, { "code": "authenticate()formatQueryString()ADMIN_API_BASE_URL", "text": "Hi @Carlos_Alvidrez, welcome!I am getting an error stating that the method “authenticate” is not defined, and I suspect the URL is actually wrong as it won’t resolve.Please make sure that you have included/declared the helper functions mentioned in the beginning on the page. There are two helper functions listed authenticate() and formatQueryString().Do you know what the value for this variable should be?The value of the URL should be as declared before the function on Get Recent Logs section. If you’re getting an error that ADMIN_API_BASE_URL is not found, please make sure that you have declared this variable.Regards,\nWan", "username": "wan" } ]
ADMIN_API_BASE_URL Not Found!
2020-04-29T01:34:04.046Z
ADMIN_API_BASE_URL Not Found!
1,754
null
[ "queries" ]
[ { "code": "{\n \"_id\": ObjectId(\"...\"),\n \"name\": \"Abc\",\n \"items\": [\n {\n \"name\": \"123\",\n \"subItems\": [\n {\n \"data\": \"qwerty\"\n },\n {\n \"data\": \"qwezxc\"\n }\n },\n {\n \"name\": \"456\",\n \"subItems\": [\n {\n \"data\": \"qwerty\"\n },\n {\n \"data\": \"zxcvbm\"\n }\n ]\n }\n ]\n}\nitemsubItem$unwindsubItems.dataqwe{ \"data\":\n [\n { \"_id\": ObjectId(\"...\"), \"name\": \"Abc\", \"items\": { \"name\": \"123\", \"subItems\": { \"data\": \"qwerty\"} } },\n { \"_id\": ObjectId(\"...\"), \"name\": \"Abc\", \"items\": { \"name\": \"123\", \"subItems\": { \"data\": \"qwezxc\"} } },\n { \"_id\": ObjectId(\"...\"), \"name\": \"Abc\", \"items\": { \"name\": \"456\", \"subItems\": { \"data\": \"qwerty\"} } },\n ],\n \"count\": 30 // e.g. if $limit: 3, but the collection above had more matching results\n}\n$match -> $unwind -> $match -> $facet \\\n |-> $skip -> $limit\n |-> $count\n$count$unwind$count$limit", "text": "I have a collection that looks sort of like this - each entry has an embedded array (and items of the array have an embedded array as well):I need to run different kinds of requests for these objects - sometimes returning results as the entire objects, but sometimes I need to search and return a result per item, or even per a subItem (in an $unwind sort of way). Like, if I search subItems.data for qwe, I may need the result like this:And with paging, I preferably need the total count, or at least availability of more results.At first I’ve been using aggregations for this, something along the lines ofBut even if I only have ~20k items in the collection, the query takes a good minute since, naturally, the $count after $unwind is incredibly heavy as the query loses any and all caching at that point. Even when I tried replacing $count with a bigger $limit to simply know if there are more results, the query is still too slow for practical use in e.g. an admin dashboard.Are there any ways to achieve these kinds of results within a few seconds with just this collection, or do I more or less have to denormalize the data into several collections for each of the embedded levels that I need to “unwind” by? So that I can just use the normal search and not aggregations (which are, admittedly, not intended for this purpose in the first place).I am already duplicating the data for some of the queries, but the amount of these kinds of embedded arrays and queries grows across the database, as does the amount of items in general, so I’m apprehensive about excessive duplicating due to the size of the database and having to be very careful with data consistency since it’s all handled by the application code and not the database.I’d appreciate any ideas regarding this, I’m rather stuck due to my limited experience in the Mongo way of doing things. I do realize SQL is in general better suited for these sorts of queries, but we’re overall too invested in Mongo.", "username": "Artemiy" }, { "code": "$filter$map$reducefindexplain", "text": "The queries you are intending to perform involves filtering specific elements in an array as well as projecting some fields as the query result. Secondly, you are looking for performance with your queries.Querying:There are specific operators to filter and project array fields. You can use both the Aggregation Framework as well as the MongoDB Query Language (MQL) for these queries.Aggregation has array operators, like $filter, $map and $reduce to filter and transform the array data. 
With MQL, using the find method, you can use array query and projection operators.Query Performance:The performance of queries can be improved using indexes. Both, the aggregation and MQL queries can use indexes to make the queries run fast. Both the filtering and sorting operations can use indexes. You can create indexes on array fields too (these are called as Multikey Indexes).You can also generate the query plans for both the MQL and aggregations using the explain method to verify the index usage.", "username": "Prasad_Saya" } ]
How to search for objects within embedded arrays with paging?
2020-05-04T20:57:23.714Z
How to search for objects within embedded arrays with paging?
7,339
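Following up on the multikey-index advice in the reply, a short mongo shell sketch of what that could look like for the collection above (the collection name is assumed):

```js
// Multikey index on the nested array field used by the first $match.
db.entries.createIndex({ "items.subItems.data": 1 })

// An anchored prefix regex can use that index, unlike an unanchored /qwe/.
db.entries.find({ "items.subItems.data": /^qwe/ })

// Confirm an IXSCAN (rather than a COLLSCAN) before running the full pipeline.
db.entries.find({ "items.subItems.data": /^qwe/ }).explain("executionStats")
```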
null
[ "java" ]
[ { "code": "Caused by: java.lang.OutOfMemoryError: unable to create new native thread\n\tat java.lang.Thread.start0(Native Method)\n\tat java.lang.Thread.start(Thread.java:717)\n\tat com.mongodb.connection.DefaultServerMonitor.start(DefaultServerMonitor.java:80)\n\tat com.mongodb.connection.DefaultServer.<init>(DefaultServer.java:73)\n\tat com.mongodb.connection.DefaultClusterableServerFactory.create(DefaultClusterableServerFactory.java:72)\nat com.mongodb.connection.BaseCluster.createServer(BaseCluster.java:364)\nat com.mongodb.connection.SingleServerCluster.<init>(SingleServerCluster.java:52)\nat com.mongodb.connection.DefaultClusterFactory.createCluster(DefaultClusterFactory.java:181)\nat com.mongodb.Mongo.createCluster(Mongo.java:738)\nat com.mongodb.Mongo.createCluster(Mongo.java:732)\nat com.mongodb.Mongo.<init>(Mongo.java:298)\nsistemasfiware@smartcity:~$ ulimit -a\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 38966\nmax locked memory (kbytes, -l) unlimited\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 999999\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 4999999\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\n", "text": "Hello, I am working with a MongoDB dataBase and a Jhipster Java system based which enters smartCity information in the MongoDB . When the system is running for many hours I have the following error:The system is running in a ubuntu 18.04 server with the following parameters:The mongodb runs in a docker\nThe system requirements are the following:\nHardDisk : 19Gb\nRam 9GBIt is supposed that there can not be more threads in the system .Any idea on how to solve the problem ?Thanks\nGorka", "username": "Gorka_Sanz_Monllor" }, { "code": "", "text": "Looks like the JVM is OOM not the host or mongodb.You need to give the app some more heap to play with.", "username": "chris" }, { "code": "", "text": "Each instance of MongoClient does manage a set of threads, but usually that is not a problem. Please check whether you are creating more MongoClient instances than you need, and in particular whether you are calling close() on each one before it goes out of scope. Generally, if your application is connecting to a single MongoDB cluster as a single user, you only need to create a single MongoClient instance for the entire lifetime of your application.", "username": "Jeffrey_Yemin" } ]
Unable to create native threads in MongoDB
2020-05-04T17:52:52.098Z
Unable to create native threads in MongoDB
3,821
null
[ "data-modeling" ]
[ { "code": "", "text": "I’ve been using a MongoDb-based message queue for a while now and recently built out similar functionality in Postgres 12.My current mongo model has the message payload and metadata all in one document (with GridFs implemented as necessary). Very easy to work with and 95% of the time all the data in the document is required (when being read back out).Initially I followed this model with PG and was surprised by how much file bloat there was. It’s due to the MVCC nature of their storage engine and how updates are actually new copies of the updated record. Separating the (never updated) message payload into a separate table significantly reduced the db size and was generally much faster.WiredTiger also implements some form of MVCC (but is clearly more space efficient), and this got me wondering if I should, in fact, split off the message payload and metadata into 2 separate collections - even though this would be a 1-to-1 relationship. The message payload would never need to be re-written when the (comparably small) metadata is updated.Anyone have any experience on large documents needing to be fully re-written when just a tiny subset of the document is changed? Thoughts?", "username": "Nick_Judson" }, { "code": "", "text": "After some initial testing, it appears the answer is a resounding no. The cost of making 2 round-trip writes instead of one sinks this idea pretty quick.If it were possible to do a cross-collection bulk insert then it might be worth investigating further.", "username": "Nick_Judson" } ]
Data modeling and MVCC write throughput
2020-05-02T00:40:37.877Z
Data modeling and MVCC write throughput
2,266
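One nuance worth adding here: multi-document transactions (MongoDB 4.0+) would make the metadata/payload pair atomic across two collections, but each insert is still its own round trip, so they would not remove the latency cost that sank the idea above. A minimal shell sketch under that caveat, with a hypothetical "queue" database:

```js
const session = db.getMongo().startSession();
session.startTransaction();
try {
  // Metadata and payload land in separate collections...
  session.getDatabase("queue").metadata.insertOne({ _id: 1, state: "new" });
  session.getDatabase("queue").payloads.insertOne({ _id: 1, body: "..." });
  session.commitTransaction(); // ...and become visible atomically.
} catch (e) {
  session.abortTransaction();
  throw e;
}
```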
null
[ "atlas" ]
[ { "code": "$or{\n \"ok\": 0,\n \"errmsg\": \"$or is not allowed in this atlas tier\",\n \"code\": 8000,\n \"codeName\": \"AtlasError\",\n \"name\": \"MongoError\"\n}\n", "text": "Hello,I’ve read both https://docs.atlas.mongodb.com/reference/unsupported-commands/ and https://docs.atlas.mongodb.com/reference/free-shared-limitations/ as well as https://docs.mongodb.com/manual/reference/operator/aggregation/or/index.html and https://docs.mongodb.com/manual/reference/operator/query/or/index.html but I cannot understand why I’m given AtlasError below when there’s nowhere to be found that $or is limited/unsupported.Can anyone give me some insight as to why is that, when documentation doesn’t state that the operator is limited/unsupported?Thank you", "username": "RaphaelDDL" }, { "code": "", "text": "You do not provide enough context to understand your issue. What command or action were you doing when you got this error.", "username": "steevej" }, { "code": "", "text": "Inside an aggregate, one of the steps I added was $match with an $or inside.", "username": "RaphaelDDL" }, { "code": "", "text": "Hi Raphael,Can you provide an example of the aggregation pipeline you are executing and confirm the Atlas tier you are using?Thanks,\nStennie", "username": "Stennie_X" }, { "code": "coll.find({$or: [{_id: _id},{id: _id }]})\naggregate\t\t\tcoll.aggregate([\n\t\t\t\t{\n\t\t\t\t\t$match: {\n\t\t\t\t\t\t$or: [{ _id: _id }, { id: _id }, { name: _id }],\n\t\t\t\t\t},\n\t\t\t\t},\n", "text": "I first tried with:And I got the error. Then I tried on aggregateAnd I got same error as well.But oddly enough, I ran today again the aggregate just to make sure before replying and did not give me the error I posted on the first post, instead, it worked.\nI’m using M0 Free Plan.Maybe it had an issue on my end, I don’t know. O.o", "username": "RaphaelDDL" } ]
AtlasError when using $or: Insight on why is limited/unsupported
2020-05-03T21:57:33.139Z
AtlasError when using $or: Insight on why is limited/unsupported
4,090
null
[ "queries" ]
[ { "code": "from pymongo import MongoClient\n\nclient = MongoClient(\"mongodb://localhost:27017\")\ndb = client.get_database('MapView')\nneighborhood = db.bairroSP\nrestaurant= db.resSpec\n\nsearch_restaurant = restaurant.find({'type':['meat','fish']})\n\nfor rest in search_restaurant:\n print(rest)\n", "text": "Hello, i was trying to do a array search using the query language however my db.collection.find() returns nothing this is my code:And here it’s the return:C:\\Users\\André\\AppData\\Local\\Programs\\Python\\Python37\\python.exe\nC:/Users/André/PycharmProjects/PythonDB/testes.pyProcess finished with exit code 0Here are some of the documents from my collection:{\"_id\":1, “location”:{“coordinates”:[-46.665958,-23.563313] ,“type”:“Point”},“type”:[“fish”, “seafood],“preco”:39.25,“name”:“Barú Marisquería”}\n{”_id\":2, “location”:{“coordinates”:[-46.697364,-23.559586] ,“type”:“Point”},“type”:[ “fish”, “seafood”, “meat”],“preco”:30.00,“name”:“Costa Nova”}\n{\"_id\":3, “location”:{“coordinates”:[-46.668611,-23.561079] ,“type”:“Point”},“type”:[“fish”, “seafood”, “meat”],“preco”:30.00,“name”:“Costa Nova”}My objective is to get all documents with this both types of food in the array, doesn’t matter if they’re in the same array or in different arrays", "username": "Andre_Sirikaku" }, { "code": "", "text": "Looks like a simple typo. You specified { tipo : …} in your find and you show document with a field named type.", "username": "steevej" }, { "code": "", "text": "sorry i missed this word when i was translating to make it easy to understand", "username": "Andre_Sirikaku" }, { "code": "", "text": "The way you are writing find( t : [ a , b ] ) means look for an array t that is exactly [ a , b ]. What I understand is that you want all entries that have both a and b. Even [ b , a ] does not match.ConsiderThe last one is a little bit closer that what you want but have an extra that does not have a. One way to do it would be to do a setIntersection and match on the size. Another way could be with 2 match stage pipeline where you match a in the first one and match b on the second one. Probably an elemMatch of some sort could work but could not come up with a working one in a timely manner.", "username": "steevej" }, { "code": "search_restaurant = restaurant.find({'type':{'$in':['meat','fish']}})", "text": "actually i made it work using the ‘$in’ operator, like thissearch_restaurant = restaurant.find({'type':{'$in':['meat','fish']}})thanks for your help", "username": "Andre_Sirikaku" } ]
Problem matching values in array
2020-05-01T19:37:11.436Z
Problem matching values in array
2,820
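One caveat on the accepted answer: $in matches documents containing either value, while the stated goal was documents containing both. $all is the closer operator for that; in mongo shell terms (the PyMongo version just swaps the operator into the filter dict):

```js
// Documents whose `type` array contains "meat" OR "fish".
db.resSpec.find({ type: { $in: ["meat", "fish"] } })

// Documents whose `type` array contains BOTH "meat" AND "fish", in any order.
db.resSpec.find({ type: { $all: ["meat", "fish"] } })
```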
null
[]
[ { "code": "", "text": "Hi everyone,\nMy name is Mark, and I’m the founder of Ensemblies (ensemblies.com).I’ve created this tool based on the frustrations I’ve had as a mechanical engineer managing hardware projects. But who knows? Maybe it will help you as well. Give it a go! Feel free to email me at [email protected]", "username": "Mark_Chang" }, { "code": "", "text": "Welcome Mark! Great to have your on board… thanks for the update.", "username": "Michael_Lynn" } ]
Hello from Ensemblies: Collaboration Software for Hardware
2020-02-24T19:41:05.981Z
Hello from Ensemblies: Collaboration Software for Hardware
5,188
https://www.mongodb.com/…9312c82f3a9.jpeg
[]
[ { "code": "", "text": "Let’s play a game!\nOnline Scavenger Hunt Social Promo - Twitter1024×512 296 KB\nIt’s time for week four in our series of MongoDB Online Scavenger Hunts. Answer a series of 10 clues by visiting MongoDB resources including our blog, forums, documentation and more and learn about these resources while earning some awesome rewards!Each participant who completes a single scavenger hunt will earn:Each Monday we will post a new set of clues giving you the chance to earn additional badges and other rewards.We will soon be announcing additional rewards for completing multiple scavenger hunts with a special reward for those who complete all our weekly scavenger hunts this spring and summer.Every scavenger hunt will run from Monday until 5pm US Eastern time on Friday.Have an idea for a clue or question that should be included? Let us know!MongoDB Online Scavenger Hunt #4", "username": "Ryan_Quinn" }, { "code": "", "text": "Week 4 Done !I believe there is an error in question 7 @Ryan_Quinn.\nWe are told to continue with the config of question 6 which is an AWS Cluster in Hong Kong (ap-east-1).\nThe answer corresponds to an AWS M10 Cluster in N. Virginia (us-east-1). This is the default region and the price is different.Apart from that, it’s nice to do ", "username": "Gaetan_MORLET" }, { "code": "", "text": "Thanks. Looking at question 7, while some aspects of the plans vary between regions the “Cloud Provider Snapshots” pricing shown for AWS is listed at the same price in both regions mentioned. The Cloud Provider Snapshots section is listed by provider rather than region and is shown even before you select options in the calculator.", "username": "Ryan_Quinn" }, { "code": "", "text": "You should check the 2nd’s clue entry validation rule because it’s revealing the right answer. ", "username": "BnG" }, { "code": "", "text": "The clue says “The MongoDB Atlas site includes a price calculator”", "username": "BnG" }, { "code": "", "text": "Ah ! I was looking at the wrong place. Thank you ! The calculator doesn’t have the same prices as when you create a cluster directly in Atlas !", "username": "Gaetan_MORLET" }, { "code": "", "text": "MongoDB Online Scavenger Hunt #4 is complete!", "username": "Terrance_Cazy" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Lets Play a Game! - MongoDB Scavenger Hunt #4
2020-05-04T13:18:20.405Z
Lets Play a Game! - MongoDB Scavenger Hunt #4
6,031
null
[ "app-services-user-auth", "stitch" ]
[ { "code": "auth2.grantOfflineAccess().then(async (response) => {\n let credential = new GoogleCredential(response.code)\n await this.$stitch.auth.loginWithCredential(credential) // fails with error exchanging access code with OAuth2 provider\n})\n", "text": "Hi there,\nI’m still trying to solve this issue: Google OAuth Consent Screen shows mongodb.com.I’ve noticed that Browser SDK has a GoogleCredentials that expect authCode as a parameter. I can get this authCode from Google API and then I can pass this credentials to loginWithCredentials.Unfortunately, Stitch server always responds with “error exchanging access code with OAuth2 provider”.\nI know that on browser we are supposed to use GoogleRedirectCredential but if I’m able to use GoogleCredentials this will solve the problem I mentioned above. Any idea why I get this error and if GoogleCredentials work at all with browser?Thanks.", "username": "Dimitar_Kurtev" }, { "code": "", "text": "@Dimitar_Kurtev Can you check your Stitch Auth configuration for Google OAuth? We commonly see this error when the secret is misconfigured.", "username": "Ian_Ward" }, { "code": "<script src=\"https://apis.google.com/js/client:platform.js?onload=startGoogleLogIn\" async defer></script> \n<script> \n function startGoogleLogIn() { \n gapi.load('auth2', function() { \n auth2 = gapi.auth2.init({ \n client_id: 'myClientId', \n ux_mode: 'redirect', \n scope: 'email profile' \n }); \n }); \n } \n</script> \n\n/// later in code...\n\nauth2.grantOfflineAccess().then(async (response) => {\n let credential = new GoogleCredential(response.code)\n await this.$stitch.auth.loginWithCredential(credential)\n /// ..redirect to home page\n}).catch( ... )\n", "text": "Hi @Ian_Ward,\nThanks for your answer. Stitch Google OAuth works when I use GoogleRedirectCredentials. It does not work when I use GoogleCredentials (with the same Secret and Client id).\nMaybe the way I get the AuthCode is not correct, but I could not find a tutorial for Browser SDK.\nThis is how I use gapi to get AuthCode:Is this the correct way to get the AuthCode ?\nThanks.", "username": "Dimitar_Kurtev" }, { "code": "\n \n console.log(\"Processed redirect result.\")\n }\n \n \n if (client.auth.isLoggedIn) {\n // The user is logged in. Add their user object to component state.\n currentUser = client.auth.user;\n this.setState({ currentUser });\n } else {\n // The user has not yet authenticated. Begin the Google login flow.\n const credential = new GoogleRedirectCredential();\n client.auth.loginWithRedirect(credential);\n }\n }\n //end stitch setup\n \n \nrender() {\n const { currentUser } = this.state;\n return !currentUser\n ? <div>User must authenticate.</div>\n : <User profile={currentUser.profile}/>\n }\n \n ", "text": "I believe loginWithRedirect is required.\nDocs -SampleApp -", "username": "Ian_Ward" } ]
Stitch Auth: Is GoogleCredentials working in Browser SDK?
2020-04-19T19:59:03.424Z
Stitch Auth: Is GoogleCredentials working in Browser SDK?
2,954
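Condensing the sample that resolves this thread, the browser flow known to work pairs loginWithRedirect with handleRedirectResult after the round trip; a minimal sketch using only APIs already shown above:

```js
const client = stitch.Stitch.defaultAppClient;

if (client.auth.hasRedirectResult()) {
  // Complete the login that was started before the redirect.
  client.auth.handleRedirectResult()
    .then(user => console.log(`logged in as ${user.id}`));
} else if (!client.auth.isLoggedIn) {
  // Start the Google redirect flow.
  client.auth.loginWithRedirect(new stitch.GoogleRedirectCredential());
}
```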
https://www.mongodb.com/…4_2_1024x512.png
[ "node-js" ]
[ { "code": "cursor.readConcern()readConcern", "text": "Hello,\nusing nodeJS driver and found that it is missing cursor.readConcern() method.\nwhich is described here:is it another correct way to specify readConcern for queries?", "username": "Edgar_Buchvalov" }, { "code": "readConcern", "text": "Hi @Edgar_BuchvalovI advise my developers to honor this from the provided mongouri option.I am not particularly experienced with the node driver but you can see the MongoClient class has readConcern option available.https://mongodb.github.io/node-mongodb-native/3.5/api/MongoClient.html", "username": "chris" } ]
NodeJs driver missing cursor.readConcern() method
2020-05-04T14:04:00.643Z
NodeJs driver missing cursor.readConcern() method
1,877
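As a concrete version of the advice above, readConcern can be set once on the client (via options or the connection URI) and is inherited by the queries it runs, instead of being set per cursor. A sketch with a placeholder URI:

```js
const { MongoClient } = require("mongodb");

// Inherited by every db/collection/cursor created from this client.
const client = new MongoClient("mongodb://localhost:27017", {
  readConcern: { level: "majority" }
});

// Or override on an individual collection handle.
const coll = client.db("test").collection("docs", {
  readConcern: { level: "majority" }
});
```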
null
[ "replication", "sharding" ]
[ { "code": "", "text": "hai everyone, Can anyone help with the below doubts:", "username": "Vijo_Jose" }, { "code": "", "text": "Welcome to the community @Vijo_Jose !We see in your two questions, especially the second, a lack of knowledge of the MongoDB architecture.\nI recommend you take the free M103 MongoDB University course. You will discover how replica sets and sharding work.", "username": "Gaetan_MORLET" }, { "code": "", "text": "Sure i have joined the course today.But I have question . Can you please clarify on that.I have done the shading with 3 config server and 2 shared cluster server between my two locations.\nConfig server have location A 2 server and location B one server and location A is primary.Shared replica set 1 : have location A 2 server and location B one server and location A is primary.\nShared replica set 1 : have location B 2 server and location A one server and location B is primary.and 1 mongo router in Location A and Location B.In the above case : when the connectivity between lose between Location A ad Location B my mongo router my complete system goes down and when am acessily through router am getting below error;“code” : 133,\n“codeName” : “FailedToSatisfyReadPreference”,I want to achieve something like below image. when the connectivity also i shoud be able to read and write on my both locations.\nimage1017×520 138 KB\nCan someone help on this ?", "username": "Vijo_Jose" } ]
Replication and sharding clarification
2020-04-29T10:48:51.918Z
Replication and sharding clarification
1,587
null
[ "sharding" ]
[ { "code": "", "text": "Hi,What is the recommended way of Removing a Mongos Instance? I didn’t find anything on the documentation.Thank You!", "username": "Mohammad_Fahim_Abrar" }, { "code": "", "text": "Hey @Mohammad_Fahim_AbrarHere is the doc for start and stop a mongodbhttps://docs.mongodb.com/manual/tutorial/manage-mongodb-processes/And here are the docs to remove a replica set memberHope this helps", "username": "Natac13" }, { "code": "shutdownmongosconfig", "text": "Hi,I tried the shutdown command to remove mongos router node. But the mongos collection in config database still contains the entry of the removed node. Is that a normal behavior?Thank you!", "username": "Mohammad_Fahim_Abrar" }, { "code": "mongosmongosmongosmongospingupmongos", "text": "Yes. Don’t worry about it. Just shut it down and you’re fine.Internal MongoDB MetadataThe config database is internal: applications and administrators should not modify or depend upon its content in the course of normal operation.The mongos collection stores a document for each mongos instance affiliated with the cluster. mongos instances send pings to all members of the cluster every 30 seconds so the cluster can verify that the mongos is active. The ping field shows the time of the last ping, while the up field reports the uptime of the mongos as of the last ping. The cluster maintains this collection for reporting purposes.", "username": "chris" } ]
Removing a Mongos Node
2020-04-23T07:24:13.078Z
Removing a Mongos Node
2,126
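For reference, the clean way to stop a mongos is from a mongo shell connected directly to that mongos:

```js
// Connect to the mongos you want to remove, then:
use admin
db.shutdownServer()
```

The stale entry left behind in the config.mongos collection is only reporting metadata, as noted above, and can be ignored.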
null
[ "stitch" ]
[ { "code": "async function loginWithEmailFunction() {\n\n const app = stitch.Stitch.defaultAppClient\n\n const credential = new UserPasswordCredential(typedEmail.toString(), typedPassword.toString())\n\n const loginRes = await app.auth.loginWithCredential(credential)\n\n .then(authedUser => console.log(`successfully logged in with id: ${authedUser.id}`))\n\n .catch(err => console.error(`login failed with error: ${err}`))\n\n}\nasync function registerWithEmailFunction() {\n\n const emailPasswordClient = stitch.Stitch.defaultAppClient.auth\n\n .getProviderClient(stitch.UserPasswordAuthProviderClient.factory);\n\n const registerRes = await emailPasswordClient.registerWithEmail(typedEmail.toString(),typedPassword.toString()) \n\n .then(() => console.log(\"Successfully sent account confirmation email!\"))\n\n .catch(err => console.log(\"Error registering new user:\", err));\n\n}\n", "text": "Hello. I’ve got Google authentication working, but email/password authentication is not working properly. Here’s my code for register and login. When I run these functions I never enter the .then or .catch expressions, and only very rarely do they actually lead to a registration/login when I look at my list of users in Stitch. I’ve tried these functions without the async prefix as well, that also doesn’t work. Can anyone suggest what I need to do?", "username": "Daniel_Gold" }, { "code": "", "text": "Hi Daniel – In this case you should be either using await or .then() . It seems like the issue here is likely due to a mix of await/promise syntax – Can you try using one or the other and letting us know if that addresses?", "username": "Drew_DiPalma" }, { "code": "function registerWithEmailFunction() {\n\n var typedEmail = document.getElementById(\"emailAddress\").value;\n\n var typedPassword = document.getElementById(\"password\").value;\n\n const emailPasswordClient = stitch.Stitch.defaultAppClient.auth\n\n .getProviderClient(stitch.UserPasswordAuthProviderClient.factory);\n \n emailPasswordClient.registerWithEmail(typedEmail.toString(),typedPassword.toString()) \n\n .then(() => console.log(\"Successfully sent account confirmation email!\"))\n\n .catch(err => console.log(\"Error registering new user:\", err));\n\n}\nfunction loginWithEmailFunction() {\n\n var typedEmail = document.getElementById(\"emailAddress\").value;\n\n var typedPassword = document.getElementById(\"password\").value;\n\n const app = stitch.Stitch.defaultAppClient\n\n const credential = new UserPasswordCredential(typedEmail.toString(), typedPassword.toString())\n\n app.auth.loginWithCredential(credential)\n\n .then(authedUser => console.log(`successfully logged in with id: ${authedUser.id}`))\n\n .catch(err => console.error(`login failed with error: ${err}`))\n\n}\n", "text": "Hi Drew,I tried both options, and neither has fixed the issues. I can’t spot any pattern, these functions very rarely result in the desired action in my list of stitch users.I have amended the functions to the following using .then: however the process doesn’t enter the .then or .catch, it seems to redirect back to the same login page after running .registerWithEmail", "username": "Daniel_Gold" }, { "code": "", "text": "Hi Daniel – Could you also provide some of the surrounding code for how these functions are called? 
If you like, you can DM me this.", "username": "Drew_DiPalma" }, { "code": " <script>\n\n // const { Stitch, RemoteMongoClient, UserPasswordCredential, getProviderClient, getAuthProviderRoute, UserPasswordAuthProviderClient } = stitch;\n\n const client = stitch.Stitch.initializeDefaultAppClient('<App ID>');\n\n if (client.auth.hasRedirectResult()){\n\n client.auth.handleRedirectResult().then(user => {\n\n \n\n let redirectUrl = localStorage.getItem('redirectUrl');\n\n if (!redirectUrl) {\n\n redirectUrl = \"/index.html\";\n\n }\n\n window.location.replace(redirectUrl);\n\n });\n\n };\n\n // client.auth.logout().then(function() {\n\n // if (document.referrer === './pages/login.html')\n\n // {\n\n // localStorage.setItem('redirectUrl', \"/index.html\"); \n\n // } else {\n\n // localStorage.setItem('redirectUrl', document.referrer);\n\n // }\n\n \n\n // let credential = new stitch.GoogleRedirectCredential();\n\n // client.auth.loginWithRedirect(credential);\n\n // });\n\n </script> \n\n</head>\n\n<body class=\"grey lighten-4\"> \n\n <nav class=\"z-depth-0\">\n\n <div class=\"nav-wrapper container\">\n\n <a href=\"/\">title<span>title</span></a>\n\n <span class=\"right grey-text text-darken-1\">\n\n <i class=\"material-icons sidenav-trigger\" data-target=\"side-menu\">menu</i>\n\n </span>\n\n </div>\n\n </nav>\n\n <ul id=\"side-menu\" class=\"sidenav side-menu\">\n\n <li><a class=\"subheader\">title</a></li>\n\n <li><a href=\"/\" class=\"waves-effect\">Home</a></li>\n\n <li><a href=\"/pages/about.html\" class=\"waves-effect\">About</a></li>\n\n <li><div class=\"divider\"></div></li>\n\n <li><a href=\"/pages/contact.html\" class=\"waves-effect\">\n\n <i class=\"material-icons\">mail_outline</i>Contact</a>\n\n </li>\n\n </ul>\n\n <div class=\"container grey-text center\">\n\n <h5>Welcome</h5>\n\n <br> \n\n <h3 id=\"headerTag\"></h3>\n\n <button class=\"loginBtn loginBtn--facebook\">\n\n Login with Facebook\n\n </button>\n\n <br>\n\n <button class=\"loginBtn loginBtn--google\" id=\"google-login-btn\">\n\n Login with Google\n\n </button>\n\n <br> \n\n <p>-- or --</p>\n\n </div>\n\n \n\n <div class=\"row\">\n\n <form class=\"sign-in container section\" id=\"form-username-password\">\n\n <div class=\"divider\"></div>\n\n <div class=\"input-field\">\n\n <input placeholder=\"\" value=\"[email protected]\" id=\"emailAddress\" type=\"text\" class=\"validate\"> \n\n <label for=\"emailAddress\">Email Address</label>\n\n </div>\n\n <div class=\"input-field\">\n\n <input placeholder=\"\" value=\"PwdStrong1\" id=\"password\" type=\"text\" class=\"validate\">\n\n <label for=\"password\">Password</label>\n\n </div>\n\n <div class=\"input-field center\">\n\n <button class=\"btn-small\" id=\"emailPwd-login-btn\">Login</button>\n\n <button class=\"btn-small\" id=\"emailPwd-register-btn\">Register</button>\n\n </div>\n\n </form>\n\n </div>\n\n \n\n <p><a id=\"emailPwd-reset-btn\">I've forgotten my password</a></p>\n\n \n\n</body>\n\n<script src=\"https://code.jquery.com/jquery-3.4.1.min.js\"></script>\n\n<script>\n\nif (client.auth.isLoggedIn) {\n\n headerTag.innerText = \"Logged in as \" + client.auth.currentUser.id;\n\n} else {\n\n headerTag.innerText = \"Not logged in.\";\n\n}\n\n</script>\n\n<script>\n\n$(document).ready(function(){\n\n // function logoutFunction() {\n\n // document.getElementById(\"logout-btn\").innerHTML = \"Logging out!\";\n\n // client.auth.logout()\n\n // }\n\n $('#google-login-btn').click(function() {\n\n document.getElementById(\"google-login-btn\").innerHTML = \"Logging in!\";\n\n let 
credential = new stitch.GoogleRedirectCredential();\n\n client.auth.loginWithRedirect(credential);\n\n }) \n\n $('#emailPwd-login-btn').click(function() {\n\n var typedEmail = document.getElementById(\"emailAddress\").value;\n\n var typedPassword = document.getElementById(\"password\").value;\n\n const app = stitch.Stitch.defaultAppClient\n\n const credential = new stitch.UserPasswordCredential(typedEmail.toString(), typedPassword.toString())\n\n console.log('login:', typedEmail.toString(),typedPassword.toString());\n\n app.auth.loginWithCredential(credential) // loginWithCredential\n\n .then(authedUser => console.log(`successfully logged in with id: ${authedUser.id}`))\n\n .catch(err => \n\n console.error(`login failed with error: ${err}`))\n\n })\n\n $('#emailPwd-register-btn').click(function() {\n\n var typedEmail = document.getElementById(\"emailAddress\").value;\n\n var typedPassword = document.getElementById(\"password\").value;\n\n const emailPasswordClient = stitch.Stitch.defaultAppClient.auth\n\n .getProviderClient(stitch.UserPasswordAuthProviderClient.factory);\n\n \n\n console.log('register:', typedEmail.toString(),typedPassword.toString());\n\n emailPasswordClient.registerWithEmail(typedEmail.toString(),typedPassword.toString()) \n\n .then(() => console.log(\"Successfully sent account confirmation email!\"))\n\n .catch(err => \n\n console.log(\"Error registering new user:\", err));\n\n })\n\n $('#emailPwd-reset-btn').click(function() {\n\n var typedEmail = document.getElementById(\"emailAddress\").value;\n\n if (typedEmail === \"\")\n\n {\n\n alert(\"enter email address\");\n\n } else {\n\n const emailPassClient = stitch.Stitch.client.auth\n\n .getProviderClient(stitch.UserPasswordAuthProviderClient.factory);\n\n emailPassClient.sendResetPasswordEmail(typedEmail).then(() => {\n\n console.log(\"Successfully sent password reset email!\");\n\n }).catch(err => {\n\n console.log(\"Error sending password reset email:\", err);\n\n });\n\n }\n\n })\n\n })\n\n</script>\n", "text": "Hi Drew, I’m not sure if you got the information I sent. I’ve included my login html here:", "username": "Daniel_Gold" } ]
Local-userpass not working
2020-04-19T16:45:44.724Z
Local-userpass not working
2,812
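One likely culprit this thread never states outright: both buttons sit inside a form element, and buttons inside a form default to type=submit, so the page reloads before .then/.catch can fire, which matches the "redirects back to the same login page" symptom. A hedged sketch of the register handler with the default submit suppressed and promise-only syntax, under that assumption:

```js
$('#emailPwd-register-btn').click(function(e) {
  e.preventDefault(); // assumption: stop the surrounding <form> from submitting/reloading
  const email = document.getElementById("emailAddress").value;
  const password = document.getElementById("password").value;
  const emailPasswordClient = stitch.Stitch.defaultAppClient.auth
    .getProviderClient(stitch.UserPasswordAuthProviderClient.factory);
  emailPasswordClient.registerWithEmail(email, password)
    .then(() => console.log("Successfully sent account confirmation email!"))
    .catch(err => console.log("Error registering new user:", err));
});
```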
https://www.mongodb.com/…bf_2_1024x75.png
[ "mongodb-shell" ]
[ { "code": "", "text": "\nmongo error1146×84 4.96 KB\n\nI’m trying to connect to the sandbox for the MongoDB University into course, but keep getting syntax errors when I use the copied command from Atlas to connect.MongoDB Shell version 4.2.3", "username": "Lillie_Sauer" }, { "code": "mongo mongodb+srv://m999student:[email protected]/testdb", "text": "Last time I connected to the Atlas cluster I used something like this from Mongo Shell (and it worked):mongo mongodb+srv://m999student:[email protected]/testdbNote that there are no quotes around the connect string.", "username": "Prasad_Saya" }, { "code": "MongoDB Enterprise >$#mongolist databaseslist collections", "text": "Hi @Lillie_Sauer welcome to the community!You are trying to connect to your MongoDB instance from a shell that is already connected.The MongoDB Enterprise > bit is the MongoDB shell prompt. Your operating system prompt would normally end with a $ or #.The SyntaxError bit you’re getting is the mongo shell not being able to interpret what you’re trying to say.Try typing in list databases or list collections and you will see the databases in your system or the collections in your current database.", "username": "Doug_Duncan" }, { "code": "MongoDB Enterprise >$#MongoDB Enterprise >mongo mongodb+srv://m999student:[email protected]/testdbmongolocalhost27017list databaseslist collectionsshow dbsshow collections", "text": "The MongoDB Enterprise > bit is the MongoDB shell prompt. Your operating system prompt would normally end with a $ or # .Yes, the MongoDB Enterprise > is the Mongo Shell prompt. So, as @Doug_Duncan mentioned you have to enter the connection string (e.g., mongo mongodb+srv://m999student:[email protected]/testdb) from the operating system command prompt.The Mongo Shell utility program allows you connect to the MongoDB database server using a connection string, e.g., just typing mongo from OS command prompt will connect to the MongoDB server at the default localhost at port 27017. In the screen shot @Lillie_Sauer had posted, is already connected to the server perhaps using the default host and port.Try typing in list databases or list collections and you will see the databases in your system or the collections in your current database.Actually, from the Mongo Shell prompt, the commands are show dbs (prints all the database names on the MongoDB server) and show collections (prints all the collection names in the current database).", "username": "Prasad_Saya" }, { "code": "show dbsshow collectionslistshowdb.adminCommand( { listDatabases: 1 } )db.runCommand( { listCollections: 1.0 } )", "text": "@Prasad_Saya you are correct. The shell commands are indeed show dbs and show collections. I got the list and show confused because you can also run db.adminCommand( { listDatabases: 1 } ) and db.runCommand( { listCollections: 1.0 } ) to get similar information.", "username": "Doug_Duncan" }, { "code": "", "text": "@Doug_Duncan, @Prasad_Saya\nThanks for your help! That makes sense. However, is there a way to disconnect from a database inside the mongo shell? Or does it stay connected until you connect to a new shell through the command prompt?", "username": "Lillie_Sauer" }, { "code": "use database_namemongomongodb> db\ntest\n> use testing\nswitched to db testing\n> db\ntesting\n", "text": "is there a way to disconnect from a database inside the mongo shell? Or does it stay connected until you connect to a new shell through the command prompt?The shell will stay connected to the database while it is open. 
You can connect to another database from the same shell using the command use database_name. This will change your context to the database named database_name. If you don’t supply a database when connecting with the mongo shell you will connect to the test database by default. You can see your current database connection, while in the mongo shell by typing db.", "username": "Doug_Duncan" }, { "code": "quit()", "text": "If you are in the Mongo Shell and want to exit it you can type:\nquit()\nMongoDB Docs", "username": "DavidSol" }, { "code": "quit()CTRL+C", "text": "However, is there a way to disconnect from a database inside the mongo shell? Or does it stay connected until you connect to a new shell through the command prompt?As, @Doug_Duncan mentioned you can switch between databases on the same server. To quit the Mongo Shell, use quit() or CTRL+C.You can connect to multiple shells at any given time. Each shell is started from an individual command window. This allows multiple tasks, like be connected to multiple databases on your server, or altogether multiple MongoDB servers (e.g., one local and one remote).More at Mongo Shell Quick Reference.", "username": "Prasad_Saya" }, { "code": "// Connect to a cluster\nc0 = Mongo(\"mongodb://u:[email protected]/database?authSource=admin&replicaSet=s0\")\n// Get DB in connection string\nd0 = c0.getDB(c0.defaultDB)\n// Get another db on the cluster\nd0a = c0.getDB('admin')\n\n// Connect to another cluster\nc1 = Mongo(\"mongodb://u:[email protected]/database?authSource=admin&replicaSet=s0\")\n\n// Close connections\nc0.close()\nc1.close()\n", "text": "@Lillie_Sauer If you really want to disconnect from the db you can start in nodb mode and create a connection object yourself using Mongo() .You can also connect to multiple mongodb clusters in the same shell.You would think that using db.getMongo().close() would work. It does not appear to actually disconnect the socket. Using my example I actually see the connection close in the mongod logs.", "username": "chris" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB Shell SyntaxError: unexpected token: string literal
2020-02-24T19:40:49.628Z
MongoDB Shell SyntaxError: unexpected token: string literal
48,147
null
[ "aggregation", "compass" ]
[ { "code": " [{\n$lookup: {\n from: 'RoleEntity',\n let: {\n anotherRoleId: \"$roleId\"\n },\n pipeline: [{\n $match: {\n \"serverId\": 3,\n \"lastLoginTime\": {\n $lt: 1588435200000\n },\n \"createTime\": {\n $gte: 0,\n $lte: 1588435199999\n }\n }\n },\n {\n $match: {\n $expr: {\n $eq: [\"$roleId\", \"$$anotherRoleId\"]\n }\n }\n },\n ],\n as: 'roles'\n}}, {\n$match: {\n \"roles\": {\n $gt: {\n $size: 0\n }\n }\n}}, {\n$group: {\n _id: \"$curChapterId\",\n count: {\n $sum: 1\n }\n}}]\n SELECT curChapterId,count(*) FROM RleChapterEntity where roleId IN (SELECT roleId from RoleEntity where ...) GROUP BY curChapterId\n", "text": "It’s the tips on MongoDB Compass “Unrecognized option to $lookup: let”How can i query without the ‘let’ label?The pseudo-SQL statement what i want is:Very thanks for your help.", "username": "11126" }, { "code": "", "text": "There is an open issue for the error you are getting: https://jira.mongodb.org/browse/COMPASS-4111But, you can still export the pipeline to Java - without the Use Builders option (just not select this check box).", "username": "Prasad_Saya" } ]
"Unrecognized option to $lookup: let" when i export pipeline to Java
2020-05-04T07:22:53.375Z
&ldquo;Unrecognized option to $lookup: let&rdquo; when i export pipeline to Java
2,938
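Since the join condition here is pure equality, the classic localField/foreignField form of $lookup (which Compass can export without the let issue) plus $filter reproduces the same pipeline; note also that the original final $match compares the array against a literal object, so the usual non-empty-array idiom is used instead. A sketch:

```js
[
  { $lookup: { from: "RoleEntity", localField: "roleId",
               foreignField: "roleId", as: "roles" } },
  // Apply the remaining RoleEntity filters to the joined array.
  { $addFields: { roles: { $filter: {
      input: "$roles", as: "r",
      cond: { $and: [
        { $eq:  ["$$r.serverId", 3] },
        { $lt:  ["$$r.lastLoginTime", 1588435200000] },
        { $gte: ["$$r.createTime", 0] },
        { $lte: ["$$r.createTime", 1588435199999] }
      ] } } } } },
  { $match: { "roles.0": { $exists: true } } }, // keep only non-empty joins
  { $group: { _id: "$curChapterId", count: { $sum: 1 } } }
]
```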
null
[ "charts" ]
[ { "code": "db.collection.find( { $where: function() { today = new Date(); // today.setHours(0,0,0,0); return (this._id.getTimestamp() >= today) } } );", "text": "Hi, have been toying with charts for the last week considering it as an option to include it in our customer portals. We deliver a chatbot service for our customers and have a portal where our customers can modify flows, nlp and we want to add a better reporting solution.Enabled charts, create some data sources and started creating some portals. What is really nice is that it’s easy to create nice graphs quite quickly and share dashboards. Created a small server to call the shared graphs and looking at taking that into production.Things I have questions about, maybe misunderstanding on my side, or are not working as I expected.User rights on Atlas. I f I want to give non technical people access to editing portals / graphs I need to add them to the project. When I do this they get way to much access (can see security settings, bill info, start stitch apps). I would like to add a user that just has access to charts.I would like an option to controle things more from an automation point of view. Now setting up a portal with embedded charts is quite a bit of manual work. Things like embedding a dashboard would be nice. Also when I copy a dashboard and then change the data source, the queries disappear.A ‘where’ query was not recognized\ndb.collection.find( { $where: function() { today = new Date(); // today.setHours(0,0,0,0); return (this._id.getTimestamp() >= today) } } );When I embed charts I cannot remove the background color. I can alter the background color for the iframe, but the graphs keep their white backgrounds.", "username": "Arnold_Ligtvoet" }, { "code": "$wherethemetheme", "text": "Hi @Arnold_Ligtvoet -Thanks for checking out Charts and for your great questions!While we are looking to make permissions more granular, have you looked at granting the Project Data Access Read Only permission to users? That will allow them to build charts but gives minimal additional permissions. Alternatively if you grant Project Read Only access, those users can be given access to individual dashboards but they cannot create their own charts.We do plan to provide an API for automating charts eventually. One way you can embed dashboards now is by generating a public link and loading that in an iframe. The loss of queries when you switch data sources in a chart is something we’ll also look at.Correct, $where is unsupported in Charts, although you can use simple Javascript expressions in the query bar that would let you accomplish the same thingFor iframe-embedded charts, if you specify the theme parameter in the query string, the chart will show with a transparent background unless loaded full screen. If you omit the theme parameter we always show a light background - we did this to retain compatibility with how the product worked before we introduced themes.Let me know if you have any more questions.\nTom", "username": "tomhollander" }, { "code": "", "text": "Hi Tom,thanks for the answers. I will try 3 and 4. On the permissions I have now created a test user on my private email address. This user now has ‘Project Data Access Read Only’ and on the access manager the account shows as ‘Organization Read Only’ (also tested with ‘Organization Member’).The things I notice:When I login with that account:That seems way too much info for a charts user…", "username": "Arnold_Ligtvoet" }, { "code": "", "text": "Thanks @Arnold_Ligtvoet, good feedback. 
We’re looking into this. Also you may want to watch/vote on this Feedback Engine suggestion which is asking for something very similar.Tom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Charts questions
2020-04-29T09:30:16.796Z
Charts questions
2,999
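As an example of replacing the unsupported $where from point 3: because ObjectIds embed their creation timestamp, the "created today" filter can be rewritten as a plain range query that Charts can evaluate. A shell-syntax sketch:

```js
// Build an ObjectId whose embedded timestamp is midnight today.
var today = new Date();
today.setHours(0, 0, 0, 0);
var startOfDay = ObjectId(
  Math.floor(today.getTime() / 1000).toString(16) + "0000000000000000"
);

// Equivalent to the $where in the question, but expressible as a simple query.
db.collection.find({ _id: { $gte: startOfDay } })
```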
null
[ "dot-net" ]
[ { "code": "", "text": "Hi,I am not sure if there is any reason for removing logging in the latest c# driver.Why do I see logging or events necessary:Are there any future plans to include event subscriptions for c# client driver?With regards,\nNithin B.", "username": "Nithin_Bandaru" }, { "code": "", "text": "Hi @Nithin_Bandaru, welcome!I am not sure if there is any reason for removing logging in the latest c# driver.Can you clarify which ‘latest’ version of the MongoDB .NET/C# driver that you’re referring to ? Also, could you provide a snippet code and version that is able to generate logs previously? So we could do side by side comparison between versions and pin-point the specific logs that you’re referring to.Are there any future plans to include event subscriptions for c# client driver?Please see MongoDB ChangeStreams and Change Events.Regards,\nWan.", "username": "wan" } ]
Logging or Events for C# driver
2020-05-02T04:38:31.460Z
Logging or Events for C# driver
4,544
null
[ "atlas" ]
[ { "code": "", "text": "I’m new to mongodb. I’m trying to connect to my database by using shell with this command\nmongo “mongodb+srv://m220-fgqax.mongodb.net/test” --username myth\nbut its not working.need help.Thanks", "username": "myth_saziv" }, { "code": "", "text": "What error are you getting?\nDid you pass --password option", "username": "Ramachandra_Tummala" }, { "code": "mongomongo --version", "text": "Hi @myth_saziv,What error or message do you get when trying to connect?Have you whitelisted your connection IP address in Atlas?Also, what specific version of the mongo shell are you using (as reported by mongo --version?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks.nope i didn’t get that option to pass password .i am getting this error \nproblemmongodb960×111 12.2 KB\n", "username": "myth_saziv" }, { "code": "", "text": "Thanks.i whitelisted my connection ip address.and i am getting this error white trying to connect and version of shell is 4.2.6.\nproblemmongodb960×111 12.2 KB\n", "username": "myth_saziv" }, { "code": "", "text": "You are already at mongo prompt (>)\nPlease exit/quit and try from your os command prompt", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks… it worked.", "username": "myth_saziv" } ]
Can't connect to my database
2020-05-03T03:34:57.352Z
Can&rsquo;t connect to my database
2,246
null
[ "security" ]
[ { "code": "use admin\ndb.createUser( {\nuser: \"nombreUsuarioAdmin\",\npwd: \"contrasenaUsuarioAdmin\",\nroles: [{role: \"root\",db: \"admin\"}]\n})\n", "text": "Hi everyone,I’m a total beginner when it comes to databases and MongoDB in particular.\nI installed MongoDB via the current Debian buster package.I tried to create an admin user as in the MongoDB documentation. I typed:The response to this command is:Fri May 1 19:51:45.706 TypeError: Property ‘createUser’ of object test is not a function.", "username": "Stefan_Schmelz" }, { "code": "", "text": "Can you provide a screenshot of what you are doing so that we have some context?Are you in the shell? Look like it, but I am not sure. It could be the node driver. If in the shell, could you just enter db. It should print admin as you wrote use admin but it thinks it is an object of type test. So may be somewhere you did db = ….", "username": "steevej" }, { "code": "dbdbmongo", "text": "The response to this command is:Fri May 1 19:51:45.706 TypeError: Property ‘createUser’ of object test is not a function.It almost seems like your db shell variable has gotten overwritten. What is returned if you type db in the shell? You should get the name of your current database.Have you tried closing your mongo shell and restarting it? This will bring everything back to a known good state.", "username": "Doug_Duncan" }, { "code": "version() // mongo shell version\ndb.version() // MongoDB server version \n", "text": "Hi @Stefan_Schmelz,What is the output of:Regards,\nStennie", "username": "Stennie_X" }, { "code": "sudo apt install -y mongodbuse admin\ndb.createUser({user:\"admin\",pwd: passwordPrompt(),roles:[{role: \"userAdminAnyDatabase\", db: \"admin\"}], readWriteAnyDatabase})\n", "text": "I am in the shell and running version 3.6.3 [according to version()].\nThe database is running ao a raspberry pi3 with a 64 bit raspbian buster.\nI installed mongodb with sudo apt install -y mongodb and opened the mongo shell after install.the only commands i entered were:and if i test it hosting the database on my computer I get an error like this:\n2020-05-01T23:07:40.066+0200 E QUERY [thread1] ReferenceError: passwordPrompt is not defined :\n@(shell):1:29", "username": "Stefan_Schmelz" }, { "code": "", "text": "Perhaps it would help if you could point me to a beginner safe explanation of the mongoDB user management and maybe a good book that isnt that expensive?", "username": "Stefan_Schmelz" }, { "code": "passwordPrompt()pwd", "text": "As for passwordPrompt() according to documentation.Starting in MongoDB 4.2, you can use passwordPrompt() as the value for the pwd instead of specifying the password.", "username": "steevej" }, { "code": "", "text": "As formongoDB user managementSee https://university.mongodb.com/, course M001 and M103 to start and then M310.", "username": "steevej" }, { "code": "", "text": "Hi Stefan,The documentation is comprehensive (and free!). Just ensure you are referring to the correct version to match your MongoDB deployment: MongoDB 3.6 Manual.If you are trying to follow tutorials or documentation designed for newer MongoDB versions, these will often rely on newer server and shell features that may not be present in prior releases.If you want an offline version of the manual, there are also links to HTML and EPUB on the manual home page (link above).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I realized that the version I get for the Raspberry PI is 2.4. 
I managed to create an admin user, but is there a way to build it from source? I would prefer a recent version, but if you can tell me that there is no downside to an older version I will be happy with the old one.", "username": "Stefan_Schmelz" } ]
Can't create admin user: ‘createUser’ of object test is not a function
2020-05-01T19:37:31.662Z
Can’t create admin user: ‘createUser’ of object test is not a function
8,556
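A note for anyone who lands on the same Raspbian package: the 2.4-era server that ships for the Raspberry Pi predates db.createUser(), which only arrived in MongoDB 2.6. A minimal sketch of creating an administrator on a 2.4 shell uses the legacy db.addUser() helper instead (the user name and password below are placeholders):

use admin
db.addUser({ user: "adminUser", pwd: "replaceThisPassword", roles: ["userAdminAnyDatabase"] })

On 2.6+ shells the db.createUser() form shown earlier in the thread applies.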
null
[ "mongodb-shell", "atlas" ]
[ { "code": " /opt/mongodb/mongodb-linux-x86_64-3.4.2/bin/mongo \"mongodb://tmx-mongolog-test-shard-00-00.tqcbm.mongodb.net:27017,tmx-mongolog-test-shard-00-01.tqcbm.mongodb.net:27017,tmx-mongolog-test-shard-00-02.tqcbm.mongodb.net:27017/test?replicaSet=atlas-o7jz9u-shard-0\" --authenticationDatabase admin --username xxxxxxxxxx --password yyyyyyyyyy\nMongoDB shell version v3.4.2\nconnecting to: mongodb://tmx-mongolog-test-shard-00-00.tqcbm.mongodb.net:27017,tmx-mongolog-test-shard-00-01.tqcbm.mongodb.net:27017,tmx-mongolog-test-shard-00-02.tqcbm.mongodb.net:27017/test?replicaSet=atlas-o7jz9u-shard-0\n2020-05-01T14:15:13.969-0500 I NETWORK [thread1] Starting new replica set monitor for atlas-o7jz9u-shard-0/tmx-mongolog-test-shard-00-00.tqcbm.mongodb.net:27017,tmx-mongolog-test-shard-00-01.tqcbm.mongodb.net:27017,tmx-mongolog-test-shard-00-02.tqcbm.mongodb.net:27017\n2020-05-01T14:15:14.126-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:14.126-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 1 checks in a row.\n2020-05-01T14:15:14.790-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:14.790-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 2 checks in a row.\n2020-05-01T14:15:15.461-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:15.461-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 3 checks in a row.\n2020-05-01T14:15:16.129-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:16.129-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 4 checks in a row.\n2020-05-01T14:15:16.803-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:16.803-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 5 checks in a row.\n2020-05-01T14:15:17.476-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:17.476-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 6 checks in a row.\n2020-05-01T14:15:18.147-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:18.147-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 7 checks in a row.\n2020-05-01T14:15:18.809-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:18.809-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 8 checks in a row.\n2020-05-01T14:15:19.478-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:19.479-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 9 checks in a row.\n2020-05-01T14:15:20.147-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:20.147-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 10 checks in a row.\n2020-05-01T14:15:20.816-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0\n2020-05-01T14:15:20.816-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. 
This has happened for 11 checks in a row.
2020-05-01T14:15:21.480-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:22.151-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:22.819-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:23.487-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:24.156-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:24.823-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:25.485-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:26.154-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:26.817-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:27.487-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:27.487-0500 I NETWORK [thread1] All nodes for set atlas-o7jz9u-shard-0 are down. This has happened for 21 checks in a row.
2020-05-01T14:15:28.154-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
2020-05-01T14:15:28.823-0500 W NETWORK [thread1] No primary detected for set atlas-o7jz9u-shard-0
", "text": "Hello, I have created a 3 node Atlas cluster and am able to connect from my laptop using Compass so I know it’s operational; however, when connecting from a 3.4.2 command shell on Linux I get errors that no primary is detected. The Linux server and my laptop are whitelisted, but why do I get 2 different results?", "username": "Pierre_Evans" }, { "code": "--ssl", "text": "Add the --ssl flag. You must have lost it in the copy/paste.Why not use a more current shell version? Then you can use the mongodb+srv uri.", "username": "chris" }, { "code": "", "text": "--ssl does not work from 3.4.2 - Error parsing command line: unrecognised option ‘--ssl’\nI’m running several 3.4.2 instances - the point of doing this is to migrate from 3.4.2 on-prem to Atlas.\nDo I need to migrate each db from 3.4.2 to 3.6 before migrating to Atlas? The documentation says we can migrate from 3.4.2 so that was my initial attempt.", "username": "Pierre_Evans" }, { "code": "mongo/opt/mongodb/mongodb-linux-x86_64-3.4.2/", "text": "--ssl does not work from 3.4.2 - Error parsing command line: unrecognised option ‘--ssl’Hi Pierre,This indicates you have a mongo shell without TLS/SSL support (which is required for connecting to Atlas). Based on your path of /opt/mongodb/mongodb-linux-x86_64-3.4.2/, I suspect you have installed the generic Linux tarball which does not include TLS/SSL (or any other external library dependencies).What Linux distro & version are you using? If you can install a packaged version from MongoDB, it will include TLS/SSL support. See MongoDB Community Edition installation tutorials.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Also, to connect to a 3.6 cluster you need at least a 3.6 shell. And it can just be the mongo shell on your laptop, for instance. No need to install a newer shell on your source server.What are you attempting to do with the shell on the source server? That may yield some better advice.", "username": "chris" }, { "code": "", "text": "Stennie, thanks for pointing out something I’ve been missing. 
I’m running on RHEL7, wasn’t aware of TLS/SSL requirements way back when this instance was created.\nRegards,\nPierre", "username": "Pierre_Evans" }, { "code": "", "text": "Hi Chris,I’ve got an existing 3.4.2 instance running on Linux and we’re getting ready to move it to Atlas.I’ve been trying to connect from the command shell just to verify all the networking/firewall ports etc. are working prior to using the Data Migration tool.Regards,\nPierre", "username": "Pierre_Evans" }, { "code": "", "text": "One (hopefully final) follow-up question… do I need to upgrade to TLS/SSL support if all I’m going to do is use the Data Migration tool to move data up to Atlas?\nThanks for all the tips…\nPierre", "username": "Pierre_Evans" }, { "code": "", "text": "do I need to upgrade to TLS/SSL support if all I’m going to do is use the Data Migration tool to move data up to Atlas?Hi Pierre,TLS/SSL support on the source deployment is optional but recommended for security best practices.The target Atlas cluster will always have TLS/SSL enabled.You only need local MongoDB tool versions with TLS/SSL support if you are trying to connect to your Atlas cluster from the command line.Atlas also has a Data Explorer feature if you want to verify data has migrated without installing any additional local tools.For more information, see Migrate or Import Data into Your Cluster.Regards,\nStennie", "username": "Stennie_X" } ]
Unable to connect from 3.4.2 shell, but can from Compass
2020-05-01T19:36:40.379Z
Unable to connect from 3.4.2 shell, but can from Compass
3,045
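For reference, a TLS-capable 3.4 shell can reach an Atlas replica set once the --ssl flag is included. This is only a sketch with placeholder hosts, replica set name, and credentials, not the poster’s actual cluster:

mongo "mongodb://host-00.example.mongodb.net:27017,host-01.example.mongodb.net:27017,host-02.example.mongodb.net:27017/test?replicaSet=myReplSet" --ssl --authenticationDatabase admin --username myUser --password myPassword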
null
[]
[ { "code": "", "text": "Let’s play a game!We’re back with the third in our series of online scavenger hunts from MongoDB. Answer a series of 10 clues by visiting MongoDB resources including our blog, forums, documentation and more and learn about these resources while earning some awesome rewards!Each participant who completes a single scavenger hunt will earn:Each Monday we will post a new set of clues giving you the chance to earn additional badges and other rewards.We will soon be announcing additional rewards for completing multiple scavenger hunts with a special reward for those who complete all our weekly scavenger hunts this spring and summer.Every scavenger hunt will run from Monday until 5pm US Eastern time on Friday.Have an idea for a clue or question that should be included? Let us know!Scavenger Hunt Week 3", "username": "Ryan_Quinn" }, { "code": "", "text": "Week 3 done !\nI discovered a lot of cool things Does the code on the Twitch page still work ? I failed to apply it.\nWe will soon be announcing additional rewards for completing multiple scavenger hunts with a special reward for those who complete all our weekly scavenger hunts this spring and summer.So sad to have missed the first week… no chance of doing it again ? ", "username": "Gaetan_MORLET" }, { "code": "", "text": "Hello Gaetan,So sad to have missed the first week… no chance of doing it again ? I recently learned from @Jamie that they like to keep it on a weekly bases, no reopen.Michael", "username": "michael_hoeller" }, { "code": "", "text": "Just a quick update @michael_hoeller - We’re discussing some more options beyond reopening past weeks to make sure folks who are later to the challenge can still participate and earn cumulative badges & rewards. Keep an eye out for updates ", "username": "Jamie" }, { "code": "", "text": "Thanks for the update!\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Thanks for catching that. The code featured on the Twitch page has been updated (as has the corresponding clue/answer).As Jamie mentioned we are discussing options for those who missed out on week one. We’ll be sharing a post later in the week with more details and announcing some of the next tier of rewards.", "username": "Ryan_Quinn" }, { "code": "", "text": "Thanks for another fun week with the MongoDB Scavenger Hunt. This week’s form has now been closed. Check back Monday for installment #4 of our Scavenger Hunt series.", "username": "Ryan_Quinn" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Let’s Play a Game - MongoDB Scavenger Hunt #3
2020-04-27T14:11:02.318Z
Let’s Play a Game - MongoDB Scavenger Hunt #3
5,743
null
[ "mongodb-shell" ]
[ { "code": "PS C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongo.exe\nrs0:SECONDARY> db.collection.find()\nError: error: { \"ok\" : 0, \"errmsg\" : \"not master and slaveOk=false\", \"code\" : 13435 }\n", "text": "Today I can open a connection to a mongodb secondary like this:If I run the rs.slaveOk(), then my queries work.If I close the session, and start a new one, my queries don’t work again.How can I make the change of slaveOk() permanent, so any new connection made to a slave is allowed to read?", "username": "MACKENZIE_CLARK" }, { "code": "rs.slaveOk().mongorc.js", "text": " Hi @MACKENZIE_CLARK, you can add rs.slaveOk() to your .mongorc.js file and that should do what you’re trying to do.On Linux/Mac the file is in your home directory. I’m not sure where it is on Windows if you use that platform, but a quick search should return it.", "username": "Doug_Duncan" }, { "code": ".mongorc.js", "text": "Thanks @Doug_Duncan!Slightly different question: is the .mongorc.js respected by programming language drivers? e.g. the C# driver.", "username": "MACKENZIE_CLARK" }, { "code": "", "text": "No, mongorc.js is just for the mongo shell.", "username": "chris" }, { "code": "readPreferencesecondarysecondaryPreferred", "text": "You would set your readPreference on your connection to either secondary or secondaryPreferred to get secondary reads.For C# the Read Preference Class page might be of help.", "username": "Doug_Duncan" } ]
How do I make slaveOk() permanent?
2020-04-30T21:52:52.644Z
How do I make slaveOk() permanent?
5,857
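Outside the shell, the equivalent of rs.slaveOk() for drivers is a read preference. A minimal sketch, assuming a replica set named rs0 and placeholder hosts; it shows both the persistent shell setup and the connection-string form that drivers such as the C# driver honour:

# persist the shell helper (Linux/macOS home directory assumed)
echo 'rs.slaveOk()' >> ~/.mongorc.js
# drivers: put the read preference in the connection string instead
mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred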
null
[ "compass" ]
[ { "code": "", "text": "According to https://jira.mongodb.org/browse/COMPASS-3705 and I experience the same, the drag functionality for re-ordering stages in the Aggregation Builder is broken and fixed only in 1.21.0.This version isn’t offered on the download page – is there a source for Compass with this issue fixed?", "username": "Christoph_Lange" }, { "code": "", "text": "Hi! You can find the Compass source code at GitHub - mongodb-js/compass: The GUI for MongoDB.-Sheeri", "username": "Sheeri_Cabral" }, { "code": "", "text": "Ah, cool. Thank you!Doesn’t solve the problem though, at least not on Mac. – but I can report that in the Jira issue.", "username": "Christoph_Lange" }, { "code": "", "text": "Hi @Christoph_Lange,FYI, Compass 1.21.0 is now released and available via auto-update within Compass (you should be prompted automatically) or from the MongoDB Download Centre.Regards,\nStennie", "username": "Stennie_X" } ]
Compass Aggregation Builder fixed in 1.21.0 -- where to get?
2020-02-25T19:18:37.469Z
Compass Aggregation Builder fixed in 1.21.0 – where to get?
1,533
null
[ "replication" ]
[ { "code": "2020-04-30T09:55:27.777+0100 I SHARDING [repl-writer-worker-1] Marking collection service-cases.case as collection version: <unsharded>\n2020-04-30T09:55:27.781+0100 E STORAGE [repl-writer-worker-8] WiredTiger error (-31802) [1588236927:781053][4724:0x7fb2815c7700],\n file:service.45user.45accounts/index-58--9181491801672360292.wt, WT_SESSION.open_cursor: __desc_read, \n351: service.45user.45accounts/index-58--9181491801672360292.wt does not appear to be a WiredTiger file: WT_ERROR: non-specific WiredTiger error \nRaw: [1588236927:781053][4724:0x7fb2815c7700], file:service.45user.45accounts/index-58--9181491801672360292.wt, WT_SESSION.open_cursor: __desc_read, \n351: service.45user.45accounts/index-58--9181491801672360292.wt does not appear to be a WiredTiger file: WT_ERROR: non-specific WiredTiger error\n2020-04-30T09:55:27.781+0100 E STORAGE [repl-writer-worker-8] Failed to open a WiredTiger cursor. Reason: UnknownError: -31802: \nWT_ERROR: non-specific WiredTiger error, uri: table:service.45user.45accounts/index-58--9181491801672360292, config: overwrite=false\n2020-04-30T09:55:27.781+0100 E STORAGE [repl-writer-worker-8] This may be due to data corruption. Please read the documentation \nfor starting MongoDB with --repair here: http://dochub.mongodb.org/core/repair\n2020-04-30T09:55:27.781+0100 F - [repl-writer-worker-8] Fatal Assertion 50882 at src/mongo/db/storage/wiredtiger/wiredtiger_session_cache.cpp 101\n2020-04-30T09:55:27.781+0100 F - [repl-writer-worker-8] \n\n\n2020-04-30T15:40:32.062+0100 F STORAGE [initandlisten] An incomplete repair has been detected! This is likely becausrepair operation unexpectedly failed before completing. MongoDB will not start up again without --repair.\n2020-04-30T15:40:32.062+0100 F - [initandlisten] Fatal Assertion 50922 at src/mongo/db/storage/storage_engine_.cpp 85\n2020-04-30T15:40:32.062+0100 F - [initandlisten]\n\n***aborting after fassert() failure\n", "text": "After getting below errors I remove server from replication But still cannot start or run repaircan you please assist", "username": "lital_yeheskel" }, { "code": "", "text": "Hi @lital_yeheskel,If you’ve already run repair. You’ll probably have to restore from backup. I assume this is not a replica set?You’ll want to check your storage for errors before this. As this could be disk level corruption.", "username": "chris" }, { "code": "", "text": "repair is not working and monodb cannot be started.\nplease advice", "username": "lital_yeheskel" }, { "code": "", "text": "Check your storage for errors and recover from a backup.", "username": "chris" }, { "code": "", "text": "If you hadn’t removed it you could have cleared the data directory and synchronized.As you have removed it. Remove the files in the data directory, start it and add it as you would a new member.", "username": "chris" }, { "code": "", "text": "Thanks for your fast good info.\nIt was an encryption issue (could say storage issue) that we didnt managed to recover.\ndue to that i didn’t managed to repair mongo or start the service.\neventually i needed to reinstall mongo and add the node again.", "username": "lital_yeheskel" } ]
Cannot start replica set member or run repair
2020-04-30T15:18:48.215Z
Cannot start replica set member or run repair
6,818
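For the archives, the resync sequence sketched in the replies looks roughly like this (placeholder host name; assumes the remaining members still form a healthy set):

// on the current primary, drop the broken member if it is still listed
rs.remove("broken-node.example.net:27017")
// on the broken node: stop mongod, empty its dbPath, then start mongod again
// back on the primary, re-add it so it performs a fresh initial sync
rs.add("broken-node.example.net:27017")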
null
[ "aggregation" ]
[ { "code": "", "text": "Problem Description:We are using mongoDB Community Version 4.0.14 version as our database.\nHW sizing is 16 core CPU, 32 GB RAM and 1 TB HDD.\nWe have data stored in two collections coll_A ( from collection - 2GB) and coll_B (to collection 5 GB) on a single node mongodb server,\nWe have to join these two collection based on certain predefined conditions and out the data in third collection coll_result.Appropriate Indexes are build on both the collections.\nEach of the collection contains millions of records and we use the following pipeline stages to out the result records.\n#1. $match runID (job id)\n#2. $redact on multiple and / or conditions filter initial data and keep relevant\n#3. $lookup on multiple and / or conditions join on the second collection\n#4. $unwind result\n#5. $redact on multiple and / or conditions filter all the data\n#6. $match specific condition for system from which the data is polled. ( Data is polled from multiple downstream systems)\n#7. $addFields\n#8. $project\n#9. $project with cursor - batchsize 50 , allowDiskUse: trueWe have a few observations:Some time it takes more the 3 to 6 hours to get the output.\nAs the data in coll_A , coll_B increases the time keeps on increasing.Need guidance for how to resolve the long query time problem.Sample aggregation pipelineThanks & Regards", "username": "Ranjit_Kolte" }, { "code": "$or$cond: [ \n { $or: [ \n { $eq:[ \"$Masterid\", \"\" ]}, \n { $eq:[ \"$VKORG\", \"\" ]}, \n { $eq:[ \"$VTWEG\", \"\" ]}, \n { $eq:[ \"$SPART\", \"\" ]}, \n { $eq:[ \"$KUNNR\", \"\" ]}, \n { $eq:[ \"$VBELN\", \"\" ]}, \n { $eq:[ \"$POSNR\", \"\" ]}, \n { $eq:[ \"$UDATE\", \"\" ]} \n ]}, \n\"$$PRUNE\", \"$$KEEP\" ]\n$lookup$and$lookup$cond$matchnull$redact$lookup", "text": "Hi @Ranjit_Kolte, welcome!Need guidance for how to resolve the long query time problem.You could try to simplify the multiple $or on stage #2. For example, it could be written similarly as:Similarly in the other parts of the pipeline. For example, in the $lookup pipeline match you could simplify the nested $and as well. Looking at the $lookup stage it looks like it could benefit from a single reference field. This should simplify matching ~12 fields into one field.\nAlso, the $cond on stage #7 is likely redundant, as on stage #6, the $match should already filtered cases for the null condition.Having said all the above, I’d recommend to reconsider the schema of the collections. Especially to look for ways to remove the double $redact stages, and potentially simplifying/eliminating the $lookup stage. Please see Building With Patterns: A Summary to see how different schema patterns may be suitable for your use case.I would also suggest to use explain to view detailed information regarding the execution plan of the aggregation pipeline. This should help you to create the appropriate indexes. See also Return Information on Aggregation Pipeline operation.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Queries take way too long for processing sometimes days and return no result
2020-04-23T13:47:56.098Z
Queries take way too long for processing sometimes days and return no result
3,746
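When tuning a pipeline like this, the execution plan is the first thing to check. A minimal sketch against the collection names used above (the runID value is a placeholder, and the remaining stages are elided):

db.coll_A.explain().aggregate([
  { $match: { runID: "job-0001" } }
  // ...remaining $redact / $lookup / $sort stages...
])

Higher explain verbosity levels, where the server version supports them for aggregation, also report per-stage execution statistics.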
null
[ "data-modeling" ]
[ { "code": "", "text": "I’m utilizing a local database to sync certain data from external APIs. The local database would be used to serve the web application. The data I’m syncing is different for each user who would be visiting the web app. Since the sync job is periodically writing to the DB while users are accessing their data from the web page, I’m wondering what would give me the best performance here.Since the sync job is continuously writing to the DB, I believe the collection is locked until it’s done. I’m thinking that having multiple collections would help here since the lock would be on a particular collection that is being written to rather than on a single collection every time.Is my thinking correct here? I basically don’t want reads to get throttled since the write operation is continuously locking up one collection.", "username": "Anish_Sana" }, { "code": "", "text": "Since the sync job is continuously writing to the DB, I believe the collection is locked until it’s done. I’m thinking that having multiple collections would help here since the lock would be on a particular collection that is being written to rather than on a single collection every time.Welcome to the community @Anish_Sana!WiredTiger, the default storage engine in all modern versions of MongoDB, uses document-level concurrency control. Continuously writing to a collection does not prevent reads. See: Does a read or write operation ever yield?.There are some administrative commands that require an exclusive lock at the collection level.For more information see FAQ: Concurrency.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks, @Stennie_X! This is exactly what I was looking for.", "username": "Anish_Sana" } ]
MongoDB Performance: single collection vs multiple collections
2020-04-30T20:31:50.171Z
MongoDB Performance: single collection vs multiple collections
4,411
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 3.6.18 is out and is ready for production deployment. This release contains only fixes since 3.6.17, and is a recommended upgrade for all 3.6 users.Fixed in this release:3.6 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Luke_Chen" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 3.6.18 is released
2020-05-01T02:53:20.397Z
MongoDB 3.6.18 is released
1,689
null
[ "ops-manager", "licensing" ]
[ { "code": "", "text": "Where can I found the OpsManager license agreement?", "username": "Marcos_Blabla" }, { "code": "", "text": "Welcome to the community @Marcos_Blabla!The Ops Manager license agreement is the Customer Agreement which you agree to when downloading Ops Manager. There’s an explicit checkbox on the download form:Check here to indicate that you have read and agree to the terms of the Customer Agreement.Per the terms of the Customer Agreement (section “2. Subscriptions”), you can install the software for free evaluation and development purposes, but use for any other purpose (including production) requires an Enterprise Advanced subscription. Please refer to the Customer Agreement for full legal terms and conditions.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
OpsManager License Agreement
2020-04-30T20:31:55.263Z
OpsManager License Agreement
2,800
null
[ "security", "legacy-realm-server" ]
[ { "code": "", "text": "We are using Realm Server and have a question about security. I’m not too worried about iOS, but on Android, hacking an APK is pretty common.Within the app, there are ways to keep from showing certain data even when they log in to the synced realm. However, if the APK was hacked (theoretically) and they had the credentials for the synced realm, could they then see and/or modify the data in the synced realm?Thanks. --Kurt", "username": "Kurt_Libby" }, { "code": "", "text": "@Kurt_Libby If a bad actor has the plaintext username & password for a syncUser of the realm object server then they will be able to login and download the synced realm unless you employ some other authentication and security mechanism like JWT, 2FA, or some other JWT implementation.We do not store the plaintext password of the syncUser on the device so even if a bad actor compromises the device they should still not be able to login() as long as you require them to.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm security question
2020-04-22T22:44:58.550Z
Realm security question
3,559
null
[ "production", "pymodm-odm" ]
[ { "code": "", "text": "We’re pleased to announce the 0.4.3 release of PyMODM - the Pythonic ODM for working with MongoDB!This release includes a fix for Python 3.8 support.For a full list of the issues resolved in this release, visit https://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=13381&version=27387.To install:$ python -m pip install pymodmTo upgrade:$ python -m pip install --upgrade pymodmDon’t know what PyMODM is and want to learn? Check out our Getting Started guide.Complete documentation for the project can be found at Read the Docs.If you’ve found a problem or want to request a feature, please open a new Jira\nticket in the PYMODM project: https://jira.mongodb.org/browse/PYMODM. You can\nalso get involved with the community and ask questions in the MongoDB Community Forum.", "username": "Prashant_Mital" }, { "code": "", "text": "", "username": "system" } ]
PyMODM 0.4.3 Released
2020-04-30T19:03:37.341Z
PyMODM 0.4.3 Released
3,193
null
[ "connecting", "php" ]
[ { "code": "<?php\nrequire 'vendor/autoload.php';\ntry {\n $mc = new MongoDB\\Client('mongodb://localhost/');\n echo \"<p>ClientOK</p>\";\n $dbs = $mc->listDatabases();\n echo \"<p>List</p>\";\n} catch ( **Exception** $ **e** ) {\n echo \"Exception: \", $e->getMessage(), \"\\n\";\n}\nserverSelectionTryOnce", "text": "Hi,I’m trying out PHP driver for the first time and run into this error. I’ve installed MongoDB extension version 1.7.4. It seems the connection to MongoDB is ok (i.e. not sure if a connection is really made from MongoDB\\Client), but the call to listDatabases() failed.\nAm I missing something? Please help!Btw, I’ve been using PyMongo and it’s working fine, just couldn’t get PHP to work.Client OK\nException: No suitable servers found (serverSelectionTryOnce set): [connection refused calling ismaster on ‘localhost:27017’]Here’s a snippet from phpinfo():$ php -i | grep -i mongo\nMongoDB support => enabled\nMongoDB extension version => 1.7.4\nMongoDB extension stability => stable\nlibmongoc bundled version => 1.16.2\nlibmongoc SSL => enabled\nlibmongoc SSL library => OpenSSL\nlibmongoc crypto => enabled\nlibmongoc crypto library => libcrypto\nlibmongoc crypto system profile => enabled\nlibmongoc SASL => enabled\nlibmongoc ICU => enabled\nlibmongoc compression => enabled\nlibmongoc compression snappy => enabled\nlibmongoc compression zlib => enabled\nlibmongocrypt bundled version => 1.0.3\nlibmongocrypt crypto => enabled\nlibmongocrypt crypto library => libcrypto\nmongodb.debug => /var/log/php-mongodb.debug.log => /var/log/php-mongodb.debug.log", "username": "Deyoung_Hong" }, { "code": "", "text": "Any help from the MongoDB team on PHP driver?\nDo you need any info from me?Btw, I had worked on a custom filesystem using WiredTiger and Keith Bostic was very helpful a few years back. See GitHub - deyohong/UNFS.", "username": "Deyoung_Hong" }, { "code": "mongodb.debug", "text": "The error message suggests that the driver can’t connect to a MongoDB instance locally, indicating that it isn’t running or isn’t accepting any connections. Please confirm that you can connect to that address via another method (e.g. mongo shell) and retry.If this doesn’t work, enabling the debug log will help with diagnosing where exactly this goes wrong. To do so, you can change the mongodb.debug INI setting. Please make sure to redact any sensitive information (passwords, documents) contained in the debug log before sharing it in a public or semi-public forum.", "username": "Andreas_Braun" }, { "code": "$ php -f testdb.php # this works\n$ curl http://localhost/testdb.php # DB connection not working as shown in this thread\n", "text": "Thanks Andreas for responding.As mentioned, mongo shell and pymongo are working fine. As a matter of fact, running php as command line also works:But if I invoke it via the local apache server then it doesn’t connect:I’ve also set mongodb.debug pointing it to /tmp/debug.log but nothing is shown there.Whenever I run ‘php -i’, I got this annoying error:\nPHP Warning: Module ‘mongodb’ already loaded in Unknown on line 0\nPHP Notice: PHP Startup: file created in the system’s temporary directory in Unknown on line 0When running from command line, I could see a connection is made from /var/log/mongodb/mong.log, but when invoking from http connection, nothing is happening in that log.What am I missing?Thanks,\nDH", "username": "Deyoung_Hong" }, { "code": "", "text": "I figured out the annoying warnings and errors coming from ‘php -i’ command. 
It’s because ‘extension=mongodb.so’ is specified in both /etc/php.ini and /etc/php.d/50-mongodb.ini, so it tries to load twice. I’m not sure what mongodb.debug can be set to. I thought its value would be a filepath, but setting it to something like ‘mongodb.debug=/tmp/debug.log’ gives the PHP Notice error. I changed it to ‘mongodb.debug=stderr’ and got the debug messages when running ‘php -f testdb.php’ from the command line.I still don’t get anything (from mongod or debug logs) at all when running ‘curl http://localhost/testdb.php’. Note that it’s the same testdb.php file.I appreciate any help here.", "username": "Deyoung_Hong" }, { "code": "serverSelectionTryOnce", "text": "Hi,Let me repost the problem.Hope someone can help me resolve this issue with connecting to MongoDB using PHP via HTTP.Thanks!\nDHThe basic setup:CentOS 7 (release 7.7.1908)\nMongoDB v4.2.5, new installation, no authentication.\nPHP version 7.4.5\nDriver installed via ‘pecl install mongodb’\nSeeing this error:No suitable servers found (serverSelectionTryOnce set): [connection refused calling ismaster on ‘127.0.0.1:27017’]\nIs there a solution to this problem that I’m not aware of? Is there some mismatch in versions of dependent modules or libraries?Please see more info below. The driver seems to be installed correctly, since the PHP standalone command works but via HTTP it does not. And the var_dump shows MongoDB\\Client is ok either way.Thanks for any help.Here’s a simple PHP test script:PROBLEM connecting to the database via HTTP (using curl):But it works in PHP command mode (not sure why, or what’s different from before?):More info:", "username": "Deyoung_Hong" }, { "code": "mongodb.debugstdoutstderr", "text": "Good to know you figured out the warnings yourself.The notice about a file being created in the system’s temporary directory is due to you specifying a non-existent directory in the mongodb.debug INI setting. Please note that this setting takes a directory or stream (e.g. stdout or stderr, as you’ve discovered). If using a directory, please make sure the directory exists. The driver will create a new file in that directory (IIRC it creates one per CLI call and one for each FPM process that is spawned, but please don’t quote me on that).From the error description it looks like the connection can’t even be opened when accessing the script through HTTP, which is definitely curious. A debug log will help us figure out where this fails, which hopefully points us in the right direction.Please also allow for some time between responses, especially on weekends. Thank you for your patience.", "username": "Andreas_Braun" }, { "code": "", "text": "If set mongodb.debug=/tmp:If set mongodb.debug=stderr:", "username": "Deyoung_Hong" }, { "code": "[2020-04-28T07:37:43.805985+00:00] socket: TRACE > TRACE: _mongoc_socket_capture_errno():68 setting errno: 13 Permission denied
[2020-04-28T07:37:43.805997+00:00] socket: TRACE > TRACE: _mongoc_socket_errno_is_again():631 errno is: 13
", "text": "This suggests a permission error with your HTTP server. This may be related to SELinux restrictions, which were covered in an older GitHub issue, mongodb/mongo-php-driver#484. It also seems related to the following Stack Overflow thread for MySQL, which shows that this isn’t particularly unique to MongoDB: apache - php can't connect to mysql with error 13 (but command line can) - Stack Overflow", "username": "jmikola" }, { "code": "", "text": "Thanks Jeremy for the tip. I will look into how SELinux affects this. 
MySQL is working fine through HTTP though.", "username": "Deyoung_Hong" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
PHP error: No suitable servers found
2020-04-16T20:39:28.835Z
PHP error: No suitable servers found
39,662
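For anyone hitting the same errno 13 from Apache/PHP on an SELinux-enforcing distro such as CentOS, the boolean mentioned in the linked threads can be inspected and flipped like this (run as root; a sketch, not a universal fix):

getsebool httpd_can_network_connect
setsebool -P httpd_can_network_connect 1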
null
[ "java" ]
[ { "code": "", "text": "Hello,I’m currently developping a java project with mongoDB and I would like to know if there was any solution to have the mongo java driver without using maven or graddle. Like having the jar of the driver and add it like an external libraries.Thanks.", "username": "Jean_MATHIEU" }, { "code": "", "text": "You can download the driver JAR file from here: https://mvnrepository.com/artifact/org.mongodb/mongo-java-driver/3.12.0Just place it in the classpath while you compile and run your Java program.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you very much !", "username": "Jean_MATHIEU" } ]
Java Driver without Maven
2020-04-30T11:49:19.851Z
Java Driver without Maven
3,793
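Compiling and running against the downloaded JAR is then a classpath exercise. A sketch with placeholder file and class names (use ; instead of : as the separator on Windows):

javac -cp mongo-java-driver-3.12.0.jar MyApp.java
java -cp .:mongo-java-driver-3.12.0.jar MyApp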
null
[]
[ { "code": "", "text": "Dear All,I am very new to MongoDB and I have build a test setup having a test MongoDB database and a test collection in it. I needed help on the following 2 points :-I have a few .pdf, .doc. .jpg etc … files on the Desktop of the Windows PC (i.e. “C:\\Desktop\\Documents”) and each one of them are less that 3 MB in size. I wanted to insert them into the normal test collection (i.e. table) of the test database that I have created. I read it somewhere that you don’t require a GridFS to be setup if the media files are less that 16 MB. Kindly help me with the syntax or command or steps on how to insert these less than 3 MB files in the normal collection of a the MongoDB database ?Kindly guide me from where I can download the free JDBC driver for MongoDB database ?Kindly help.", "username": "Neville_Monteiro" }, { "code": "", "text": "Kindly guide me from where I can download the free JDBC driver for MongoDB database ?You can download the driver from this site: MongoDB Java Driver.", "username": "Prasad_Saya" }, { "code": "", "text": "Hello Prasad,\nThank you for your reply. Can you guide me to the actual link from where I can download the JDBC driver as I am not able to locate it.The environment is as under :-\nMongoDB instance is running on Linux system\nJava Developers systems are Windows PCThe Java Developer need to connect their java programs from Windows PC to the MongoDB instance on the linux system.Kindly help.", "username": "Neville_Monteiro" }, { "code": "", "text": "See this link: Java Driver without Maven", "username": "Prasad_Saya" } ]
How to insert less than 3 MB pdf, .jpg, doc files in a normal collection and free JDBC driver for download
2020-04-30T08:59:27.138Z
How to insert less than 3 MB pdf, .jpg, doc files in a normal collection and free JDBC driver for download
1,952
null
[ "replication" ]
[ { "code": "", "text": "Let’s suppose i have a Replica Set (primary and a bunch of secondaries)… I want to automate registering a new mongodb node as part of the replica set. In other words, as a new node comes online, I would like to be able to self register to the Replica Set.Is it possible to add a new member to a mongodb Replica Set without executing the rs.add() shell command from the primary node’s side?What does the mongo community recommend as a best solution to this issue? The problem I’m trying to solve is data replication across multiple nodes, but the nodes come and go dynamically. I’m trying to use mongodb replication as an alternative to data replication at the application layer (e.g. not necessarily using replication as a fault tolerance mechanism but more as a data sourcing mechanism).", "username": "Jonathan_Whitaker" }, { "code": "", "text": "Bumping this. Anyone have any suggestions?", "username": "Jonathan_Whitaker" }, { "code": "", "text": "In my opinion, if the following was possible it could be a security risk.Is it possible to add a new member to a mongodb Replica Set without executing the rs.add() shell command from the primary node’s side?However, I am sure it would be very easy to write a script that call mongo shell with the appropriate --eval.", "username": "steevej" }, { "code": "mongoreplSetReconfigrs.add()rs.remove()rs.reconfig()", "text": "Is it possible to add a new member to a mongodb Replica Set without executing the rs.add() shell command from the primary node’s side?Hi Jonathan,You cannot change the replica set configuration without sending a reconfiguration command to the current primary.However, if you want to add/remove members programatically to a self-hosted deployment (rather than using the mongo shell), you can use the replSetReconfig command. Shell helpers like rs.add(), rs.remove(), and rs.reconfig() are all wrappers around building a replica set configuration document to be applied with this server command.As @steevej noted, there are security considerations if new instances have access to reconfigure the replica set. You would typically want to perform admin activities using automation tooling rather than having your application self-register. In the unfortunate event someone is able to compromise your application, you would not want an attacker to also gain admin access to your deployment.This approach may also cause performance or stability issues depending on the size of your data, how frequently you are thinking of adding/removing members, and your read preferences / write concerns.I recommend having a stable core of voting replica set members, with additional casual members configured as non-voting secondaries.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "When the topic was first posted I asked my self what possible use case could be accomplished with such a procedure. So I came up with the following.Could this be a good backup strategy?For example, every day you bring up a different member which get re-synced. Once it catches up with the oplog (hopefully it does) it is shutdown and restarted as a non member. Now you have live backups You could have other secondaries but everything gets replicated all the time. Now you have a secondary but that is frozen in time. Kind of a mongodump but potential more efficient as the sync process might be better that the dump. 
Kind of an analytic node with infinite slaveDelay.What are your thoughts?Maybe Jonathan_Whitaker could give us some clues on his own use-case.", "username": "steevej" } ]
Programmatically add replica set member when it comes online
2020-04-23T18:06:21.377Z
Programmatically add replica set member when it comes online
2,358
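For completeness, the helper-free shape of such a reconfiguration, run against the primary, looks roughly like this (host and _id are placeholders; _id must be unique within the set):

cfg = rs.conf()
cfg.members.push({ _id: 10, host: "newnode.example.net:27017", votes: 0, priority: 0 })
db.adminCommand({ replSetReconfig: cfg })

Setting votes: 0 and priority: 0 matches the suggestion of keeping dynamically added members as non-voting secondaries.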
null
[ "indexes" ]
[ { "code": "", "text": "Hi all,I have a aggregation which is fairly straightforward which is $match filedA, with #GTE 0 for fieldB. Not that fieldB cannot be negative the $sort on FiledB.The indexes are AB and B. Field A has high cardinality (almost unique) and there are about 1 million documents in the collection. For some reason Mongo chooses the index on fieldB, and does a complete index scan with a fetch to each document to check the value of fieldA.Any ideas why this may be. It does work find after removing the index on fieldB and the performance is literally thousands of times better.Is Mongo aware of stats such as volumes, cardinality etc as would be the case (for example) in DB2.Anything welcome, even flaming ", "username": "John_Cark" }, { "code": "", "text": "Welcome to the community @John_Cark !You don’t give us very precise information to understand where it may come from. Can you share your aggregation pipeline with us ? And also all of your indexes please.A temporary solution, maybe we won’t look any further, you can use hint to force your query to use a specific index.\nIt’s not flaming but it works ", "username": "Gaetan_MORLET" }, { "code": "", "text": "Hi Gaetan,\nThanks very much for coming back to me on this and I hope that my reply to you is not too verbose \nAt the bottom is the log text for the COMMAND in question. While the issue has now been solved (see below), we went through some stages:RegardsJohn2020-04-28T15:22:31.487+0000 I COMMAND [conn4450390] command spline_galactic.lineages_v4 command: aggregate { aggregate: “lineages_v4”, pipeline: [ { $match rootOperation.path: “hdfs://########/bigdatahdfs/########/raw/flex/UG/FN_UDF_DTL/2020/04/25/v1/_INFO”, timestamp: { $gte: 0 } } }, { $sort: { timestamp: 1 }], cursor: {}, $db: “spline_galactic”, $clusterTime: { clusterTime: Timestamp(1588087347, 2), signature: { hash: BinData(0, C952C5C5FDE91EB8C511A8CDDEF672DD983E5), keyId: 6784207444567392257 } }, lsid: { id: UUID(“12fb896b-abda-4462-a00b-41e32d4100b7”) } } planSummary: IXSCAN { timestamp: 1 } keysExamined:957557 sExamined:957557 cursorExhausted:1 numYields:7483 nreturned:0 reslen:241 locks:{ Global: { acquireCount: { r: 7485 } }, Database: { acquireCount: { r: 7485 } Collection: { acquireCount: { r: 7485 } } } storage:{ data: { bytesRead: 81822514, timeReadingMicros: 142455 } } protocol:op_msg 3046ms", "username": "John_Cark" }, { "code": "", "text": "For some reason Mongo chooses the index on fieldB. Any ideas why this may be.This might be the reason: Sort and Non-prefix Subset of an IndexAn index can support sort operations on a non-prefix subset of the index key pattern. To do so, the query must include equality conditions on all the prefix keys that precede the sort keys.What is an index prefix? See Compound Indexes - PrefixesIt does work find after removing the index on fieldB…I am not sure about this.But, I just tried the aggregation query on a small dataset with similar data and indexes (both the indexes). The query optimizer always used the index on both fields a+b, and used the index on both the match and sort operations.I am using MongoDB version 4.2.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @John_Cark,\nIt is possible that the planner had not yet reevaluated the plan for the query. Check this page appropriate for your mongo version: https://docs.mongodb.com/manual/core/query-plans/I once encountered a similar issue, albeit on 3.4. I thought this should have evaluated to using a different index but it had not. 
Using db.collection.getPlanCache().clear() helped in that situation. One with more finesse would have been planCacheClear using the query shape.It may not help in your scenario, but I thought I would mention it.", "username": "chris" }, { "code": "", "text": "Thank you Prasad. Indeed there is an equals predicate (rootOperationPath) before the SORT field which is timestamp. Given the high cardinality of the former field though, I would have expected it to use the compound key even if it had to do a small sort after fetching.As a matter of interest though, I had a similar situation today where we needed a new index. The only difference was that there were three fields in the index and the search was with equality, sorting on the third one. In this case it worked fine.BTW, using 4.0.Regards,\nJohn", "username": "John_Cark" } ]
Index selection - perceived wrong choice
2020-04-29T10:48:11.790Z
Index selection - perceived wrong choice
2,551
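Two of the remedies discussed in the thread, sketched against the collection from the log (the compound index key is assumed from the fields mentioned, and pipeline stands in for the actual stages):

// force the compound index while diagnosing
db.lineages_v4.aggregate(pipeline, { hint: { "rootOperation.path": 1, timestamp: 1 } })
// or clear cached plans so the planner re-evaluates its choice
db.lineages_v4.getPlanCache().clear()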
null
[ "production", "ruby" ]
[ { "code": "", "text": "I’m pleased to announce that we have released version 2.12.0 of the MongoDB Ruby Driver.This release includes two major feature additions: Client-Side Encryption and Extended JSON Parsing. For a full list of changes, see the release notes for versions 2.12.0.rc0 and 2.12.0 on GitHub.", "username": "Emily_Giurleo" }, { "code": "", "text": "", "username": "system" } ]
Ruby Driver version 2.12.0 has been released
2020-04-21T16:55:38.449Z
Ruby Driver version 2.12.0 has been released
2,274
null
[ "cxx" ]
[ { "code": "mongocxx::instance inst{}; mongocxx::client conn{ mongocxx::uri{\"my connection string\"} }; bsoncxx::builder::stream::document document{};", "text": "Hello everybody,\nI’m creating an application in Qt Creator. The librarys are correctly included, but when I write mongocxx::instance inst{}; mongocxx::client conn{ mongocxx::uri{\"my connection string\"} }; bsoncxx::builder::stream::document document{};I get lots of errors:\nUploading: image.png…The compiler settings are:\nUploading: image.png(1)…Does anyone have an idea? Please let me know if you need my .pro file too.", "username": "Simon_Reitbauer" }, { "code": "", "text": "Hi @Simon_Reitbauer,Both of the images that you attached on the post above have broken links. It’d be helpful if you could post the errors again.Also, it would be helpful to know which MongoDB C++ driver version and Qt version that you’re using.Regards,\nWan.", "username": "wan" } ]
MongoDB linking errors in Qt Creator
2020-04-26T08:42:37.169Z
MongoDB linking errors in Qt Creator
1,581
null
[ "php" ]
[ { "code": "", "text": "hi I got problems with loading mongodb extesion in php. php could not load the extension at all. I’ve used many ways to add dll in php like extesion=php_mongodb.ext or php=mongodb or mongodb.extesion_dir and so on. none of them worked. help me please. I am inside of a important project for my career.thank you very much.", "username": "Amir_Hadi" }, { "code": "php.iniextension=mongodb.so\n", "text": "Hi @Amir_Hadi, welcome!php could not load the extension at all. I’ve used many ways to add dll in php like extesion=php_mongodb.ext or php=mongodb or mongodb.extesion_dir and so on. none of them worked. help me pleaseYou could follow the installation steps at MongoDB PHP driver: Installation. Within your php.ini file, you should add a line as below:After successfully installing the extension I’d also recommend to install MongoDB PHP Library, the high-level abstraction for the MongoDB PHP driver.Regards,\nWan", "username": "wan" } ]
Php mongodb extension
2020-04-27T07:47:15.920Z
Php mongodb extension
1,747
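For reference, the verification loop after editing the INI file is short; the paths below are the usual Linux ones and may differ per distro:

# php.ini (Linux/macOS builds; Windows uses extension=php_mongodb.dll)
extension=mongodb.so

php --ini        # confirm which ini files are actually loaded
php -m | grep mongodb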
null
[]
[ { "code": "", "text": "Hi folks,MongoDB cares about our global community and we want to do what we can to help. We’re offering free Atlas credits for developers working on projects to tackle problems arising from the COVID-19 pandemic. Check the blog post here for more details on how to apply.If you have questions about the program itself, feel free to leave them here. If you have any questions about using Atlas, please head over to our Cloud category and use the tag “covid19” in your post.Best,Jamie", "username": "Jamie" }, { "code": "", "text": "COVID-19 Campaign Ending | MongoDB “Submit Project” broken?", "username": "Suren_Konathala" }, { "code": "", "text": "Hi @Suren_Konathala,I just tested the form and it appears to be working fine. What error message or behaviour are you seeing and what is your specific browser & version?If you happen to be using any ad blocking plugins, please try disabling those temporarily in case they are interfering with the form.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Seeing this on Chrome and Firefox\n\nScreen Shot 2020-04-28 at 8.08.29 PM2284×1620 309 KB\n", "username": "Suren_Konathala" }, { "code": "", "text": "Hi Suren,I’ll share your feedback with our web team, but I’m not able to replicate this issue in the latest version of Chrome or Firefox on MacOS.If you can provide any more specific details (such as your O/S version and browser versions tested), that would be helpful to try to reproduce the issue.Can you also confirm that you are not using any adblocking or privacy software that might interfere with page rendering?Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Worked on Safari v13.1 though.Using it on Mac Mojave 10.14.6\nChrome 81.0\nFirefox 75.0", "username": "Suren_Konathala" }, { "code": "", "text": "Hi Suren,Thanks for the extra info. Our web team has another similar report and is investigating.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Helping Developers Tackle COVID-19
2020-03-26T16:26:00.408Z
Helping Developers Tackle COVID-19
3,309
null
[]
[ { "code": "", "text": "Hi,\nI have 2 questions:", "username": "Student_al_St" }, { "code": "_id_id_id_id", "text": "Through Atlas or Compas UI, The objectID field is write protected. Meaning you cannot modify it, how can I do the same on any other document field?Hi @Student_al_St,The _id value cannot be modified because it is the unique primary key for a document. In a cluster environment _ids are used in the replication oplog to identify data changes. Modifying the _id value would compromise the idempotent design guarantee of the current oplog format: operations must produce the same results whether applied once or multiple times to the target dataset.You cannot mark other fields as read-only or protected at a server level. If there is a more natural unique (and immutable) value for your documents, you can provide your own value for _id on insertion instead of using the default ObjectID.For Atlas, is it possible to link your own S3 storage?Primary storage in Atlas is part of the managed service.If you have existing data in S3 in other formats such as JSON, BSON, CSV, Avro, or Parquet you might be interested in using Atlas Data Lake to query this data.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to set a documents field read-only?
2020-04-30T01:38:05.655Z
How to set a documents field read-only?
5,512
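A small illustration of supplying your own immutable primary key at insert time; the collection and field names here are made up:

db.invoices.insertOne({ _id: "INV-2020-0001", customer: "ACME", total: 99.50 })
// inserting a second document with the same _id fails with a duplicate key error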
https://www.mongodb.com/…5381bd7de5e8.png
[ "security", "atlas" ]
[ { "code": "", "text": "I was hoping to get some clarification. We are using an M2 cluster of MongoDb Atlas. I’ve read this link which states Atlas encrypts all cluster storage and snapshot volumes, ensuring the security of all cluster data at rest.My understanding if we want to encrypt the data with our key then would need to upgrade to M10.On the assumption our M2 cluster is encrypted I asked MongoDb chat support on what encryption type is used (eg 256 encryption). I was surprisingly given the response that the M2 cluster was not encrypted (See snapshot below).Is this correct? It seems what they have stated is a contradiction? Am I able to get clarification that our M2 cluster is encrypted at REST? In addition what encryption type is used? And finally as a newb what is the key difference between encrypting the whole disk vs us providing our key? Many thanks in advance.\nScreen Shot 2020-04-28 at 9.25.09 pm738×722 72.1 KB\n", "username": "Ka_Tech" }, { "code": "", "text": "Welcome to the community @Ka_Tech!MongoDB Atlas always uses cloud provider storage encryption by default. This is volume-level encryption at rest (for example, EBS Encryption on AWS). In free/shared tier clusters (M0, M2, M5) the underlying MongoDB instances are shared so you cannot configure encryption options. The industry standard for cloud provider encryption is AES-256, but you can confirm the exact details referring to AWS, GCP, or Azure documentation as appropriate.If you have a dedicated cluster (M10+), you can enable and configure the Enterprise Encryption at Rest feature which is cluster-specific encryption for additional security including user-managed encryption keys. With this feature enabled, your data files will be encrypted using the Encrypted Storage Engine (which is in addition to the underlying cloud provider storage encryption).For example, if you are using Encryption at Rest with AWS KMS:Atlas encrypts your data at rest using encrypted storage media. Using keys you manage with AWS KMS, Atlas encrypts your data a second time when it writes it to the MongoDB encrypted storage engine. You use your AWS Customer Master Key (CMK) to encrypt the MongoDB master encryption keys. Oplog data is also encrypted with your CMK.For more information, please refer to How does MongoDB Atlas secure my data? in the Atlas FAQ. The MongoDB Atlas Security white paper also goes into further detail.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie, thanks so much for your reply. Now I understand the difference why M10 only supports custom keys given M0, M2, M5 are on shared tier clusters.Don’t know why the MongoDb rep stated it wasn’t encrypted but your response with the supported links alleviates my concerns. Thanks again!", "username": "Ka_Tech" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas - Encryption at Rest
2020-04-28T12:30:32.659Z
MongoDB Atlas - Encryption at Rest
12,246
null
[ "python", "atlas", "field-encryption" ]
[ { "code": "", "text": "Greetings everyone.We like to connect our mongodb atlas over pymongo type of driver for our program to utilize it. We couldn’t find any documentation of that without Mongo atlas username and password openly written in python file.If there is how we can achieve that to at least protect the password.Other one is we like to run client field-level encryption with AWS as KMS but for Atlas, there is no documentation of limitation for this kind of actions while a lot of functions limited at several levels of Atlas so how we can test these functions and be sure they work before merging them to live version.", "username": "Picon_bello" }, { "code": "$ export APP_URI='mongodb://user:[email protected]/?tls=true'\n$ python3\n>>> import os\n>>> client = MongoClient(os.environ['APP_URI'])\n>>> client.admin.command('ping')\n{'ok': 1}\n", "text": "The most common pattern to avoid storing the password in the application code is to put the entire connection string in an environment variable like this:As for testing Field Level Encryption, I suggest testing your application using one of Atlas’ low cost development tiers, M10 or M20.You may also be interested in reading the documentation page Read/Write Support with Automatic Field Level Encryption which covers all the supported read/write operations as well as the query limitations.", "username": "Shane" }, { "code": "", "text": "We are curious can we merge existing databases to Automatic Field Level Encryption what I mean is if we migrate or change existing server settings with Automatic Field Level Encryption does existing data will get encrypted? Since it’s beta feature I am not sure how will that work?", "username": "Picon_bello" } ]
Connection without password in open text and Client field level encryption
2020-04-05T04:29:46.222Z
Connection without password in open text and Client field level encryption
2,202
null
[ "queries" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"xxxxxxxxxx\"\n },\n \"logs\": [\n {\n \"request\": {\n \"alternate_intents\": true,\n \"context\": {\n \"conversation_id\": \"xxx\",\n \"system\": {\n \"dialog_turn_counter\": 1,\n \"dialog_request_counter\": 1,\n \"session_id\": \"xxx\",\n \"assistant_id\": \"xxxx\",\n \"skill_reference_s\": \"test Skill\"\n },\n \"metadata\": {}\n },\n \"input\": {\n \"options\": {\n \"return_context_s\": \"true\"\n },\n \"text\": \"por cuál por el primero \",\n \"message_type_s\": \"text\"\n }\n },\n \"response\": {\n \"intents\": [\n {\n \"intent\": \"Consulta_Motivo_Renuncia\",\n \"confidence\": 0.36768287420272827\n },\n {\n \"intent\": \"Solicita_Rut\",\n \"confidence\": 0.324138343334198\n },\n {\n \"intent\": \"Ejemplos\",\n \"confidence\": 0.2719978451728821\n },\n {\n \"intent\": \"Solicita_Numero_de_Serie\",\n \"confidence\": 0.26375070810317996\n },\n {\n \"intent\": \"Solicita_Direccion_Cliente\",\n \"confidence\": 0.25889307856559757\n },\n {\n \"intent\": \"Solicita_Email_Cliente\",\n \"confidence\": 0.25696601867675783\n },\n {\n \"intent\": \"Protocolo_Renuncia_Parte3\",\n \"confidence\": 0.2559263825416565\n },\n {\n \"intent\": \"Explica_Protocolo\",\n \"confidence\": 0.2487402230501175\n },\n {\n \"intent\": \"Respuesta_Cliente_Positiva\",\n \"confidence\": 0.24280670285224915\n },\n {\n \"intent\": \"Solicita_Telefono_Contacto\",\n \"confidence\": 0.2413155049085617\n }\n ],\n\"intent\": \"Respuesta_Cliente_Positiva\",\n \"confidence\": 0.4656783819198609\n", "text": "Hi. I am new to mongo and we are in our company starting new developments with this engine.\nI have the following jsonand need to list all intents for each “conversation_id” for exampleI would really appreciate if you can give me an example of a query to get that informationRegardsRodrigo", "username": "Rodrigo_Ceballos" }, { "code": "", "text": "SinceI am new to mongoI suggest https://university.mongodb.com/.", "username": "steevej" }, { "code": "", "text": "Welcome to the community @Rodrigo_Ceballos !As @steevej said, it would be better for you to start with the MongoDB University free courses, follow the developer learning path. It’s the best for you.If you really need help with your query, can you provide us a complete sample document ?\nThe one you shared is incomplete, at the end we don’t necessarily know what the “logs” array looks like, are there a lot of elements in it ? More than once object with the same “conversation_id” field ?Finally to be honest, it’s a big document, I’m not convinced of the data schema. The last course of the developer learning path is about data modeling, I strongly encourage you to follow it.", "username": "Gaetan_MORLET" }, { "code": "db.collection.aggregate()", "text": "@Rodrigo_CeballosYour document structure has arrays, nested documents (a.k.a. objects or embedded documents or sub-documents) and nested arrays.In general, to query a collection you use the MongoDB Query Language (MQL)'s db.collection.find method.For more complex querying and results, you use the Aggregation-Framework. The result you are looking for can be built using an aggregation query with the db.collection.aggregate() method.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi, I’m on it (MongoDB University free courses). Is that we needed to make a report with this json in a way more urgently.https://drive.google.com/open?id=1W-YdRdzMyl8-EKxTiG8wUEnjppeYS4NVhere is the document in case you can give me an idea. If the document is optimal or not.Thank you. 
Regards,\nRodrigo", "username": "Rodrigo_Ceballos" }, { "code": "", "text": "I would like to know if, with that document, there is a way to retrieve every "intent": "Deseo_Renuncia" together with its confidence.\n\nRegards, Rodrigo.", "username": "Rodrigo_Ceballos" }, { "code": "", "text": "Hi @Rodrigo_Ceballos\n\nWith your current data schema, it's complicated. You can't just display "intent": "Deseo_Renuncia" and its confidence with a simple find(). You would have to use the Aggregation Framework, and even with that it could be tedious.\n\n12,437 lines in a single document: that was bound to be a problem one day.\nYou made the mistake of thinking that with MongoDB you can store all relationships in a single document; that is too much denormalization. Technically it's possible, but in practice it leads to many problems for your queries and for your indexing strategy.\n\nI definitely recommend that you follow M320.", "username": "Gaetan_MORLET" } ]
How to get nested values from JSON
2020-04-27T17:36:54.860Z
How to get nested values from JSON
10,579
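Editor's note: the thread above points at the Aggregation Framework but never shows a worked query for the original ask (all intents per conversation_id). A minimal mongo shell sketch, assuming the document layout from the sample in the question (a logs array whose entries carry request.context.conversation_id and response.intents); the collection name is a placeholder:

```javascript
// Sketch: list every detected intent and its confidence per conversation_id,
// assuming the schema of the sample document shown in the question.
db.watson_logs.aggregate([
  { $unwind: "$logs" },                    // one document per log entry
  { $unwind: "$logs.response.intents" },   // one document per detected intent
  { $group: {
      _id: "$logs.request.context.conversation_id",
      intents: { $push: {
          intent: "$logs.response.intents.intent",
          confidence: "$logs.response.intents.confidence"
      } }
  } }
])
```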
null
[ "kafka-connector" ]
[ { "code": "", "text": "I am getting an error trying to source data from Mongo using com.mongodb.kafka.connect.MongoSourceConnector. I suspect the problem is that the Mongo I am trying to use is too old, but I cannot find a list of the supported versions in the documentation or on Github.mongo: v3.0.12\nkafka-connect-mongodb: v1.0.1Failed to resume change stream: exception: Unrecognized pipeline stage name: ‘$changeStream’ 16436 (com.mongodb.kafka.connect.source.MongoSourceTask)Based on the error message I’m guessing the minimum version is 3.6, but I can’t find anything about minimum supported versions in the documentation or repo.Does anybody know where I can find the official list of supported versions? (And if not, should I open a ticket?)", "username": "Joseph_Zack" }, { "code": "", "text": " Hi @Joseph_Zack and welcome to the community forums.While I’ve not used the Kafka connecter, I can state that change streams were introduced to MongoDB in the 3.6 version, so that would be the earliest oldest version that you could potentially use. Note that MongoDB 3.6, while still supported, is quite old and once 4.4 comes out support for 3.6 will be ending soon after.Is there a reason that you’re still using MongoDB 3.0.12? This version has been out of support for over two years now. A lot of features, security patches and performance optimizations have been made in this time.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Joseph_Zack,The Kafka Source Connector builds on the Change Streams feature which requires a MongoDB 3.6+ replica set deployment. If you do not require failover or redundancy, you can use a single-member replica set deployment.I could not find specific mention of supported MongoDB versions in the current Kafka Connector readme or documentation, so reported this as a documentation improvement: KAFKA-102: Document supported versions of MongoDB.As @Doug_Duncan mentioned, the MongoDB 3.0 release series is a few years past end of life and no longer maintained or supported. I strongly recommend upgrading your deployment to a supported MongoDB server release series (currently 3.6 or newer) for continued support as well as stability and performance improvements.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you @Stennie_X / @Doug_Duncan! I’m integrating with a 3rd party system so I don’t have direct control, but I will ask about upgrading. I’ll ask to upgrade to latest, so this information is very useful to help me make my case!I am very new to Mongo, so I’m still catching up. Thank you for your help!", "username": "Joseph_Zack" }, { "code": "", "text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Kafka Connector - Minimum supported MongoDB version?
2020-04-28T18:54:56.735Z
Kafka Connector - Minimum supported MongoDB version?
2,397
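Editor's note: since the source connector needs change streams, a standalone mongod must first become a (possibly single-member) replica set, as the thread suggests. A hedged sketch of that conversion; the host and set name are placeholders:

```javascript
// 1. Restart mongod with a replica set name, e.g.:
//      mongod --replSet rs0 --dbpath /data/db
// 2. From the mongo shell, initiate a one-member replica set:
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] })
// 3. Verify the member reports PRIMARY before starting the connector:
rs.status()
```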
null
[ "node-js", "production" ]
[ { "code": "", "text": "The MongoDB Node.js team is pleased to announce version 3.5.7 of the driverWork earlier this year left some dead code in our operations code, resulting in this warning message reported by multiple users. While we still have a few cycles in our codebase yet, this will quiet Node.js 14’s circular dependency warnings.Drivers use an implicit session for all operations where an explicit session is not provided. A subtle bug was introduced when session support was implemented where implicit sessions were created and assigned to operations even if they were about to sit in a queue waiting for execution. This results in the driver creating many sessions rather than reusing pooled ones. The fix is to ensure a session is only checked out of the pool when the operation is about to be written to a server.Reference: MongoDB Node.js Driver\nAPI: Index\nChangelog: node-mongodb-native/HISTORY.md at 3.5 · mongodb/node-mongodb-native · GitHubWe invite you to try the driver immediately, and report any issues to the NODE project.Thanks to all who contributed to this release!The MongoDB Node.js team", "username": "mbroadst" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Node.js Driver 3.5.7 Released
2020-04-29T12:13:01.319Z
MongoDB Node.js Driver 3.5.7 Released
3,377
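Editor's note: for readers unfamiliar with the implicit/explicit session distinction in the release note above, a small sketch of supplying an explicit session with the 3.5.x Node.js driver; the connection string and names are placeholders:

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017", {
    useUnifiedTopology: true,
  });
  const session = client.startSession(); // explicit session instead of an implicit one
  try {
    const coll = client.db("test").collection("docs");
    await coll.insertOne({ x: 1 }, { session }); // operation bound to this session
  } finally {
    session.endSession();
    await client.close();
  }
}

main().catch(console.error);
```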
null
[ "cxx" ]
[ { "code": "cmake_minimum_required(VERSION 3.5)\nproject(testThread)\n\nset (CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -pthread -Wall -Werror -std=c++14\")\n\nset(source_dir \"${PROJECT_SOURCE_DIR}/src/\")\n\nfile(GLOB source_files \"${source_dir}/*.cpp\")\n\nset(CMAKE_PREFIX_PATH \"/usr/local\")\nfind_package(mongocxx CONFIG REQUIRED)\n\nadd_executable (testThread ${source_files})\n", "text": "I am on Ubuntu 18.04, have installed the MongoCXX driver to location: /usr/local/I have tried the following test script using the command line argument given in the tutorial, and it works finec++ --std=c++11 test.cpp -o test $(pkg-config --cflags --libs libmongocxx)However i cannot seem to get it to work with CMAKE. I am fairly new to CMAKE, and must be doing something stupid, but cant figure out what.Below is my CMAKE file:The CMAKE file builds successfully, which indicates that CMAKE has found the package, but it when i compile my program, it fails with the following error:“fatal errror: bsoncxx/builder/stream/document.hpp: No such file or directory”Does anyone have any idea why this is happening, or what i can do to debug this problem?Any help would be greatly appreciated", "username": "arif_saeed" }, { "code": "", "text": "@arif_saeed The CMake package configuration files changed in the most recent release. What C++ driver release are you using and what was the output of the CMake command for your own project? In particular I’d like to see what it reported when it found the C++ driver.", "username": "Roberto_Sanchez" } ]
CMake linking problem with mongocxx on Ubuntu
2020-04-29T09:29:53.641Z
CMake linking problem with mongocxx on Ubuntu
2,770
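Editor's note: a likely cause in the CMakeLists above is that the executable is never linked against the found package, so neither include paths nor libraries propagate to the target. A hedged sketch of the usual fix, assuming a driver release whose CMake config exports imported targets (as 3.5.0's does):

```cmake
find_package(mongocxx CONFIG REQUIRED)   # also locates bsoncxx

add_executable(testThread ${source_files})
# Propagates both the include directories and the libraries to the target:
target_link_libraries(testThread PRIVATE mongo::mongocxx_shared)
```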
null
[ "queries", "performance" ]
[ { "code": "db.collection.find({'SecondEvent.Code': \"1111\"}, \n {_id: 0, 'TrackTime': 1}).sort({'TrackTime.0': -1}).limit(1)\ndb.collection.find({'SecondEvent.Code': \"1111\"}, \n {_id: 0, 'TrackTime': 1}).sort({'TrackTime.0': -1}).limit(1)[0].TrackTime[0].valueOf()\n", "text": "Hello Mongo newbie here. I have a query which works fine but when running against a lot of data and wide date range it can take many seconds or even minutes. I narrowed down the bottle neck which is the following snippet that lies within a couple of for loops. In my current date range the whole query takes less than a second…However what I want to do is get the actual value of the TrackTime which means doing the following…The whole query now takes about 12 seconds! Is there any other way of getting the time value out for the result without the performance falling over like this?Any help is much appreciatedThanks", "username": "Alex_Pickard" }, { "code": "_id", "text": "I’m curious what the documents in your collection look like.Also do you have any indexes beyond the default _id?", "username": "Natac13" }, { "code": "{\n\"_id\" : LUUID(\"5e4cf57f-2b1d-2d4a-b158-b54c43b17364\"),\n\"Event\" : {\n \"Code\" : \"22\",\n \"Description\" : \"example description\"\n},\n\"Confirmed\" : [ \n NumberLong(636469684832760000), \n 0\n],\n\"Exported\" : [ \n NumberLong(636449680517180000), \n 0\n],\n\"Location\" : \"locationA\",\n\"SecondEvent\" : {\n \"_id\" : LUUID(\"b838f2c5-033c-c64b-95b7-b636cca4d2bf\"),\n \"Code\" : \"1111\",\n \"Description\" : \"example description\",\n \"Default\" : true,\n \"Location\" : {\n \"Name\" : \"Default\",\n \"Code\" : \"AAA\"\n },\n \"EndOfLife\" : false\n},\n\"Ships\" : {\n \"Id\" : \"123456789-id\",\n \"Name\" : \"xyz\",\n \"Number\" : \"123456789\",\n \"Parts\" : [ \n {\n \"Codes\" : [ \n {\n \"Number\" : \"987654321\",\n \"Name\" : \"xyz\",\n \"DeletedDate\" : null\n }\n ],\n \"TracNumbers\" : [ \n {\n \"Number\" : \"123456789\",\n \"Name\" : \"abc\",\n \"DeletedDate\" : null\n }, \n {\n \"Number\" : \"987654321\",\n \"Name\" : \"xyz\",\n \"DeletedDate\" : null\n }\n ]\n }\n ],\n \"Status\" : 3\n},\n\"Identifier\" : {\n \"_id\" : null,\n \"Number\" : \"123456789\",\n \"Code\" : \"987654321\",\n \"TrackNumber\" : \"123456789\"\n},\n\"TrackTime\" : [ \n NumberLong(636445689000000000), \n 60\n],\n\"UserEmail\" : \"[email protected]\",\n\"Version\" : \"0.0.2\"\n", "text": "Hi, and thanks. There are other indexes around various values I even created one for the Tracking Time but didn’t make any difference (Update: although this won’t make any difference anyway as I’m not selecting based on the TrackTime). An example document is as follows…}Thanks", "username": "Alex_Pickard" }, { "code": "ScanScan", "text": "So I am wondering, with this example where is the field Scan that you are querying for?\nAnd yes you are right the TrackTime index will not help right now. Do you have one for Scan?", "username": "Natac13" }, { "code": "", "text": "He is not selecting TrackTime, however he is projecting and then sorting. An index will help very much because the query will become covered if the index also include SecondEvent.Code.", "username": "steevej" }, { "code": "SecondEvent.CodeScanScan", "text": "@steevej You are absolutely correct. My mistake. However the field SecondEvent.Code in the query was originally Scan; @Alex_Pickard must of changed it . Therefore I was under the impression that the stand alone TrackTime index would not help as the query was off Scan.\nWould TrackTime be a multi-key index? 
As the field is an array?", "username": "Natac13" }, { "code": "", "text": "Considering there is 2 edits on the first post that is most likely:must of changed itMulti index yes. But it looks like he is intereseted in index 0. Probably the start time or creating time. Which might be better be a different field to avoid multi-index.", "username": "steevej" }, { "code": "", "text": "Apologies, I didn’t change that when I copied the document. In the original snippet “Scan” is actually “SecondEvent.Code”, and yes there is an index for that.", "username": "Alex_Pickard" }, { "code": "db.collection.find({'SecondEvent.Code': \"1111\"}, \n{_id: 0, 'TrackTime': 1}).sort({'TrackTime.0': -1}).limit(1)\n", "text": "Hi. I did reply earlier but it’s still under review for some reason. Basically I said “Apologies, I didn’t change that when I copied the document. In the original snippet “Scan” is actually “SecondEvent.Code”, and yes there is an index for that.”I’ve also tried creating a multi index for both SecondEvent.Code and TrackTime.0 but no improvement. As I said before it’s fast when not doing anything with the result returned…It’s simply when adding [0].TrackTime[0].valueOf() to the end to get the time value out of the NumberLong, or even just simply adding [0] causes the massive increase in time to the query.", "username": "Alex_Pickard" }, { "code": "TrackTimedb.collection.aggregate( [\n { \n $match: { \"SecondEvent.Code\": \"1111\" } \n },\n { \n $project: { _id: 0, maxValue: { $max: \"$TrackTime\" } } \n }\n] )\n$project$addFields\"SecondEvent.Code\"SecondEvent.Code", "text": "@Alex_Pickard The following Aggregation query prints the maximum value of the array field TrackTime, efficiently.If you want all the other fields in the document you can substitute $project with the $addFields.If there are a large number of documents in the collection, it helps to have an index on the field \"SecondEvent.Code\". If you have other compound indexes with this field, then the SecondEvent.Code must be the first key in that index.", "username": "Prasad_Saya" }, { "code": "db.collection.find({'SecondEvent.Code': \"1111\"}, \n{_id: 0, 'TrackTime': 1}).sort({'TrackTime.0': -1}).limit(1)\n", "text": "Thanks but this is no better performance wise. It actually shows the same slow 12 seconds run time even before I get the TrackTime value. The original snippet was less than half a second before getting the value…It’s only when trying to get the TrackTime[0] value to do something with it that it’s slowing everything down for some reason.", "username": "Alex_Pickard" }, { "code": "", "text": "I am very very surprise withIt’s only when trying to get the TrackTime[0] value to do something with it that it’s slowing everything down for some reason.All the hard work is already done. 
Something else must be happening.On a side note, since you are mostly interested in TrackTime[0], could you projectrather than{_id: 0, ‘TrackTime’: 1}", "username": "steevej" }, { "code": "var toDate = new Date()\nvar epochTicks = 621355968000000000\nvar toTicks = ((toDate.getTime() * 10000) + epochTicks);\nvar count = 0\nvar shipNum = 621355968000000000\n\nfor (i = 0; i < 4000; i++) {\n count = count + 123456789\n shipNum = shipNum + 123456789\n db.tracking_temp.insert({\n \"SecondEvent\" : {\n \"Code\" : \"1111\"\n },\n \"Ships\" : {\n \"Name\" : \"xyz\",\n \"Number\" : NumberLong(shipNum)\n },\n \"TrackTime\" : [ \n NumberLong(toTicks + count), \n 60\n ]\n })\n}\nvar ships = db.tracking_temp.find({}, {_id:0, 'Ships.Number': 1}).toArray()\nvar shipsCount = ships.length\n\nfor (i = 0; i < shipsCount; i++) {\n var shipsNumber = ships[i].Ships.Number\n\n var minTick = db.tracking_temp.find({$and: [{'Ships.Number': shipsNumber}, {'SecondEvent.Code': \"1111\"}]}, \n {_id: 0, 'TrackTime': 1}).limit(1)//[0]\n \n print()\n}", "text": "I know it’s weird. If you want to see it in “action” run the first part which I’ve stripped lots of stuff out of, to create 4000 records in a temp collection. Then run the second part against it. Try with the [0] commented out first. For me this runs in about 1/20th of a second. When uncommenting the [0] it takes about 8-10 seconds!…then run this. Doesn’t print or do anything but will show the jump in run duration.", "username": "Alex_Pickard" }, { "code": "", "text": "Thanks. I will certainly try this over the week end. I let you know on my findings.", "username": "steevej" }, { "code": "\nprint()\n\nprint( db.tracking_temp.find(query, project).limit(1) )\n", "text": "@Alex_Pickard, here are my findings.I was able to observe the difference you explained. But I was curious so rather that just \nprint()\n I did \nprint( db.tracking_temp.find(query, project).limit(1) )\n\nNot being too familiar with .js, I was surprised that documents were not printed but the following:Which is probably some kind of promise that is not executed when you do not do [0] but it is when you do. Said otherwise with [0], you do not execute the query and you do not transfer any document over the wire. Wonderful world of async I guess.", "username": "steevej" } ]
Getting data from query result taking a long time
2020-04-23T10:28:14.257Z
Getting data from query result taking a long time
12,664
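Editor's note: steevej's closing finding means the shell cursor in the test script is lazy, so the query only looked fast when it was never executed. A sketch that forces execution explicitly, so both variants are timed fairly (the variable names come from the script in the thread):

```javascript
// Materialize the cursor up front, then index into plain documents.
var docs = db.tracking_temp.find(
  { "Ships.Number": shipsNumber, "SecondEvent.Code": "1111" },
  { _id: 0, TrackTime: 1 }
).limit(1).toArray();   // the query actually runs here, once

if (docs.length > 0) {
  print(docs[0].TrackTime[0].valueOf());
}
```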
null
[ "sharding" ]
[ { "code": "", "text": "Hi Experts,I have 10 shards MongoDB cluster on AWS EC2 machine with mongo version 3.6.11, in that rate of increase of data files on one of the shard is high, I want to know when collection & index wt files will generate. Number of DBs, collection, indexes and data size are less in that shard as compare to others. This is causing me connection spike issue. Details are as below:No of DBs in each shard:\n6171 DBs in db0\n6182 DBs in db1\n6142 DBs in db2\n6260 DBs in db3\n5823 DBs in db4\n5694 DBs in db5\n5181 DBs in db6\n5052 DBs in db7\n2117 DBs in db8\n2156 DBs in db9Index Size of each shard:\nIndex size on db0 in bytes = 100757651456\nIndex size on db1 in bytes = 91092033536\nIndex size on db2 in bytes = 100978843648\nIndex size on db3 in bytes = 89782575104\nIndex size on db4 in bytes = 85394440192\nIndex size on db5 in bytes = 87024951296\nIndex size on db6 in bytes = 89708388352\nIndex size on db7 in bytes = 87563587584\nIndex size on db8 in bytes = 47799222272\nIndex size on db9 in bytes = 48416301056wt file counts:\nData dir file counts for db0 : 115912\nData dir file counts for db1 : 103572\nData dir file counts for db2 : 101192\nData dir file counts for db3 : 115447\nData dir file counts for db4 : 80676\nData dir file counts for db5 : 89648\nData dir file counts for db6 : 115500\nData dir file counts for db7 : 104279\nData dir file counts for db8 : 120104\nData dir file counts for db9 : 151194Please let me know, why and when wt files will create or increase, data size for this shard is approx 250GB and for others it is around 365GB, so data in collection is also not high to create more wt files for a collection. I am using xfs mount for data directory, mongo version 3.6.11 & server is ubuntu1604. Thanks in advanceRegards,\nSaurabh", "username": "Saurabh_Singh" }, { "code": "", "text": "Hi expert,may I get solution for this.Regards,\nSaurabh", "username": "Saurabh_Singh" }, { "code": "", "text": "Hi Sourabh,As per my understanding , .wt file created for every new collection and index.", "username": "satvant_singh" } ]
Data file WiredTiger (*.wt) are high on one Mongo shard
2020-04-13T12:32:26.624Z
Data file WiredTiger (*.wt) are high on one Mongo shard
2,330
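Editor's note: given the answer that WiredTiger creates one .wt file per collection and per index, a hedged shell sketch for estimating the expected file count on a shard member (run while connected directly to that member):

```javascript
var collections = 0, indexes = 0;
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
  var sib = db.getSiblingDB(d.name);
  sib.getCollectionNames().forEach(function (c) {
    collections += 1;
    indexes += sib.getCollection(c).getIndexes().length;
  });
});
// Each collection and each index normally maps to its own *.wt file.
print("collections: " + collections + ", indexes: " + indexes);
```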
null
[ "replication" ]
[ { "code": "", "text": "Hi team,I’m testing auto failover with the below steps:Step 1: Setup 3 instanceInstance type: Amazon Linux AMI\nmongod version: 3.6.9\nmongod engine: mmapv1Step 2: Set up a three nodes replication: Node A (primary), Node B (secondary), Node C (arbiter).Step 3: Step down Node A. With this step, I expect Node B to become primary instantly. But the outcome is not that:2020-04-29T10:40:52.185+0900 I COMMAND [conn10] Received replSetStepUp request\n2020-04-29T10:40:52.185+0900 I REPL [conn10] Starting an election due to step up request\n2020-04-29T10:40:52.185+0900 I REPL [conn10] skipping dry run and running for election in term 3\n2020-04-29T10:40:52.258+0900 I REPL [replexec-10] VoteRequester(term 3) received a yes vote from 3.114.210.6:27018; response message: { term: 3, voteGranted: true, reason: “”, ok: 1.0, operationTime: Timestamp(1588124445, 1), $clusterTime: { clusterTime: Timestamp(1588124445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }\n2020-04-29T10:40:52.258+0900 I REPL [replexec-10] election succeeded, assuming primary role in term 3\n2020-04-29T10:40:52.258+0900 I REPL [replexec-10] transition to PRIMARY from SECONDARY\n2020-04-29T10:40:52.258+0900 I REPL [replexec-10] Resetting sync source to empty, which was secondaryMemberIP:27018\n2020-04-29T10:40:52.258+0900 I REPL [replexec-10] Entering primary catch-up mode.\n2020-04-29T10:40:52.259+0900 I REPL [replexec-10] Member secondaryMemberIP:27018 is now in state SECONDARY\n2020-04-29T10:40:52.259+0900 I REPL [replexec-10] Caught up to the latest optime known via heartbeats after becoming primary.\n2020-04-29T10:40:52.259+0900 I REPL [replexec-10] Exited primary catch-up mode.\n2020-04-29T10:50:22.001+0900 I REPL [rsSync] transition to primary complete; database writes are now permittedAs you can see, in this example it took 10 minutes for Node B to complete transitioning to primary since the stepup request. 
As a result, during this transition time, my services are dead and couldn’t response to client request.Is this normal behavior?Thanks for your help.", "username": "Huy" }, { "code": "", "text": "Can you paste below information -rs.conf()\nrs,status() and logs from secondary and arbiter nodes.", "username": "satvant_singh" }, { "code": "{\n \"_id\" : \"replSet1\",\n \"version\" : 6,\n \"protocolVersion\" : NumberLong(1),\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"host1:27018\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 1,\n \"host\" : \"host2:27018\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 2,\n \"host\" : \"host3:30000\",\n \"arbiterOnly\" : true,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 0,\n \"tags\" : {\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"5ea7d565f6f9fe4300763108\")\n }\n}\n{\n \"set\" : \"replSet1\",\n \"date\" : ISODate(\"2020-04-29T06:38:02.630Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(4),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1588142273, 1),\n \"t\" : NumberLong(4)\n },\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1588142273, 1),\n \"t\" : NumberLong(4)\n },\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1588142273, 1),\n \"t\" : NumberLong(4)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1588142273, 1),\n \"t\" : NumberLong(4)\n }\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"host1:27018\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 81474,\n \"optime\" : {\n \"ts\" : Timestamp(1588142273, 1),\n \"t\" : NumberLong(4)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1588142273, 1),\n \"t\" : NumberLong(4)\n },\n \"optimeDate\" : ISODate(\"2020-04-29T06:37:53Z\"),\n \"optimeDurableDate\" : ISODate(\"2020-04-29T06:37:53Z\"),\n \"lastHeartbeat\" : ISODate(\"2020-04-29T06:38:01.939Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2020-04-29T06:38:02.118Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"host2:27018\",\n \"syncSourceHost\" : \"host2:27018\",\n \"syncSourceId\" : 1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 6\n },\n {\n \"_id\" : 1,\n \"name\" : \"host2:27018\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 82263,\n \"optime\" : {\n \"ts\" : Timestamp(1588142273, 1),\n \"t\" : NumberLong(4)\n },\n \"optimeDate\" : ISODate(\"2020-04-29T06:37:53Z\"),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1588142063, 1),\n \"electionDate\" : ISODate(\"2020-04-29T06:34:23Z\"),\n \"configVersion\" : 6,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 2,\n \"name\" : 
\"host3:30000\",\n \"health\" : 1,\n \"state\" : 7,\n \"stateStr\" : \"ARBITER\",\n \"uptime\" : 515,\n \"lastHeartbeat\" : ISODate(\"2020-04-29T06:38:01.940Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2020-04-29T06:38:01.713Z\"),\n \"pingMs\" : NumberLong(1),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 6\n }\n ],\n \"ok\" : 1,\n \"operationTime\" : Timestamp(1588142273, 1),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1588142273, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"4YEfqovQck2B9BmvHKchzML/oDg=\"),\n \"keyId\" : NumberLong(\"6803576634675822593\")\n }\n }\n}\n2020-04-29T15:34:23.780+0900 I COMMAND [conn2] Received replSetStepUp request\n2020-04-29T15:34:23.780+0900 I REPL [conn2] Starting an election due to step up request\n2020-04-29T15:34:23.780+0900 I REPL [conn2] skipping dry run and running for election in term 4\n2020-04-29T15:34:23.809+0900 I REPL [replexec-24] VoteRequester(term 4) received a no vote from host3:30000 with reason \"can see a healthy primary (host1:27018) of equal or greater priority\"; response message: { term: 4, voteGranted: false, reason: \"can see a healthy primary (host1:27018) of equal or greater priority\", ok: 1.0 }\n2020-04-29T15:34:23.832+0900 I REPL [replexec-13] VoteRequester(term 4) received a yes vote from host1:27018; response message: { term: 4, voteGranted: true, reason: \"\", ok: 1.0, operationTime: Timestamp(1588142062, 1), $clusterTime: { clusterTime: Timestamp(1588142062, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }\n2020-04-29T15:34:23.832+0900 I REPL [replexec-13] election succeeded, assuming primary role in term 4\n2020-04-29T15:34:23.832+0900 I REPL [replexec-13] transition to PRIMARY from SECONDARY\n2020-04-29T15:34:23.832+0900 I REPL [replexec-13] Resetting sync source to empty, which was host1:27018\n2020-04-29T15:34:23.833+0900 I REPL [replexec-13] Entering primary catch-up mode.\n2020-04-29T15:34:23.833+0900 I REPL [replexec-27] Member host1:27018 is now in state SECONDARY\n2020-04-29T15:34:23.833+0900 I REPL [replexec-27] Caught up to the latest optime known via heartbeats after becoming primary. Target optime: { ts: Timestamp(1588142062, 1), t: 3 }. 
My Last Applied: { ts: Timestamp(1588142062, 1), t: 3 }\n2020-04-29T15:34:23.833+0900 I REPL [replexec-27] Exited primary catch-up mode.\n2020-04-29T15:37:33.666+0900 I REPL [rsSync] transition to primary complete; database writes are now permitted\n2020-04-29T15:34:23.753+0900 I COMMAND [conn56] Attempting to step down in response to replSetStepDown command\n2020-04-29T15:34:23.753+0900 I REPL [conn56] transition to SECONDARY from PRIMARY\n2020-04-29T15:34:23.753+0900 I NETWORK [conn56] Skip closing connection for connection # 51\n2020-04-29T15:34:23.753+0900 I NETWORK [conn56] Skip closing connection for connection # 33\n2020-04-29T15:34:23.753+0900 I NETWORK [conn56] Skip closing connection for connection # 30\n2020-04-29T15:34:23.753+0900 I NETWORK [conn56] Skip closing connection for connection # 10\n2020-04-29T15:34:23.754+0900 I NETWORK [conn31] end connection host2:26834 (7 connections now open)\n2020-04-29T15:34:23.754+0900 I NETWORK [conn25] end connection host2:26308 (6 connections now open)\n2020-04-29T15:34:23.754+0900 I NETWORK [conn21] end connection host1:59150 (5 connections now open)\n2020-04-29T15:34:23.754+0900 I REPL [conn56] Handing off election to host2:27018\n2020-04-29T15:34:23.754+0900 I NETWORK [conn56] Error sending response to client: SocketException: Broken pipe. Ending connection from 127.0.0.1:59134 (connection id: 56)\n2020-04-29T15:34:23.754+0900 I NETWORK [conn56] end connection 127.0.0.1:59134 (4 connections now open)\n2020-04-29T15:34:23.756+0900 I NETWORK [listener] connection accepted from 127.0.0.1:59152 #57 (5 connections now open)\n2020-04-29T15:34:23.756+0900 I NETWORK [conn57] received client metadata from 127.0.0.1:59152 conn57: { application: { name: \"MongoDB Shell\" }, driver: { name: \"MongoDB Internal Client\", version: \"3.6.9\" }, os: { type: \"Linux\", name: \"Amazon Linux AMI release 2018.03\", architecture: \"x86_64\", version: \"Kernel 4.14.94-73.73.amzn1.x86_64\" } }\n2020-04-29T15:34:23.759+0900 I ACCESS [conn57] Successfully authenticated as principal Nexiv on admin\n2020-04-29T15:34:25.467+0900 I REPL [replexec-21] Member host2:27018 is now in state PRIMARY\n2020-04-29T15:34:26.030+0900 I REPL [rsBackgroundSync] sync source candidate: host2:27018\n2020-04-29T15:34:26.030+0900 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to host2:27018\n2020-04-29T15:34:26.032+0900 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)\n2020-04-29T15:34:27.521+0900 I NETWORK [conn30] end connection host2:26560 (4 connections now open)\n2020-04-29T15:34:41.015+0900 I NETWORK [PeriodicTaskRunner] Socket closed remotely, no longer connected (idle 60 secs, remote host host1:27018)\n2020-04-29T15:34:56.030+0900 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open\n2020-04-29T15:34:56.030+0900 I REPL [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:35:06.030+0900\n2020-04-29T15:34:56.031+0900 I REPL [replication-0] could not find member to sync from\n2020-04-29T15:35:06.035+0900 I REPL [rsBackgroundSync] sync source candidate: host2:27018\n2020-04-29T15:35:06.035+0900 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to host2:27018\n2020-04-29T15:35:06.037+0900 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)\n2020-04-29T15:35:33.549+0900 I 
NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replSet1/host2:27018,host1:27018\n2020-04-29T15:35:33.549+0900 I NETWORK [listener] connection accepted from host1:34080 #58 (5 connections now open)\n2020-04-29T15:35:33.549+0900 I NETWORK [conn58] received client metadata from host1:34080 conn58: { driver: { name: \"MongoDB Internal Client\", version: \"3.6.9\" }, os: { type: \"Linux\", name: \"Amazon Linux AMI release 2018.03\", architecture: \"x86_64\", version: \"Kernel 4.14.94-73.73.amzn1.x86_64\" } }\n2020-04-29T15:35:33.549+0900 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to host1:27018 (1 connections now open to host1:27018 with a 5 second timeout)\n2020-04-29T15:35:33.551+0900 I NETWORK [LogicalSessionCacheRefresh] Successfully connected to host2:27018 (1 connections now open to host2:27018 with a 5 second timeout)\n2020-04-29T15:35:33.551+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:34.053+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:34.554+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:35.055+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:35.556+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:36.035+0900 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open\n2020-04-29T15:35:36.035+0900 I REPL [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:35:46.035+0900\n2020-04-29T15:35:36.035+0900 I REPL [replication-0] could not find member to sync from\n2020-04-29T15:35:36.057+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:36.558+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:37.059+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:37.560+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:38.061+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:38.562+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:39.063+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:39.563+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:40.065+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:40.566+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:41.067+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:41.568+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:42.069+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:42.570+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:43.071+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set 
replSet1\n2020-04-29T15:35:43.572+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:44.073+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:44.574+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:45.077+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:45.578+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:46.037+0900 I REPL [rsBackgroundSync] sync source candidate: host2:27018\n2020-04-29T15:35:46.037+0900 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to host2:27018\n2020-04-29T15:35:46.039+0900 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)\n2020-04-29T15:35:46.079+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:46.580+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:47.081+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:47.582+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:48.083+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:48.584+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:49.086+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:49.587+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:50.087+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:50.596+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:51.101+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:51.602+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:52.103+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:52.603+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:53.104+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:53.606+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:53.606+0900 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Could not find host matching read preference { mode: \"primary\" } for set replSet1\n2020-04-29T15:36:16.038+0900 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open\n2020-04-29T15:36:16.038+0900 I REPL [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:36:26.038+0900\n2020-04-29T15:36:16.038+0900 I REPL [replication-0] could not find member to sync from\n2020-04-29T15:36:26.040+0900 I REPL [rsBackgroundSync] sync source candidate: host2:27018\n2020-04-29T15:36:26.040+0900 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to host2:27018\n2020-04-29T15:36:26.043+0900 I 
ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 3ms (1 connections now open to host2:27018)\n2020-04-29T15:36:56.040+0900 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open\n2020-04-29T15:36:56.040+0900 I REPL [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:37:06.040+0900\n2020-04-29T15:36:56.040+0900 I REPL [replication-0] could not find member to sync from\n2020-04-29T15:37:06.042+0900 I REPL [rsBackgroundSync] sync source candidate: host2:27018\n2020-04-29T15:37:06.042+0900 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to host2:27018\n2020-04-29T15:37:06.044+0900 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)\n2020-04-29T15:37:33.642+0900 I REPL [rsBackgroundSync] Changed sync source from empty to host2:27018\n2020-04-29T15:37:33.645+0900 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to host2:27018\n2020-04-29T15:37:33.647+0900 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (2 connections now open to host2:27018)\n2020-04-29T15:39:23.808+0900 I NETWORK [conn33] end connection host2:50454 (4 connections now open)\n2020-04-29T15:40:33.549+0900 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replSet1/host2:27018,host1:27018\n2020-04-29T15:34:25.520+0900 I REPL [replexec-2] Member host1:27018 is now in state SECONDARY\n2020-04-29T15:34:25.578+0900 I REPL [replexec-2] Member host2:27018 is now in state PRIMARY\n2020-04-29T15:39:12.463+0900 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist\n", "text": "Here are the logs. 
host1, host2, host3 are the nodes’s respective public ip address.rs.conf()rs.status()Primary Log:Secondary Log:Arbiter Log:", "username": "Huy" }, { "code": "2020-04-29T15:35:33.551+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:34.053+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:34.554+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:35.055+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:35.556+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:36.035+0900 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open\n2020-04-29T15:35:36.035+0900 I REPL [replication-0] Blacklisting host2:27018 due to error: ‘NetworkInterfaceExceededTimeLimit: Operation timed out’ for 10s until: 2020-04-29T15:35:46.035+0900\n2020-04-29T15:35:36.035+0900 I REPL [replication-0] could not find member to sync from\n2020-04-29T15:35:36.057+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:36.558+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:37.059+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:37.560+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:38.061+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:38.562+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:39.063+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:39.563+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:40.065+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:40.566+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:41.067+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:41.568+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:42.069+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:42.570+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:43.071+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:43.572+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:44.073+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:44.574+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:45.077+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:45.578+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:46.037+0900 I REPL [rsBackgroundSync] sync source candidate: host2:27018\n2020-04-29T15:35:46.037+0900 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to 
host2:27018\n2020-04-29T15:35:46.039+0900 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)\n2020-04-29T15:35:46.079+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:46.580+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:47.081+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:47.582+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:48.083+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:48.584+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:49.086+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:49.587+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:50.087+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:50.596+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:51.101+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:51.602+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:52.103+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:52.603+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:53.104+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n2020-04-29T15:35:53.606+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1\n", "text": "Hi ,It is a network issue , Secondary host was unable to connect Primary host . Monitor your network during test. Also it took only 3 minute to failover.", "username": "satvant_singh" } ]
Transition from Secondary to Primary took long time
2020-04-29T03:51:21.315Z
Transition from Secondary to Primary took long time
3,891
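Editor's note: the election and catch-up settings visible in the rs.conf() output above can be adjusted with rs.reconfig(). A hedged sketch only; the values are illustrative, not recommendations:

```javascript
// Run against the PRIMARY.
var cfg = rs.conf();
cfg.settings.electionTimeoutMillis = 10000; // failure-detection window (the default)
cfg.settings.catchUpTimeoutMillis = 2000;   // bound the new primary's catch-up phase
rs.reconfig(cfg);
```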
https://www.mongodb.com/…83fa9865b83f.png
[ "compass" ]
[ { "code": "", "text": "I am currently using mongodb compass, I want to create index with type ‘text’. In compass GUI, dropdown titled ‘Select a type’ does not show type ‘text’. See the image attached.\n\nScreenshot 2020-04-28 at 11.14.22 PM856×758 52.4 KB\n\nPlease help to create the text type using mongodb compass.", "username": "Delvin_P" }, { "code": "", "text": "You cannot create text indexes using Compass, yet. See https://jira.mongodb.org/browse/COMPASS-520", "username": "Prasad_Saya" }, { "code": "", "text": "Then how we can? So text search is not there in mongodb?", "username": "Delvin_P" }, { "code": "mongo", "text": "You can use other tools like mongo shell, to create text index and run your Text Search.", "username": "Prasad_Saya" } ]
Not able to set index type 'text' using mongodb compass
2020-04-28T18:55:05.472Z
Not able to set index type &lsquo;text&rsquo; using mongodb compass
3,742
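Editor's note: a concrete version of the answer above, creating a text index and querying it from the mongo shell (collection and field names are examples):

```javascript
// Create the text index that Compass cannot yet create:
db.articles.createIndex({ title: "text", body: "text" })

// Run a text search against it:
db.articles.find({ $text: { $search: "mongodb compass" } })
```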
null
[ "replication" ]
[ { "code": "", "text": "Hi Team,Can it possible to setup a 2 node mongodb cluster? with Automatic failover option?please reply.Thanks,", "username": "pankaj_kumar" }, { "code": "", "text": "Can it possible to setup a 2 node mongodb cluster?Technically, yes. Is it useful, no.with Automatic failover option?No. You need another node. This helps determine which host is unavailable in the case of a network partition, among other things.The minimum recommended configuration for a replica set is a three member replica set with three data-bearing members: one primary and two secondary members. In some circumstances (such as you have a primary and a secondary but cost constraints prohibit adding another secondary), you may choose to include an arbiter. An arbiter participates in elections but does not hold data (i.e. does not provide data redundancy).", "username": "chris" }, { "code": "", "text": "but in our requirements. we have only two nodes? can it possible in anyway to setup the nodes using only 2 nodes", "username": "pankaj_kumar" }, { "code": "", "text": "This is not how it works. If you want replication and availability as provided by replica sets then you need 3 nodes. At a minimum you need 2 data nodes and an arbiter.Whoever is responsible for your requirements needs to read up on MongoDB replication.", "username": "chris" }, { "code": "", "text": "Yes , Pankaj ,You can setup 2 Node Mongodb Cluster with the help of 1 Arbiter node. In this setup you will have 2 Data bearing Node and 1 Non-data bearing node(Arbiter). But this setup act as automatic fail-over always.Node 1 : Mongod1 (Primary)\nNode 2 : Mongod2(Seconadry) , ArbiterorNode 1 : Mongod1(Primary) . Arbiter\nNode2 : Mongod2(Seconadry)But this setup will not fall in automatic fail-over category always.Node 1 : Mongod1 (Primary)\nNode 2 : Mongod2(Seconadry)\nAppSer : Arbiter", "username": "satvant_singh" }, { "code": "", "text": "AppSer : Arbiterbut ARBITER is also a node right ? arbiter is a 3rd node. but i have only two node setup in two different virtual machine", "username": "pankaj_kumar" }, { "code": "", "text": "but ARBITER is also a node right ?An arbiter is a lightweight node that doesn’t hold any data. This node is used in voting to break ties.An even number of nodes is not recommended in a production environment. In your scenario of using two data bearing nodes (a primary and a secondary), should one of those instances stop working, you will no longer be able to write to your database as MongoDB needs a majority (the floor of 50% + 1 nodes from the replica set) to be participating (connectable from the other nodes). This is why @satvant_singh was recommending an arbiter on your application server, as long as that’s not on one of the servers hosting your database instances.", "username": "Doug_Duncan" }, { "code": "", "text": "Yes , @Doug_Duncan Explain very well ! Arbiter just help during election and only exchange credentials from other node, and No Data.", "username": "satvant_singh" }, { "code": "", "text": "an arbiter on your application server,in my case i am setting the mongo server on the vm only. how can we set up Arbiter on the aplication server ?", "username": "pankaj_kumar" }, { "code": "mongodmongodmongors.addArb(...)rs.addArb()", "text": "how can we set up Arbiter on the aplication server ?You would add MongoDB to your application server as you did for the primary and secondary instances that are running in the VMs. 
Once the mongod process is running on the application server, you would connect to the primary mongod instance via a mongo shell and run rs.addArb(...). You can learn how to use the rs.addArb() method in the documentation.", "username": "Doug_Duncan" }, { "code": "", "text": "Yes, you have the setup on VM machines. To complete the quorum of 3 nodes, add another node (arbiter) on the app server if you can't afford a new VM. The process is already clarified by @Doug_Duncan. If you still need help, we are here to help you.", "username": "satvant_singh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Setup a 2 node mongodb replica set
2020-04-27T05:15:09.180Z
Setup a 2 node mongodb replica set
16,188
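Editor's note: a hedged sketch of the arbiter setup described above; the hostname, port, and data path are placeholders:

```javascript
// On the application server, start a lightweight arbiter process, e.g.:
//   mongod --replSet rs0 --port 30000 --dbpath /var/lib/mongo-arbiter
// Then, from a mongo shell connected to the PRIMARY:
rs.addArb("appserver.example.net:30000")
rs.status()   // the new member should report state ARBITER
```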
null
[ "dot-net" ]
[ { "code": "public class PrNumbers : ListValueObject<PrNumbers>\n {\n public PrNumbers(IEnumerable<PrNumbers> items) : base(items) { }\n }\npublic class PrNumbers : ListValueObject<PrNumbers>\n {\n public PrNumbers(IEnumerable<PrNumbers> prnumbers) : base(prnumbers) { }\n }\n", "text": "Hi,I just saw a very strange behaviour, maybe you can explain… but it looks like a bug.\nI have some very simple valueobjects as classes in my code.\nSome of them can be created with a public constructor passing an IEnumerable with parameter name “items”.\nFor my purpose I don’t need any custom BsonClassMaps because automap works fine so far…This does not workMongoDB.Bson.BsonSerializationException: Creator map for class …PrNumbers has 1 arguments, but none are configured.This does workThe only change is just the parameter name… nothing else… any idea why?Best regards,\nSimon", "username": "Simon_Schweizer" }, { "code": "", "text": "Hi @Simon_Schweizer, welcome!I just saw a very strange behaviour, maybe you can explain… but it looks like a bug.Looks like the exception happened on MongoDB.Bson/Serialization/BsonCreatorMap.cs#L147.Could you provide a minimal reproducible example application of the above ? Also, which version of MongoDB .NET/C# driver are you using ?Regards,\nWan.", "username": "wan" } ]
C# automap issue when parameter name is "items"
2020-04-23T13:48:08.354Z
C# automap issue when parameter name is &ldquo;items&rdquo;
2,678
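Editor's note: until the automap issue above is diagnosed, one possible workaround is to register the creator explicitly so serialization does not depend on the constructor parameter name. An untested C# sketch; it assumes (hypothetically) that ListValueObject exposes the wrapped list as an Items property:

```csharp
using MongoDB.Bson.Serialization;

BsonClassMap.RegisterClassMap<PrNumbers>(cm =>
{
    cm.AutoMap();
    // Explicit creator map: ties the constructor argument to a mapped member,
    // bypassing parameter-name matching. "Items" is an assumption here.
    cm.MapCreator(p => new PrNumbers(p.Items));
});
```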
null
[ "cxx" ]
[ { "code": "version.hpp headers0.0.0-DBUILD_VERSION=3.5.00.0.0", "text": "While all versions of the CXX driver up to and including 3.4.1 have ended up with valid data in the bson and mongo version.hpp headers, strangely, the 3.5.0 release consistently ends up with 0.0.0 when I compile it. I’m able to work around the issue by using a cmake flag of -DBUILD_VERSION=3.5.0, but I’m presently at a loss as to why it’s not working as it did previously.Tracing through the embedded python script that sets these variables does indeed result in 0.0.0 for me.Curious if this is just me, or if anyone else is seeing the same behavior.", "username": "Allan_Bazinet" }, { "code": "VERSION_CURRENTbuild/build/python ../etc/calc_release_version.py >VERSION_CURRENT", "text": "@Allan_Bazinet are you building from Git or from a release tarball? If you are building from a release tarball, ensure that you did not inadvertently remove the VERSION_CURRENT file from the build/ sub-directory. If you are building from Git, then you need to run (from within the build/ sub-directory, the command python ../etc/calc_release_version.py >VERSION_CURRENT.It seems that this may not be properly documented for building the C++ driver from a Git working directory. I’m looking into that.", "username": "Roberto_Sanchez" }, { "code": "!VERSION_CURRENT.gitignorebuild", "text": "It would seem that adding !VERSION_CURRENT to the .gitignore file in the build directory would be warranted for the release tarball.", "username": "Allan_Bazinet" }, { "code": "", "text": "Ah, thanks, I see the problem now. I’m building from the release tarball, but there’s a .gitignore file in the build directory, so when we checked the tarball into our build system, it didn’t check in the VERSION_CURRENT file.", "username": "Allan_Bazinet" }, { "code": "!VERSION_CURRENT.gitignorebuild", "text": "@Roberto_Sanchez Perhaps consider adding !VERSION_CURRENT to the .gitignore file in the build directory?", "username": "Allan_Bazinet" }, { "code": "", "text": "@Allan_Bazinet there is an ongoing internal team discussion related to how we might simplify the build process and reduce the likelihood of issues like this arising. However, we also have goals related to release process improvement and automation which we need to balance.", "username": "Roberto_Sanchez" } ]
CXX driver 3.5.0 identifies as version 0.0.0
2020-04-21T18:38:42.203Z
CXX driver 3.5.0 identifies as version 0.0.0
1,991
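Editor's note: the two workarounds mentioned in the thread, consolidated as shell commands (both come straight from the posts above; run from the driver source root):

```sh
# (a) Building from a Git checkout: regenerate the version file.
cd build
python ../etc/calc_release_version.py > VERSION_CURRENT

# (b) Or pass the version explicitly when configuring with CMake.
cmake -DBUILD_VERSION=3.5.0 ..
```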
null
[ "data-modeling", "security" ]
[ { "code": "", "text": "I’m trying to find a good way to manage permissions for a high number of mongo documents.There may be any number of groups/users/applications that would have different permissions (view,edit,etc) and I won’t/don’t know the number upfront… and they can change.What I don’t want to do: Update a ton of doc’s every time a new group/app/user etc is modified/deleted/added.I was thinking the extended-reference-pattern may work… not really sure.\nI’m not sure what design pattern should be used.What I want to do: Apply group/user/and or application permissions to a large set of mongo documents.Idea 1: Use mysql to control the relationship (con’s duplication of data, 2 diff systems)Idea 2: Add metadata to each document (con’s: would have to update a ton of doc)Idea 3: Create another collection to manage the relationship (leaning toward this)I keep going in circles… posting here out of desperation. If anyone has a design pattern or any ideas I’d really appreciate it.I’m dealing with high number of mongo docs (1 million+) and am trying to avoid the situation where I need to update a high number of doc’s every time a new group/application/permission is added for a doc or subset of doc’s.I’d really appreciate if anyone could point me into the right direction.", "username": "Dana_Shaw" }, { "code": "// image doc\n{\n _id : ObjectId(...)\n filename: \"some image.png\",\n owner: ....\n …\n groups: [\n {\n group id: \"foo\"\n },\n {\n group id: \"bar\"\n },\n … \n ],\n … \n}\n// group foo\n{\n _id : ObjectId(...)\n name: \"foo\"\n …\n read : true\n write : false\n \n … \n}\n", "text": "One thing I was thinking… maybe use the attribute pattern along with the extended-reference-pattern.The only drawback I can see with this approach is if I wanted specific permissions for a specific document… but am thinking maybe this is the way to go.", "username": "Dana_Shaw" } ]
Creating group/application/user permissions around MongoDB documents
2020-04-28T20:39:49.099Z
Creating group/application/user permissions around MongoDB documents
1,756
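Editor's note: a hedged sketch of querying the schema idea proposed in the thread (a groups array on each image plus a separate groups collection). Collection and field names are normalized assumptions, e.g. group_id for the "group id" field in the sketch above:

```javascript
var userGroups = ["foo", "bar"];   // from the user's profile (placeholder)

db.images.aggregate([
  { $match: { "groups.group_id": { $in: userGroups } } },
  { $lookup: {                      // join the group docs to check permissions
      from: "groups",
      localField: "groups.group_id",
      foreignField: "name",
      as: "groupDocs"
  } },
  { $match: { "groupDocs.read": true } }   // keep images readable via some group
])
```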
null
[ "legacy-realm-cloud" ]
[ { "code": "", "text": "Before you decide to use Realm, consider the following (I wish somebody would have told me this back when I was hesitating between Realm and other options):Not worth it.", "username": "Trunks99" }, { "code": "", "text": "Hi @Trunks99,Thank you for the feedback. I appreciate that your experience was suboptimal and I have shared this with the team.I agree that your experience was disappointing and below our expectations as well.Over the past few quarters we have been ramping up our Realm support team and onboarding Professional & Enterprise customers to the MongoDB Support Portal, where questions are prioritised according to support subscriptions and SLA.The Realm Support portal is nominally for operational issues related to Realm Cloud (the $30/month Standard plan), but there hasn’t been a strong gate on what questions can be asked there. Code-related questions (such as the one you originally asked) are better directed to community forums.The support team has to prioritise issues according to support subscriptions, so while they will try to assist they cannot investigate every development question.It would be excellent if you can provide more detail on what documentation you were unable to find.We are aware that more code examples and sample applications are needed (and some of my colleagues are working on this), but specific feedback would be helpful.Subscription is based on active instances, so the easiest way to cancel is to deprovision your instances. However, lack of clear UX for this is a definite oversight for Realm Cloud.Realm product and development teams are currently focused on the upcoming launch of MongoDB Realm, which will provide a much richer experience for cloud sync and data management.I hope you’ll consider giving MongoDB Realm a try when it is available. We have incorporated user feedback to provide a platform for the future and there will be better clarity for our support experience.My colleagues on the support team will also follow up with you on your recent experience.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,Thank you for your reply.I think a lot of people have already provided enough information on what’s missing in terms of documentation. We need more example codes in general, and the documentation needs to be clear on key points like why we should use full-sync instead of query-based sync Especially since the latter is still documented and there are even example codes for it.Self-hosting and ROS have been deprecated but are still mentioned in the documentation. It’s a simple edit to avoid your users being confused.I haven’t really looked at the documentation or been on the forum since January after not hearing back from support – I’ve been busy doing other stuff.Even after all the frustration, I’m still willing to work with Realm simply because moving to another service would require a lot work and I don’t currently have the time for that.I’m sincerely hoping that things will change for the better with MongoDB Realm.I’m back to working on my app, so I will do my share and provide you guys with feedback.", "username": "Trunks99" }, { "code": "", "text": "Adding more info as I get familiar with the documentation again: A lot of the code in the documentation is deprecated. 
Example: https://docs.realm.io/sync/using-synced-realms/designing-an-architecture-with-multiple-realms", "username": "Trunks99" }, { "code": "", "text": "Just a note about the docs: although I’ve developed with realm-js, I have to refer to the Java (Android) and Swift docs occasionally because some info is just on one platform and not on the other.", "username": "Ondrej_Medek" }, { "code": "", "text": "I think it would be extraordinarily helpful to indicate what code is deprecated or outdated and maybe suggest a workaround if you have one. Some of the code in that documentation is actually still valid.", "username": "Jay" }, { "code": "", "text": "Good to see you back.Yes, I agree. I’ll update this thread progressively.It’s important to highlight in the documentation the difference between a Realm URL starting with “realm://” vs “realms://”. It seems pretty intuitive for some, but still, what happens when you use a server URL starting with “https://” and a Realm URL starting with “realm://”?Deprecated code in the documentation and fix: swift - SyncConfiguration deprecated, what is the proper use of SyncUser.configuration()? - Stack Overflow", "username": "Trunks99" } ]
To anybody considering Realm
2020-04-22T03:13:49.952Z
To anybody considering Realm
4,201
https://www.mongodb.com/…4_2_1024x512.png
[ "atlas" ]
[ { "code": "", "text": "I’m moving to Atlas after years of working fine with mLab.I am planning to build a MongoDB/Atlas App with 1000s of users who are able to login and order items. I was thinking that should be easy in Atlas but NO!\nsays:\nLimit 100 users1316×284 23.2 KB\nAre they looking as “Users” as Admins and not as Customers?Should I just login Customers and skip all the “User” authorization?Looking for some good advice on this subject.", "username": "Alan_Gruskoff" }, { "code": "", "text": "\nMax Atlas Users1292×342 26.4 KB\nAs wee see further, Atlas only allows 500 Atlas Users per project.Not good enough. I need a real business database for 1000s of customers.How do we get around these Atlas limitations?", "username": "Alan_Gruskoff" }, { "code": "", "text": "I am not sure but you are misleading application users vs database users.Application Users will be your subscribers and will interact with a web server of some sort.The web server is one database user.The webmaster can be a second user.", "username": "steevej" }, { "code": "", "text": "No. Thats another Topic.Mongo says Atlas can only have 500 Atlas Users per project.\nThats not enough.", "username": "Alan_Gruskoff" }, { "code": "", "text": "Hi @Alan_Gruskoff,Welcome to the community forum. As @steevej notes, the Atlas restriction on Database Users is for accounts that log directly into your MongoDB deployment. Atlas Users are able to monitor and manage your Atlas clusters.Using your original terminology, those are limits on admins rather than a limit on customers.Direct access to your deployment should be limited to trusted database users connecting from a limited range of whitelisted IP addresses.End users of your application (who are logging in and ordering items) should have accounts and permissions managed by your application or API.The same user model should be followed for any database deployments. An application can scale to millions of users, but those are application users rather than database users.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "OK, good explanation, Stennie.So Mongo’s doc are wrongly worded. They don’t look at Customers as Users.\nThey look at Admins as Users, limit 500.Now knowing MongoDB’s verbiage does not match the Real World, I will make adjustments to have Customers login to Authentication.", "username": "Alan_Gruskoff" }, { "code": "", "text": "So Mongo’s doc are wrongly worded. They don’t look at Customers as Users.Hi Alan,Unfortunately many terms (like “users” and “customers”) are overloaded and depend on context.From an application point of view, your end users/customers should not not have direct access to underlying infrastructure like database deployments. Irrespective of whether you are using Atlas, your end users/customers should not translate 1:1 to database users. Your applications validate and manage end user requests to data; the only direct database users should be your applications and your DBAs.The audience for Atlas documentation is your development and admin team, not your customers. From Atlas’ point of view, your admins and applications are Atlas Users and Database Users. Any reference to “customer” in the Atlas documentation is referring to you, as an Atlas customer.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for the Good Advice, Stennie. 
I got tripped up by the “Users” name.\nOn to the next thing…", "username": "Alan_Gruskoff" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
The 100 User Limit in Atlas
2020-04-27T20:46:01.149Z
The 100 User Limit in Atlas
5,131
null
[ "node-js", "legacy-realm-cloud" ]
[ { "code": "", "text": "Hi,I am using mongodb realm as database. I am trying to sync data from realm object server to local and then trying to use my local realm file after sync for my data provider. But if I am trying to use the local realm file its giving error like incompatible histories. What I have to do use the local realm file after sync. And is there getInstance method in realm for JavaScript. I am doing my developemnt in nodejs.Any help is appreciated.Realm : 5.0.0\nnodejs : 10.18.0", "username": "Ranjan_K" }, { "code": "", "text": "@Ranjan_K Use the Realm.open method described here:Key to Realm Platform is the concept of a synchronized Realm. This guide discusses how to get started with a synced Realm.This will download the realm from the realm object server and then return a realm reference you can use when it is finished downloading", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
JavaScript Realm local realm file sync issues
2020-04-27T17:28:05.658Z
JavaScript Realm local realm file sync issues
2,218
null
[ "aggregation" ]
[ { "code": "db.getCollection('billing_info').aggregate(\n[\n {\n \"$match\": {\n \"billing.receiptinfo.receipt_date\": {\n \"$gte\": \"2020-04-01\",\n \"$lte\": \"2020-04-25\"\n },\n \"billing.receiptinfo.payment_mode\": {\n \"$nin\": [\n \"\",\n []\n ]\n },\n \"billing.receiptinfo.payment_submitted\": {\n \"$lt\": 1\n },\n \"billing.receiptinfo.mho_id\": {\n \"$nin\": [\n null,\n \"\"\n ]\n }\n }\n },\n {\n \"$unwind\": \"$billing.receiptinfo\"\n },\n {\n \"$facet\": {\n \"orderDetails\": [\n {\n \"$project\": {\n \"billId\": \"$billing.billinginfo.billingid\",\n \"orderId\": \"$billing.billinginfo.order_did\",\n \"mhoId\": \"$billing.receiptinfo.mho_id\",\n \"mhoName\": \"$billing.receiptinfo.mho_name\"\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"mhoId\": \"$mhoId\"\n },\n \"info\": {\n \"$push\": \"$$ROOT\"\n }\n }\n }\n ],\n \"countOrder\": [\n {\n \"$group\": {\n \"_id\": \"$mhoId\",\n \"count\": {\n \"$sum\": 1\n }\n }\n }\n ]\n }\n }\n])\n", "text": "I have issue with the following query… Unable to get count correctly… i wanted to have unique count of mhoId", "username": "Sateesh_Ranga" }, { "code": " \"countOrder\": [\n {\n \"$group\": {\n \"_id\": \"$mhoId\",\n \"count\": {\n \"$sum\": 1\n }\n }\n }\n ]\nmhoId\"billing.receiptinfo.mho_id\"", "text": "The field mhoId - does it represent the value from \"billing.receiptinfo.mho_id\", or is it a separate field?", "username": "Prasad_Saya" } ]
Issue with count in $facet
2020-04-28T05:54:09.365Z
Issue with count in $facet
2,408
null
[ "replication" ]
[ { "code": "", "text": "After restart of primary replica member it becomes again primary ideally it should behave as secondary now. Because new primary were there after the failure. But this doesn’t happen.Both have same priorties even i.e 1", "username": "mohit_jain" }, { "code": "", "text": "Both have same priorties even i.e 1This is probably your issue. You should have 3 members. Ideally 3 data members but 1 can be an arbiter.", "username": "chris" }, { "code": "\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"ip-172-19-1-123:27017\",\n\t\t\t\"health\" : 0,\n\t\t\t\"state\" : 6,\n\t\t\t\"stateStr\" : \"(not reachable/healthy)\",\n\t\t\t\"uptime\" : 0,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\t\"t\" : NumberLong(-1)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\t\"t\" : NumberLong(-1)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2020-03-25T08:42:30.681Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2020-03-25T08:42:30.644Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"authenticated\" : false,\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : -1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"ip-172-19-7-214:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 71650,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1585125746, 1),\n\t\t\t\t\"t\" : NumberLong(25)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2020-03-25T08:42:26Z\"),\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1585054112, 2),\n\t\t\t\"electionDate\" : ISODate(\"2020-03-24T12:48:32Z\"),\n\t\t\t\"configVersion\" : 4,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"name\" : \"ip-172-19-7-229:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 7,\n\t\t\t\"stateStr\" : \"ARBITER\",\n\t\t\t\"uptime\" : 71649,\n\t\t\t\"lastHeartbeat\" : ISODate(\"2020-03-25T08:42:30.210Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2020-03-25T08:42:30.137Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 4\n\t\t}\n", "text": "Hi Chris,Thanks for the answer. But I already have Arbitor also having priorty 0.Let me re-explain what happend.Currently:-Initially 172-19-1-123 was my “primary” node. I have forcefully closed the box for testing. Then 172-19-7-214 becomes primary as expected.\nBut after that i have restarted the 172-19-1-123 and it becomes secondary now. That’s also fine.Again i have killed 172-19-7-214 box which is primary now then 172-19-1-123 becomes primary which is also fine.Now the issue is when i again restarted 172-19-7-214 it must be secondary but it becomes primary even after restart so why this is happening. The changes during the downtime of 172-19-7-214 is persited in 172-19-1-123. 
After the restart of 172-19-7-214 it must be secondary. (Why does it become primary again?)", "username": "mohit_jain" }, { "code": "", "text": "Version, OS?You’ll have to look at the logs to see what is going on. Had 123 fully synced the oplog before you killed 214?", "username": "chris" }, { "code": "", "text": "Can you paste the output from rs.conf() & rs.status()?", "username": "satvant_singh" } ]
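Worth noting: with equal priorities there is no rule that a returning member stays secondary; whichever eligible member calls an election first can win it, so 214 becoming primary again is legal behaviour. If the goal is for a specific member to stop winning elections, one option is to lower its priority (sketch, run on the current primary; the member index is an assumption to verify against `rs.conf()`):

```js
cfg = rs.conf()
cfg.members[1].priority = 0.5   // assumes members[1] is ip-172-19-7-214
rs.reconfig(cfg)
```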
After restart of primary replica member, it becomes primary again instead of secondary
2020-03-24T18:20:17.482Z
After restart of primary replica member, it becomes primary again instead of secondary
1,983
null
[]
[ { "code": "", "text": "There is no server based database driver available with Godot Engine. If MongoDB developers are seeing this topic, will they consider the mongodb library for the Godot Engine?", "username": "Metehan_Ozbek" }, { "code": " <ItemGroup>\n <PackageReference Include=\"MongoDB.Driver\" Version=\"2.10.2\" />\n </ItemGroup> \nmongocxx", "text": "Hi @Metehan_Ozbek, welcome!If MongoDB developers are seeing this topic, will they consider the mongodb library for the Godot Engine?Have you tried connecting to MongoDB using Godot C# scripting ? Looks like you can use NuGet packages in Godot, for example:Or alternatively, you could also try to write a GDNative module either in C/C++. For example GDNative C++ , you should be able to write a module with mongocxx driver, see also Tutorial: MongoDB C++ driver.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "I’m using gdscript. I don’t have enough skills because I don’t know C ++ ", "username": "Metehan_Ozbek" }, { "code": "collection.InsertOne(document);Unhandled Exception:\nSystem.TimeoutException: A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [], DnsMonitorException : \"DnsClient.DnsResponseException: Unhandled exception ---> System.IndexOutOfRangeException: Cannot read byte 265, out of range.\n at DnsClient.DnsDatagramReader.ReadByte () [0x0002f] in <57cd9b7b8c7b47269efcc38c43d25c74>:0\n at DnsClient.DnsDatagramReader.ReadLabels () [0x000ff] in <57cd9b7b8c7b47269efcc38c43d25c74>:0\n at DnsClient.DnsDatagramReader.ReadQuestionQueryString () [0x00006] in <57cd9b7b8c7b47269efcc38c43d25c74>:0\n at DnsClient.DnsRecordFactory.ReadRecordInfo () [0x00000] in <57cd9b7b8c7b47269efcc38c43d25c74>:0\n at DnsClient.DnsMessageHandler.GetResponseMessage (System.ArraySegment`1[T] responseData) [0x0008f] in <57cd9b7b8c7b47269efcc38c43d25c74>:0\n at DnsClient.DnsUdpMessageHandler.Query (System.Net.IPEndPoint server, DnsClient.DnsRequestMessage request, System.TimeSpan timeout) [0x00138] in <57cd9b7b8c7b47269efcc38c43d25c74>:0\n at DnsClient.LookupClient.ResolveQuery (System.Collections.Generic.IReadOnlyCollection`1[T] servers, DnsClient.DnsMessageHandler handler, DnsClient.DnsRequestMessage request, System.Boolean useCache, DnsClient.LookupClientAudit continueAudit) [0x000c8] in <57cd9b7b8c7b47269efcc38c43d25c74>:0\n --- End of inner exception stack trace ---", "text": "Hi Wan,\nI have tried what you suggested using C# and the NuGet package. I can successfully access the NuGet package but don’t seem to be able to connect to the database as I get the following error exception when i try to insert a document using:\ncollection.InsertOne(document);\nI can get the same code to work using .NET but fails when run through Godot. 
Godot is using Mono, is the driver compatible with Mono?", "username": "Chris_Findlay" }, { "code": "mongodb://127.0.0.1:27107/csproj <ItemGroup>\n <PackageReference Include=\"MongoDB.Driver\">\n <Version>2.10.2</Version>\n </PackageReference>\n </ItemGroup>\n", "text": "Hi @Chris_Findlay,but don’t seem to be able to connect to the database as I get the following error exception when i try to insert a documentThe error you listed seems to be related to DnsClient, check that you have connection to the MongoDB server from Godot.In my test, I could connect from Godot to a locally hosted MongoDB using the following URI: mongodb://127.0.0.1:27107/.I can get the same code to work using .NET but fails when run through Godot. Godot is using Mono, is the driver compatible with Mono?MongoDB .NET/C# driver is not being tested with Mono, although I’ve managed to create a simple test to query a document from MongoDB server using the current driver version 2.10.2. I’ve just added below lines to the csproj file:Tested with Godot Engine 3.2.1 stable mono official, Mono v6.8.0.105 on macOS 10.14.6.Regards,\nWan.", "username": "wan" }, { "code": "using Godot;\nusing System;\nusing MongoDB.Driver;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Driver.Core.Clusters;\n\npublic class Mongo : Node\n{\n MongoCRUD db = new MongoCRUD(\"testDb\"); // MongoDb will get ths database if available, if not it will create it.\n", "text": "Hi @wan, thanks for your time.I also added the Nuget package to the csproj file using the same format and has been successful in that part.I had been trying to connect to a remote cluster I had created but I tried connecting to a local host db after reading your reply. Unfortunately, I’m still not connecting, could you share your code that did connect? Here is the code I am using:\nI’m using:\nGodot Mono stable 3.2.1\nMongoDB.Driver 2.10.2\nWindows 10Regards,\nChris", "username": "Chris_Findlay" }, { "code": "private String ConnectionString = \"mongodb://localhost:27017/\";\nprivate String ConnectionString = \"mongodb://127.0.0.1:27017/\"; \nlocalhost", "text": "Hi @Chris_Findlay,Are you still seeing the same error message relating to DnsClient ?From a brief look on your code snippet, it looks right. Could you try changingWithIn my test I couldn’t connect locally using localhost either.\nWhich MongoDB server version are you running ?Regards,\nWan.", "username": "wan" }, { "code": "private String ConnectionString = \"mongodb://127.0.0.1:27017/\";", "text": "Hi @wan, sorry for the delay in getting back. I am currently stuck in isolation in a hotel and have had to re-download everything to my work laptop and start over again.The good news is I am able to connect and update the collection with the connection string:\nprivate String ConnectionString = \"mongodb://127.0.0.1:27017/\";\nHowever, when I substitute in my cluster connection string I get the same error as before. I copied and pasted the same string to an almost identical project in VS2019 and confirmed it is valid. I also tried adding an \"available anywhere’ entry to the IP whitelist to rule that out.I am using the same MongoDB driver for both projects.I’m using:\nGodot Mono stable 3.2.1\nMongoDB.Driver 2.10.2\nMongo server 4.2.5\nWindows 10", "username": "Chris_Findlay" }, { "code": "DnsClient.DnsDatagramReader.ReadByte", "text": "Hi @Chris_Findlay,The DnsClient.DnsDatagramReader.ReadByte error that you are seeing is likely related to either your network setup, or installed DnsClient version. 
From a quick search, I found DnsClient.NET issue #51, which may be related to the issue that you’re seeing. Could you try either installing DnsClient v1.3.0, or upgrading MongoDB.Driver to v2.10.3 (which references DnsClient v1.3.1), and see whether that fixes the issue that you’re seeing?Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Driver for Godot Engine
2020-03-19T21:52:43.078Z
Driver for Godot Engine
6,233
null
[ "sharding" ]
[ { "code": "", "text": "Does Hash function return different value for shard keys in case a new shard added or remove from the existing cluster?It will help me to understand that how data/records distribution happens in the scenario when node failure happens in a very big cluster? And how much performance impact would be?", "username": "Deepak_Pengoria" }, { "code": "mongodmongod", "text": "Welcome to the community @Deepak_Pengoria!Does Hash function return different value for shard keys in case a new shard added or remove from the existing cluster?A hash function returns a consistent result value for a given input. In the case of hashed sharding in MongoDB, a single field value is hashed. Changing the number of shards does not affect the shard key.Data in a sharded collection is distributed based on chunks which represent contiguous ranges of shard key values. Chunk ranges are automatically split based on data inserts and updates, and balanced across available shards when the imbalanced distribution of chunks for a collection reaches a migration threshold.If you are new to sharding, I would definitely give consideration to whether hashed sharding is the best approach for your use case. The default range-based sharding approach supports efficient range-based queries based on the shard key index and allows compound shard keys which can be useful for zone-based data locality.Hashed sharding is best suited to distributing inserts based on a monotonically increasing value like an ObjectID or a timestamp. For more info, see Hashed vs Ranged Sharding.It will help me to understand that how data/records distribution happens in the scenario when node failure happens in a very big cluster? And how much performance impact would be?Each shard is backed by a replica set which determines the level of data redundancy and fault tolerance. A production replica set typically includes three data bearing mongod processes, which allows any single mongod to be unavailable (for example, due to maintenance or failure) while still maintaining availability and data replication.You can provision additional replica set members to increase your fault tolerance.For more insight I recommend taking some of the free online courses in the MongoDB for DBA Learning Path at MongoDB University.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for quick response. 
Could you please point me the link of hashFunction() implementation/source-code which calculates shardKey/partitionKey while partition the records/data?", "username": "Deepak_Pengoria" }, { "code": "mongodb/src/mongodb/hasher.cppconvertShardKeyToHashed()mongo> o = new ObjectId()\nObjectId(\"5ea77f07fcb94966883f5d5e\")\n\n> o2 = new ObjectId()\nObjectId(\"5ea77f0afcb94966883f5d5f\")\n> convertShardKeyToHashed(o)\nNumberLong(\"329356589482501449\")\n\n> convertShardKeyToHashed(o2)\nNumberLong(\"-3285311604932298140\")\n{_id:1}_id{ _id: 'hashed'}", "text": "Could you please point me the link of hashFunction() implementation/source-code which calculates shardKey/partitionKey while partition the records/data?Hi Deepak,You can find the server implementation on GitHub: mongodb/src/mongodb/hasher.cpp.If you just want to see what the hashed values look like, there is a convertShardKeyToHashed() helper in the mongo 4.0+ shell.For example, two ObjectIDs generated in close sequence will be roughly ascending:However, the hashed values will be very different:Since data in a sharded collection is distributed based on contiguous chunk ranges, naive sharding on {_id:1} with default ObjectIDs would result in new inserts always targeting the single shard which currently has the chunk with the maximum _id value. This creates a “hot shard” scenario with inserts targeting a single shard plus the additional overhead of frequently rebalancing documents to other shards. Adding more shards will not help scale inserts with this poor choice of monotonically increasing shard key.If you instead shard based on hashed ObjectId values (using { _id: 'hashed'}), consecutive inserts will land in different chunk ranges and (on average), different shards. Since inserts should be well distributed across all shards, rebalancing activity should be infrequent and only necessary when you add new shards for scaling (or remove existing shards).As mentioned earlier, you should definitely consider whether hashed-based sharding is the best approach for your use case. If there is a candidate shard key based on one or more fields present in all of your documents, you will have more options for data locality using the default range-based sharding approach and Choosing a good shard key.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
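For completeness, the shell commands that actually shard a collection on a hashed key (database and collection names below are placeholders):

```js
sh.enableSharding("mydb")
sh.shardCollection("mydb.events", { _id: "hashed" })

// Inspect how documents and chunks ended up spread across the shards:
db.getSiblingDB("mydb").events.getShardDistribution()
```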
Questions about hashed sharding
2020-04-26T11:45:41.978Z
Questions about hashed sharding
4,069
null
[ "dot-net", "change-streams" ]
[ { "code": " /// <summary>\n /// GetDateTimeFromBsonTimestamp\n /// Conversion from a BsonTimestamp into DateTime\n /// </summary>\n /// <param name=\"timestamp\">input BsonTimestamp timestamp</param>\n /// <returns>compatible DateTime result</returns>\n public static DateTime GetDateTimeFromBsonTimestamp(BsonTimestamp timestamp)\n {\n var unixEpoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);\n return unixEpoch.AddSeconds(timestamp.Timestamp);\n }\n", "text": "If I have ClusterTime returned from the Change Streams (via the .net driver and using C#), I can get BsonTimestamp values like:Timestamp1 = 1587672311\nIncrement1 = 1\nBsonType1 = Timestamp\nValue1 = 6819000652509741057\n===> DateTime = 4/23/2020 8:05:11 PM (no milliseconds, only seconds granulariyt)Timestamp2 = 1587672311\nIncrement2 = 3\nBsonType2 = Timestamp\nValue2 = 6819000652509741059\n===> DateTime = 4/23/2020 8:05:11 PM (no milliseconds, only seconds granularity)I can make a DateTime from the BsonTimestamp by doing:What I’m struggling with is that this is only second granularity when these two ClusterTime’s (BsonTimestamps) have Value data that shows much more granularity. Can I translate that granularity in my DateTime values? How can BsonTimestamp.Value be used to offer the greatest time granularity in a conversion to DateTime?Please let me know if you have pointers - thanks!JeremyPS - @wan - if you have a pointer here, I’m all ears! Thanks!", "username": "Jeremy_Buch" }, { "code": "", "text": "I ended up finding this:\nhttps://jira.mongodb.org/browse/JAVA-634It looks like the granularity of time is only seconds in BsonTimestamp - 8 bytes total, but 4 bytes are seconds and 4 bytes is otherwise occupied byt ‘count’? This didn’t completely make sense to me yet, but if the ‘Value’ of BsonTimestamp contains nothing more than some variant of a count of seconds, then BsonTimestamp can really only give seconds granularity.Any more light on this subject? Can a BsonTimestamp actually hand out millisecond granularity?", "username": "Jeremy_Buch" }, { "code": "", "text": "Final answer that I’ve found is that ‘sub-second granularity’ is accomplished wit h an indexed count of ordered changes (an ‘ordinal’ or ‘increment’ within the second). This can’t be turned into time, but could be used to create sub-second (false) timestamps solely for the sake of ordering. If the increment is pulled out, it can be used downstream to disambiguate within the second, but there isn’t any greater time granularity than seconds in BsonTimestamp in a pure ‘actual time’ sense.Hope that helps anyone other than me who comes across this Jeremy Buch", "username": "Jeremy_Buch" }, { "code": "", "text": "Hi Jeremy,Timestamps are an internal BSON type used by replication. This type is effectively a BSON Date (which is based on unixtime) combined with a counter to uniquely identify multiple events within the same second.As you have discovered, Timestamps are not intended as a more granular version of the BSON Date for general time tracking use.Regards,\nStennie", "username": "Stennie_X" } ]
How to get BsonTimestamp to DateTime in C# with millisecond granularity?
2020-04-26T04:33:28.294Z
How to get BsonTimestamp to DateTime in C# with millisecond granularity?
6,777
null
[ "data-modeling", "legacy-realm-cloud" ]
[ { "code": "", "text": "hi,\nI have following setup:\nMy users could login via jwt-token. Then I have /common-realm with two classes: User and Team.There I store all user and team details. Every user could be in more then one team. Then for every Team there exist a /team-realm-x.My first question: Is that the correct way to do this?My second question: At the moment I have to use query based realm for the common-realm. Which I read is not the best way to use in production. How can i structure my realms better? Also is it safe to use query based realm? or would it be possible for a user to query the others user / teams in the common realm?Thanks for your inputs!", "username": "rouuuge" }, { "code": "", "text": "Update:i now create for every user a personal realm (full-sync). For the team a create with an administrator a team realm on the root (like /TeamId/). With the permission i can only let the team-members read the profile realm from the user.Now i have a view with the team-members and i like to observer any changes on the profile realms. For that i have to observe multiple realms at the same time, wich seems that it won’t work. Is that true? Is it not possible to make an array with notificationtokes for observing objects from different realms?thank you!", "username": "rouuuge" }, { "code": "let realms = [...]; let tokens = realms.map { $0.objects(ObjectType.self).observe { change in ... } }", "text": "@rouuuge You need an observation per realm, but there’s no reason you couldn’t put all those tokens in one array. For example, something like this:let realms = [...]; let tokens = realms.map { $0.objects(ObjectType.self).observe { change in ... } }", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Structure of Realms with User / Team Management
2020-04-02T00:36:40.866Z
Structure of Realms with User / Team Management
2,435
https://www.mongodb.com/…b_2_1023x253.png
[ "compass" ]
[ { "code": "", "text": "Good morning all,\nI am a new contributor on the forum.\nWe have a school project: Analysis of Tweets according to a theme. We therefore retrieve the tweets via the API and then stored the data in a Mongodb database. We now want to display them on compass for a first analysis. In particular, we want to display the locations of tweets for statistics. But it turns out that the map does not appear all the time …\nWe can see the two occurrences on the screenshots.\nHave you ever encountered this error?\n2020-04-26 15_27_47-Window1635×405 5.46 KB\n\n2020-04-26 15_33_42-Window1645×612 160 KB\nYet the data is the same …Thanks a lot for your help !", "username": "Antoine_Blais" }, { "code": "", "text": "Welcome to the community @Antoine_Blais !I have no idea where it may come from.Do you have your database on Atlas? If yes, you can try to display your points with MongoDB Charts. It’s quick and easy for your preliminary analysis. You also have more possibilities for Geospatial Charts.", "username": "Gaetan_MORLET" } ]
[Compass] - Geolocation which is not displayed all the time, but identical formatting
2020-04-26T22:21:48.090Z
[Compass] - Geolocation which is not displayed all the time, but identical formatting
1,461
null
[ "installation" ]
[ { "code": "", "text": "$ sudo apt-get install -y mongodb-org\nReading package lists… Done\nBuilding dependency tree\nReading state information… Done\nmongodb-org is already the newest version (4.2.6).\nYou might want to run ‘apt --fix-broken install’ to correct these.\nThe following packages have unmet dependencies:\nmongodb-org : Depends: mongodb-org-server but it is not going to be installed\nDepends: mongodb-org-mongos but it is not going to be installed\nDepends: mongodb-org-tools but it is not going to be installed\nE: Unmet dependencies. Try ‘apt --fix-broken install’ with no packages (or specify a solution).:~$ mongo\nMongoDB shell version v4.2.6\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n2020-04-27T12:29:07.313-0500 E QUERY [js] Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6after launch :~$ ss -tlnp\nState Recv-Q Send-Q Local Address:Port Peer Address:Port\nLISTEN 0 80 127.0.0.1:3306 0.0.0.0:*\nLISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*\nLISTEN 0 5 127.0.0.1:631 0.0.0.0:*\nLISTEN 0 128 :80 :\nLISTEN 0 5 [::1]:631 [::]:What need to do?", "username": "elite_masteria" }, { "code": "", "text": "Start a mongod that listen to 127.0.0.1:27017.Or specify the appropriate server when running mongo.", "username": "steevej" }, { "code": "", "text": "Depends: mongodb-org-mongos but it is not going to be installed\nDepends: mongodb-org-tools but it is not going to be installed\nE: Unmet dependencies. Try ‘apt --fix-broken install’ with no packages (or specify a solution).Apt is not happy here. run the fix suggested by the output.Did you try installing mongo from the system’s repos before adding mongodb.org ?", "username": "chris" }, { "code": "sudo apt-get purge mongodb-org*\nsudo apt-get install mongodb\nsudo apt-get update\nmongo --version\n```\"", "text": "fixed by :\" You need to first uninstall the mongodb, you can use:After this, install mongodb through the following commands:And then update:You are done with the installation of mongodb. You can check it by using the below command:", "username": "elite_masteria" }, { "code": "", "text": "Did you try installing mongo from the system’s repos before adding mongodb.org ?nothingfrom this page … like tutorial – https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/and nodejs install mongodb", "username": "elite_masteria" }, { "code": "", "text": ":~$ sudo apt --fix-broken install\n[sudo] password for forperuse:\nReading package lists… Done\nBuilding dependency tree\nReading state information… Done\nCorrecting dependencies… failed.\nThe following packages have unmet dependencies:\nmongodb-org : Depends: mongodb-org-server but it is not installed\nDepends: mongodb-org-mongos but it is not installed\nDepends: mongodb-org-tools but it is not installed\nE: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.\nE: Unable to correct dependencies", "username": "elite_masteria" }, { "code": "", "text": "i don’t know how to do this, can you give to me the solutions, how can to do that with command line?", "username": "elite_masteria" }, { "code": "", "text": "It is not clear what problem you are trying to fix. From some of the post it looks like you are trying to fix an installation issues. 
My reply was to fixError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :To start a localhost server use the documentation found at https://docs.mongodb.com/manual/reference/program/mongod/To connect to another server with the mongo command consider the manual at https://docs.mongodb.com/manual/reference/program/mongo/", "username": "steevej" }, { "code": "", "text": "What flavor and version of linux are you installing on.", "username": "chris" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Error: couldn't connect to server - Linux -
2020-04-27T18:05:57.694Z
Error: couldn&rsquo;t connect to server - Linux -
15,638
null
[]
[ { "code": "", "text": "Hello MongoDB community we are delighted to have been selected with our SeeDoc project in the Atlas COVID-19 credit program of MongoDB.#covid-19", "username": "Patrick_Biyaga" }, { "code": "", "text": "Congrats and thank you for the work you’re doing!!", "username": "Jamie" } ]
You’ve been accepted: MongoDB COVID-19 Atlas Credit Program
2020-04-26T00:04:44.572Z
You’ve been accepted: MongoDB COVID-19 Atlas Credit Program
2,355
null
[ "graphql", "legacy-realm-cloud" ]
[ { "code": "", "text": "hi,I try to explain my problem, i have to say there aren’t really helpful error or logs that I can poste here. I just try to describe the problem and hope that someone has it too I have a graphql-client with a realm cloud synch. It works perfectly with real time sync. But if I left my browser some minutes without doing something the subscription do not updating anymore. As I said there aren’t any error logs. Just some /auth request to the realm cloud.Have someone an idea?kind regards", "username": "rouuuge" }, { "code": "", "text": "Hi Rouuuge,It looks like a possible Auth timeout. Can you check to see if you have a valid auth token or the subscription will not update? Perhaps, to get more direct support, open a support ticket for this issue? Then one of our support team can help you directly.Shane", "username": "Shane_McAllister" }, { "code": "", "text": "thanks for your answer. I currently rebuild my app to just using the full-synch realms. And if the problem still exists I will open a ticket with a small demo app.", "username": "rouuuge" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Subscription GraphQL not updating anymore after some inactive minutes
2020-04-05T22:35:44.630Z
Subscription GraphQL not updating anymore after some inactive minutes
3,094
null
[ "atlas-functions", "stitch" ]
[ { "code": "", "text": "Hi\nI would like to use bcryptjs node library, for password hashing function, in my custom authentication function on my stitch connected application. I successfully added bcrypt as a dependency , but when i try to use it the function times out . While using the same dependency on my local node js server and using the same code, the function executes almost immediately.Is there something I am missing ? I know that some Node standard libraries are not supported yet, but I don’t get any error from the console, just the time-out.Thank you", "username": "Stefan_Krzovski" }, { "code": "", "text": "Hi Stefan – Unfortunately, due to the way this package is constructed it runs slow in Stitch Functions, we are looking for options to improve. If possible, you might get better performance from the PBKDF2 package.", "username": "Drew_DiPalma" } ]
Using dependency in stitch functions
2020-04-06T19:23:57.672Z
Using dependency in stitch functions
2,167
null
[ "graphql", "stitch" ]
[ { "code": " {\n \"title\": \"visit\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"date\": {\n \"bsonType\": \"string\"\n },\n \"hour\": {\n \"bsonType\": \"string\"\n },\n \"persons\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"objectId\"\n }\n }\n }\n }\n {\n \"persons\": {\n \"ref\": \"#/stitch/mongodb-atlas/handmirable/person\",\n \"foreign_key\": \"_id\",\n \"is_list\": true\n }\n }\n type Visit {\n _id: ObjectId\n date: String\n hour: String\n persons: [Person]\n }\n\n input VisitInsertInput {\n date: String\n hour: String\n _id: ObjectId\n }\n\n input VisitQueryInput {\n hour: String\n persons: [PersonQueryInput]\n _id: ObjectId\n date: String\n }\n\n input VisitUpdateInput {\n hour: String\n _id: ObjectId\n date: String\n }\n", "text": "Hello,I want to model a list of persons who make visits :I have declared this relation:The stitch generated graphql schema is:I don’t understand why the field “persons” disappear in VisitInsertInput and VisitUpdateInput. I can query my visitors but can’t create or update them. Is my model wrong?", "username": "Vincent_Lebeau" }, { "code": "", "text": "Hi Vincent – Unfortunately this was a bug on our end but it has now been addressed.", "username": "Drew_DiPalma" } ]
Stitch many to many relationship
2020-03-25T16:48:12.469Z
Stitch many to many relationship
2,376
https://www.mongodb.com/…83281531c386.png
[ "queries" ]
[ { "code": "", "text": "Hey everyone, first off thanks for the help on this one. So I have attached a screen shot of a data sample we are trying to query. This budgetgroup contains a couple fields which references a user object Id. The fields are: budgetOwner (single user Object Id), approvers (array of user object ids), and members (array of user objectIds). I am trying to develop a single database query that will return an array of budget groups in which contain a specific user object ID in either the budgetOwner field, or within the two arrays (approvers & members).Any help on this one would be greatly appreciated.Regards,David", "username": "David_Stewart" }, { "code": "userId = ObjectId(\"5ea6511bf82c44599b1c5ae4\") // for, example\n\ndb.collection.find(\n { $or: [ { budgetOwner: userId }, { approvers: userId }, { members: userId } ] }\n)\nuserIdmembersapproversdb.collection.find(\n { $or: [ { budgetOwner: userId }, \n { $and: [ { approvers: userId }, { members: userId } ] }\n ] }\n)\n", "text": "This query will do using the $or query operator:In case you are querying such that the userId is in both the members & approvers arrays, the query is:", "username": "Prasad_Saya" }, { "code": "", "text": "Prasad_Saya! Thank you so much! That works!David", "username": "David_Stewart" } ]
Return Objects containing search query in array
2020-04-27T01:00:48.154Z
Return Objects containing search query in array
2,608
null
[]
[ { "code": "", "text": "can anyone share the steps on how to Load data into your sandbox cluster from Atlas?", "username": "Chiragkumar_Patel" }, { "code": "", "text": "You are a little bit ahead of the course.It is in Chapter 2 - Loading Data into Your Sandbox Cluster.", "username": "steevej" }, { "code": "", "text": "Hello Steve.\nI have gone through chapter -2.I am getting below error.MongoDB Enterprise Sandbox-shard-0:PRIMARY> load(“loadMovieDetailsDataset.js”)\n2020-04-25T20:45:19.738+0530 E QUERY [js] SyntaxError: unexpected token: ‘:’ :\n@(shell):1:1\n2020-04-25T20:45:19.740+0530 E QUERY [js] Error: error loading js file: loadMovieDetailsDataset.js :\n@(shell):1:1\nMongoDB Enterprise Sandbox-shard-0:PRIMARY>", "username": "Chiragkumar_Patel" }, { "code": "", "text": "Please enclose the file in double quotes", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Then, why did you post your question under chapter 1?", "username": "steevej" }, { "code": "loadMovieDetailsDataset.js", "text": "Hi @Chiragkumar_Patel,Were you able to load the dataset into your sandbox cluster ?If not, then did you make any changes in the loadMovieDetailsDataset.js file ?~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
How to Load data into your sandbox cluster from Atlas
2020-04-25T15:33:57.848Z
How to Load data into your sandbox cluster from Atlas
1,166
null
[ "server" ]
[ { "code": "", "text": "Hi AllSpecifically we would really appreciate the fix for aggregate queries with sort running very slowly\nSERVER-7568. The status is ‘Resolved’ against version 4.3.1, so I presume that this means it will be released as part of Mongo version 4.3.Hence the question do we have a date on the road map for the release of 4.3?\nThanks", "username": "Simon_Dunn" }, { "code": "", "text": "Welcome to the community Simon!MongoDB 4.3 is the development release series leading up to the 4.4 production release (see MongoDB Versioning). You can download development/unstable releases for testing, but we do not recommend running them in a production environment as they will include work in progress and have not been throughly tested yet.MongoDB 4.4 is currently in the “release candidate” stage of testing, where there will be several 4.4.0 release candidates available before a final Generally Available (GA) release. Release candidates are also for testing purposes only, but you could test in a development/staging environment to confirm if your issue will be resolved.The specific issue you have highlighted is not a general fix for “sorts running very slowly”, so I recommend starting a new topic to explore why your aggregation pipelines are slow.It would be helpful to include:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I’ll see if I can’t experiment with the rc dev version.", "username": "Simon_Dunn" }, { "code": "{\n ...\n groups: [\n ObjectId(\"5ce283422ab79c000f9040f5\"),\n ObjectId(\"5e9d01c5a5db2000075764fe\")\n ]\n ....\n}\ndb.collection.aggregate([ { $match : { $and: [ { groups: ObjectId(\"5ce283422ab79c000f9040f5\") } ] } }, {$sort: {_id: 1}} ] )\ndb.collection.aggregate([ { $match : { $and: [ { groups: ObjectId(\"5e9d01c5a5db2000075764fe\") } ] } }, {$sort: {_id: 1}} ] )\ndb.collection.find({ $and: [ {groups: ObjectId(\"5ce283422ab79c000f9040f5\")} ]}).sort({_id: 1})\ndb.collection.find({ $and: [ {groups: ObjectId(\"5e9d01c5a5db2000075764fe\")} ]}).sort({_id: 1})\n", "text": "Hi StennieSo it’s release 4.4 I’m after. I’m not after a fix for general sorts running very slowly, sorts on find work perfectly well. The bug I mentioned is related to aggregate queries not selecting the index to sort on correctly which results in the entire collection being reloaded (AIUI) and explains the problems we are having completely. I really just need to know when the next release of Mongo is scheduled, I assume the referenced bugs will be addressed in it.We have implemented a work-around for now as recommended by a bug linked to 7568 https://jira.mongodb.org/browse/SERVER-21471 by replacing our aggregate call with find but this is not going to be a long term viable option without a lot of code upheaval which I would rather not have to do.The issue manifests itself (at least for us) working on a collection of 34,000,000 documents. Weirdly if we run an aggregate query that returns many documents (8,782,333 to be precise) the query completes in approx 8 seconds. 
But if we then run the same query against a different objectId such that it returns a very small number of documents (27) then it takes 2 minutes to return:So we have a collection containing documents that reference other objects in an array of ObjectIds e.g.:and we wish to run a query that returns every document that has a specific ObjectId in the groups array:8,782,333 documents\n7.92 seconds27 documents\n127 secondsNow I know that this is not a generic sort issue as the following equivalent find queries demonstrate:8,782,333 documents\n5.13 seconds27 documents\n0.18 seconds127 seconds down to 0.18 seconds replacing the aggregate with the find and both using sort.MongoDb: 4.2 Atlas cluster M30 tier, replica set, not sharded (yet)‘Explain’ output for the 2 aggregate queries, I could see no discernible difference between the 2", "username": "Simon_Dunn" }, { "code": "", "text": "Experimenting with the 4.3.1 release candidate locally shows that this issue of the aggregate -> sort query running slowly is fixed in the RC.Do we have any idea when 4.4 will be available on General Release? Are we talking days, weeks, months, years? Any insight you can provide would be very helpful to us, otherwise we are going to have to look into either re-working our queries or look at other solutions.Neither of which fills me with joy.", "username": "Simon_Dunn" }, { "code": "", "text": "While I don’t know the timing of the releases, I would assume since the 4.4 builds are on RC2 currently and there’s only about a dozen items left on their JIRA marked as 4.4 Required, I would assume that the release is only a few months, at most, off. That is purely a guess however based on what I see.I don’t think that MongoDB provides estimated release dates either, but maybe someone on the inside will come by later and give a more official answer.", "username": "Doug_Duncan" }, { "code": "find()aggregate()_id{groups:1}{groups: 1, _id: 1})find()", "text": "Do we have any idea when 4.4 will be available on General Release? Are we talking days, weeks, months, years? Any insight you can provide would be very helpful to us, otherwise we are going to have to look into either re-working our queries or look at other solutions.Hi Simon,Since MongoDB 4.4 is a major release which introduces new features and compatibility changes, there is an extended testing and release candidate phase. Release candidates are considered feature complete and ready for testing, with perhaps some polish left on new features or documentation. Any testing or feedback provided on release candidates is extremely helpful.The release date is generally determined based on when the release is ready (for example, documentation complete and no known blocking issues). As Doug mentioned, we are currently at rc2 which is the third release candidate (release candidates start at rc0). As a general timescale guideline, the expectation for GA will be weeks or months from now rather than days or years.For more details on what has changed, please see the Release Notes for MongoDB 4.4.‘Explain’ output for the 2 aggregate queries, I could see no discernible difference between the 2The specific issue you are focused on is about aggregation preferring indexes that can be used for sort (avoiding a blocking in-memory sort which may potentially fail). 
If you compare the explain output for equivalent find() versus aggregate() queries you will likely find that aggregation is choosing the _id index (or one which supports your sort criteria) over an index like {groups:1} (which is more selective but does not support the sort criteria). If you are able to share the explain output, someone might be able to provide a more informed suggestion.Possible workarounds include:Your aggregation pipeline may include more stages than the redacted example, but if it can be replaced by a find() query that would also be a straightforward solution.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Doug_Duncan @Stennie_X\nThanks both for your replies. We have implemented the ‘find’ work-around presently to the queries against the collection that is currently affected by this issue. I am trying to pre-empt the approaching tsunami when others grow to the point that they too will be affected. We really would prefer to not have to re-write the whole back-end to use find rather than aggregate.We could implement compound indexes to all of the collections but we have requirements where queries can be written to sort on many different fields (never more than one at a time I add) so we would need a lot of compound indexes.We have experimented with using hint as suggested by the bug reports but we have found that all that seems to occur is the hinted index is applied to both the match and sort phases, with the net effect that the sort is carried out as a non-index sort. Which works lovely on small result sets and fixes the immediate problem but then breaks queries that return large result sets such that they need the allowDiskUse option set true and run more slowly than the problem we are trying to fix in the first place.So if we hint the index that satisfies the match on a query:\na small result set performance is good\na large result set requires allowDiskUse: true and performance is terribleIf we hint the index that satisfies the sort\na small result set performance is terrible\na large result set performance is good.So we would need compound indexes to keep both situations happy.", "username": "Simon_Dunn" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
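For readers hitting the same planner behaviour on 4.2, a sketch of the compound-index workaround mentioned above, reusing the collection name and one of the ObjectIds from Simon’s example: the index is selective on the match and also satisfies the sort, so neither a blocking in-memory sort nor an `_id` scan is needed.

```js
db.collection.createIndex({ groups: 1, _id: 1 })

db.collection.aggregate([
  { $match: { groups: ObjectId("5e9d01c5a5db2000075764fe") } },
  { $sort: { _id: 1 } }
])
```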
Release date for Mongo 4.4?
2020-04-24T11:03:32.455Z
Release date for Mongo 4.4?
3,516
null
[ "atlas" ]
[ { "code": "", "text": "We migrated our application from a dedicated Mongo instance to Mongo Cloud. We noticed our application disconnecting from the Database cluster on a daily basis. We are currently on the free tier.It seems that the database servers are being restarted on a daily basis. We are running DEV and PROD versions of our application and both disconnect at the same time.\nAre you aware of this? Is there a way to keep our application connected. I see 3 mongo instances in the cluster connection, I would have thought there would be some round-robin approach to such a restart. The disconnection lasts around 1 min.", "username": "Andrew_Chukwu" }, { "code": "MongoDB Enterprise Cluster0-shard-0:PRIMARY> rs.status().members.forEach(function(x){print(secondsToDhms(x.uptime))})\n18 hours, 28 minutes, 10 seconds\n18 hours, 28 minutes, 18 seconds\n18 hours, 27 minutes, 55 seconds\n", "text": "M0 is not intended for production use.I checked my M0 cluster. It’s quick, but is is a rolling restart.", "username": "chris" }, { "code": "", "text": "Welcome to the community @Andrew_Chukwu!Is there a way to keep our application connected. I see 3 mongo instances in the cluster connection, I would have thought there would be some round-robin approach to such a restart. The disconnection lasts around 1 min.Your application should be able to handle transient changes in cluster availability using a replication set connection with Retryable Writes and appropriate Read Preferences. If your application doesn’t cope well, you could post more information including your driver & version, a snippet of code that reproduces the issue, and the error message or behaviour observed.Scheduled maintenance and relevant cluster configuration changes are performed with round-robin restarts, however you cannot influence the schedule for free and shared tier clusters (M0, M2, and M5).If you have a dedicated Atlas cluster (M10+) you can Set Preferred Cluster Maintenance Start Times in your project settings and also Test Failover to confirm your application correctly handles failover.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Database daily restart/disconnection
2020-04-24T11:03:00.927Z
Database daily restart/disconnection
6,362
null
[ "queries" ]
[ { "code": "", "text": "Hello Everyone Mongo newbie here,\nI was wondering if anyone of you have encountered a long running query that might take days to run. I wanted to know if, there is any way around of implementing a resume function or restarting a query from a checkpoint.Thanks in advance!", "username": "okay_lucid" }, { "code": "", "text": "Welcome to the community @okay_lucid!It’s definitely atypical to have queries that run for days, and there is no concept of pausing a query.I suggest looking into Materialized Views in MongoDB 4.2+, which would allow you to incrementally merge data into a result collection.You may also want to look into some of the data model patterns in the Building with Patterns blog post series. In particular, the Bucket or Computed patterns might be relevant.Regards,\nStennie", "username": "Stennie_X" } ]
Stop and resume queries
2020-04-26T10:07:04.385Z
Stop and resume queries
1,787
null
[ "stitch" ]
[ { "code": "", "text": "I’m trying to deploy and host via Github using the CLI and I’m having a hell of a time.I’ve tried following every doc page and blog post to create sample apps from mongodb official sources and it’s just not working. I’m able to connect with my stitch api key and the atlas api key and the cli returns the user.\nfrom the cli I can “import” and the prompt asks me to enter a “new app” and then it brings up my list of current stitch apps and allows me to select it and then i get a 403 forbidden. the error from the console lists Cloud: MongoDB Cloud but that redirects to the new unified login overview page.I have a cluster, linked to my stitch api on the UI, api keys are generated, I linked github successfully and selected the repo.Then I tried exporting the app from the UI but I’m getting a “cannot read app id from source control error” when i push it to the repo.I’m not sure what else to try. HELP!", "username": "jeremyfiel" }, { "code": "", "text": "Hi Jeremy – If the CLI is producing a 403 when you try to import an application then this may be an indicator that they Atlas API key that you created does not have “Project Owner” permissions for the project associated with your Stitch application – could this be the case?When you are trying to publish the app to your repo, can you outline the steps that you are going through? Are you exporting the application for source control?", "username": "Drew_DiPalma" }, { "code": "Failed: could not find /hosting/files directory\t\n", "text": "Hi Drewthat appears to have made the deployment successful. Now i’m having an issue where I don’t have the hosting files uploaded from the github deployment. Am I correct assuming the hosting files would be transferred to the “hosting” tab after a commit? If i try the stitch app link, I’m getting an error page.\nI’ve read I need to include a metadata.json file but the instructions are unclear for a complete noob like myself. What is a “file resource path” and what are the expected entries in this file? Mongo Docs explain the file creation and the values but not in laymen terms. A sample file here would be awesome or at least one of the tutorials walking through this. I scoured github but didn’t find any. Is this type of file supposed to be committed? I found a few scripts written to deploy the same file but the repo’s don’t have the actual file generated. Seems they are using this script to deploy this file away from the commit.\nI tried adding an empty file with an empty array but I’m getting an error on deploymentthanks!", "username": "jeremyfiel" }, { "code": "[\n {\n \"path\": \"/hosting/files\",\n \"attrs\": [{\n \"name\": \"index\",\n \"value\": \"text/html\"\n },\n {\n \"name\": \"data\",\n \"value\": \"application/x-javascript\"\n },\n {\n \"name\": \"styles\",\n \"value\": \"text/css\"\n }]\n }\n ]\n", "text": "so this post helped me a bit to understand the folder structure. Now the files are displayed in the hosting tab on the UI and the site is working.\nI came up with this:", "username": "jeremyfiel" } ]
Stitch CLI help needed
2020-04-25T03:55:04.005Z
Stitch CLI help needed
2,913
null
[ "installation" ]
[ { "code": "", "text": "./mongod: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory", "username": "aristides_villarreal" }, { "code": "", "text": "", "username": "chris" }, { "code": "", "text": "Welcome to the community @aristides_villarreal!Can you confirm the version of Ubuntu you are running and how you installed MongoDB?There isn’t a Ubuntu 20.20 release (this doesn’t fit their version convention), but Ubuntu 20.04 was just released this week.As per the link @chris shared, Ubuntu 20.04 isn’t an officially supported platform yet. You can watch SERVER-44070: Add Community & Enterprise Ubuntu 20.04 x64 for updates.MongoDB 4.2 packages are available for Ubuntu 18.04 and 16.04. For a full list of supported O/S versions for current releases of MongoDB server, see Supported Platforms.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Ubuntu 20.04, use the zip file with the 4.2.6", "username": "aristides_villarreal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb error in Ubuntu 20.04
2020-04-24T01:45:19.671Z
Mongodb error in Ubuntu 20.04
5,431
null
[ "stitch" ]
[ { "code": "", "text": "Hi there,If MongoDB Team is not planning to add Stitch analytics soon, I would like to try implementing it myself. What do I mean by analytics:How?Possible problems?What do you think about this idea?P.S.If others find it useful, I would happily share my code when it’s ready.", "username": "Dimitar_Kurtev" }, { "code": "", "text": "Hi Dimitar –This actually matches up with a project that we are hoping to do later this year (exposing more of a realtime view into the usage of an application). That being said, having this integrated into the UI is still probably a few months off and if you want to build something else for your own application in the meantime, then the approach that you’ve outlined makes sense.Another approach that we have seen folks take is polling the Logs API then then forwarding to another Logs system that they can use to monitor/alert on the load on Stitch. We are looking into a more formal process for logs forwarding for the future that might take the place of this.", "username": "Drew_DiPalma" }, { "code": "", "text": "Hi Drew,\nThanks for you answer. I must admit I did not even know that there are services that do “log management”. I’ll look into these kind of systems and will probably skip implementing it myself. Looking forward to see what Stitch team will implement in the future.", "username": "Dimitar_Kurtev" } ]
MongoDB Stitch Analytics
2020-04-23T09:19:28.636Z
MongoDB Stitch Analytics
1,619
null
[ "app-services-user-auth", "stitch" ]
[ { "code": "", "text": "Hello,Hopefully this is a pretty basic question, but couldn’t find any solutions online.I have a user authenticated in my Stitch app but when I try to deploy my updated code I get this error:Failed: failed to import app: error validating Auth Provider: cannot remove Auth Provider ‘oauth2-google’; please first delete all Users associated with this provider using the web interfaceIn order to successfully deploy I need to manually delete the users in my Stitch console and the redeploy. I must be able to deploy code to Stitch hosting whilst having authenticated users, how can I solve this issue?Daniel", "username": "Daniel_Gold" }, { "code": "", "text": "Hi Daniel – Apologies for the lag in getting to this question. To clarify a few points here –If you are trying to remove the Google Auth provider with the deploy then you will need to remove the associated users. As you note, this can be done in the UI, but it can also be accomplished via our Admin API. If you are not attempting to remove your Google Auth configuration this is likely an indicator that it is not present in the application configuration that you are attempting to deploy.", "username": "Drew_DiPalma" }, { "code": "", "text": "Hi Drew,Thanks very much for getting back to me. No I’m using auto deploy through GIT. I have actually figured out the problem, it is as you say an config error in my local repo, caused by using the wrong file structure in my case. It wasn’t intentional, but is now working correctly.Best regards,Daniel", "username": "Daniel_Gold" } ]
Autodeploy failed - error validating Auth Provider
2020-03-28T20:03:05.479Z
Autodeploy failed - error validating Auth Provider
2,453
null
[ "atlas-functions", "app-services-user-auth", "stitch" ]
[ { "code": "", "text": "Hey,\nI am using the email/password provider for my stitch application. The email address confirmation is sent via custom function, along with the password reset. However, there is no option to resend the email confirmation link via custom function. I am currently resending the confirmation via the code below. Is there currently a method for resending a confirmation email via custom function?await emailPasswordClient.resendConfirmationEmail(email)Thanks!", "username": "Patrick_Willetts" }, { "code": "", "text": "Hi Patrick – We haven’t added the re-run custom confirmation endpoint to the SDK yet but are hoping to soon. In the meantime, you should be able to hit the client API endpoint directly from your frontend as outlined in this Github issue.", "username": "Drew_DiPalma" } ]
Options to resend confirmation email via custom function
2020-04-25T03:00:51.688Z
Options to resend confirmation email via custom function
2,520
null
[ "atlas-triggers", "stitch" ]
[ { "code": "", "text": "Hi,\nWe´d like to export an existing collection data (most probably after a simple filter query) to csv and upload the file to an AWS S3 bucket, every night.It is around 40000 documents(ending in ~=15KB csv file) but should scale well as the data is expected grow.From initial investigation I see we can do this by MongoDB Stitch ; configuring a scheduled trigger connected to function doing the export. But I don´t see any examples/discussions about “export” specifically.\nIs this good idea?", "username": "Omur_Guner" }, { "code": "", "text": "Hi Omur – This should definitely be accomplishable with a Trigger. Generally, I think you will want to write a function that follows these steps –One caveat is that Stitch currently will only pull up to 50k documents in a single MongoDB request. So, as your data grows you may need to make multiple requests to construct your CSV.", "username": "Drew_DiPalma" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
CSV export by a scheduled trigger - MongoDB Stitch
2020-04-03T07:00:00.075Z
CSV export by a scheduled trigger - MongoDB Stitch
3,359
null
[ "stitch", "app-services-data-access" ]
[ { "code": "", "text": "I deleted a rule for one of my collections on mongostitch by accident. Now when I try and remake that rule for that specfic collection from mongo atlas, I cant see it in the dropdown for all the collections in that database.Also any new collections i make in atlas dont sync up with mongostitch too so I cant make another one also.Does anyone know how you resync your collections from mongo atlas to mongo stitch", "username": "Zach_Hyland" }, { "code": "", "text": "Hi Zach – Apologies for the late reply. I tried to reproduce this issue in the Stitch UI but was unable to so it may have already been fixed. Typically, ensuring that any changes in your application are deployed and then refreshing the UI will re-pull the list of collections from your database.If the collection doesn’t have data in MongoDB then it won’t show up in the UI, but you will be able to add it to rules again using the + icon next to your database name in the Rules UI.", "username": "Drew_DiPalma" } ]
How do you resync your latest collections in a db from mongo atlas to mongo stitch in order to add rules for them?
2020-03-26T16:29:28.044Z
How do you resync your latest collections in a db from mongo atlas to mongo stitch in order to add rules for them?
3,110
null
[ "atlas-search", "text-search" ]
[ { "code": "", "text": "Is Full-Text Search in MongoDB based on Lucene?", "username": "Shayan_Test" }, { "code": "", "text": "Yes for MongoDB Atlas. see Full-Text Search and Auto Scaling Coming To MongoDB Atlas | MongoDB Blog", "username": "Omid" }, { "code": "", "text": "Hi,Atlas Search (previously known as Atlas Full-Text Search) builds on Apache Lucene. There have been several updates since the original announcement referenced by @Omid, but one important change is that this feature is now available across all cluster tiers running MongoDB 4.2 (originally an M30 or above cluster was required).There is also a Text Search feature available in the core MongoDB server since MongoDB 2.4. This feature supports more basic text search options and is not based on Lucene.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Shayan_Test" }, { "code": "", "text": "Is Atlas Search available on non-cloud MongoDB?Hi,Atlas Search is currently not available on-prem. There is a relevant feature request you can watch and upvote: Atlas Search: support on-prem installations.Why are there two different text search types for MongoDB (Atlas Search, Text Search). Which one is preferable?Atlas Search is a cloud-first initiative which provides more comprehensive search features for MongoDB Atlas users without any additional installation & management requirements. If you are a MongoDB Atlas user with advanced search requirements (and a MongoDB 4.2+ cluster), I would consider using Atlas Search.However, Atlas Search does add some resource overhead because there are additional processes and storage requirements for the Lucene integration. See: Atlas Search Performance Considerations.If you are not using Atlas (or have more basic text search requirements) you can use standard Text Search.Can both of the text search types index data that are spread across different shards?Yes, both of these text search indexes support sharded clusters.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is Full-Text Search based on Lucene?
2020-04-19T13:03:29.885Z
Is Full-Text Search based on Lucene?
4,714
null
[]
[ { "code": "", "text": "i get this error\n", "username": "Marim_51913" }, { "code": "", "text": "Exit the current mongo shell and rerun command.", "username": "steevej" }, { "code": "", "text": "get the same error\n\nScreenshot_20200425_0853421366×768 68.5 KB\n", "username": "Marim_51913" }, { "code": "", "text": "Same error, same solution.Exit the current mongo shell and rerun command.Your prompt is marimemad@debian: $ and then you type mongo --nodb. In cas you did not realize, you are starting the mongo shell. Your prompt change to > , indicating that you are in the mongo shell. You have to exit this shell and enter the other mongo command at your debian prompt.", "username": "steevej" }, { "code": "", "text": "thanks ", "username": "Marim_51913" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Connect to atlas
2020-04-25T12:41:13.496Z
Connect to atlas
1,112
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": \"5e7b0e1c2ff059dbea2bf2d3\"\n \"name\": \"Phyllis Larson\",\n \"gender\": \"female\",\n \"loans\": [\n {\n \"id\": 1,\n \"balance\": 1989.31\n },\n {\n \"id\": 2,\n \"balance\": 482.08\n },\n {\n \"id\": 3,\n \"balance\": 1142.74\n }\n ]\n },\n{\n \"_id\": \"5e7b0e1c3018e1b7cc97b7fe\",\n \"name\": \"Mason Walker\",\n \"gender\": \"male\",\n \"loans\": [\n {\n \"id\": 1,\n \"balance\": 2335.99\n },\n {\n \"id\": 2,\n \"balance\": 3943.21\n },\n {\n \"id\": 3,\n \"balance\": 1156.3\n }\n ]\n },\ndb.cd.aggregate([ \n { $unwind : \"$loans\" },\n { $group : { _id : \"$gender\", balance : { $avg : \"$loans.balance\" } } } \n])\n", "text": "In my data I have a lot of documents like these, what I need is a average spending on the basis of gender. When I unwind the loans to add up the individual balance for each person more than one document is created with all balances. While using the following query I get the average but the this is less than expected, because for people having more than 1 balance,After unwinding more than one document will be created there by changing average. For instance if we have 6 males with 3 loans of each $1000 and 4 females with 2 loans each $2500, the total loans with 6 males is $18000 and that with females is $20000. The average balance of males is $3000 and that of females is $5000. With the above aggregation we will get average loan of males as $1000 because after unwinding there will be 18 records of male and 8 records of females with average of $2500. What is the approach to get correct average even after unwinding?Thanks\nArun", "username": "ARCH" }, { "code": "db.collection.aggregate( [\n { \n $addFields: { \n total_balance: {\n $reduce: {\n input: \"$loans\" ,\n initialValue: 0,\n in: { \n $add: [ \"$$value\", \"$$this.balance\" ] \n }\n }\n }\n }\n },\n { \n $group: {\n _id: \"$gender\",\n avg_balance: { $avg: \"$total_balance\" }\n }\n }\n] ).pretty()", "text": "The correct approach is to find the total loan balance for each individual, and then find the average by gender:", "username": "Prasad_Saya" } ]
Aggregate and unwinding
2020-04-24T20:38:38.538Z
Aggregate and unwinding
2,190
null
[]
[ { "code": "", "text": "Let’s play a game!We’re back with the second in our series of online scavenger hunts from MongoDB. Answer a series of 10 clues by visiting MongoDB resources including our blog, forums, documentation and more and learn about these resources while earning some awesome rewards!Each participant who completes a single scavenger hunt will earn:Each Monday we will post a new set of clues giving you the chance to earn additional badges and other rewards.We will soon be announcing additional rewards for completing multiple scavenger hunts with a special reward for those who complete all our weekly scavenger hunts this spring and summer.Every scavenger hunt will run from Monday until 5pm US Eastern time on Friday.Have an idea for a clue or question that should be included? Let us know!Scavenger Hunt Week Two ", "username": "Ryan_Quinn" }, { "code": "", "text": "Week 2 done! And I have signed up for the MongoDB.live event! Thanks again", "username": "Natac13" }, { "code": "", "text": "Hi there!\nWeek 2 done \nI unfortunately missed week one, can this be kept open a little bit longer? I am curios what hints you have there, the current have been very interesting! Great job!\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "I’m here for it! Week two done!", "username": "Terrance_Cazy" }, { "code": "", "text": "Thanks to everyone who participated this week!Check back on Monday for the third installment of the MongoDB Online Scavenger Hunt and feel free to share your ideas for future clues with us here.", "username": "Ryan_Quinn" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Let's Play a Game! - MongoDB Scavenger Hunt #2
2020-04-20T13:16:50.789Z
Let&rsquo;s Play a Game! - MongoDB Scavenger Hunt #2
6,056
null
[ "queries", "atlas-search" ]
[ { "code": "db.getCollection('col').aggregate([\n {\n\n \"$searchBeta\": {\n \"compound\" : {\n \"must\" : [\n {\n \"term\": {\n \"path\": [\"firstName\",\"middleName\",\"lastName\"],\n \"query\": \"smit*\",\n \"wildcard\": true,\n }\n }\n ],\n }\n }\n }\n])\nObjectId(\"5d4c160dd54af60001a3a1dd\")\nObjectId(\"5d4c1678d54af60001a3a1e9\")\nObjectId(\"5e7c6675e57e1100013bcbd2\")\ndb.getCollection('parties').aggregate([\n {\n \"$match\" : {\n \"_id\" : ObjectId(\"5d4c160dd54af60001a3a1dd\")\n \n },\n \"$searchBeta\": {\n \"compound\" : {\n \"must\" : [\n {\n \"term\": {\n \"path\": [\"firstName\",\"middleName\",\"lastName\",\"identityMail\"],\n \"query\": \"eefje*\",\n \"wildcard\": true,\n }\n }\n ],\n }\n }\n }\n])\nError: command failed: {\n\t\"operationTime\" : Timestamp(1587733651, 1),\n\t\"ok\" : 0,\n\t\"errmsg\" : \"A pipeline stage specification object must contain exactly one field.\",\n\t\"code\" : 40323,\n\t\"codeName\" : \"Location40323\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1587733651, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"1jukCsxHk+0Z4Je7FwZwLs3csr4=\"),\n\t\t\t\"keyId\" : NumberLong(\"6764005400170725379\")\n\t\t}\n\t}\n} : aggregate failed \n", "text": "I am testing searchBeta and I am trying to combine it with $match to add stronger matches that don’t need a lucene index.Eg, the working lucene search looks like this (I know a compound is not needed here, it is just an example taken from a bigger query):which gives a nice result containing 3 documents in my case:Now I am just trying to see if it is possible to combine a search with a stronger match and tried it with one of the above object ids, but this gives an error:The error is a follows:Shouldn’t this just work?regards,Sven", "username": "Sven_Beauprez" }, { "code": "", "text": "Found it, first I made the mistake of positioning the $match, it shouldn’t have been next to $searchBeta of course, but a level up as an element in the array.Secondly, once I tried that I got an error message that made more sense and had to move the $match after the $searchBeta in the array.Now it works! Cool stuff!", "username": "Sven_Beauprez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Combining $searchBeta with eg. $match
2020-04-24T13:12:04.262Z
Combining $searchBeta with eg. $match
2,607
null
[ "python" ]
[ { "code": "", "text": "Hi,I keep getting this error “pymongo.errors.ServerSelectionTimeoutError: flask-mongodb-atlas-shard-00-01.0hym1.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1108),”.I have created a project using Pymongo and flask. My python SSL certificates seem to be up to date. I tried following the last solution to this error, but it was not very clear to me. If anyone can please assist with a solution to this problem it would be appreciated.Regards,\nGD", "username": "Guy_De_La_Cruz" }, { "code": "python versiondnspythonpymongocertifi", "text": "Hey @Guy_De_La_Cruz,Welcome to the MongoDB Community Forums! [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1108),”.This error mostly means that the validation issue is on the client-side of the connection, and not the server. Your local trust stores do not contain the new root certificate required for verification. For more information, do see: Which certificate authority signs MongoDB Atlas cluster TLS certificates and TSL/SSL and PyMongoNow, as a first step, can you please let us know which OS you are using as well as the python version on your system and try the steps mentioned in this post, and let us know if all these check-out or not, ie.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
pymongo.errors.ServerSelectionTimeoutError
2020-04-19T20:36:49.786Z
pymongo.errors.ServerSelectionTimeoutError
2,854
null
[ "stitch" ]
[ { "code": "", "text": "Hello,I have been try to use MongoDB Stitch to do some sever-less work pretty simple but without success.I would like to ask to see if there is any workaround I can take.My goal is to use Ghost Blog provided JavaScript client to create article when there is a new data insert to MongoDB.The error I got from trigger is:Error: StitchError: TypeError: ‘adapter’ is not a function\"I searched online, this is an error from axios which I believe Ghost Client use this to send http request under the hood.My question is axios module is a popular http module used in nodejs world. Is this not supported in Stitch or I did something wrong?I can remove the need of using Handlebar module, but if there is a way can fix it would be great.By the way:Thanks in advance.", "username": "hanwang" }, { "code": "", "text": "Hi – We had a release on Wednesday (4/22) that enabled Axios support and we believe it also addresses the issues you ran into with Handlebars (though we haven’t tested Handlebars extensively). Please feel free to let us know if you run into any additional issues in the future!", "username": "Drew_DiPalma" } ]
Stitch error for using Axios and Handlebar node modules
2020-04-20T16:56:11.481Z
Stitch error for using Axios and Handlebar node modules
2,641
null
[]
[ { "code": "", "text": "\nCompass schema shows null and document shows only two fields present in the test answers but I got marked down.", "username": "Lance_26422" }, { "code": "", "text": "Please do not post your questions in more that one thread. It slows people trying to help by forcing them to read more threads that necessary.", "username": "steevej" }, { "code": "", "text": "Try pressing Reset button and then Analyse button again.", "username": "steevej" }, { "code": "", "text": "Ok I’ve just noticed this thread after posting on the other thread. As @steevej-1495 advised, continue discussions here.", "username": "007_jb" }, { "code": "", "text": "\nimage1366×768 80 KB\n\nClearly I am in the videos.movies collection but nothing shows in the compass after i analyzed schema, then i referred to the documents which showed only director and genre as the fields which were in the lab test question but i had it wrong on all three attempts", "username": "Lance_26422" }, { "code": "", "text": "Click this:\nThen click the Analyze button on the far right (not the one in the middle):\n", "username": "007_jb" }, { "code": "", "text": "Hi @Lance_26422,Please let us know if you are still facing any issue.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Lab 1.1. Compass Schema shows null
2020-04-23T11:34:57.640Z
Lab 1.1. Compass Schema shows null
1,387
null
[ "aggregation" ]
[ { "code": " db.getCollection(\"questions\").aggregate([{\n $facet:\n {\n \"data\":[\n {$match:{}},\n {$skip:0},\n {$limit:10},\n {$sort:{ time_added: 1 }}\n ],\n \"filterCount\":[{$match:{}}, {$group:{_id:null,count:{$sum:1}}}],\n \"totalCount\":[{$group:{_id:null,count:{$sum:1}}}] \n }\n }])\n", "text": "Lastly I’m using mongodb to build Trivia game, I’m sorting by default all the questions by last added.I build the next query by my needs:Using regular query, to find data I get it order perfect by ‘time_added’, but using this sort, all the data inside ‘data’ is not order well.Can’t find any more help in the internet.Here is screenshots", "username": "David_Tayar" }, { "code": "time_added", "text": " Hi @David_Tayar and welcome to the MongoDB Community forum.You are limiting your results to the first 10 documents that MongoDB finds and then you are sorting only those 10 documents by time_added. Are you sure that is what you want to do? A limit without any sort before it is not guaranteed to return documents any any particular order.I think if you do your sort and then limit you might find the results more as you would expect them to be.", "username": "Doug_Duncan" }, { "code": " {$sort:{ time_added: 1 }}\n {$match:{}},\n {$skip:0},\n {$limit:10},\n", "text": "I change it to:And it’s work’s like magic !I was thinking the query go ahead and not step by step.", "username": "David_Tayar" }, { "code": "", "text": "Glad that worked out for you.One suggestion I would have is to do your match before your sort if you’re planning on filtering documents via the match stage of the pipeline. This will limit the number of documents you need to sort in those cases.", "username": "Doug_Duncan" }, { "code": "", "text": "It’s the better way, thank you !", "username": "David_Tayar" } ]
Sort by date is wrong when using $facet
2020-04-22T17:17:16.347Z
Sort by date is wrong when using $facet
6,539
null
[ "queries" ]
[ { "code": "", "text": "I have some documents ,there are a field named “tagid” as variable size array , how can I use find the tagid like “tagid[0:3]==[1,2,3]” statement,thank you .", "username": "Chunlin_Chen" }, { "code": "db.yourCollection.aggregate([{\n $match: {\"myArrayField.3\": {$exists: true }}\n }, \n {\n $project: {\n myArrayField: 1,\n myNewArrayField: {$slice: [\"$myArrayField\", 3]}\n }\n }, \n {\n $match: {\"myNewArrayField\": [\"value1\",\"value2\",\"value3\"]}\n }])\n", "text": "Welcome to the community @Chunlin_Chen !You don’t give us a lot of information to give you an optimal answer.\nTherefore, your problem may have different approaches.\nHere is my contribution You should use the $slice operator which allows to have a subset of an array. This operator, in your case, must be used within the Aggregation Framework in a $project stage. If you don’t know the Aggregation Framework, I recommend that you take M121 MongoDB University course.So, here is my example.\nI have a document in a collection with an array field that contains 5 values (strings but it doesn’t matter).\nAnd with this aggregation pipeline you will be able to solve your problem.Stages$match: This must, must be the first stage of the pipeline.\nIf you have other search criteria they must be specified here.\nIn my example, I exclude all documents that have an array with less than 3 elements.\nMaybe in your case no document is in this scenario but it’s for the example .$project: In this stage, I create a new field which contains only the first 3 elements of your array field. I also keep the original array field because I don’t know if you need all the elements of your array in your results…\nIt’s the stage where you have to make your projections. The earlier, the better.$match: Here I simply do the test, an equality in your case.You can also add a final $project to remove the new array of 3 values if you don’t need it.It’s my solution based on the information you give us.I hope this will help you ", "username": "Gaetan_MORLET" } ]
How can I use find in array like "tagid[0:3]==[1,2,3]" statement
2020-04-24T05:34:07.325Z
How can I use find in array like &ldquo;tagid[0:3]==[1,2,3]&rdquo; statement
1,716
null
[ "kotlin" ]
[ { "code": "\"io.realm:realm-gradle-plugin:7.0.0-beta-SNAPSHOT\"\n", "text": "Hi! I added the Kotlin 7 beta to my dependencies:I’m just a bit disoriented locating this version: In the realm-java repo, there are only tags until 6.1. The beta isn’t mentioned anywhere. I’m actually also not sure how the plugin version relates to the actual Realm version. Can you please clarify?More generally I’d like to be able to follow the releases: Where can I look to see if there’s a new beta?", "username": "Ivan_Schuetz" }, { "code": "next-majorversion.txtnext-major:CHANGELOG.md", "text": "Hi Ivan,The Realm Java v7 beta is currently only available as a Snapshot release, so hasn’t been tagged on GitHub yet. This snapshot version is based on the next-major branch (the version reference comes from version.txt).For a list of changes in the beta, see next-major:CHANGELOG.md.Non-beta releases should be tagged on GitHub, but I have also passed on the feedback about tagging beta releases.More generally I’d like to be able to follow the releases: Where can I look to see if there’s a new beta?The last beta release was before the new forums were launched in Feb-2020, but we should be announcing upcoming releases in the Product and Driver Announcements category.You can set notification options for a specific category and/or tag combination on this site by clicking on the icon near the top right of a result page.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Where do I find the Kotlin 7.0 beta source?
2020-04-22T12:05:59.126Z
Where do I find the Kotlin 7.0 beta source?
2,362
null
[ "dot-net" ]
[ { "code": "", "text": "Hello,I am new in MongoDB.The InsertOne And InsertMany return null,How can I be sure data were inserted ?", "username": "omid_mirzaei" }, { "code": "InsertOneInsertMany", "text": "Hi @omid_mirzaei, welcome!The InsertOne And InsertMany return null,For write operations such as InsertOne and InsertMany, you need to catch for MongoWriteException. There are two properties of this exception:For more information please see: MongoDB .NET/C# driver: Write Exceptions and MongoDB .NET/C# driver WriteConcern.I would also suggest to review Write Concern concept in the documentation to understand the concept of write acknowledgement.I am new in MongoDB.I would recommend to enrol in MongoDB free online courses to learn more about MongoDB. Especially M220N: MongoDB for .NET developers to learn the essentials .NET application development. The next session starts soon on April 28th.Regards,\nWan.", "username": "wan" } ]
InsertOne in .net core return void
2020-04-16T00:13:21.429Z
InsertOne in .net core return void
3,174
null
[]
[ { "code": "", "text": "I want to keep my /data folder on a remote host, separated from the host where I install and run my mongoDB instances.On the event of updating these mongoDB instances to a next major version, I want to just simply keep the /data folder unchanged (suppose I have backed up the folder). After reconnection, would WiredTiger still persist the data (as binary files) with 100% reliability?", "username": "Thang_Vu" }, { "code": "dbPathmongoddbPathfsync()", "text": "Welcome to the community @Thang_Vu.Mounting your dbPath from a remote host is an atypical deployment approach as you may see network-related performance issues.However, as long as performance isn’t a concern, mongod has exclusive read/write access to the files in the dbPath, and the mount point supports fsync() system calls this approach should work. The MongoDB server will refuse to start if incorrect permissions or an unsupported filesystem are detected.The Remote Filesystems section in the MongoDB Production Notes has some suggested options if you happen to be using NFS.What is your motivation for using a remote filesystem? Is your remote filesystem on the same physical host as the host for your MongoDB instances?On the event of updating these mongoDB instances to a next major version, I want to just simply keep the /data folder unchangedThis is the same scenario as a normal MongoDB major version upgrade. You need to follow the relevant upgrade instructions (including prerequisites) in the release notes for the version you are upgrading to.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie, thank you for you quick reply. Your response has everything I need, especially about the remote filesystem section in the MongoDB production notes which I missed.What is your motivation for using a remote filesystem? Is your remote filesystem on the same physical host as the host for your MongoDB instances?Evaluation the possibility of using a remote filesystem is one of the concern from our customer. We developers are kept away from the true motivation, so for this question I cannot give you a solid answer. The filesystem would be very likely be placed on a different physical host with the VMs for the mongod instances.", "username": "Thang_Vu" }, { "code": "", "text": "Evaluation the possibility of using a remote filesystem is one of the concern from our customer. We developers are kept away from the true motivation, so for this question I cannot give you a solid answer. The filesystem would be very likely be placed on a different physical host with the VMs for the mongod instances.Hi,If your team is kept away from the deployment side of things (but still providing recommendations or potentially supporting production issues), I would strongly recommend not using a remote filesystem. If your customers push back on this suggestion, perhaps they will share their motivation. They may be trying to apply deployment patterns which are more typical with other products, but not recommendable for MongoDB deployments.Regards,\nStennie", "username": "Stennie_X" }, { "code": "dbPathdbPath", "text": "Hi Stennie, thank you for your advices. It appears that, there were some miscommunications between the teams. The initial idea is to use a mounted directory within the same mongoDB instance’s host, not a remote directory entirely. In other words, mounting is now separated from mongoDB: instead of actually mounting the dbPath, we create a mounting point in advance and use it as our dbPath. 
Would you recommend such an approach?", "username": "Thang_Vu" }, { "code": "dbPathdbPathdbPathdirectoryPerDBdirectoryForIndexes", "text": "In other words, mounting is now separated from mongoDB: instead of actually mounting the dbPath , we create a mounting point in advance and use it as our dbPath . Would you recommend such an approach?Hi,Mounting a directory for the dbPath is fine, and a very different proposition from mounting a remote filesystem. You still need to ensure file & directory permissions are correct. I also suggest restarting your system at least once after adding new mount points to make sure these correctly persist on a reboot (although testing regimes are up to you).In some cases it can even make sense to Separate Components onto Different Storage Devices so data, journal, and log paths are not contending for I/O. There is also some flexibility in storage layout with options like directoryPerDB and directoryForIndexes.However, I’d try to avoid adding unnecessary complexity to your deployment without understanding the potential implications for administration and your chosen backup approach.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,That’s everything I wanted to know, thank you very much for the insights. I’m glad to be a part of this community!Regards,\nThang Vu", "username": "Thang_Vu" } ]
Will WiredTiger keep the binary files up-to-date when mongoDB is updated?
2020-04-13T04:01:11.869Z
Will WiredTiger keep the binary files up-to-date when mongoDB is updated?
1,459
null
[ "node-js" ]
[ { "code": "NumberDecimal()ReferenceError: NumberDecimal is not defineddb version v4.2.2MongoDB shell version v4.2.2const result = await client\n .db('PurchaseOrders')\n .collection('purchaseOrders')\n .updateOne(\n { _id: ObjectId(_id) },\n {\n $set: { data: NumberDecimal('1.02') }\n },\n {\n upsert: true\n }\n );\n", "text": "I’ve posted this on StackOverflow as well.I’m trying to store currency data in MongoDB using NumberDecimal() as per the documentation suggests, however, I’m getting ReferenceError: NumberDecimal is not defined . I’m running db version v4.2.2 and MongoDB shell version v4.2.2 within Nodejs.What am I missing here? Is this an import problem.", "username": "Blake_Pascoe" }, { "code": "Decimal128.fromString(string)Decimal128", "text": "It is defined as Decimal128 in NodeJS MongoDB driver.The static method Decimal128.fromString(string) can be used to create a Decimal128 instance from a string.", "username": "Prasad_Saya" } ]
How do I use NumberDecimal() within Node.js?
2020-04-24T01:12:58.765Z
How do I use NumberDecimal() within Node.js?
10,720
null
[ "compass" ]
[ { "code": "", "text": "Hi,\nAny help with this bug !!! Incomplete key value pair for option", "username": "Oussama_Badr" }, { "code": "", "text": "What is the operation you are doing when you get this message?Most likely cause is due to malformed URI or some special characters in your password", "username": "Ramachandra_Tummala" }, { "code": "", "text": "it is when connecting to my database", "username": "Oussama_Badr" }, { "code": "", "text": "Are you using any API or some program to connect to your DB?\nDoes connection work from shell?\nPlease check your connect string for any errors", "username": "Ramachandra_Tummala" }, { "code": "", "text": "use MongoDB Compass.\nI have already connected to this database but since updating the tool I have encountered this error!", "username": "Oussama_Badr" }, { "code": "", "text": "What is your current and previous version?\nDo you get any other error like deprecated\nAre you using hostname or srv string format to connect from Compass\nOther than upgrade any changes made to your connection string/parameters", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Its happening to me, and I’m using “MongoDB Compass Community” Version 1.20.5 (1.20.5). Using SRV string format.", "username": "C_Bess" }, { "code": "?authSource=admin&replicaSet=&readPreference=primary&appname=MongoDB%20Compass%20Community&ssl=true", "text": "Nevermind, I fixed it. I had go to the “connection string” view to connect, then remove the trailing: ?authSource=admin&replicaSet=&readPreference=primary&appname=MongoDB%20Compass%20Community&ssl=true", "username": "C_Bess" } ]
Incomplete key value pair for option
2020-04-08T13:06:34.613Z
Incomplete key value pair for option
8,205