Columns: image_url (stringlengths 113-131), tags (sequence), discussion (list), title (stringlengths 8-254), created_at (stringlengths 24-24), fancy_title (stringlengths 8-396), views (int64 73-422k)
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "I just created a new Realm app, added some records, and later added new records with additional fields that weren’t in the original schema. I couldn’t see these fields in MongoDB Atlas when I clicked into collections. I also tried manually updating the schema in the Realm tab to reflect my added field, and deploying that, but I still can’t see the added fields.I can see them additional fields in my Swift app, when I print out the Realm object.Any idea on what’s going on here? If it helps, the records are user records that are generated by the emailPasswordAuth functions.Thanks a bunch!", "username": "Peter_Lu" }, { "code": "", "text": "Hi @Peter_Lu, welcome to the community forum.What’s in the schema shouldn’t impact which fields you can see through the Atlas UI.If you’re still having an issue with this, could you please share the code that adds the document, the Realm object definition and the backend Realm schema?", "username": "Andrew_Morgan" } ]
Can't see some fields in Atlas UI
2021-03-27T05:35:23.365Z
Can’t see some fields in Atlas UI
1,938
null
[]
[ { "code": "ps", "text": "I tried to backup my database from cluster using mongodump --uri “”.But I got error while doing so.\nError read as:\nOn some systems, a password provided directly in a connection string or using --uri may be visible to system status programs such as ps that may be invoked by other users. Consider omitting the password to provide it via stdin, or using the --config option to specify a configuration file with the password.Please suggest the best way for backing up my database.Thanks,\nPrashant.", "username": "Prashant_Panchal" }, { "code": "mongodumpmongorestoremongodumpmongodump --uri mongodb+srv://<USER>:<PASSWORD>@clustername.ajv83.mongodb.net/<DATABASE> \n2021-04-13T18:06:34.605+0200\tWARNING: On some systems, a password provided directly in a connection string or using --uri may be visible to system status programs such as `ps` that may be invoked by other users. Consider omitting the password to provide it via stdin, or using the --config option to specify a configuration file with the password.\n2021-04-13T18:06:35.438+0200\twriting bot.jobs to dump/bot/jobs.bson\n2021-04-13T18:06:35.533+0200\twriting bot.max_items to dump/bot/max_items.bson\n2021-04-13T18:06:35.774+0200\tdone dumping bot.jobs (45762 documents)\n2021-04-13T18:06:35.780+0200\tdone dumping bot.max_items (25645 documents)\n$ mongodump --uri mongodb+srv://<USER>@clustername.ajv83.mongodb.net/<DATABASE>\nEnter password:\n\n2021-04-13T18:09:52.719+0200\twriting bot.jobs to dump/bot/jobs.bson\n2021-04-13T18:09:52.814+0200\twriting bot.max_items to dump/bot/max_items.bson\n2021-04-13T18:09:53.018+0200\tdone dumping bot.jobs (45762 documents)\n2021-04-13T18:09:53.047+0200\tdone dumping bot.max_items (25645 documents)\nmongodump", "text": "Hi @Prashant_Panchal,If you are running on MongoDB Atlas, the easiest way to backup your data is simply to activate the Cloud Backups.\nimage1142×589 41.9 KB\n\nimage1052×371 45.8 KB\nIf you are not running on Atlas (or Ops Manager), you have a lot more work to do.First, your backup strategy depends on your configuration. If you are running a 4.2+ sharded cluster that have sharded transactions in progress, mongodump & mongorestore cannot be part of your backup strategy as they do not maintain the atomicity of transactions across shards.See doc: https://docs.mongodb.com/database-tools/mongodump/#sharded-clustersSecond, mongodump can be a valid solution for all the other cases (Replica Sets, etc) but there are many details that needs to be taken into account like Read Preference or the performance impact on your production cluster.Regarding your password error, I could be wrong be I think it’s a warning.Here is probably the format you probably used:If I execute this command line in Linux, the user & password are saved in the command history which is a security issue.And I get this warning:But the dump worked as expected.To avoid sending my password in the command line history, I can do this instead:As the password is missing, mongodump prompts me for my user password.Don’t forget to backup all the databases as mongodump is just saving the one you are targeting.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @Prashant_Panchal,Your are getting warning message related to security but your MongoDB database backup was completed if i am not wrong. Your are getting this warning because you are supplying user password on command line with backup script. 
To avoid this warning you can remove the password parameter and supply the password when prompted for it. Thanks\nBraj Mohan", "username": "BM_Sharma" } ]
Mongodump, mongoexport
2021-04-13T12:58:00.440Z
Mongodump, mongoexport
6,861
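The warning in the thread above points at two fixes: drop the password from the command line (as shown), or use --config, which recent versions of the MongoDB Database Tools accept for sensitive values. A minimal sketch of the second option; the file name and password are placeholders:

```sh
# tools-config.yaml (hypothetical file, readable only by the current user) would contain one line:
#   password: mySecretPassword
mongodump --config=tools-config.yaml --uri="mongodb+srv://<USER>@clustername.ajv83.mongodb.net/<DATABASE>"
```

This keeps the credential out of the shell history and out of `ps` output while still dumping every targeted database.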
null
[ "backup" ]
[ { "code": "", "text": "Im finding a solution for backing up all dbs from a server to another.\nIn my mine:\nOn backup server. Write a script to export all dbs from Running Server and import to backup server with crontab daily\nIs that a good way to do this task?\nThanks for reading", "username": "Hu_nh_Le" }, { "code": "", "text": "Hi @Hu_nh_Le and welcome in the MongoDB Community !Usually, when you perform a backup, you just store the result of your backup in another safe location and you hope that you will never have to use it to restore your entire cluster.There are many ways to backup a MongoDB cluster. Make sure your strategy works for your configuration (test it!) and make sure that in the event where you have to restore your entire prod infra from a backup that you are meeting your RTO.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @Hu_nh_Le,Why you require to restore all databases. You can keep backup copy to some save location. If you are concern to keep data on running node. Then I recommend to configure delay + hidden node which keep your data on live server.", "username": "ROHIT_KHURANA" }, { "code": "", "text": "Hi @ Hu_nh_Le,If your requirement is to restore mongodb instance on another server daily basis then the better option is to add this server as delay and hidden member of the existing replica set cluster.Thanks\nBraj Mohan", "username": "BM_Sharma" } ]
Backup databases to another backup server?
2021-03-31T09:51:12.923Z
Backup databases to another backup server?
2,961
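The delayed, hidden replica-set member recommended in the last two replies can be added roughly like this from the primary; the host name and delay value are placeholders, and on MongoDB 5.0+ the field is named secondaryDelaySecs rather than slaveDelay:

```javascript
// Adds a member that clients never read from, cannot become primary,
// and applies writes one hour behind the rest of the set
rs.add({
  host: "backup-server.example.net:27017",
  priority: 0,       // never eligible for election
  hidden: true,      // invisible to drivers
  slaveDelay: 3600   // pre-5.0 field name; use secondaryDelaySecs on 5.0+
})
```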
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello ,We are using realm on the mobile side for sync data and javascript on web site to insert data in to altas but we are facing an issue to store price in doubleIssue: if the price is in decimal number its store in double datatype = 100.12 but if the price is 100.00 its get stored in int32how we can resolved the issue\nJavascript has decimal datatype to store decimal values but realm doesn’t have decimal datatype to get dataplease help me", "username": "kunal_gharate" }, { "code": "", "text": "Hi @kunal_gharate could you please share your Realm Object definition and your backend Realm schema?", "username": "Andrew_Morgan" }, { "code": "", "text": "I have fixed this issuesolution : I have stored decimal value at javascript side and Decimal128 - bson type at android side before that I was using double Datatype", "username": "kunal_gharate" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to store price in double using javascript for fetch on realm side
2021-04-09T12:09:58.384Z
How to store price in double using javascript for fetch on realm side
2,430
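A small mongosh sketch of the fix the thread converges on (the collection name is made up for illustration): wrapping the value in Decimal128 keeps a whole price such as 100.00 from being persisted as int32.

```javascript
db.products.insertOne({ item: "sample", price: NumberDecimal("100.00") })

// Both 100.00 and 100.12 now satisfy a decimal type check
db.products.find({ price: { $type: "decimal" } })
```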
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "We are in the process of porting our CosyncJWT service from Realm Cloud to MongoDB Realm. We have gotten the JWT authentication working well. My question concerns the Metadata fields, which are optional. This is obviously a new feature to MongoDB Realm because it did not exist in the older Realm Cloud product. Unfortunately, the documentation link \" Learn more about this and how to use an identity provider.\" does not go anywhere. Is this feature a way to specify additional field data that is packaged inside of the JWT authentication token when a user logs in? I could see this being used to store coupon codes for a social media app. I am just at loss for how specifically this is implemented.", "username": "Richard_Krueger" }, { "code": "", "text": "@Richard_Krueger I believe you are referring to Custom User data which can be defined here:\nhttps://docs.mongodb.com/realm/users/define-custom-user-data/index.html", "username": "Ian_Ward" }, { "code": "{\n \"aud\": \"myapp-abcde\",\n \"exp\": 1516239022,\n \"sub\": \"24601\",\n \"user_data\": {\n \"name\": \"John Doe\",\n \"coupons\": [\n \"123\",\n \"456\",\n ]\n }\n }\nuser_data.namenameuser_data.couponscoupons{\n \"id\": \"59fdd02846244cdse5369ebf\",\n \"type\": \"normal\",\n \"data\": {\n \"name\": \"John Doe\",\n \"coupons\": [\n \"123\",\n \"456\"\n ]\n },\n identities: [\n {\n \"id\": \"24601\",\n \"provider_type\": \"custom-token\",\n \"data\": {\n \"name\": \"John Doe\",\n \"coupons\": [\n \"123\",\n \"456\"\n ]\n },\n }\n ]\n}\n", "text": "The link that you provided should be going to this link in the docs which should help.if your JWT authentication token is passing in coupon codes for a user that looks like this, I imagine the JWT data would look like this:You would define your fields like this:Path\tField // Name\nuser_data.name // name\nuser_data.coupons // couponsand your user object would have coupon data in the following form:", "username": "Sumedha_Mehta1" }, { "code": "", "text": "@Sumedha_Mehta1 Again thanks for the quick turnaround, this was the link I was looking for. Somehow the link in the MongoDB Realm portal under providers Custom JWT Authentication is broken. This Metadata Fields options is a great feature, which was not in your older Realm Cloud offering, for it allows the signing authority to pass additional data inside the JWT token.", "username": "Richard_Krueger" }, { "code": "", "text": "We are aware of some of the in-product links not working at the moment and are actively working on fixing them - they should be up and running soon.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi @Sumedha_Mehta1I have been trying to use this JWT metadata configuration to extract the ‘sub’ from the JWT token into a custom field but I’m unable to do so.\nIs this function broken or only limited to values on the token inside ‘user_data’ ?Also - I have been trying to figure out how to link the user record in the MongoDB Realm portal to custom_data. Ideally I’d like my custom server to create the record in the custom_data collection - but I cannot seem to find a way for my server to fetch these records.\nCan we not set the id of the realm users to be the sub value from the JWT token?ThanksB", "username": "Benjamin_Storrier" }, { "code": "", "text": "Hi, @Richard_Krueger I bumped on to CosyncJWT and I am trying to implement it. I have managed to connect to Mongo Realm. 
I created a user but I was not able to see the created user in Realm. Another question: how do I get the API endpoint for CosyncJWT to connect to the client? Thanks", "username": "jaseme" }, { "code": "", "text": "@jaseme you can check out the sample code in the Cosync/CosyncSamples GitHub project (Cosync Sample Application Code). We have samples that call the API both from Swift and React Native.Richard", "username": "Richard_Krueger" } ]
Custom JWT Authentication provider metadata
2020-07-10T16:04:20.923Z
Custom JWT Authentication provider metadata
3,206
null
[ "transactions", "c-driver" ]
[ { "code": "", "text": "Our application has been working for 6 months fine using transactions on a single replica using Mongodb 4.2.\nInstalled version 4.4.4 this week and we are constantly getting:WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transactionOver the last 3 days we checked all our code to make sure we had not changed anything that would cause this.Finally this morning I restarted version 4.2 server and found our application is working fine with transactions again.Something has changed in version 4.4 and I cannot work out what has changed in this version that would case the write conflict. Searched the mongodb log file and there is nothing that shows a write conflict.I have even installed 4.4.5RC and that version also creates the Write Conflict error.Our application creates 2 connections, each connection is to database, creates a session with transactions enabled. The 2 connections are on the same thread.Both databases has a transaction started.\nBoth databases are read, inserted and updated in their transaction.\nCommit the first database and it does not error.\nCommit the second database and it errors with WriteConflict.It always happens on the second database commit.We are using the latest Mongodb C driver 1.7.4.Does anyone have any ideas why this is happening with v4.4 and not v4.2?\nDo I need to change a setting?", "username": "Phillip_Carruthers" }, { "code": "", "text": "Mongodb C driver 1.7.4Is this a typo? That version of the driver supports MongoDB version 3.4 and earlier…", "username": "Asya_Kamsky" }, { "code": "", "text": "Yep typo, it should be 1.17.4", "username": "Phillip_Carruthers" } ]
Write Conflict using Mongodb 4.4.4
2021-04-03T13:37:45.339Z
Write Conflict using Mongodb 4.4.4
3,839
null
[]
[ { "code": "", "text": "Hello Mongo community, I was wondering if it’s possible to migrate from a Firestore db to Mongo DB Atlas, Im developing an IOS app using Firebase (Auth, Firestore, Storage) for now for my lack of back-end knowledge , but if my app scales up I’d like to switch to Mongo in the future , will there be problems? Thanks", "username": "Marco_Vastolo" }, { "code": "", "text": "@Marco_Vastolo There are definitely similarities between Firebase and MongoDB Realm - particularly in how the data is stored in Firestore and MongoDB Atlas - the data follows a document model and is namespaced into collections.In any migration you will need to do a bit of refactoring in in order for the different systems to work. For instance, the Realm client SDKs use an object database which is reflected from your object model definitions whereas with Firestore, the data is cached locally as a document which can then be mapped to your data layer of choice if you wish (SQLite or Classes) or used as a document directly.One thing to keep in mind is that in any of these systems there is typically no way to migrate user accounts unless you manually store them yourself. This is because the passwords are hashed and salted based on different algorithms and there is not a way to transfer these to a different system and have them authenticate successfully. This means that users will need to register a new account with the new system - there are ways of creating a mapping between the two but it can be brittle.I hope this helps", "username": "Ian_Ward" } ]
Firebase migration
2021-04-13T12:17:37.272Z
Firebase migration
10,753
https://www.mongodb.com/…0c_2_1024x35.png
[]
[ { "code": "{messageid : 655890431938265092}\nand \n{messageid : NumberLong(\"655890431938265092\")}\n", "text": "SchemaScreenshot from 2021-03-25 18-17-231632×56 5.51 KBI tried both below , and i got zero results,how to fix this?I am using compass version 1.21.2 , on ubuntu 20.04 LTS", "username": "Takis" }, { "code": "messageiddb.coll.insert({messageid: NumberLong(\"123456789987654321\")})\n", "text": "Hi @Takis,It’s working fine for me with MongoDB Compass 1.26.1 (current prod version).Are you sure that your messageid is actually stored as a NumberLong value?You can double check by clicking here or here in the UI (see red arrows).FYI, this is how I inserted the document in MongoDB from the shell:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "MongoDB Compass 1.26.1 with linux all works,i tried so many times with older versions.\nI dont believe i did typos all this time,but can’t be sure.\nThank you for the reply.", "username": "Takis" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Compass : How to query a long field?
2021-03-25T16:22:52.610Z
Compass : How to query a long field?
4,167
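For reference, this is the filter that matches once the field really is stored as a 64-bit integer; the same expression works in the shell and in the filter bar of current Compass versions:

```javascript
db.coll.find({ messageid: NumberLong("655890431938265092") })
```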
null
[ "crud", "spring-data-odm" ]
[ { "code": " at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:85) [spring-data-mongodb-1.10.11.RELEASE.jar:]\n", "text": "Hello,we have a mongodb 4.2, the app uses the save method, so we set\nsetFeatureCompatibilityVersion to 4.0 but still get the Error:2021-03-24 10:36:49,805 WARN [de.kaufland.ilm.kraken.eventdriven.parser.actor.parse.ParseFileWorker] (pool-4-thread-23) Format KdaImportFormat [importFormatId=‘ImportFormatId(country=MD, source=kpos)’], File 2021/1340/03/24/receipts/2021-03-24T11.36.47+02.00_Transaction_1340_5f1d77d7-5b0c-4b46-b094-33f8a5724155.zip Write failed with error code 61 and error message ‘Failed to target upsert by query :: could not extract exact shard key’; nested exception is com.mongodb.WriteConcernException: Write failed with error code 61 and error message ‘Failed to target upsert by query :: could not extract exact shard key’: org.springframework.dao.DataIntegrityViolationException: Write failed with error code 61 and error message ‘Failed to target upsert by query :: could not extract exact shard key’; nested exception is com.mongodb.WriteConcernException: Write failed with error code 61 and error message ‘Failed to target upsert by query :: could not extract exact shard key’any Idea ?Regards Karsten", "username": "Karsten_Engstler" }, { "code": "", "text": "Hi @Karsten_Engstler and welcome in the MongoDB Community !I don’t see the link between the exception you are getting and the feature compatibility. Looks like you have an issue with your shard key instead.Can you please provide the query and the shard key of the collection you are targeting?Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
setFeatureCompatibilityVersion 4.0 on 4.2 Save still not working
2021-04-08T14:44:34.839Z
setFeatureCompatibilityVersion 4.0 on 4.2 Save still not working
2,915
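The thread ends with a request for the query and shard key, but this error usually means the upsert's filter lacks an exact match on the full shard key. A hypothetical sketch only; the collection name, shard key, and field values are invented for illustration:

```javascript
// Suppose the collection is sharded on { country: 1, source: 1 }.
// An upsert's filter must pin down every shard key field exactly,
// otherwise the router cannot decide which shard would own the new document.
db.receipts.updateOne(
  { country: "MD", source: "kpos", transactionId: "5f1d77d7" },
  { $set: { processed: true } },
  { upsert: true }
)
```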
null
[ "aggregation", "queries", "python" ]
[ { "code": "querygroup by {\"$dateToString\": { \"format\": \"%Y-%m-%d \", \"date\": \"$first\" }\"id13\":\"$samples.id13\"{'_id': ObjectId('6068da8878fa2e568c42c7f1'),\n 'first': datetime.datetime(2018, 1, 24, 14, 5),\n 'last': datetime.datetime(2018, 1, 24, 15, 5),\n 'maxid13': 12.5,\n 'minid13': 7.5,\n 'nsamples': 13,\n 'samples': [{'c14': 'C',\n 'id1': 3758.0,\n 'id10': 0.0,\n 'id11': 274.0,\n 'id12': 0.0,\n 'id13': 7.5,\n 'id15': 0.0,\n 'id16': 73.0,\n 'id17': 0.0,\n 'id18': 0.342,\n 'id19': 6.3,\n 'id20': 1206.0,\n 'id21': 0.0,\n 'id22': 0.87,\n 'id23': 0.0,\n 'id6': 2.0,\n 'id7': -79.09,\n 'id8': 35.97,\n 'id9': 5.8,\n 'timestamp1': datetime.datetime(2018, 1, 24, 14, 5),\n 'timestamp2': datetime.datetime(2018, 1, 24, 9, 5)},\n {'c14': 'C',\n 'id1': 3758.0,\n 'id10': 0.0,\n 'id11': 288.0,\n 'id12': 0.0,\n 'id13': 8.4,\n 'id15': 0.0,\n 'id16': 71.0,\n 'id17': 0.0,\n 'id18': 0.342,\n 'id19': 6.3,\n 'id20': 1207.0,\n 'id21': 0.0,\n 'id22': 0.69,\n 'id23': 0.0,\n 'id6': 2.0,\n 'id7': -79.09,\n 'id8': 35.97,\n 'id9': 6.2,\n 'timestamp1': datetime.datetime(2018, 1, 24, 14, 10),\n 'timestamp2': datetime.datetime(2018, 1, 24, 9, 10)},\n .\n .\n .\n .\ncursor=mydb1.mongodbbuckethour.aggregate([\n\n {\n \"$match\": {\n \"first\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2020-12-31 23:00:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n { \"$unwind\": \"$samples\" },\n{\n \"$match\": {\n \"first\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2020-12-31 23:00:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n\n{\n \"$group\": {\n \"_id\":{\n \"date\": {\"$dateToString\": { \"format\": \"%Y-%m-%d \", \"date\": \"$first\" }},\n \"id13\":\"$samples.id13\"\n }\n }\n},\n {\n \"$project\": {\n \"_id\": 0,\n \"day\":\"$date\"\n\n }\n },\n {\"$sort\": {\"day\": -1}}\n\n])\n},\n {\n \"$project\": {\n \"_id\": 0,\n \"day\":\"$_id.date\"\n\n }\n },\n {\"$sort\": {\"day\": -1}}\n\n])\n", "text": "I have this query and i want to group by {\"$dateToString\": { \"format\": \"%Y-%m-%d \", \"date\": \"$first\" } and \"id13\":\"$samples.id13\"\nMy data look like that:How to project only date and then sort by date?\nWhat am i doing wrong?\nThanks in advance!Edit: i did that and it seems to be working!:", "username": "harris" }, { "code": "", "text": "Hi Harris,I’m not sure exactly what’s going wrong just from looking at it.Are you hosting your database in Atlas? If so, you can use Atlas’s aggregation pipeline builder to build your pipeline stage by stage and see where things are going wrong. Once you get it working, you can export your pipeline to any language.Another option is to use MongoDB Compass’s aggregation pipeline builder. Compass works regardless of where your database is hosted.I never get a pipeline just right on the first try, so I really appreciate the visual pipeline builders.If you can’t figure it out after trying a visual pipeline builder, can you post a couple of sample documents from your collection and tell us what you’re trying to achieve with the pipeline?", "username": "Lauren_Schaefer" }, { "code": "", "text": "Thank you!I tried it but it doesnt seems to work.i updated my question with the documents as you told me to!", "username": "harris" }, { "code": "", "text": "Edit: i did that and it seems to be working!:Woo hoo! So you’re good to go now? 
Can we mark this question as solved?", "username": "Lauren_Schaefer" }, { "code": "", "text": "Yes of course!Thank you for your help!", "username": "harris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Group by multiple fields with pymongo
2021-04-13T16:04:59.244Z
Group by multiple fields with pymongo
4,880
null
[ "transactions", "spring-data-odm" ]
[ { "code": "class Transaction {\n@Id\npublic String id;\npublic String firstProperty;\npublic String secondProperty;\n} \n\nclass TransactionRepository extends MongoRepository<TransactionInfo , String> {\n...\n}\nTransaction transaction = new Transaction(\"T1\");\ntransaction.setFirstProperty(\"first\");\ntransactionRepository.save(transaction);\n{\n_id:123,\nfirstProperty: \"first\"\n}\nTransaction transaction = new Transaction(\"T1\");\ntransaction.setSecondProperty(\"second\");\ntransactionRepository.save(transaction);\n{\n_id:123,\nfirstProperty: \"first\",\nsecondProperty: \"second\"\n}\n{\n_id:123,\nsecondProperty: \"second\"\n}\n", "text": "Consider the following Java Class.In Java following code is executed :Following document is created.If this piece of code is executed later :Expected Document :Actual Document:From what I read in MongoDB docs I expect the document to be updated with “secondProperty” but it results in the removal of “firstProperty” . I think the document is getting created again, instead of getting updated. Please let me know if I am missing something.", "username": "Sandeep_Siddaramaiah" }, { "code": "_id$set", "text": "Hi @Sandeep_Siddaramaiah and welcome in the MongoDB Community !Save != Update.Save replaces the entire document ─ based on the _id by the new one while an update applies the transformation.See the doc:https://docs.mongodb.com/manual/reference/method/db.collection.save/#replace-an-existing-documentIf you want to add a new field, you should use an update with the $set operator.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Spring SimpleMongoRepository save behaviour
2021-04-07T20:02:37.611Z
Spring SimpleMongoRepository save behaviour
2,355
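A shell sketch of the update the answer describes; the collection name is a guess based on the thread's Transaction class, and in Spring Data the same effect comes from an update operation rather than repository.save():

```javascript
// Adds or overwrites only secondProperty; firstProperty is left untouched
db.transaction.updateOne({ _id: "T1" }, { $set: { secondProperty: "second" } })
```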
null
[ "atlas-functions" ]
[ { "code": "https://accounts.yyy.com/oauth/v2/token? refresh_token=1000.nnnnnnnnnnaa& client_id=1000.X9T5WJYBKL1ZFQOEGHU3BU3X473A3H& client_secret=2bbdd11& redirect_uri=https://google.com/& grant_type=refresh_token", "text": "Hi all,I’m trying to use a template literal to create a multiline string in a Realm function and getting the following error:\nhttp request: “url” argument must be a stringThe string is as followsvar url = https://accounts.yyy.com/oauth/v2/token? refresh_token=1000.nnnnnnnnnnaa& client_id=1000.X9T5WJYBKL1ZFQOEGHU3BU3X473A3H& client_secret=2bbdd11& redirect_uri=https://google.com/& grant_type=refresh_tokenI am using this URL when posting to an enpoint.\nI get http request: “url” argument must be a stringIf I create a single line string (no back ticks), I don’t get this error.The documentation states Realm supports template literals.Any ideas?Thanks,\nHerb", "username": "Herb_Ramos" }, { "code": "exports = function(){\n var url = `https://accounts.yyy.com/oauth/v2/token?refresh_token=1000.nnnnnnnnnnaa&\n client_id=1000.X9T5WJYBKL1ZFQOEGHU3BU3X473A3H&\n client_secret=2bbdd11&\n redirect_uri=https://google.com/&\n grant_type=refresh_token`;\n \n console.log(url);\n return true;\n};\n> ran on Tue Apr 13 2021 19:31:49 GMT+0200 (Central European Summer Time)\n> took 362.474208ms\n> logs: \nhttps://accounts.yyy.com/oauth/v2/token?refresh_token=1000.nnnnnnnnnnaa&\n client_id=1000.X9T5WJYBKL1ZFQOEGHU3BU3X473A3H&\n client_secret=2bbdd11&\n redirect_uri=https://google.com/&\n grant_type=refresh_token\n> result: \ntrue\n> result (JavaScript): \nEJSON.parse('true')\nexports = function(){\n var url = \"https://accounts.yyy.com/oauth/v2/token?refresh_token=1000.nnnnnnnnnnaa&\" \n + \"client_id=1000.X9T5WJYBKL1ZFQOEGHU3BU3X473A3H&\"\n + \"client_secret=2bbdd11&\"\n + \"redirect_uri=https://google.com/&\"\n + \"grant_type=refresh_token\";\n \n console.log(url);\n return true;\n};\n> ran on Tue Apr 13 2021 19:34:48 GMT+0200 (Central European Summer Time)\n> took 268.881812ms\n> logs: \nhttps://accounts.yyy.com/oauth/v2/token?refresh_token=1000.nnnnnnnnnnaa&client_id=1000.X9T5WJYBKL1ZFQOEGHU3BU3X473A3H&client_secret=2bbdd11&redirect_uri=https://google.com/&grant_type=refresh_token\n> result: \ntrue\n> result (JavaScript): \nEJSON.parse('true')\n", "text": "Hi @Herb_Ramos,Looks like this is working fine for me:Result:Else there is still the good old concatenation trick.Not that I get a different result though…Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Template literals
2021-04-01T19:11:30.555Z
Template literals
2,957
null
[ "crud", "golang" ]
[ { "code": "", "text": "Hi everybody!I have a document with a bool value, which I want to invert. How would I do that with golang?\nI already established that I my update function works. I just don’t know which command to use.Thank you in advance!", "username": "Tripple_U" }, { "code": "test:PRIMARY> db.coll.insertOne({bool:true})\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"6075cfffdf48fa5c4d46e8ae\")\n}\ntest:PRIMARY> db.coll.findOne()\n{ \"_id\" : ObjectId(\"6075cfffdf48fa5c4d46e8ae\"), \"bool\" : true }\ntest:PRIMARY> db.coll.updateOne({\"_id\" : ObjectId(\"6075cfffdf48fa5c4d46e8ae\")}, [{\"$set\": {bool: {\"$not\": \"$bool\"}}}])\n{ \"acknowledged\" : true, \"matchedCount\" : 1, \"modifiedCount\" : 1 }\ntest:PRIMARY> db.coll.findOne()\n{ \"_id\" : ObjectId(\"6075cfffdf48fa5c4d46e8ae\"), \"bool\" : false }\n$set$bool", "text": "Hi @Tripple_U,Sorry for the late reply but here is a solution written in the Mongo Shell which should be easy to transform into Golang ─ but sadly, I don’t speak golang yet .Note the important trick here: we are using the Aggregation Pipeline update syntax to use the $set and get access to the current value with $bool.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Invert bool value with update
2021-03-27T10:43:17.385Z
Invert bool value with update
5,850
null
[ "aggregation", "data-modeling", "mongoose-odm" ]
[ { "code": "var BookingSchema= new Schema(\n {\n userID: {type: Schema.Types.ObjectId , ref:'Users'},\n date_finished:{type: Date},\n date_started: {type:Date},\n Project_title: {type:String},\n Project_desc: {type: String},\n total_cost: {type :Number},\n resourceID: {type: Schema.Types.ObjectId,ref:'Resources'},\n timestamp: {type:Date}\n\n }\n);", "text": "Hello guys. I have implemented a simple booking system. What i want is to group Bookings for a user by year and by month(based on date_started) , calculate the total cost for every month, and also carry some additional properties like resource id in order to display them in my browser. Have anyone some ideas how to implement that? Below is my collection schema iam using .Thank you in advance", "username": "petridis_panagiotis" }, { "code": "$groupuser+year+month,total costresource id", "text": "Hello @petridis_panagiotis, You can use an aggregation query (db.collection.aggregate()) to get the desired result. The aggregation stage $group allows you group by user+year+month, and calculate (or sum) the total cost (for the month).See $group aggregation example. You can use the aggregation operator $first to capture other fields like resource id from within the group stage.", "username": "Prasad_Saya" }, { "code": "", "text": "Hello i read what $first is but what i understood this carries other fields only for the first doc. To be more precise i will give my favourite result.\nYear 2021\nJanuary\n[{Booking id:124,resource id total_cost},{Booking id:453,rrsource id total_cost} , month cost:40]\nFeb the same\nMarch same\n…\n2022 the same", "username": "petridis_panagiotis" }, { "code": "", "text": "@petridis_panagiotis, please include a sample input document from your collection.", "username": "Prasad_Saya" }, { "code": "[\n {\n _id: 607188c53f598d0015d362a7,\n resourceID: { _id: 606f58733d3c8b2f50f78a79, name: 'Sillicon Waffer Tool' },\n date_started: 2021-06-21T11:00:00.000Z,\n date_finished: 2021-06-23T11:30:00.000Z,\n total_cost: 4.800000000000001\n },\n {\n _id: 607188913f598d0015d362a5,\n resourceID: {\n _id: 606f5b2e3d3c8b2f50f78a7b,\n name: 'Photolithography Laboratory'\n },\n date_started: 2021-06-10T11:00:00.000Z,\n date_finished: 2021-06-11T11:30:00.000Z,\n total_cost: 12\n },\n {\n _id: 607188313f598d0015d362a3,\n resourceID: { _id: 606f5a423d3c8b2f50f78a7a, name: 'Nanophotonics Classroom' },\n date_started: 2021-05-10T11:00:00.000Z,\n date_finished: 2021-05-11T11:00:00.000Z,\n total_cost: 9.600000000000001\n },\n {\n _id: 606f7d03c297a70015be79da,\n resourceID: { _id: 606f58733d3c8b2f50f78a79, name: 'Sillicon Waffer Tool' },\n date_started: 2021-04-19T01:00:00.000Z,\n date_finished: 2021-04-21T01:00:00.000Z,\n total_cost: 4.800000000000001\n }\n]\n", "text": "Ok, let’s go. Below i give a sample datasetNow, i will show you what at the end would like to have in my browser. So this page implemented with some silly array if-statements,hence the way to structurize like that my data was very struggling. As you can see from the image, in the link bar we take the user id from the parameter and query for this user. Then for one year( my dataset has at this time bookings only in 2021) we display bookings for every month and calculate the monthly cost (like June in my image). So, finally, is there any query that will free me up from complex and multiple arrays and array loop iterations? 
.If my desired result is imposibble using one mongodb command and maybe a few javacript iterations, i will again do that using nested arrays172262066_1407676396253012_5467754003057423619_n1366×768 96.9 KB", "username": "petridis_panagiotis" }, { "code": "db.collection.aggregate([\n{ \n $match: { _id: INPUT_USER_ID } \n},\n{ \n $group: {\n _id: { year: { $year: \"$date_started\" }, month: { $month: \"$date_started\" } },\n total_cost_month: { $sum: \"$total_cost\" }\n }\n}\n])\n{ \n \"_id\" : { \"year\" : 2021, \"month\" : 6 }, \n \"total_cost_month\" : 16.8 \n}", "text": "You can use this aggregate to start with and refine it as per your formatting:A sample output for a year+month would be:", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you very much. In addition i would like to display some properties for every booking of each month, like in my image. You can see under the Month name there are the details of their respective bookings, is that possible?", "username": "petridis_panagiotis" }, { "code": "db.test.aggregate([\n{ \n $match: { _id: INPUT_USER_ID } \n},\n{ \n $group: {\n _id: { \n year: { $year: \"$date_started\" }, \n month: { $month: \"$date_started\" } \n },\n total_cost_month: { $sum: \"$total_cost\" },\n bookings_month: { \n $push: { \n date_started: \"$date_started\",\n date_finished: \"$date_finished\",\n total_cost: \"$total_cost\" \n } \n }\n }\n}\n])", "text": "You can see under the Month name there are the details of their respective bookings, is that possible?Yes, that is possible - try this one:", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you very much, iam looking forward to try and i will respond to you asap\nThank you!!! ", "username": "petridis_panagiotis" }, { "code": "", "text": "So we have results… It works perfect!. Thank you very much Mister Prasad Saya iam very thankfull to you and iam to your disposal whenever you want .", "username": "petridis_panagiotis" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Group By year-month
2021-04-12T15:09:33.298Z
Group By year-month
31,217
null
[]
[ { "code": "", "text": "How can I calculate the optimal total connection count from my service to the my mongo DB endpoint? Is there a basic formula based on expected number of queries per second and CPU and IO taken by each query?Similarly, is there a formula to calculate the optimal database instance type/size to use based on traffic patterns and query characteristics (CPU, IO consumed or latency of query)?Note: By instance type I mean similar to EC2 instance type which provides info on vCPU and memory (RAM)I will be using this to create the connection pool in my service. I’m assuming that if my service has N hosts then per host the connection pool size need to be the total optimal connection count divided by N.", "username": "Shruthi_s1" }, { "code": "=================================\nConnection Monitoring and Pooling\n=================================\n\n:Status: Accepted\n:Minimum Server Version: N/A\n\n.. contents::\n\nAbstract\n========\n\nDrivers currently support a variety of options that allow users to configure connection pooling behavior. Users are confused by drivers supporting different subsets of these options. Additionally, drivers implement their connection pools differently, making it difficult to design cross-driver pool functionality. By unifying and codifying pooling options and behavior across all drivers, we will increase user comprehension and code base maintainability.\n\nMETA \n====\n\nThe keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in `RFC 2119 <https://www.ietf.org/rfc/rfc2119.txt>`_.\n\nDefinitions\n", "text": "Hi,MongoDB Drivers already manage connection pools so I don’t think it’s a good idea to add another home made connection pool on top of it.For the sizing, there are no magic formulas. It can be completely different from one use case to another. It depends on your read/write ratio, your hardware, perf requirements, etc.You can have a look to the pricing and cluster tier MongoDB Atlas provides to get an idea of what MongoDB generally recommends for the ratio storage - RAM - CPU.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Calculating connection count and DB server type
2021-03-24T00:03:34.486Z
Calculating connection count and DB server type
3,552
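Since the drivers already manage pooling, the sizing knob is a client option rather than something to rebuild. A minimal illustration; the host, credentials, and value are placeholders:

```javascript
// Most drivers accept maxPoolSize directly in the connection string
const uri = "mongodb+srv://user:[email protected]/test?maxPoolSize=100";
```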
https://www.mongodb.com/…6_2_599x1024.png
[ "data-modeling" ]
[ { "code": "holes", "text": "Problem: I have a $cursor with documents as a “vertical” data source. I need to show this on a sub-table “horizontally”, i.e. to pivot the data.This is possible with RDBMS, though requires extra querying to process.What is the MongoDB / document DB equivalent? Or have I modelled completely incorrectly? (First use case…)Each player has a “round” document, with each hole played as a sub-document. Also, can someone confirm if this model is embedded or linked? Currently, holes is an array of Objects, so I guess linked? Is this beneficial?Screenshot 2021-04-10 at 22.15.47766×1308 101 KBI would like to have a display like most websites or TV coverage, with the players name and score, and a lower table of the full round breakdown per hole.", "username": "Dan_Burt" }, { "code": "", "text": "The display would be something like Golf Channel’s:image2308×1342 310 KB", "username": "Dan_Burt" }, { "code": "", "text": "Hi @Dan_BurtI think you can use an aggregate and unwind stage to transform each array to an object.Than lay it out on the grid…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "holes", "text": "Thanks @Pavel_DuchovnyMy knowledge was that each query from the MongoDB would need (or already include) this aggregation. But I have already “received” the object from MongoDB, so I don’t see how I can aggregate or transform this sub-array. Would this be processed entirely in PHP? Any pointers for help?With my current thinking, it would only be this holes array that would require this treatment.Secondly, as part of my indirect question where I have used this sub-document / sub-object - would it be easier to use this as a normal array of data types and not another MongoDB Object? Pro’s, Con’s, etc?", "username": "Dan_Burt" }, { "code": "db.players.aggregate([{\"$match\" : {playerId : ...}}, {\"$unwind\" : \"$holes\" }]);\n", "text": "Hi @Dan_Burt,I am not sure I understand your second question… In my opinion the data model of keeping player statistics per game is ok as amount of holes is not dramatically large, so you won’t hit unbound arrays antipatterns.Now I am not sure what do you mean you get the document in php as is, how do you query it? Via a php driver?Why can’t you do:Of course php syntax for aggs is a bit different.If you want to do it on client side parse the document as you wish…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "roundholes$collection = $client->golf->leaderboard;\n\n$filter = [];\n$options = ['sort' => ['score' => 1, 'throughHole' => -1]];\n$cursor = $collection->find($filter, $options);\n$cursorholesround", "text": "Currently, I already have a single query to get back a list of round documents, which contains this holes sub-document / sub-array.The filter will change in future based upon the chosen competition. But I only have sample data currently, so not applying any filter currently. I am just applying sorting to the query. The PHP syntax for this is:I then iterate through this $cursor object for my leaderboard display.I had hoped that this same single query could be used, as the holes array is included in my resultset.Your suggestion would be to use an entirely different query to get to this data. So this would require 2 queries (I expect)? As I need my first query to sort the main round documents first. Or can I complete both parts of the page display in the same aggregate query? 
(sorting first, then doing this extra aggregation / processing on the sub-documents)The other option I was asking about is to just keep my original query, with no aggregation, and then do the processing in PHP instead. I guess this is a MongoDB forum, but I was wondering if anyone could provide pointers for how to do this in PHP? My reading & searching all points to similar DB queries to perform the aggregation. Not to “pivot” an array variable.", "username": "Dan_Burt" }, { "code": "", "text": "Hi @Dan_Burt,You can do the sorting and filtering in the first stages of the aggregation and unwind after. So it will be one query that does it all.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
"Pivot" table with MongoDB & PHP
2021-04-10T21:27:52.305Z
&ldquo;Pivot&rdquo; table with MongoDB &amp; PHP
5,500
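A rough mongosh version of the single pipeline suggested in the last reply; the collection and sort fields are taken from the PHP snippet in the thread, and a competition filter would sit in a leading $match stage:

```javascript
db.leaderboard.aggregate([
  { $sort: { score: 1, throughHole: -1 } },  // order the round documents first
  { $unwind: "$holes" }                      // then emit one document per hole for the per-hole table
])
```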
https://www.mongodb.com/…4_2_1024x381.png
[ "aggregation", "java" ]
[ { "code": "new Document(\"tags\",\n new Document(\"$elemMatch\",\n new Document(\"$regex\", keywords).append(\"$options\", \"i\")))\nFilters.elemMatch(\"tags\", Filters.regex(\"tags\", keywords, \"i\"));\n public List<Bookmark> getBookmarksBySearchKeywords(String userId, String keywords) {\n if (userId == null || keywords == null) return null;\n\n List<Bson> pipeline = new ArrayList<>();\n Bson matchUserId = Aggregates.match(new Document(\"user_id\", userId));\n Bson matchFields = Aggregates.match(Filters.or(\n Filters.regex(\"title\", keywords, \"i\"),\n Filters.regex(\"description\", keywords, \"i\"),\n Filters.regex(\"url\", keywords, \"i\"),\n Filters.regex(\"dateCreated\", keywords, \"i\"),\n new Document(\"tags\",\n new Document(\"$elemMatch\",\n new Document(\"$regex\", keywords)\n .append(\"$options\", \"i\")))\n// Filters.elemMatch(\"tags\", Filters.regex(\"tags\", keywords, \"i\"))\n ));\n Bson sortByTitle = Aggregates.sort(Sorts.ascending(\"title\"));\n pipeline.add(matchUserId);\n pipeline.add(matchFields);\n pipeline.add(sortByTitle);\n\n List<Bookmark> bookmarks = bookmarksCollection.aggregate(pipeline).into(new ArrayList<>());\n return bookmarks;\n }\n", "text": "I’m having trouble utilizing the MongoDB Java driver’s helper methods to simplify a query on my collection.Goal: the user should be able to search all bookmarks by title (string), description (string), url (string), created date (Date), and in an array of tags (string[ ]).The aggregation pipeline in Compass spat out this code, which DOES WORK to search the elements in the tags array:I’ve looked at the Java documentation for elemMatch and regex, but the method’s parameters don’t seem to work for this type of query My whole function:Here’s an example of a bookmark in the database. I also need to change the front-end Vue app to submit the dateCreated field as a Date, rather than a string!image1240×462 71.9 KBAny help is greatly appreciated!", "username": "Ian_Goodwin" }, { "code": "tags$matchFilters.regex(\"tags\", keywords, \"i\")$elemMatch$elemMatchstock: [ { qty: 10, price: 5.85 }, { ... }, ]$elemMatch", "text": "Hello @Ian_Goodwin, you can use this code to filter the tags array in your $match stage of the aggregation (worked with Java Driver v3.12.2 and MongoDB v4.2.8):Filters.regex(\"tags\", keywords, \"i\")Please note the $elemMatch is not required in this case; $elemMatch is required when you are querying on multiple fields of a sub-document of an array field (for example, stock: [ { qty: 10, price: 5.85 }, { ... }, ], and if you are querying on both fields of the array field then use $elemMatch).", "username": "Prasad_Saya" }, { "code": "$elemMatch", "text": "That makes sense. Thanks for helping me understand $elemMatch, that does the trick!", "username": "Ian_Goodwin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Java helper functions - Query an array with a regex in an aggregation pipeline
2021-04-12T21:23:42.863Z
Java helper functions - Query an array with a regex in an aggregation pipeline
5,734
null
[]
[ { "code": "mongodumpmongoexportmongoexport --uri=\"mongodb://my-username:[email protected]:27017,replica-2.mongodb.net:27017,replica-3.mongodb.net:27017/database?replicaSet=replica-set\" --collection=collection-name --out=collection-name.json --ssl", "text": "During my studying of MongoDB I tried to dump a collection from training course’s mongo database. I tried both mongodump and mongoexport . My shell command is like\nmongoexport --uri=\"mongodb://my-username:[email protected]:27017,replica-2.mongodb.net:27017,replica-3.mongodb.net:27017/database?replicaSet=replica-set\" --collection=collection-name --out=collection-name.json --sslWhen I run the command, nothing happened: no response, no new local files, no errors, only cursor flashes.How can I export collections by the uri ?", "username": "Yuriy_Lykhusha" }, { "code": "mongoexport --uri=\"mongodb+srv://user:[email protected]/sample_supplies\" --collection=xyz --out=xyz.json \n", "text": "Try uri stringmodify above to suit your cluster", "username": "Ramachandra_Tummala" }, { "code": "ps", "text": "@Ramachandra_Tummala As I tried this on my machine I got:\nOn some systems, a password provided directly in a connection string or using --uri may be visible to system status programs such as ps that may be invoked by other users. Consider omitting the password to provide it via stdin, or using the --config option to specify a configuration file with the password.", "username": "Prashant_Panchal" }, { "code": "", "text": "That’s true\nJust omit password from your command\nIt will prompt for password\nGive the password and your command should workYou can explore config file with password option also", "username": "Ramachandra_Tummala" } ]
Mongoexport / mongodump with --uri
2021-01-23T20:00:51.426Z
Mongoexport / mongodump with &ndash;uri
6,700
null
[]
[ { "code": "show users\n{\n\t\"_id\" : \"admin.gott\",\n\t\"userId\" : UUID(\"53c1829c-af0b-4ca6-8370-ef0b66502e3d\"),\n\t\"user\" : \"gott\",\n\t\"db\" : \"admin\",\n\t\"roles\" : [\n\t\t{\n\t\t\t\"role\" : \"root\",\n\t\t\t\"db\" : \"admin\"\n\t\t}\n\t],\n\t\"mechanisms\" : [\n\t\t\"SCRAM-SHA-256\"\n\t]\n}\n", "text": "Hi,\nI have version 4.4.4 running with authorization enabled.\nAfter creating the admin user I try to limit the access from local host only. But there will no restrictions added.Then call:db.runCommand({updateUser:“gott”,authenticationRestrictions:[{clientSource:[\"::1\",“127.0.0.0/7”]}]})\n{ “ok” : 1 }But after call usersInfo again the authentication restrictions are lost.db.runCommand({usersInfo: 1})\n{\n“users” : [\n{\n“_id” : “admin.gott”,\n“userId” : UUID(“53c1829c-af0b-4ca6-8370-ef0b66502e3d”),\n“user” : “gott”,\n“db” : “admin”,\n“roles” : [\n{\n“role” : “root”,\n“db” : “admin”\n}\n],\n“mechanisms” : [\n“SCRAM-SHA-256”\n]\n}\n],\n“ok” : 1\n}", "username": "MDC_MDC" }, { "code": "db.runCommand({usersInfo: 'gott', showAuthenticationRestrictions:true})", "text": "Hi @MDC_MDC\nTry db.runCommand({usersInfo: 'gott', showAuthenticationRestrictions:true})", "username": "chris" } ]
Authentication restrictions look like they are not working
2021-04-13T06:21:45.170Z
Authentication restrictions look like they are not working
1,661
null
[ "upgrading" ]
[ { "code": "", "text": "Hello Everyone,\nI’m trying to find best solution for migration all MongoDB cluster from 3.4 to 4.4. the old system is running on MongoDB 3.4 on RedHat 7.7, and I want to move all data with user information to MongoDB 4.4 on Redhat 7.9 on another server. I undertand that I can’t use mongodump/mongorestore utility since it didn’t include user information. I also understand that I can’t use mongoexport since it’s used for collection backup, not whole Database backup. So, I need your help to find the best solution for migrating all DB (not upgrading) to newer MongoDB on another server.Thank you all in advance.Regards,\nAhmet", "username": "Ahmet_Gunata" }, { "code": "", "text": "Hi @Ahmet_GunataAs RHEL 7 supports mongodb 4.4 I would upgrade in-place then move. You don’t mention a replicaset so I will assume a standalone.Start with a backup.\nI would follow the upgrade path 3.4 → 3.6 → 4.0 → 4.2 → 4.4 (details in each versions release notes). Read the procedures carefully and practice if you can.\nThen shut down the database. Copy/Rsync the data directory(recursively) and configuration to the new host.\nStart mongo 4.4 on the new host.Or start with mongodb 3.4 on the new system and do the upgrades there.Also ensure your client app drivers are using a version that supports 4.4", "username": "chris" }, { "code": "", "text": "Hi @chris,\nThank you for answer. Actually I should add more details to my questions.\nI’m planning to move some databases in old MongoDB (3.4) to current MongoDB (4.4) which is currently used by some applications.\nI think I have to upgrade old MongoDB to newer version as you mentioned , then do mongodump/mongorestore to move databases into current MongoDB.\nWhen I try to do dump/restore directly (dump from 3.4 restore to 4.4), I didn’t get any error on my test environment. Do you think that I can use this method? Is there any restriction about this method?\nRegards,\nAhmet", "username": "Ahmet_Gunata" }, { "code": "", "text": "When I try to do dump/restore directly (dump from 3.4 restore to 4.4), I didn’t get any error on my test environment. Do you think that I can use this method? Is there any restriction about this method?If it works go for it. ", "username": "chris" } ]
MongoDB Migration from 3.4 to 4.4
2021-04-12T07:35:52.793Z
MongoDB Migration from 3.4 to 4.4
11,151
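Each hop on the 3.4 -> 3.6 -> 4.0 -> 4.2 -> 4.4 path described above ends by raising the feature compatibility version before the next upgrade; a sketch of the first hop:

```javascript
// After the 3.6 binaries are running
db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })

// Verify before moving on to 4.0
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
```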
null
[]
[ { "code": "", "text": "Hello,We are having lock already acquired issue when restoring on atlas from dump. There are many collection so somehow we are not able to trace on whichWe tried to multiple options to trigger/flush release locks but none of ones are working like (db.fsyncLock(), db.killOp()) but it always says that admin does not have rights to execute these commands.Is there no way to run Admin Commands on Mongo Atlas from CLI even we have Admin Privileges? Example we think db.killop() may work but we are having errors that we don’t have permission to run these commands.How can we identify which collection exactly creating lock issue. It’s problem with write locks.", "username": "Ravi_Shah" }, { "code": "", "text": "Hi @Ravi_Shah,Welcome to MongoDB community.You should be able to kill operations with a cluster admin through the real-time panel on your node via Atlas UI.This UI will also show you waiting operations and their statistics.There you might find what operations are blocking. In general mongodump and restore might require strong locks and we recommend considering other backup and restore options that Atlas have (build in backup) or use secondary nodes to perform the dump…If this matter is urgent I suggest you open a high savirity support case with our support under the support tab.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Mongo atlas collection lock already acquired, can not release with atlas admin role
2021-04-13T09:50:10.182Z
Mongo atlas collection lock already acquired, can not release with atlas admin role
2,527
null
[ "swift" ]
[ { "code": "", "text": "Hi there,I’ve recently migrated some code in one of my projects from using a ‘self-built’ ObservedResults solution with a UITableView to the one provided by Realm Swift with a SwiftUI native view using LazyVStack and ForEach - thanks for implementing that by the way.The issue I’m facing now is that as my app runs and continues to modify/append items to the database (in collections OTHER to the ones being displayed) for some reason the number of active versions continues to grow and eventually crashes the application.\nEither with a message indicating the maximum number of application versions being exceeded or if I remove that limit with another more nefarious looking crash reason significantly later on.I believe I’ve isolated this to the use of LazyVStack/List/ForEach as if I remove this code or iterate over the Results indices instead of the Results directly the application does not eventually crash.Any ideas for fixing this pinning issue?\nI’d rather get this working than revert to the UITableView + ObservationToken implementation.Thanks!!!", "username": "Elias_Court" }, { "code": "", "text": "Actually - this appears to happen more generally with the use of ObservedRealmObject as well… ", "username": "Elias_Court" }, { "code": "", "text": "It’s difficult to say considering your solution is ‘self-built’, but– given the existing implementation of ObservedResults and ObservedRealmObject, there is no way around the pinning issue. Unfortunately, by having to use frozen objects to allow for SwiftUIs data binding to function properly, version pinning is unavoidable.We do hope to fix this in the future though.", "username": "Jason_Flax" } ]
Number of active versions increases continuously using SwiftUI ForEach and ObservedResults
2021-04-13T08:57:48.301Z
Number of active versions increases continuously using SwiftUI ForEach and ObservedResults
2,300
https://www.mongodb.com/…a_2_1024x554.png
[]
[ { "code": "", "text": "How would I write a query which checks the createdAt and finds all objects between 01-01-2021and 31-04-2021? I tried the following query, but it didn’t give back the right data(see image):\n{createdAt:{$gte:“01-03-2021”,$lt:“31-03-2021”}}image1920×1040 74.6 KBits weird because when i use the filter function from MongoDB Charts(without the query bar), it will show the requested data.", "username": "Ruben" }, { "code": "ISODate{createdAt:{$gte:ISODate(\"2021-01-01\"),$lt:ISODate(\"2020-05-01\"}}\n", "text": "Hi @Ruben -Three things:Putting these together, the query you likely want is:HTH\nTom", "username": "tomhollander" }, { "code": "", "text": "Hi Tom,thanks for your help. After some small fixes the query below worked!{createdAt:{$gte:ISODate(“2020-03-01”),$lt:ISODate(“2021-03-31”)}}", "username": "Ruben" }, { "code": "", "text": "Thanks for this Ruben. The first code got me close, but didn’t work.", "username": "Mario_Yip" }, { "code": "{createdAt:{$gte:ISODate(“2020-03-01”),$lt:ISODate(“2021-04-01”)}}\n", "text": "Great! Yeah the dates in your screenshot and in your text didn’t match so I wasn’t sure what range you wanted.\nKeep in mind that the query you are using returns data from the beginning of March 1st to the end of March 30th, i.e. it does not cover March 31st at all. Maybe that’s what you want but it seems unlikely. To cover the full month of March you need this:", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Finding data between two dates by using a query in MongoDB Charts
2021-04-12T14:00:36.136Z
Finding data between two dates by using a query in MongoDB Charts
368,039
null
[]
[ { "code": "", "text": "Hi,I would like to port some changes that I developed for MongoDB v4.2 to v4.4, however I’m not able to build the database of the latter branch.Build command:python3 buildscripts/scons.py CC=gcc-8 CXX=g++-8 mongodErrors:src/mongo/db/auth/role_graph_builtin_roles.cpp: In function ‘mongo::Status mongo::{anonymous}::_mongoInitializerFunction_AuthorizationBuiltinRoles(mongo::InitializerContext*)’:\nsrc/mongo/db/auth/role_graph_builtin_roles.cpp:182:24: error: ‘getDefaultRWConcern’ is not a member of ‘mongo::ActionType’\n<< ActionType::getDefaultRWConcern // clusterManager gets this also\n^~~~~~~~~~~~~~~~~~~\nsrc/mongo/db/auth/role_graph_builtin_roles.cpp:247:24: error: ‘getDefaultRWConcern’ is not a member of ‘mongo::ActionType’\n<< ActionType::getDefaultRWConcern // clusterMonitor gets this also\n^~~~~~~~~~~~~~~~~~~\nsrc/mongo/db/auth/role_graph_builtin_roles.cpp:248:24: error: ‘setDefaultRWConcern’ is not a member of ‘mongo::ActionType’\n<< ActionType::setDefaultRWConcern\n^~~~~~~~~~~~~~~~~~~\nsrc/mongo/db/auth/role_graph_builtin_roles.cpp:258:24: error: ‘refineCollectionShardKey’ is not a member of ‘mongo::ActionType’\n<< ActionType::refineCollectionShardKey;\n^~~~~~~~~~~~~~~~~~~~~~~~\nsrc/mongo/db/auth/role_graph_builtin_roles.cpp: In function ‘void mongo::{anonymous}::addEnableShardingPrivileges(mongo::PrivilegeVector*)’:\nsrc/mongo/db/auth/role_graph_builtin_roles.cpp:311:49: error: ‘refineCollectionShardKey’ is not a member of ‘mongo::ActionType’\nenableShardingActions.addAction(ActionType::refineCollectionShardKey);\n^~~~~~~~~~~~~~~~~~~~~~~~\nCompiling build/opt/mongo/s/client/parallel.o\"It seems like scons is not able to generate the actionType’s source file, what should I do?Thank you", "username": "Zikker" }, { "code": "build", "text": "Can you provide some additional details?", "username": "Andrew_Morrow" }, { "code": "", "text": "Hi,Thank you for your response.\nCurrently I’m trying to build branch v4.4, without my changes, from commit 7aa1b65641938719accd595bda3e45e97dc5f475.\nThe clean start did actually work, as the errors no longer show, however I’m now experiencing a new error (which also I’ve never encountered in branch v4.2):…\nLinking build/opt/mongo/mongod\n/usr/bin/ld.gold: out of memory\ncollect2: error: ld returned 1 exit status\nscons: *** [build/opt/mongo/mongod] Error 1\nscons: building terminated because of errors.\nbuild/opt/mongo/mongod failed: Error 1I monitored the RAM usage throughout the compilation process, and I always have around 6-10GB of free memory. Do you have any idea?", "username": "Zikker" }, { "code": "--link-model=dynamic", "text": "No, sorry. No specific ideas. The server build is very resource intensive. For development purposes on v4.4 you could try building with --link-model=dynamic which definitely will reduce your memory needs. However, the binaries you produce are not production quality. You may just need to find a bigger machine (AWS?) if you plan to produce static production quality binaries.", "username": "Andrew_Morrow" }, { "code": "", "text": "Your parameter did the trick, however I’m using a lab server shared between several people, so it might be the case that it was previously heavy loaded, thus the out of memory error; I’ll have to look into it. Anyway, I would like to measure latencies and gather some real-world results with this build, is it “good enough”?", "username": "Zikker" }, { "code": "", "text": "I wouldn’t recommend it. 
There is a currently unquantified performance cost for dynamic build, and we only use it for correctness builds, never performance.", "username": "Andrew_Morrow" }, { "code": "", "text": "Thanks for your help!\nRegardless, I’ve never encountered this memory error on v4.2, I’ll also try to give a look at the build process to see if something changed", "username": "Zikker" }, { "code": "", "text": "Many things changed between 4.2 and 4.4, both in the build system and the codebase. Also, the v4.4 codebase is almost certainly just bigger than v4.2 was.", "username": "Andrew_Morrow" }, { "code": "", "text": "tracked down this problem to the use of the gold linker (-fuse-gold). Removing this flag from the scripts, and/or replacing with -fuse-lld, it links successfully (mongod v4.4, Ubuntu 18.04.3 LTS).\nwith the regular build process, the gold linker fails systematically on a 128GB RAM server (but memory is NOT the problem, the ld process grows to use a minimum amount of RAM before failing with OoO)", "username": "Tommaso_Cucinotta" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Build from source error in MongoDB v4.4
2021-02-02T17:53:47.103Z
Build from source error in MongoDB v4.4
5,423
null
[ "backup" ]
[ { "code": "", "text": "Greetings,\ni would like to ask if this field will be 100% consistent even if i restore from the backup and let reindex the whole DB.Or if i need to make some relations without embedding i should use additional incremental id.Thank you", "username": "Ukro_Ukrovic" }, { "code": "_idmongodumpmongorestore", "text": "Hi @Ukro_Ukrovic,_id (and other field values) will not change as a result of indexing or mongodump and mongorestore.Regards,\nStennie", "username": "Stennie_X" } ]
_id - ObjectId consistency
2021-04-12T13:06:13.991Z
_id - ObjectId consistency
2,477
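A small mongosh sketch of the pattern this answer supports: because the ObjectId lives in the document itself rather than in any index, it can be used directly as a reference between collections without adding an extra incremental id. The collection and field names below are invented for illustration.

```javascript
// _id values are stored in the documents, so re-indexing or a
// mongodump/mongorestore cycle cannot change them.
const author = db.authors.insertOne({ name: "Ada" });

// Store the ObjectId as a plain reference field on the related document.
db.posts.insertOne({ title: "Hello", author_id: author.insertedId });

// Resolve the relation later through the stored ObjectId.
db.posts.aggregate([
  { $lookup: { from: "authors", localField: "author_id",
               foreignField: "_id", as: "author" } }
]);
```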
null
[ "dot-net" ]
[ { "code": "BsonSerializer.RegisterSerializer(new GuidSerializer(GuidRepresentation.Standard));\n BsonDocument doc = new BsonDocument\n {\n {\"Standard\" ,new BsonBinaryData( new Guid( \"1A76AD2A-4FF6-4291-860E-51C5C34AA890\"),GuidRepresentation.Standard) },\n {\"CSharpLegacy\" ,new BsonBinaryData( new Guid( \"1A76AD2A-4FF6-4291-860E-51C5C34AA890\"),GuidRepresentation.CSharpLegacy) }\n };\ndoc = { {\n\t\t\"Standard\": CSUUID(\"1a76ad2a-4ff6-4291-860e-51c5c34aa890\"),\n\t\t\"CSharpLegacy\": CSUUID(\"1a76ad2a-4ff6-4291-860e-51c5c34aa890\")\n\t}\n}\ndoc = { {\n\t\t\"Standard\": UUID(\"1a76ad2a-4ff6-4291-860e-51c5c34aa890\"),\n\t\t\"CSharpLegacy\": CSUUID(\"1a76ad2a-4ff6-4291-860e-51c5c34aa890\")\n\t}\n}\n", "text": "Hi,I did a small test regarding Guid serialization with the c# driver 2.11.Here is my codeWhen looked the the document , it looked like this:I would expect it to look like this:Am I right?Thanks,\nItzhak", "username": "Itzhak_Kagan" }, { "code": "var collectionSettings = new MongoCollectionSettings\n{\n GuidRepresentation = GuidRepresentation.Standard\n};\nvar docCollection = Database.GetCollection<BsonDocument>(\"Document\", collectionSettings);\n", "text": "It looks like that somehow it’s only possible for MongoDB.Driver to have only one type of UUID in a collection. So you just can’t create UUID in one field or object and CSUUID in another.UPD Source: GuidRepresentationModeI struggled with this cause I had CSUUID’s in my collection but want to add UUID. In the end I setup MongoCollectionSettings like thisand got an error “GuidRepresentation Standard is only valid with subType UuidStandard, not with subType UuidLegacy”After that I cleared all data in the the collection and error is gone. Instead of CSUUID my document was saved with UUID", "username": "Victor_Trusov" } ]
Mongodb c# driver 2.11 Guid serialization issue
2020-08-22T16:07:33.598Z
Mongodb c# driver 2.11 Guid serialization issue
6,102
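As a complement to the C# snippets in this thread, a quick mongosh check (the collection name is assumed from the example) makes visible which BSON binary subtype was actually written: standard GUIDs are subtype 4, the C#-legacy representation is subtype 3.

```javascript
// Depending on the shell, subtype 4 prints as UUID("...") while the
// legacy representation prints as BinData(3, "...") / Binary(..., 3),
// so a simple findOne() shows how each field was serialized.
printjson(db.Document.findOne({}, { Standard: 1, CSharpLegacy: 1 }));
```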
null
[ "installation", "c-driver" ]
[ { "code": "", "text": "Hello\nI have been trying to install and build the mongo c and mongo cxx libraries on my windows machine for an entire week now, and haven’t been able to do it. When I try to build the mongo cxx driver, I get an error that says moreless the following \"cannot find package LIBBSON-1.0; didn’t provide a libbson-1.0-config.cmake file. After a few days, I realized that obviously the mongo c driver hadn’t been properly built, since libbson and libmongoc had been installed with the Ubuntu bash shell, and Windows is not able to access that files and if it was, it could not compile them because they belong to different operating systems. My question here is: could you please provide a few steps to install libbson and libmongoc on windows (not ubuntu or macOS or linux) and a really simple guide to build the rest of the libraries? Something really simple, just the commands to type in order to build them properly. I have been trying to do this for a really long time, and I already feel that I need help from true experts. I would be really grateful with any kind of help. Thank you in advance!", "username": "Josemi_Quilez" }, { "code": "", "text": "PD: I have already tried installing it following the instructions on the official guide, and it didn’t work for me. I tried to install it combining Windows cmd and Ubuntu bash shell, and it didn’t work. I cannot find any way to execute some commands on windows cmd, such as installing libbson and libmongoc", "username": "Josemi_Quilez" }, { "code": "", "text": "@Josemi_Quilez, the instructions you reference have been around for quite some time and have worked for many users. Please have a look at the guidelines for asking for help with C driver build failures and then provide the additional necessary information. That should enable us to provide more specific assistance.", "username": "Roberto_Sanchez" }, { "code": "", "text": "Hello @Roberto_Sanchez:\nThanks for your help. I haven’t been able to do many things today unfortunately, but tomorrow I will post a new comment with the neccesary instructions to test the error. I order to test it, I will also post a github repository so you can reproduce the error on your machine (it is pretty straightforward). To give you a basic idea about it, when I compile a node addon on Ubuntu including the drivers it works properly, but when I try to do the same on Windows it returns syntax error in relation to view.hpp and element.hpp mongocxx library files (extremely rare I think, it detects the syntax of the files from the built driver as invalid). I have tested every possible solution I came across and up with over a week, and haven’t been successful.\nAnyway, tomorrow (in 20 hours aprox) I will post all the instructions required and the github repository.\nA great thanks for your help again, and sorry for the delay!\nYours faithfully,\nJosemi_Quilez", "username": "Josemi_Quilez" } ]
Building mongo-c-driver on Windows
2021-04-09T20:36:42.096Z
Building mongo-c-driver on Windows
3,642
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hi there,\nI could not find documentation for how to remove user by webSDK. Lets say a user wants to DELETE his/her user account. is there a realm function for this? Kindly tell me a safe right way of doing this.Notice I dont mean deleting the user form Realm UI. I know this.Kind regards,\nBehzad Pashaie", "username": "Behzad_Pashaie" }, { "code": "", "text": "Dear @Rasvan_Andrei_Dumitr you have replied as the question is solved. Please remove the solved tag.\nThanks", "username": "Behzad_Pashaie" }, { "code": "", "text": "Dear @Behzad_Pashaie I have deleted my reply but it will take 24h.Excelent question, by the way!Best Regards\nRasvan", "username": "Rasvan_Andrei_Dumitr" }, { "code": "", "text": "Thanks dear @Rasvan_Andrei_Dumitr. Yes this is actually a major issue specially regarding GDPR. Hope fully we get an answer soon ", "username": "Behzad_Pashaie" }, { "code": "", "text": "A user does not have the privilege to delete their user via the user-facing API, but you could setup a function that deletes the user via the Realm Admin API. This way you can ensure that any documents “owned” by that user is also being properly deleted.", "username": "kraenhansen" }, { "code": "", "text": "Hi @kraenhansen ,\nThanks for the solution. I work on this approach.Kind regards, Behzad", "username": "Behzad_Pashaie" }, { "code": "", "text": "Dear @Rasvan_Andrei_Dumitr some solution is here", "username": "Behzad_Pashaie" }, { "code": "", "text": "Dear @kraenhansen ,\nrelated to the same issue. Can you kindly say what is the difference between apiKey generated from Project Access manager & apiKey generated from Organisation Access manger?\nAnd Which of them should be user to generate and API key to use the API for deleting user ?Thanks ", "username": "Behzad_Pashaie" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to remove/delete user account for webSDK
2021-04-08T10:47:58.769Z
How to remove/delete user account for webSDK
3,114
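A rough sketch of the Admin API approach suggested in this thread, written as a Realm function the user could call to delete their own account. The endpoint paths, the project API key stored as values/secrets, and the owner_id cleanup step are assumptions that should be verified against the current Admin API documentation before use.

```javascript
// Hypothetical Realm function; every name below that is not part of the
// thread (ADMIN_BASE, value names, the userData collection) is an assumption.
exports = async function () {
  const ADMIN_BASE = "https://realm.mongodb.com/api/admin/v3.0";
  const userId = context.user.id;

  // Exchange a project API key pair (stored as Values/Secrets) for an admin token.
  const login = await context.http.post({
    url: `${ADMIN_BASE}/auth/providers/mongodb-cloud/login`,
    headers: { "Content-Type": ["application/json"] },
    body: JSON.stringify({
      username: context.values.get("adminPublicKey"),
      apiKey: context.values.get("adminPrivateKey"),
    }),
  });
  const { access_token } = JSON.parse(login.body.text());

  // Remove documents "owned" by the user before removing the account itself.
  await context.services.get("mongodb-atlas")
    .db("app").collection("userData")
    .deleteMany({ owner_id: userId });

  // Delete the user through the Admin API.
  return context.http.delete({
    url: `${ADMIN_BASE}/groups/${context.values.get("groupId")}/apps/${context.values.get("appId")}/users/${userId}`,
    headers: { Authorization: [`Bearer ${access_token}`] },
  });
};
```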
null
[ "react-native", "realm-web", "typescript" ]
[ { "code": "//lib/auth.ts\n\nif (platform.isWeb) {\n import * as Realm from \"realm-web\";\n} else {\n import Realm from \"realm\"; //for react-native\n}\n\nexport async function loginAnonymous() {\n // Create an anonymous credential\n const credentials = Realm.Credentials.anonymous();\n try {\n // Authenticate the user\n const user: Realm.User = await app.logIn(credentials);\n // `App.currentUser` updates to match the logged in user\n assert(user.id === app.currentUser.id)\n return user\n } catch(err) {\n console.error(\"Failed to log in\", err);\n }\n}\nimport { loginAnonymous } from './lib/auth.ts'\n\nloginAnonymous().then(user => {\n console.log(\"Successfully logged in!\", user)\n})\n", "text": "Hi there,\ni want to build a native mobile app for iOS and Android with React Native and a webapp, using web frameworks like React, Vue or Svelte. On the MongoDB Realm docs i readed this:The React Native SDK does not support JavaScript or TypeScript applications written for web browsers. For that use case, you should consider the Web SDK.Even though I seem to need to use both SDKs, is there any way to share the code between the mobile and web app so I don’t have to implement the feature twice? I imagine it like this:and then i can use only this one function in my web and mobile app like this:As it seems, the functions of both SDKs are pretty much identical, right? Would such an approach work?I would be very happy about feedback or suggestions for improvement", "username": "Niklas_Grewe" }, { "code": "", "text": "I have the same problem, and no, it is not currently possible to share code between both platforms, but they do support electron.I’m currently working on porting realm-js to web via webassembly as a side project for the weekends. I’ve made good progress so far, but there are many things to consider and it’s not even sure they’ll accept my PR in the future.I’ll probably just do a desktop version until my port is ready or until they unify the APIs.", "username": "Maxence_Henneron" }, { "code": "", "text": "@Maxence_Henneron thanks for your answer, could you explain it a little bit more? Why can’t share the code between both platforms? Do the SDKs work completely differently under the hood? superficially, the same functions are used, so from the name…why can’t i make the imports platform specific? React Native doesn’t do anything different in principle. You have a function, a component, which is then converted for the target platform. Can’t the same principle be applied here?glad to hear you want to port realm to the web. Is there anywhere I can see how far along you are, or to what extent it can be used?", "username": "Niklas_Grewe" }, { "code": "", "text": "First I want to mention that I am not a mongoDB/Realm employee.The web sdk and electron/react-native SDK do not work the same because Realm is a database engine written in C++ that is compiled and executed natively on the machine. When you use react-native, it will call the native C++ code from react native, same goes with electron on a desktop.\nMongoDB Realm Sync will then sync the local database with a remote MongoDB atlas instance. 
Which means each client has its own local database that automatically gets sync to a remote MongoDB database.On the other hand, the web SDK is just a bridge to the hosted mongodb atlas database, and the query language is completely different so you’ll actually need to use MongoDB queries instead of Realm queries.What I’m trying to achieve is actually compiling the Realm engine in webassembly so the database can run in the browser and then sync over websocket.This is a large effort that I started last weekend but will likely create a fork once I have something that at least runs.The app I’m currently working on will have to use Electron until I can get it to run in the browser.I hope that makes sense!", "username": "Maxence_Henneron" } ]
Code Sharing between React-Native and Web SDK?
2021-04-12T11:06:38.505Z
Code Sharing between React-Native and Web SDK?
3,985
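A sketch of the thin shared wrapper the question is aiming for. Both `realm` and `realm-web` expose `App`, `Credentials.anonymous()` and `app.logIn()`, so a login helper can be shared even though the data-access APIs differ. The platform check and app id below are placeholders, and the conditional require assumes a bundler that tolerates it (otherwise use platform-specific file extensions such as auth.web.js / auth.native.js).

```javascript
// lib/auth.js, a hypothetical shared module.
// On web the bundler should resolve "realm-web"; in React Native, "realm".
const Realm = typeof document !== "undefined" ? require("realm-web") : require("realm");

const app = new Realm.App({ id: "<your-realm-app-id>" });

async function loginAnonymous() {
  const credentials = Realm.Credentials.anonymous();
  const user = await app.logIn(credentials);
  return user;
}

module.exports = { app, loginAnonymous };
```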
null
[]
[ { "code": "", "text": "I am scheduling MongoDB backup to S3 and looks like I am facing this error whenever I try to run the job:“Failed: error getting oplog start: error getting recent oplog entry: mongo: no documents in result”I tried running the part of the script manually to troubleshoot more and looks like I get the same error. Below is a part of the script I am using to automate this:echo “[$SCRIPT_NAME] Dumping all MongoDB databases to compressed archive…”\nmongodump --oplog \n–archive=\"$ARCHIVE_NAME\" \n–gzip \n–uri “$MONGODB_URI”also, this runs perfectly fine in prod but throws up this error in dev. both MongoDB is on the same version, what am I missing here?", "username": "Dolis_Sharma" }, { "code": "", "text": "Hi @Dolis_SharmaIs the dev instance configured to be a replicaset? The oplog is only present on replicaset members.", "username": "chris" }, { "code": "", "text": "@chris yes, I think that may be the issue here, dev is not configured to be a replica set. In that case, skipping oplog should work.", "username": "Dolis_Sharma" } ]
Schedule MongoDB Backup to S3 using Kubernetes CronJob
2021-04-07T10:17:34.104Z
Schedule MongoDB Backup to S3 using Kubernetes CronJob
3,464
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.0.24-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.0.23. The next stable release 4.0.24 will be a recommended upgrade for all 4.0 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.0.24-rc0 is released
2021-04-12T14:10:06.144Z
MongoDB 4.0.24-rc0 is released
2,386
null
[ "aggregation", "dot-net" ]
[ { "code": "AggregateOptions {AllowDiskUse = true}var collection = database.GetCollection<BsonDocument>(collectionName);\n\nstring strPipeline = @\"\n[\n {\n $group : \n {\n _id : {raw_curve_id : \"\"$raw_curve_id\"\", published_date : \"\"$published_date\"\", delivery_date : \"\"$delivery_date\"\", value : \"\"$value\"\"},\n ids: { $push: \"\"$_id\"\"},\n saved_dates: { $push: \"\"$saved_date\"\"},\n count: {$sum: 1}\n }\n },\n {$match: { count: {$gt: 1} } },\n]\";\nvar pipelineDoc = BsonSerializer.Deserialize<BsonDocument[]>(strPipeline);\nvar cursor = await collection.AggregateAsync<BsonDocument>(pipelineDoc, new AggregateOptions {AllowDiskUse = true});\nvar firstDuplicate = await cursor.FirstOrDefaultAsync();\n", "text": "I am trying to use an aggregation to identify duplicated data.The code below fails with a MongoCommandException which says:\n‘Command aggregate failed: Exceeded memory limit for $group, but didn’t allow external sort. Pass allowDiskUse:true to opt in’I am using AggregateOptions {AllowDiskUse = true}, but it seems like that setting is not passed to the MongoDB server.", "username": "john_m" }, { "code": "allowDiskUseallowDiskUse", "text": "Are you using MongoDB Atlas Free Tier or shared cluster? If so, the allowDiskUse option is ignored (source):Atlas Free Tier and shared clusters do not support the allowDiskUse option for the aggregation command or its helper method.If you are not using Atlas Free Tier or shared cluster, you can use command monitoring to inspect the command the driver sends to the server.", "username": "Andreas_Braun" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
AggregateOptions AllowDiskUse not passed to server?
2021-04-12T13:24:47.895Z
AggregateOptions AllowDiskUse not passed to server?
5,187
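For comparison outside the C# driver, the same option and the command monitoring suggested in the answer look like this in mongosh and the Node.js driver; `pipeline` and `uri` are placeholders.

```javascript
// mongosh: allowDiskUse is passed as an aggregate option. On free/shared
// tiers the server ignores it, which reproduces the $group memory error.
db.collection.aggregate(pipeline, { allowDiskUse: true });

// Node.js driver: command monitoring shows what is actually sent to the server.
const { MongoClient } = require("mongodb");
const client = new MongoClient(uri, { monitorCommands: true });
client.on("commandStarted", (event) => {
  if (event.commandName === "aggregate") {
    console.log("allowDiskUse sent:", event.command.allowDiskUse);
  }
});
```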
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Is it possible to export users from a Realm app?I want to re-create an app but keep all the app users.Thanks,\nThomas", "username": "Thomas_Hansen" }, { "code": "", "text": "Hi,I want same feature but for resilience data, i want way to export all data from Realm App ( Users, Secret, etc ), encrypted or not i dont care.Just one way to save all data of one application outside of Mongo.\nAnd if i want to rebuild another Realm App. I want an easy way like import export CLI to do this.Thanks.", "username": "Jonathan_Gautier" }, { "code": "could not find secretrealm-cli import --strategy=replace", "text": "Hi Thomas/Jonathan,Unfortunately it is not currently possible to move users from one app to another.\nThis appears to have been suggested as an idea by a different customer on our feedback portal, please feel free to vote on this idea and/or add comments with more information about your business use case.Also regarding export of secrets, they are intentionally not included in the export process.\nPlease follow these steps to move them across:Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Thanks for response @Mansoor_OmarOk but today we didn’t really have sort of backup for realm apps, only code import export, i think it’s very dangerous for any business to not have access of this data.Can you talking about resilience of data, How data was store ? Where data was store ? If one datacenter shutdown what’s happend to our realm apps ?", "username": "Jonathan_Gautier" }, { "code": "", "text": "Again thanks @Mansoor_Omar for the response, but I really really hope we see export/import of users soon as the user database = business. I don’t even know if the user database is backed up internally.", "username": "Thomas_Hansen" }, { "code": "", "text": "Depending on your use case, a workaround that might work for now is to back up your users using the Realm Admin API: Atlas App Services API – that will however only allow you to recreate users that are using the email / password authentication provider.An alternative workaround is to use a third party authentication provider which is able to issue a JWT that you can use to authenticate the user towards the Realm server. That third party service might have ways for you to backup and restore users.", "username": "kraenhansen" }, { "code": "", "text": "Thanks @kraenhansen :).Okay, so do you know why the API only handles email/password auth? Is it a technical issue not including users authenticated via socal providers or could I submit a suggestion to include them as well?The issue I have is that users are tightly coupled (physically) to the Realm app and you can’t export them. I can’t just delete the app and recreate it and the scary thing is if the app is lost or stops working. And it prevents me from changing app region.", "username": "Thomas_Hansen" }, { "code": "", "text": "I believe the users endpoint will return all users (independent of their auth provider) but the response won’t contain enough for you to create them again in another Realm app. For obvious security reasons, you can’t retrieve users password in cleartext My comment was more that you could “import” them by creating email / password users with a random password that they can reset before logging in. 
But … this is (as I mentioned) a workaround.To be completely honest, I share your concern: As a developer you either have to have control over backups of your database of users or good confidence that these are backed up and can be restored in case something goes wrong in the operations. For what it’s worth, MongoDB Realm uses Atlas as its database and as such relies on the backup capabilities of the platform.I agree that we can definitely improve the documentation and developer experience here.", "username": "kraenhansen" }, { "code": "", "text": "If we use Realm app, this not for use another external service for authentication BTW, for me is not normal, as minimal you have created any backup system like atlas for us. If one day you have any problem with data we lost every links between customer in Realm App and Custom Data in Mongo Atlas. And If we create new users, theirs IDs will change to, and both database will not be sync.It’s so complicated to give access to Realm App Database ? @kraenhansen", "username": "Jonathan_Gautier" }, { "code": "", "text": "@kraenhansen Got an idea.You can create encrypted backup, this will not create security breach if only you have keys to decrypt.\nWe this solution we can backup externaly our realm app users and database.And if someday we want to backup users, transfert users, change region, restore at point etc.\nWe just give you this file or select in list of backup like atlas, and you use in background.", "username": "Jonathan_Gautier" }, { "code": "", "text": "@kraenhansen ofc, I see your point in not being able to export or retrieve users and password .For me I think the use case boils down to being able to re-create the app without loosing the user database.Although I don’t know how Realm works under the hood, I guess it would be doable to have the option not to delete users if you delete the app. They must share some common key to the app and when you create a new app, you could import an old/existing user database. Then if you want to delete users for good there could be a second place in UI for deleting user collections.", "username": "Thomas_Hansen" }, { "code": "", "text": "I think both are great suggestions.", "username": "kraenhansen" }, { "code": "", "text": "@kraenhansen Hi,\nDo you plan to add one of these feature or similar soon ?", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi @Jonathan_Gautier – We don’t have any plans to add this in the near term, but we do have an item tracking this request in our feedback forums that you can follow for updates.", "username": "Drew_DiPalma" }, { "code": "", "text": "Hi,\nI probably lack of some knowledge on that topic, but it is stated somewhere above that we can recreate user (for email authentication) to workaround this issue.\nI don’t agree with that : this will recreate a new user with a new id.\nAs documents are partitioned with that specific Id, doing so will present a fresh empty realm to the user and all his previous documents will remain orphans lost in the middle of all other document.you’ll have to re-link documents with the new owner id manually?\nThis is also valid for external provider as for auth function.If you have any issue with your user account, your business will be in great trouble.IMO, being able to backup user database is a must have even without passwords. 
Asking users to reset their password is much easier than telling a customer “sorry, due to a missclick/issue, all your data are still safe in the database but we don’t know exactly where…”regards.", "username": "bruno_levx" }, { "code": "", "text": "Hi Bruno – In this case, would it be possible to use an Auth Trigger to move documents from the old user to a new user on sign-up? That would allow the migration of data to be able to take place automatically.", "username": "Drew_DiPalma" }, { "code": "", "text": "Look likes temporary solution … We need better solution to manage users database guys ", "username": "Jonathan_Gautier" }, { "code": "", "text": "@Drew_DiPalma\nHi Drew,Sure using Auth trigger could help if you have the former user id stored somewhere.\nMoreover you will load your cluster a little more as it will run upon auth process for all users.\nHow long it will run if your DB have more than 50 collections with some conaining thousands of documents ?\nIf you just loose previous user entry, then you have to manually figure out what former user id was by comparing all existing partition keys to all existing user ids.If you loose more than one user, you’re stuck.If you can retrieve the previous entry , then you can create a one shot function to remap partition key. But once more, to do that you must have at least 1 backup ( other than a printscreen of the user list !)regards,\nBruno", "username": "bruno_levx" }, { "code": "", "text": "As a quick workaround it could be a good option to get the ability to query all users information from a function (in order to manage backup with a custom function with a schedule ) . it could also allow admins to run cleaning/archiving for unused accounts based on “last login date”.An ugly solution can be using (as Drew propose) an Auth Trigger to save user critical informations in a collection upon authentication. Unfortunately this could bring some security issues.regards,\nBruno", "username": "bruno_levx" } ]
How to export Realm app users?
2021-03-11T11:28:21.684Z
How to export Realm app users?
5,250
null
[ "configuration" ]
[ { "code": "{\"t\":{\"$date\":\"2021-04-07T15:18:37.887+05:30\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":95,\"message\":\"[1617788917:887666][13179:0x7f20dfe68f00], connection: __posix_std_fallocate, 58: **/data/mongodb/journal/WiredTigerTmplog.0000000001: fallocate:: Operation not supported\"}}**\n{\"t\":{\"$date\":\"2021-04-07T15:18:37.887+05:30\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":95,\"message\":\"[1617788917:887744][13179:0x7f20dfe68f00], connection: __posix_sys_fallocate, 75: /data/mongodb/journal/WiredTigerTmplog.0000000001: fallocate:: Operation not supported\"}}\n{\"t\":{\"$date\":\"2021-04-07T15:19:07.909+05:30\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":2,\"message\":\"[1617788947:909845][13179:0x7f20dfe68f00], txn-recover: __posix_open_file, 808: /data/mongodb/journal/WiredTigerLog.0000000027: handle-open: open: No such file or directory\"}}\n{\"t\":{\"$date\":\"2021-04-07T15:19:07.909+05:30\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":2,\"message\":\"[1617788947:909937][13179:0x7f20dfe68f00], txn-recover: __wt_log_scan, 2420: WiredTiger is unable to read the recovery log: No such file or directory\"}}\n{\"t\":{\"$date\":\"2021-04-07T15:19:07.909+05:30\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":2,\"message\":\"[1617788947:909958][13179:0x7f20dfe68f00], txn-recover: __wt_log_scan, 2423: This may be due to the log files being encrypted, being from an older version or due to corruption on disk: No such file or directory\"}}\n{\"t\":{\"$date\":\"2021-04-07T15:19:07.909+05:30\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":2,\"message\":\"[1617788947:909968]**[13179:0x7f20dfe68f00], txn-recover: __wt_log_scan, 2426: You should confirm that you have opened the database with the correct options including all encryption and compression options: No such file or directory\"}}**", "text": "Hi,I am facing below error when I try to start the MongoDB server. This is not a upgraded setup and the dbPath is on a NFS volume. Please can you let me know , how to resolve this? Build version is 4.4.3", "username": "Akshaya_Srinivasan" }, { "code": "dbPathmongodNo such file or directory", "text": "Hi @Akshaya_Srinivasan,Please provide some more background on this environment:Have you checked the permissions for your dbPath against your the user your mongod process is running as? The No such file or directory errors suggest a file path or permission problem.You mention this was not an upgraded setup – was MongoDB running successfully before? If so, what has changed recently?What options are you using for your NFS mount point?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "hi @Stennie_XThanks for your reply. Identified the issue. Due to locking and unlocking the server during backup, it led to journal file roll over which caused this issue. Fixed it.Whereas this error , looks like a soft error. 
Server startup is not affected by this.\n{“t”:{\"$date\":“2021-04-07T15:18:37.887+05:30”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:95,“message”:\"[1617788917:887666][13179:0x7f20dfe68f00], connection: __posix_std_fallocate, 58: /data/mongodb/journal/WiredTigerTmplog.0000000001: fallocate:: Operation not supported\"}}", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
WiredTiger error with fallocate while trying to start the server on NFS volume
2021-04-07T11:04:11.376Z
WiredTiger error with fallocate while trying to start the server on NFS volume
5,669
null
[]
[ { "code": "{\"t\":{\"$date\":\"2021-04-12T09:52:34.633+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.637+09:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.637+09:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.637+09:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":15445,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"sw-pc\"}}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.637+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.5\",\"gitVersion\":\"ff5cb77101b052fa02da43b8538093486cf9b3f7\",\"openSSLVersion\":\"OpenSSL 1.1.1h 22 Sep 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.637+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.637+09:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.638+09:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Operation not permitted\"}}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.638+09:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":919}}\n{\"t\":{\"$date\":\"2021-04-12T09:52:34.638+09:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "I followed steps on https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/I faced first problem with GPG Key that returns “gpg: no valid OpenPGP data found.”\nSomehow I got PUBkey in apt-key list but wget command still returns “no valid”.After I installed mongodb-org alright.My systemctl status is alwaysActive:failed with --config /etc/mongod.conf (code=exited, status=14).And this is a log after i run \"sw@sw-pc  ~  mongod \"sorry for mess. this is my first time posting online.If you need to know more information about my computer, let me know.And any tips for writing better posting is also welcome. thank you", "username": "SEUNGWOON_JEON" }, { "code": "", "text": "Hi @SEUNGWOON_JEON, check if you have proper permission on /tmp directory. 
Below error seems issue with permission.{“t”:{\"$date\":“2021-04-12T09:52:34.638+09:00”},“s”:“E”, “c”:“NETWORK”, “id”:23024, “ctx”:“initandlisten”,“msg”:“Failed to unlink socket file”,“attr”:{“path”:\"/tmp/mongodb-27017.sock\",“error”:“Operation not permitted”}}", "username": "ROHIT_KHURANA" }, { "code": "", "text": "Hi @ROHIT_KHURANA. Thanks for your kindness.\nI think I just fixed my problem by uninstall every mongodb related stuff in my computer then re-install from scratch.\nIt took me 20 hours of doing anything I found on Internet which including giving permission.\nSeems like status is Active now (which is better than so far). But I see same issue with permission on log so I will work on it. Thank you.", "username": "SEUNGWOON_JEON" } ]
Problems installing and running on Ubuntu 20.04
2021-04-12T01:29:46.891Z
Problems installing and running on Ubuntu 20.04
3,663
null
[ "aggregation" ]
[ { "code": "", "text": "Hello,I presently have 4 dropdowns I populate from the results of 4 separate sortByCount() queries.Is it possible to do this in one query? probably using a different bucket/facet method. Probably making use of addToSet or Push or something.Many thanks", "username": "Russell_Smithers" }, { "code": "", "text": "I presently have 4 dropdowns I populate from the results of 4 separate sortByCount() queries.Is it possible to do this in one query?It isn’t clear [to me] what you want are doing and want to do. Can you provide an example document from your collection and your current query(ies) so I can better understand your question?", "username": "Eric_Stimpson" }, { "code": " TheModel.LocationCounts\n = MongoClient.\n GetSortByCount(UserCriteria, x => x.Location, User);\n\n TheModel.ActivityCounts\n = MongoClient.\n GetSortByCount(UserCriteria, x => x.Activity, User);\n\n TheModel.SectorCounts\n = MongoClient.\n GetSortByCount(UserCriteria, x => x.Sector, User);\n\n TheModel.SkillCounts\n = MongoClient.\n GetSortByCount(UserCriteria, x => x.Skill, User); \n Results =\n Collection.\n Aggregate().\n Match(Filter).\n SortByCount(Field).\n SortBy(x => x.Id).\n ToList();\n", "text": "Thanks Eric,I have a collection with a number of fields, and four of them are\nlocation, skill, sector and activitytypelocation for example as say, 10 possible values from 1000 documents.\nI.e. any of those 1000 documents will only have one of 10 values.skill might have 5 unique values , and so on.I have a web search form where I populate a drop down for each of; location, sector, skill, activityrtype with the value and how many times that value occurs in the result set.But I have to make 4 seperate queries like this in the c# class which gets the data for the view model.Each of the above is doing the followingCan I do the above for 4 fields and get for lists of unique values and their occurrence type?\ne.g. get the date for four dropdowns like the following at once.For those reading this (I expect Eric knows this) and looking at the seemingly hard coded\nSortBy(…) SortByCount() returns the same class always and Id is the unique value in the list. so x.Id is always present.Thanks", "username": "Russell_Smithers" } ]
Multiple SortByCount’s in one Query? is it possible?
2021-04-09T09:43:51.567Z
Multiple SortByCount&rsquo;s in one Query? is it possible?
1,547
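The single-query version being asked about can be written with $facet, which runs several sub-pipelines over the same matched documents. Field names are taken from the C# code in the question and `userCriteria` stands in for the shared filter.

```javascript
// One round trip instead of four: each facet is its own $sortByCount,
// followed by the same sort on _id the C# code applies.
db.collection.aggregate([
  { $match: userCriteria },
  {
    $facet: {
      locationCounts: [{ $sortByCount: "$Location" }, { $sort: { _id: 1 } }],
      activityCounts: [{ $sortByCount: "$Activity" }, { $sort: { _id: 1 } }],
      sectorCounts:   [{ $sortByCount: "$Sector" },   { $sort: { _id: 1 } }],
      skillCounts:    [{ $sortByCount: "$Skill" },    { $sort: { _id: 1 } }]
    }
  }
]);
```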
null
[ "replication", "configuration" ]
[ { "code": "", "text": "Hi,network compression is just for client/server or also for replica set members?\nWe’re using v4.4Thanks in advance.", "username": "Antonia_Tugores" }, { "code": "--networkMessageCompressorsnet.compression.compressors", "text": "Hi @Antonia_TugoresMembers and clients, they must also have compression enabled with a common compressor type.See --networkMessageCompressorsOr net.compression.compressors", "username": "chris" }, { "code": "", "text": "Thanks!\nIn fact, I was using net.compression.compressors but it seemed not to be working for the initial sync. Now I’ve been checking last 4.4.5 changelog and our problem was probably related to https://jira.mongodb.org/browse/SERVER-52919, included in 4.4.5.", "username": "Antonia_Tugores" } ]
Network compression between RS members
2021-04-09T09:51:58.980Z
Network compression between RS members
2,025
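Once every member (and any client that should benefit) lists a common compressor under `net.compression.compressors`, one way to confirm that traffic is actually being compressed on a recent 4.x build is to look at the serverStatus counters; treat the exact metric layout as something to verify on your version.

```javascript
// Run against a replica set member: non-zero byte counters under the
// negotiated compressor (snappy, zstd or zlib) indicate compression is in use.
const net = db.serverStatus().network;
printjson(net.compression);
```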
null
[ "crud" ]
[ { "code": "", "text": "Hi!I am building a bulk operation for my application and it only consist of single-document write operations.However, I need each operation to have mongodb “retryable writes” enabled correctly.So I am wondering if an unordered bulk write works just fine for it or wether it only works with an ordered bulk operation (which would be less efficient) ?Beside, I have correctly added the retryable write option in my connection string.Thanks in advance,", "username": "GUIGAL_Allan" }, { "code": "0", "text": "Hello @GUIGAL_Allan, it looks like there is no restriction on using “retryable writes” with ordered or unordered bulk write operations.Some points to note:For more information see Retryable Writes.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB retryable writes in unordered bulk write operation
2021-04-11T23:43:42.889Z
MongoDB retryable writes in unordered bulk write operation
1,886
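A Node.js flavoured sketch of the setup described in the question and answer: single-document operations in an unordered bulkWrite with retryable writes enabled on the connection string. Collection and document contents are illustrative.

```javascript
const { MongoClient } = require("mongodb");

// retryWrites=true is the default on modern drivers; shown explicitly here.
const client = new MongoClient(
  "mongodb+srv://user:pass@cluster.example.net/app?retryWrites=true"
);

async function run() {
  const coll = client.db("app").collection("events");
  // ordered:false lets the server continue past individual failures;
  // each single-document operation remains individually retryable.
  await coll.bulkWrite(
    [
      { insertOne: { document: { _id: 1, status: "new" } } },
      { updateOne: { filter: { _id: 2 }, update: { $set: { status: "seen" } } } },
      { deleteOne: { filter: { _id: 3 } } },
    ],
    { ordered: false }
  );
}

run().finally(() => client.close());
```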
null
[ "sharding" ]
[ { "code": "", "text": "We have been using MongoDB on production for the past 8 months.\nWe have a cluster with 9 replica sets. The config server and the shards are co-hosted.\nRecently we have been seeing instances where some of the Config server processes abruptly goes down.The logs in such cases always have these messages2020-12-13T22:26:01.495+0530 F - [TaskExecutorPool-0] Invariant failure pool->_requests.empty() src/mongo/executor/connection_pool.cpp 1085\n2020-12-13T22:26:01.495+0530 F - [TaskExecutorPool-0]\n2020-12-13T22:26:01.529+0530 F - [TaskExecutorPool-0] Got signal: 6 (Aborted).***aborting after invariant() failureCould anyone please point out, what this could actually be the reason for. Thank you", "username": "Chaitra_KR" }, { "code": "", "text": "Welcome to the MongoDB forums @Chaitra_KR!Can you confirm the specific version of MongoDB you are using? Are all members of your sharded cluster running the same version (if not, what versions are being used)?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello Stennie,\nThe version of MongoDB in use is 4.2.1. And yes, all members of the replica set are of the same version", "username": "Chaitra_KR" }, { "code": "", "text": "@Chaitra_KRCan you share the rs.conf() and rs.status() command output from your config server replica set.Thanks\nBrajmohan", "username": "BM_Sharma" }, { "code": "", "text": "Hi, I am facing similar problem from quite a time. Same version mongo 4.2.1\nrs.status() and rs.conf() returning right output, which is expected", "username": "Ankit_Jain" }, { "code": "{\n\t\"_id\" : \"configServerReplSet\",\n\t\"version\" : 7,\n\t\"configsvr\" : true,\n\t\"protocolVersion\" : NumberLong(1),\n\t\"writeConcernMajorityJournalDefault\" : true,\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"host\" : \"*****\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 1,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"host\" : \"*****\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 1,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"host\" : \"*****\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 1,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 4,\n\t\t\t\"host\" : \"*****\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 0,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 0\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 5,\n\t\t\t\"host\" : \"*****\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 0,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 0\n\t\t}\n\t],\n\t\"settings\" : {\n\t\t\"chainingAllowed\" : true,\n\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\"heartbeatTimeoutSecs\" : 10,\n\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\"catchUpTimeoutMillis\" : -1,\n\t\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\t\"getLastErrorModes\" : {\n\t\t\t\n\t\t},\n\t\t\"getLastErrorDefaults\" : {\n\t\t\t\"w\" : 
1,\n\t\t\t\"wtimeout\" : 0\n\t\t},\n\t\t\"replicaSetId\" : ObjectId(\"*****\")\n\t}\n}\n{\n\t\"set\" : \"configServerReplSet\",\n\t\"date\" : ISODate(\"2020-12-21T04:29:09.101Z\"),\n\t\"myState\" : 2,\n\t\"term\" : NumberLong(26),\n\t\"syncingTo\" : \"*****\",\n\t\"syncSourceHost\" : \"*****\",\n\t\"syncSourceId\" : 1,\n\t\"configsvr\" : true,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 2,\n\t\"writeMajorityCount\" : 2,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1608524948, 74),\n\t\t\t\"t\" : NumberLong(26)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2020-12-21T04:29:08.915Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1608524948, 74),\n\t\t\t\"t\" : NumberLong(26)\n\t\t},\n\t\t\"readConcernMajorityWallTime\" : ISODate(\"2020-12-21T04:29:08.915Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1608524948, 74),\n\t\t\t\"t\" : NumberLong(26)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1608524948, 74),\n\t\t\t\"t\" : NumberLong(26)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2020-12-21T04:29:08.915Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2020-12-21T04:29:08.915Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1608524936, 56),\n\t\"lastStableCheckpointTimestamp\" : Timestamp(1608524936, 56),\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"****\",\n\t\t\t\"ip\" : \"****\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 40208,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1608524947, 35),\n\t\t\t\t\"t\" : NumberLong(26)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1608524947, 35),\n\t\t\t\t\"t\" : NumberLong(26)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2020-12-21T04:29:07Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2020-12-21T04:29:07Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2020-12-21T04:29:07.810Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2020-12-21T04:29:08.004Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1608484751, 1),\n\t\t\t\"electionDate\" : ISODate(\"2020-12-20T17:19:11Z\"),\n\t\t\t\"configVersion\" : 7\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"****\",\n\t\t\t\"ip\" : \"****\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 40210,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1608524948, 74),\n\t\t\t\t\"t\" : NumberLong(26)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2020-12-21T04:29:08Z\"),\n\t\t\t\"syncingTo\" : \"*****\",\n\t\t\t\"syncSourceHost\" : \"*****\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 7,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"name\" : \"*****\",\n\t\t\t\"ip\" : \"****\",\n\t\t\t\"health\" : 0,\n\t\t\t\"state\" : 8,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 0,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\t\"t\" : NumberLong(-1)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\t\"t\" : NumberLong(-1)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"lastHeartbeat\" : 
ISODate(\"2020-12-21T04:29:07.351Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : -1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 4,\n\t\t\t\"name\" : \"*****\",\n\t\t\t\"ip\" : \"*****\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 40208,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1608524947, 88),\n\t\t\t\t\"t\" : NumberLong(26)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1608524947, 88),\n\t\t\t\t\"t\" : NumberLong(26)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2020-12-21T04:29:07Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2020-12-21T04:29:07Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2020-12-21T04:29:08.157Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2020-12-21T04:29:08.178Z\"),\n\t\t\t\"pingMs\" : NumberLong(12),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"*****\",\n\t\t\t\"syncSourceHost\" : \"*****\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 7\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 5,\n\t\t\t\"name\" : \"******\",\n\t\t\t\"ip\" : \"******\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 40208,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1608524947, 88),\n\t\t\t\t\"t\" : NumberLong(26)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1608524947, 88),\n\t\t\t\t\"t\" : NumberLong(26)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2020-12-21T04:29:07Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2020-12-21T04:29:07Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2020-12-21T04:29:08.811Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2020-12-21T04:29:08.951Z\"),\n\t\t\t\"pingMs\" : NumberLong(12),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"*****\",\n\t\t\t\"syncSourceHost\" : \"*****\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 7\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$gleStats\" : {\n\t\t\"lastOpTime\" : Timestamp(0, 0),\n\t\t\"electionId\" : ObjectId(\"000000000000000000000000\")\n\t},\n\t\"lastCommittedOpTime\" : Timestamp(1608524948, 74),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1608524948, 74),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1608524948, 74)\n}\n", "text": "@BM_SharmaHere is the result of rs.conf()And here is the rs.status()", "username": "Chaitra_KR" }, { "code": "", "text": "fixed in https://jira.mongodb.org/browse/SERVER-42930", "username": "maneesh" } ]
Invariant failure on Config server replica set
2020-12-15T18:46:58.104Z
Invariant failure on Config server replica set
3,084
null
[ "aggregation", "performance" ]
[ { "code": "\n db.collection.aggregate([\n {$match: {userId: id}},\n {$sort: {date: -1}},\n {$skip: skip},\n {$limit: limit},\n {\n \"$lookup\": {\n localField: \"targetId\",\n foreignField: \"_id\",\n from: 'users',\n as: 'target',\n }\n },\n {\n \"$project\": {\n 'actorId': 1,\n 'date': 1,\n 'description': 1,\n 'points': 1,\n 'targetId': 1,\n 'userId': 1,\n '_id': 1,\n 'target._id': 1,\n 'target.emails.address': 1,\n }\n }])\n\n db.collection.aggregate([\n {$match: {userId: id}},\n {$sort: {date: -1}},\n {$skip: skip},\n {$limit: limit},\n {\n $lookup: {\n from: 'users',\n pipeline: [\n {\n $match: {\n $expr: {$eq: [\"$_id\", \"$targetId\"]},\n }\n },\n {\n $project: {\n _id: 1,\n 'emails.address': 1,\n }\n }\n ],\n as: 'target',\n }\n }\n ])\n", "text": "Why does one of these queries take minutes longer to resolve?", "username": "Zack_Beyer" }, { "code": "explainexecutionStats", "text": "Hello @Zack_Beyer, it is little difficult to tell just by looking at the queries. I suggest you include some more details so that we can take a closer look at your question.Take a look at this topic from the documentation; it has information about how an aggregation query is optimized to perform and suggestions for doing so: Aggregation Pipeline Optimization.Finally, do generate a query plan using the explain method (the executionStats mode) on the aggregation query - the plan will tell how the indexes are used and what stage of the query has issues regarding the performance. And, include the generated plans also in your reply.", "username": "Prasad_Saya" }, { "code": "\"$lookup\": {\n localField: \"targetId\",\n foreignField: \"_id\",\n from: 'users',\n as: 'target',\n}\n $lookup: {\n from: 'users',\n pipeline: [\n {\n $match: {\n $expr: {$eq: [\"$_id\", \"$targetId\"]},\n }\n },\n {\n $project: {\n _id: 1,\n 'emails.address': 1,\n }\n }\n ],\n as: 'target',\n }\n $match: {\n $expr: {$eq: [\"$_id\", \"$targetId\"]},\n }\n $match: {_id:\"$targetId\"}", "text": "Zack is a colleague of mine. I reviewed his code and I cannot justify the behavior. His post is much too vague. I’ll provide some extra details.His aggregation is selecting docs from a collection of ~200K records keyed on “userId”. This typically returns 0-5 docs. It then does a $lookup to pull in the several “user” docs from the “users” collection that match the “targetId” which is the “_id” foreign key into the user collection. The users collection has ~10MM docs.When the lookup is done thusly:It is fast. It “does the right thing” uses the _id index and returns FULL user docs for each match user.When the lookup is done differently, to try and “optimize” the data SIZE and project the matched docs down to fewer fields, the query does NOT use the _id index and appears to be doing an index scan.So, the only purpose for this latter syntax is to exploit a “projection” clause to limit to the only 2 user fields of interest.My guess is this clause is the offending code.But, we could not simplify this clause. 
Nor can I understand why $expr of $eq on an indexed field is so slow.Zack could not get any simplified version like this match query to work.\n $match: {_id:\"$targetId\"}Thanks!", "username": "Eric_Oemig" }, { "code": "\"targetId\"let\"targetId\"users $lookup: {\n from: \"users\",\n let: { id : \"$targetId\" },\n pipeline: [\n {\n $match: {\n $expr: {$eq: [\"$_id\", \"$$id\"]},\n }\n },\n {\n $project: {\n _id: 1,\n 'emails.address': 1,\n }\n }\n ],\n as: 'target',\n }explain", "text": "Welcome to community Eric,\nAren’t you supposed to define \"targetId\" in a let field. Otherwise mongodb will look for \"targetId\" in the from collection i.e. users in your case. Thus the delay caused by the fetch operation. I wonder how this pipeline gives the right documents. You should try:but nothing can be said for sure without the explain outputRegards,", "username": "Imad_Bouteraa" }, { "code": "let", "text": "Zack used the let syntax in his original version… and then removed it as he tried to simplify the code. I believe the query worked in both cases. I do know that the performance was terrible, which is what started the investigation in the first place.", "username": "Eric_Oemig" }, { "code": "let match {_id:\"$$id\"}", "text": "I think he also used the let syntax for the “simplified” match clause. No version of the “simplified” match ever gave correct results from the lookup. Although now that you mention the implications of the let… I am thinking he should give it another go with match {_id:\"$$id\"}But I do think he tried that syntax too. He showed me quite a few versions of code.Regardless… how do we explain the terrible runtime of the lookup with a pipeline?", "username": "Eric_Oemig" }, { "code": "db.collection.explain().aggregate(<your pipeline>)db.collection.explain(\"executionStats\").aggregate(<your pipeline>) match {_id:\"$id\"}let$expr$$id$id", "text": "Regardless… how do we explain the terrible runtime of the lookup with a pipeline?or better, following @Prasad_Saya’s suggestion:ps. match {_id:\"$id\"}you can’t access variables defined in let without $expr. and it must be $$id to be interpreted as a variable. $id is interpreted as a field of the from collection", "username": "Imad_Bouteraa" }, { "code": "$lookup$lookup: {\n localField: \"targetId\",\n foreignField: \"_id\",\n from: 'users',\n as: 'target',\n}\n$lookup: {\n from: 'users',\n pipeline: [\n {\n $match: {\n $expr: {$eq: [\"$_id\", \"$targetId\"]},\n }\n },\n {\n $project: {\n _id: 1,\n 'emails.address': 1,\n }\n }\n ],\n as: 'target',\n}\ncollectionpipeline$match$lookuptargetIdlet$match$lookup: {\n from: 'users',\n let: { targetIdVar: \"$targetId\" },\n pipeline: [\n {\n $match: {\n $expr: { $eq: [ \"$_id\", \"$$targetIdVar\" ] },\n }\n },\n {\n $project: {\n 'emails.address': 1,\n }\n }\n ],\n as: 'target',\n}\npipeline$project$project: {\n _id: 1,\n 'emails.address': 1,\n}\n_id: 1_id", "text": "Hello @Eric_Oemig,The two code snippets of the $lookup stage are:Case 1:Case 2:The usage syntax of the Case 2 is not correct (as @Imad_Bouteraa has pointed in his reply earlier). Why, it is not correct?This is because you cannot use the collection document field in the pipeline’s $match stage of the $lookup - directly. You need to assign the field (the targetId) to a variable using the let, and then use that variable in the $match stage. It is a rule as described in Join Conditions and Uncorrelated Sub-queries and this example in the documentation. 
So, the correct syntax would be as follows:Also, in the pipeline you have a $project stage. In thiis projection,the _id: 1 is not required, as the _id field is selected automatically.", "username": "Prasad_Saya" } ]
Why does one of these queries take minutes longer to resolve?
2021-04-09T23:00:15.864Z
Why does one of these queries take minutes longer to resolve?
4,080
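Collecting the thread's conclusion in one runnable form: in the sub-pipeline variant the outer field has to be bound with `let` before it can be used in `$expr`, while the plain localField/foreignField form remains the variant that clearly benefits from the `_id` index on the joined collection. Collection and field names are taken from the thread; `id`, `skip` and `limit` are placeholders.

```javascript
// Corrected correlated sub-pipeline form of the $lookup.
db.collection.aggregate([
  { $match: { userId: id } },
  { $sort: { date: -1 } },
  { $skip: skip },
  { $limit: limit },
  {
    $lookup: {
      from: "users",
      let: { targetId: "$targetId" },
      pipeline: [
        { $match: { $expr: { $eq: ["$_id", "$$targetId"] } } },
        { $project: { "emails.address": 1 } }
      ],
      as: "target"
    }
  }
]);
```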
null
[ "mongoose-odm", "connecting" ]
[ { "code": "", "text": "i am a beginner to mongoDb and i have used it in my project many times where i used to create a file , import mongoose and connect it , i want to ask is established mongodb from mongoDb cloud different from using it in a project file", "username": "dhruv_singhal" }, { "code": "", "text": "Hi @dhruv_singhal,Welcome to MongoDB community.I believe that you are talking about connecting mongoose to MongoDB atlas.The answer is yes and you should aquire your atlas connection string and clear access list and use mongoose connection tutorialThanks\nPavel", "username": "Pavel_Duchovny" } ]
Regarding working of MongoDb
2021-04-11T10:32:56.195Z
Regarding working of MongoDb
1,377
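A minimal sketch of wiring mongoose to an Atlas cluster as the answer describes; the SRV string, database name and options are placeholders in the shape Atlas and mongoose 5.x expect.

```javascript
const mongoose = require("mongoose");

// Connection string from the Atlas UI ("Connect your application"),
// used after adding your IP to the access list and creating a database user.
const uri =
  "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/myDatabase?retryWrites=true&w=majority";

mongoose
  .connect(uri, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log("Connected to Atlas"))
  .catch((err) => console.error("Connection failed", err));
```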
null
[ "data-modeling" ]
[ { "code": "listing", "text": "Hi there, pretty much I just want to quick as about should it be going with Single collection or Multi collection.I have this listing collection that has many types. Each type is consisting of different existing fields (indexes), such an example:Type: Estate, Subtype: HouseType: Estate, Subtype: LandNote that, each subtype of type “estate” has different existing indexes fields. Not only “estate”, I’d have “lost” and “found” types on our listing collection, of course, “lost” and “found” would have different existing fields for indexing.Every type has its own existing indexed fields, with this then, should I go with Single Collection or Multi-collections to differentiate with their type?Thanks.", "username": "ibgpramana" }, { "code": "", "text": "Hello @ibgpramana, some factors determine how you structure the data in a collection.The important ones to consider are the number of documents in the collection (and for each type, in this case) and the kind of queries your application has (these include CRUD (Create, Read, Update and Delete) operations).The application’s functionality is one of the main criteria in determining how you organize the data. So, what is your application, what kind of functions, how you use the data in the application, etc., is what you need to tell us for further discussion.That said, MongoDB’s document model allows documents with a flexible schema. So, if it suits your application needs you can have documents with different types within the same collection. But, with this structure, will your application program be able to do the operations on the data without complex logic? Having complex logic in an application means a lot of maintenance overhead (in future).", "username": "Prasad_Saya" }, { "code": "listingsgaragebathroombedroom", "text": "Thanks for reply,The first factor to consider on base what you’ve stated is: the number of documents in the collection. This is depends on the users, they can post many listing for each type they want, and of course, it involves with CRUD operations.About what is my application and function, think of a craigslist. All the listing data in the application will be Listing with many types, and each type has their own many subtypes.I agree that MongoDB’s allows us implement a flexible schema. Currently, we’re in the progress of building this app, and use the Single Collection approach, where listings collection stored and keep many listing types.The reason why I’m asking this is that, I’m concerned whether my current scenario is the right schema. Note that, Estate (House) would have indexes for garage, bathroom, and bedroom, but Estate (Land) would never have existing fields + indexes for those.", "username": "ibgpramana" }, { "code": "subtype\"Land\"", "text": "@ibgpramana, there is an index type called a Partial Index. This index allows creating index on specific criteria (or condition). The documentation says:Partial indexes only index the documents in a collection that meet a specified filter expression. By indexing a subset of the documents in a collection, partial indexes have lower storage requirements and reduced performance costs for index creation and maintenance.Your data can benefit from this indexing. For example, if the subtype is \"Land\" you can create a partial index on this condition.", "username": "Prasad_Saya" } ]
Store data in a single collection or multiple collections?
2021-04-10T07:58:09.268Z
Store data in a single collection or multiple collections?
16,912
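The partial-index advice in the thread above translates directly to the drivers; below is a minimal pymongo sketch, where the connection string, database/collection names, and the specific indexed fields (bedrooms, bathrooms) are illustrative assumptions rather than details from the original poster's schema.

```python
from pymongo import MongoClient

# Assumed connection string and names; adjust to your own deployment.
client = MongoClient("mongodb://localhost:27017")
listings = client["marketplace"]["listings"]

# Index the house-specific fields, but only for house listings, so land /
# lost / found documents add nothing to this index.
listings.create_index(
    [("bedrooms", 1), ("bathrooms", 1)],
    partialFilterExpression={"type": "estate", "subtype": "House"},
)

# Queries must include the partial-filter fields for the planner to pick the index.
for doc in listings.find({"type": "estate", "subtype": "House", "bedrooms": {"$gte": 3}}):
    print(doc["_id"])
```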
null
[ "installation" ]
[ { "code": "", "text": "Hi There,I need to downgrade from Mongo DB 3.6 to 3.4. Could you tell me what are the commands so I can perform to downgrade correctly?I have Ubuntu 20.04 LTS on the computer.Much appreciated.Pipa", "username": "Pipa_Pipa" }, { "code": "", "text": "Hi @Pipa_PipaUbuntu 18.04 is the latest version supported by MongoDB 3.6 and Ubuntu 16.04 is the latest for MongoDB 3.4, you will likely have dependency issues trying otherwise.The downgrade procedures is well documented in the 3.6 release/upgrade notes, for various installations;standalone:replica set:or sharded cluster:", "username": "chris" }, { "code": "", "text": "I need to downgrade from Mongo DB 3.6 to 3.4. Could you tell me what are the commands so I can perform to downgrade correctly?Welcome to the MongoDB Community @Pipa_Pipa!Can you provide more details on why you need to downgrade? If you are trying to workaround a specific issue or compatibility change, perhaps there are alternative approaches to suggest.MongoDB 3.4 reached End-Of-Life (EOL) in January 2020 and MongoDB 3.6 just reached end of life in April, 2021 (per MongoDB’s Support Policy). I would generally recommend upgrading to a supported release series (currently 4.0 or newer) as EOL server versions will not receive any further maintenance or security updates.As @chris noted, tutorials for upgrading and downgrading can be found in the MongoDB server documentation. MongoDB 3.4 (first released in Nov 2016) predates the release of Ubuntu 20.04 by 3 1/2 years and already reached EOL before Ubuntu 20.04’s release. You will have to build from source or look for alternative packages in order to install an EOL version of MongoDB server in your newer O/S.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,I need to have a Ubiquiti Unifi controller installed and they only support version 3.4 of Mongo.Im quite new to this, so no familiar with the Downgrade tutorial listed for Mongo.Would there be any 101 on how to downgrade or something with a few commands to perform to get it done? I know it’s on my own risk, just need to get this running until I find other alternatives,Your help is much appreaciated.Pi", "username": "Pipa_Pipa" }, { "code": "mongod--nohttpinterfacebkohler", "text": "Hi @Pipa_Pipa,I need to have a Ubiquiti Unifi controller installed and they only support version 3.4 of Mongo.I suspect the “only support version 3.4” wording means this is the only version Ubiquiti have tested with. However, it is likely that you can use MongoDB 3.6 unless the controller is relying on some older commands that have finally been removed in 3.6. Generally commands are deprecated for several major server release series before they are fully removed.See Compatibility Changes in MongoDB 3.6 for a list of server changes that may be relevant.Would there be any 101 on how to downgrade or something with a few commands to perform to get it done? I know it’s on my own risk, just need to get this running until I find other alternatives,Yes, those are the links @chris already provided. The caveat I added is that MongoDB 3.4 was EOL before your version of Ubuntu was released, so I expect you’ll have to build from source or look for alternative packages.I suggest trying to run the Unifi controller with MongoDB 3.6. There is some related discussion on the Ubiquity forums: MongoDB 3.6. 
It looks like the only issue is trying to start mongod with a deprecated --nohttpinterface parameter, but bkohler provides a workaround in the MongoDB 3.6 discussion thread.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I need to have a Ubiquiti Unifi controller installed and they only support version 3.4 of Mongo.From their release notes:\nWe support MongoDB 3.6 since 5.13.10", "username": "chris" } ]
Downgrade from 3.6 to 3.4
2021-04-07T20:50:20.395Z
Downgrade from 3.6 to 3.4
3,619
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hi all,I would like to know if there is a way to revoke all the sessions for a specific user from a Realm function.\nI know there is an option in the MongoDB Realm > App Users UI but I would like to do the same in a function which will be called by a trigger.Thanks for your help!", "username": "Julien_Chouvet" }, { "code": "", "text": "Hey Julien -We have an admin API endpoint that lets you revoke sessions that does this that you can access from a function. There is a previous post on the forum that goes into detail about how to use the Admin API within functions here that might help.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks @Sumedha_Mehta1 for your help", "username": "Julien_Chouvet" } ]
Revoke session - Realm
2021-04-08T07:28:11.240Z
Revoke session - Realm
2,787
null
[ "aggregation", "python", "crud" ]
[ { "code": "for file in sorted_files:\n df = process_file(file)\n\n for row,item in df.iterrows():\n data_dict = item.to_dict()\n mycol1.update_one(\n {\"nsamples\": {\"$lt\": 13}},\n {\n \"$push\": {\"samples\": data_dict},\n \"$min\": {\"first\": data_dict['timestamp1'],\"minid13\":data_dict['id13']},\n \"$max\": {\"last\": data_dict['timestamp1'],'maxid13':data_dict['id13']},\n \"$inc\": {\"nsamples\": 1,\"totid13\":data_dict['id13']}\n },\n upsert=True\n )\n{'_id': ObjectId('6068da8878fa2e568c42c7f1'),\n 'first': datetime.datetime(2018, 1, 24, 14, 5),\n 'last': datetime.datetime(2018, 1, 24, 15, 5),\n 'maxid13': 12.5,\n 'minid13': 7.5,\n 'nsamples': 13,\n 'samples': [{'c14': 'C',\n 'id1': 3758.0,\n 'id10': 0.0,\n 'id11': 274.0,\n 'id12': 0.0,\n 'id13': 7.5,\n 'id15': 0.0,\n 'id16': 73.0,\n 'id17': 0.0,\n 'id18': 0.342,\n 'id19': 6.3,\n 'id20': 1206.0,\n 'id21': 0.0,\n 'id22': 0.87,\n 'id23': 0.0,\n 'id6': 2.0,\n 'id7': -79.09,\n 'id8': 35.97,\n 'id9': 5.8,\n 'timestamp1': datetime.datetime(2018, 1, 24, 14, 5),\n 'timestamp2': datetime.datetime(2018, 1, 24, 9, 5)},\n {'c14': 'C',\n 'id1': 3758.0,\n 'id10': 0.0,\n 'id11': 288.0,\n 'id12': 0.0,\n 'id13': 8.4,\n 'id15': 0.0,\n 'id16': 71.0,\n 'id17': 0.0,\n 'id18': 0.342,\n 'id19': 6.3,\n 'id20': 1207.0,\n 'id21': 0.0,\n 'id22': 0.69,\n 'id23': 0.0,\n 'id6': 2.0,\n 'id7': -79.09,\n 'id8': 35.97,\n 'id9': 6.2,\n 'timestamp1': datetime.datetime(2018, 1, 24, 14, 10),\n 'timestamp2': datetime.datetime(2018, 1, 24, 9, 10)},\n .\n .\n .\n .\ntotid13for file in sorted_files:\n df = process_file(file)\n #df.reset_index(inplace=True) # Reset Index\n #data_dict = df.to_dict('records') # Convert to dictionary\n #to row einai o arithmos ths grammhskai to item ti periexei h grammh\n for row,item in df.iterrows():\n data_dict = item.to_dict()\n mycol1.update_one(\n {\"nsamples\": {\"$lt\": 13}},\n {\n \"$push\": {\"samples\": data_dict},\n \"$min\": {\"first\": data_dict['timestamp1'],\"minid13\":data_dict['id13']},\n \"$max\": {\"last\": data_dict['timestamp1'],'maxid13':data_dict['id13']},\n \"$avg\":{\"avg_id13\":data_dict['id13']},\n \"$inc\": {\"nsamples\": 1,\"totid13\":data_dict['id13']}\n },\n upsert=True\n )\npymongo.errors.WriteError: Unknown modifier: $avg. Expected a valid update modifier or pipeline-style update specified as an array, full error: {'index': 0, 'code': 9, 'errmsg': 'Unknown modifier: $avg. Expected a valid update modifier or pipeline-style update specified as an array'}\n", "text": "I am wondering if i can find the average and upload it along with my data.\nMy code is this:My data look like this:I use totid13 for that purpose but i if need to to find the average in many document its not very helpful.\nI tried something like that:But the output is:Thanks in advance!", "username": "harris" }, { "code": "\"$avg\":{\"avg_id13\":data_dict['id13']},collection.update_one$avg", "text": "\"$avg\":{\"avg_id13\":data_dict['id13']},Hello @harris, the statement you are using with the collection.update_one is not valid - this is because there is no such $avg update operator (see the available Update Operators). The error message says that much.You have to use a pipeline to do such an update operation - see Update with Aggregation Pipeline.", "username": "Prasad_Saya" }, { "code": "\"$avg\":{\"avg_id13\":data_dict['id13']},update_oneupdate_manymaxminmax minnsamples", "text": "Thank you! 
I didnt know that for \"$avg\":{\"avg_id13\":data_dict['id13']},I have one thing more to ask.Is it fine that i use update_one instead of update_many for finding max and min?I did it like that because i thought it find the max and min for the nsamples less than 13.", "username": "harris" }, { "code": "update_oneupdate_manymaxminupdate_oneupdate_one", "text": "I have one thing more to ask.Is it fine that i use update_one instead of update_many for finding max and min ?Yes, it is fine to use update_one. Please note that update_one will always update only one document (or none if there is no match found as per the specified filter). Even if more than one document is matching, only one document will be updated.", "username": "Prasad_Saya" } ]
Finding average while updating documents
2021-04-09T16:57:34.995Z
Finding average while updating documents
2,797
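The pipeline-style update the accepted answer points to could look roughly like the pymongo sketch below (MongoDB 4.2+). The collection name, the stand-in data_dict, and the use of $concatArrays/$ifNull are assumptions made for illustration, not the original poster's exact code.

```python
from datetime import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
mycol1 = client["test"]["readings"]  # assumed names

data_dict = {"id13": 7.5, "timestamp1": datetime(2018, 1, 24, 14, 5)}  # stand-in row

# $avg is not an update operator, but it is a valid aggregation expression when the
# update is written as a pipeline (a list of stages instead of an update document).
mycol1.update_one(
    {"nsamples": {"$lt": 13}},
    [
        {"$set": {
            "samples": {"$concatArrays": [{"$ifNull": ["$samples", []]}, [data_dict]]},
            "nsamples": {"$add": [{"$ifNull": ["$nsamples", 0]}, 1]},
        }},
        # Recompute the running average from the embedded array after the push.
        {"$set": {"avg_id13": {"$avg": "$samples.id13"}}},
    ],
    upsert=True,
)
```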
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "I’m having an issue where syncing a large number of documents to a realm sync client worked once, but NOT on subsequent attempts. The first attempt worked fine (slowly, it’s an M2) but then slows down and just …stops! The document count is ~27000, but the size of the docs averages at 262B. So, the entire download is ~7MB. Surely even an M2 could perform this task more than once? It only made it to ~26,000 documents sync’d.I delete the app documents and data off of the client, and run again.Now, clients receive 0 documents from this collection (all others syncing normally). Is this a limit with the M2s or even above?", "username": "Eric_Lightfoot" }, { "code": "", "text": "Hi Eric. a couple of questions.", "username": "Andrew_Morgan" }, { "code": "", "text": "Hello, thanks for the reply. I’m using RealmSwift SDK.I was unable to find any logs yesterday that I thought were relevant. Initially there were some instances where I needed to tidy up the dataset as its source is a .csv file. At first I thought that it could trip up the sync if the mongo encoding encountered some documents that didn’t fit the schema. In other words I did see a bunch of red MongoEncode errors that cleared out once I had done a better job of scrubbing the csv.At the end of the day I had set up a new realm-app to continue testing to get a grip on the behaviour of sync when applied to relatively larger datasets (as above). I could observe at least 3 different behaviours on a fresh test app install.I got an alert email around that time saying Sync had “been paused” and needed attention. I assume that something I did during my experimentation was not good for the system…Anyhow, I’m trying to accomplish at least two things in this spot I’m in now, which are to gain an excellent understanding of the sync behaviour when applied to larger datasets (like > 10,000 documents - which I’m told is a sort of “soft” limit), and to understand any unwritten caveats surrounding tiered clusters and their effect on sync performance.Thanks again for getting back!Eric", "username": "Eric_Lightfoot" }, { "code": "changesetPublisher.sinkRemoved 0 \nInserted 20268 \nUpdated 0 \nRemoved 0 \nInserted 4124 \nUpdated 0 \nRemoved 0 \nInserted 25 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 31 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 1\n", "text": "Further, I realized this morning that I may have began conducting the download tests a bit too soon after re-enabling sync. I caught a glimpse of the “Copying documents” progress message in the blue bar on the Sync page in Atlas, and then attempted a fresh test app install and run only AFTER the Sync enabling process and document copying had all been completed. However I am still observing the “trickle effect” on the client. I’m using a changesetPublisher with .sink callback, and the result is like so (truncated):I feel like this is definitely throttling. I hope no one minds but I’ve updated the post title. I’m going to switch to an M10 and check the results.I want to know what each one of these callbacks with an insert is costing in terms of requests / data transfer / sync runtime and how that plays out in my billing (i.e. how much of my free tier gets used up by one of these experiments).Edit: I can’t actually update the post title. 
I think it should read “Realm Sync M2 throttling experiments with collections > 10,000”", "username": "Eric_Lightfoot" }, { "code": "Removed 0 \nInserted 225 \nUpdated 0 \nRemoved 0 \nInserted 885 \nUpdated 0 \nRemoved 0 \nInserted 8993 \nUpdated 0 \nRemoved 0 \nInserted 716 \nUpdated 0 \nRemoved 0 \nInserted 6 \nUpdated 0 \nRemoved 0 \nInserted 9 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 6 \nUpdated 0 \nRemoved 0 \nInserted 2 \nUpdated 0 \nRemoved 0 \nInserted 2 \nUpdated 0 \nRemoved 0 \nInserted 14 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 5 \nUpdated 0 \nRemoved 0 \nInserted 900 \nUpdated 0 \nRemoved 0 \nInserted 656 \nUpdated 0 \nRemoved 0 \nInserted 696 \nUpdated 0 \nRemoved 0 \nInserted 1179 \nUpdated 0 \nRemoved 0 \nInserted 60 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 1 \nUpdated 0 \nRemoved 0 \nInserted 19 \nUpdated 0 \nRemoved 0 \nInitially received 0\nRemoved 0\nInserted 27685\nUpdated 0\nInitially received 0\nInserted 27685\n", "text": "Results of the same experiment on an M10:First run output, truncated (fresh install, new cluster, so Development Mode == Enabled)Second run, truncated (fresh install, Development Mode == Disabled):So with a brand new app and no data, but Realm Sync is not in Development mode, the trickle stops. But the amount of time betweenandwas 31 seconds.My “uneducated guess” is that the most significant factor in overcoming this “trickle factor” seems to be disabling development mode.So in ideal conditions:And the demand ofit takes 31 seconds to sync all of those documents down to the client.Same setup on an M2 takes 88 seconds, but with trickling. This means my “uneducated guess” about development mode causing the trickle effect was incorrect. The M2 trickles when development mode is disabled as well.This probably varies a lot with the transient conditions of the multi-tenant tier. Hope I wasn’t being too noisy for my neighbours.So, for my own requirements, it looks like I need at least an M10 to support this particular use case.I must apologize for hurling uneducated guesses and loosely-research based conclusions around but I think that’s what these forums are for. And if any specialists in the subject matter could correct me or support me anywhere that would be GREATLY appreciated.", "username": "Eric_Lightfoot" }, { "code": "", "text": "What does the iOS code look like for opening and measuring the download speed? Have you tried using asyncOpen when opening the realm? Have you taken a look at the Xcode profiler while running your tests?", "username": "Ian_Ward" }, { "code": "try! 
Realm(configuration: user.configuration(partitionValue: \"/places/NA\"))\n .objects(Airport.self)\n .changesetPublisher\n .sink(\n receiveCompletion: { completion in\n dump(completion)\n },\n receiveValue: { changes in\n print(\"Callback time: \\(Date())\")\n self.lastUpdate = Date()\n switch changes {\n case .initial(let documents):\n print(\"Initially received \\(documents.count)\")\n self.documents = Array(documents)\n case .update(let documents, deletions: let deletions, insertions: let insertions, modifications: let modifications):\n deletions.forEach { self.documents.remove(at: $0) }\n print(\"Removed \\(deletions.count) airports\")\n\n insertions.forEach { self.documents.insert(documents[$0], at: $0) }\n print(\"Inserted \\(insertions.count) airports\")\n\n modifications.forEach {\n self.documents.remove(at: $0)\n self.documents.insert(documents[$0], at: $0)\n }\n print(\"Updated \\(modifications.count)\")\n\n case .error(let err): fatalError(err.localizedDescription)\n }\n })\n .store(in: &cancellables)\n", "text": "Thanks for chiming in IanThe codeI prefer to stay away from the asyncOpen API as much as possible as I have spent too much time trying to integrate it with my application. In the end, that API has influenced my overall architecture in such a way that it is not needed.I’m less than fluent with profiler. I can give it a go and provide observations. Any suggestions what I should be looking at / for exactly?Thanks again,Eric", "username": "Eric_Lightfoot" }, { "code": "", "text": "This post’s title needs to be changed but I am unable to for some reason", "username": "Eric_Lightfoot" }, { "code": "", "text": "I don’t think we can deterministically measure performance for any shared instance - they are there for development and not for any testing or production usage.asyncOpen does download the initial seed realm in one big chunk as a performance improvement so I would be interested to know how much time it takes to download the realm using asyncOpen on a m10. I’d also be interested to know how much time it takes to download the partition after terminating and re-initializing sync.", "username": "Ian_Ward" }, { "code": "", "text": "Unfortunately I had to shut down the M10 since I didn’t intend to keep it running. I can say anecdotally that the sync enabling copy process on the M10 was similar to the M2, probably on the order of 30 seconds.", "username": "Eric_Lightfoot" } ]
Realm Sync dataset with M2 gives no documents to client after first run
2021-04-07T18:44:02.262Z
Realm Sync dataset with M2 gives no documents to client after first run
2,755
null
[ "swift" ]
[ { "code": "struct RecordsView: View {\n @ObservedResults(Record.self, NSPredicate(format: \"needsReview == True\")) var reviewRecords\n \n var body: some View {\n VStack {\n ForEach(reviewRecords) { record in\n RecordRowView(record: record)\n }\n RecordsSummaryView(records: reviewRecords)\n } \n }\n}\n", "text": "I have a question about passing an @ObservedResults property to a sub-view. Let’s say I have the following RecordsView where I am first filtering for certain records needing review. I loop through the results and pass individual records to a RecordRowlView. That part I understand.But separately I also want to use all the filtered records in another sub-view, RecordsSummaryView. My question is, does it make sense to pass in all the filtered records to the subview, and if so, how would that be done (what would the variable type be in the sub-view)? Or would I not pass in anything and then just repeated the @ObservedResults filtered variable in the RecordsSummaryView? Or is there some other way to handle the situation of getting filtered records into a sub-view?Thanks!–Tom", "username": "TomF" }, { "code": "RecordsSummaryView@ObservedResults(Record.self, NSPredicate(format: \"needsReview == True\")) var reviewRecordsRecordsSummaryView()RecordsViewlet results: Results<Record>RecordsSummaryViewimport RealmSwiftRecordsSummaryView", "text": "Hi @TomF I found a couple of options that work.The first is for RecordsSummaryView to include @ObservedResults(Record.self, NSPredicate(format: \"needsReview == True\")) var reviewRecords – where you can then embed that view with RecordsSummaryView().The second is to keep RecordsView as you currently have it, and then add let results: Results<Record> to RecordsSummaryView (as well as import RealmSwift). I guess you could use this if you don’t need to make any changes to the realm from RecordsSummaryView.", "username": "Andrew_Morgan" }, { "code": "", "text": "Thanks @Andrew_Morgan. I will use the first method as I do need to make changes in my RecordsSummaryView.", "username": "TomF" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to pass @ObservedResults to a SwiftUI sub-View
2021-04-08T13:15:20.271Z
How to pass @ObservedResults to a SwiftUI sub-View
4,045
null
[ "installation" ]
[ { "code": "", "text": "Hi\nMy name is Jose. I 've just installed mongodb in a synology (http://synology.acmenet.ru/) Someone knows where is it installed mongod.conf in a Synology DSM 6.2.3 please? I need find mongod.conf to configure and can not see is in /etc/…\nThank you in advance", "username": "Jose_Calvo" }, { "code": "", "text": "You must have used docker,container,shared folder concepts while install\nCheck volumes from your docker-composer.yml\nIt will show mapping of your localhost\ndocker or Synology forums can help you more", "username": "Ramachandra_Tummala" }, { "code": "", "text": "At first, thank you for your reply. But a I don’t have Docker in my synology system (DSM), I installed mongodb and runs correctly, but I would like to connect by Robomongo or similar program to manage my database remotly and need to change mongod.conf but I am not have any file in my system. I am using (http://synology.acmenet.ru/) [armada/xp]", "username": "Jose_Calvo" }, { "code": "", "text": "I do not think that the file mongod.conf is the file you need to change to connect with robomongo or others. This file is the configuration file of the server. With Compass, you would simply specify the host name of your NAS and it will connect. I am pretty sure robomongo is similar.", "username": "steevej" }, { "code": "mongod -f /path/to/config", "text": "Third party packaged.A config file is not required to run mongod. Looking at the package it does not look like configuration is done.If you crate your own configuration file you can execute with mongod -f /path/to/configIf I follow the link you provide the version is really old (v3.2.1).", "username": "chris" }, { "code": "", "text": "Great! Thank you now It is Working done! I had to create mongod.conf file and launch mongod with your point parameter “-f \\path\\mongod.conf” but it is a really old version and some parameters run in the mongod line as “–bind_ip 192.168.1.x” or “–auth” I can not run a Docker version in my NAS so I use this method. Thank you very much.", "username": "Jose_Calvo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongod.conf in Synology DSM 6.2.3 aprox
2021-04-05T14:21:23.229Z
Mongod.conf in Synology DSM 6.2.3 aprox
8,447
null
[ "queries", "indexes" ]
[ { "code": "{\n name: String\n}\n{\n name: {$in: [\"Jack\", \"Tom\"]}\n}\nname$inCOLSCAN$inCOLSCAN", "text": "Originally posted on StackOverflow, thought I’d try my luck here as well.\nHere is our document schemaHere is our queryI believe even if there isn’t an index on name, the query engine will turn the array in the $in into a hashset and then check for presence as it scans through each record or a COLSCAN. It will never do a naive O(m*n) search, right?I’m trying to find supporting documentation online but I’ve come up short. I’ve tried searching in the source code but I can’t seem to find the exact section responsible for this either.If the index exists I believe that it will use it directly instead and be faster. If I’m not wrong I think it will be O(m*log(n)) as it gets the result set in log(n) time from the b-tree for every element in the $in array and returns the union of them all. Though big Oh wise for large m it seems slower than the O(n) hashset approach, its faster in practice as the disk reads are much more expensive.Is this line of thinking correct when there is an index?And if there isn’t an index does it do the COLSCAN with a naive search or will it use a hashset to fasten the process.", "username": "Louis_Christopher" }, { "code": "", "text": "If there is no index on the queried field there will be a collection scan which is a slow O(n) if the collection is not in RAM. The O(n^2) complexity is for the most naive sort, not for a search, You might mean O(mn) where m is the size of the $in array and n the size of the collection.", "username": "steevej" }, { "code": "", "text": "Yes, thanks, I mean the O(mn) is the naive search, you can convert the array into a hashmap/hashset and then check if it’s present in the hashmap or not as it performs the COLSCAN which will be O(n).\nDoes it do this though when there is no index? Updated my question to clarify.", "username": "Louis_Christopher" } ]
Mongodb $in implementation and complexity
2021-04-09T12:28:39.701Z
Mongodb $in implementation and complexity
3,955
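Whether the $in query above runs as a COLLSCAN or an IXSCAN is easy to confirm empirically; a small pymongo sketch using explain() is shown here, with the collection name assumed.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
people = client["test"]["people"]  # assumed name

# With this index the winning plan is normally an IXSCAN that seeks once per
# element of the $in array; without it the same query is a full COLLSCAN.
people.create_index([("name", 1)])

plan = people.find({"name": {"$in": ["Jack", "Tom"]}}).explain()
print(plan["queryPlanner"]["winningPlan"])
```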
null
[]
[ { "code": "mongodb", "text": "Hi there,I am making a node.js express application (express-generator in case that helps) that is deployed to Heroku. I am using an AWS atlas cluster, and I am pretty knew to this. I am using mongoose in my express app. I was wondering, is there anything I should do so that me running on my localhost in development or people who visit my website at the heroku deployed can access and retrieve and send data with the site? I am simply doing a login, register, profile information functionality that stores in atlas.I read a mongodb blog post about heroku with mongodb, but it did not really apply to me because they were not using mongoose but an npm lib called mongodb?I don’t really do anything like a development or production environment, I simply write the code, run it on my localhost, then it is deployed automatically to Heroku with a ci on every commit and push.If this helps, my project structure and functionality is extremely similar to Build Rest API with MongoDB Atlas, Mongoose Schema Model, New Route Tutorial | Part 2 - YouTube this video. Thanks!", "username": "Cousin_Ale" }, { "code": "ATLAS_URImongodb", "text": "HI @Cousin_Ale, welcome to the forums!I was wondering, is there anything I should do so that me running on my localhost in development or people who visit my website at the heroku deployed can access and retrieve and send data with the site?In Heroku you could configure environment variables. With this you could set an environment variable where your application could read the env variable value to determine which database cluster to connect to. For example, you could set an environment variable called ATLAS_URI which contains a MongoDB Connection String URI that points to your Atlas cluster.I read a mongodb blog post about heroku with mongodb, but it did not really apply to me because they were not using mongoose but an npm lib called mongodb?mongoose is an ODM library that utilises MongoDB Node JS driver (mongodb npm package) behind the scene. Which MongoDB + Heroku blog post are you referring to ?If you still have a question about MongoDB and Heroku, it would be helpful if you could elaborate further on:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Extra steps for a Node.js application using atlas deployed to Heroku?
2021-04-03T16:40:44.493Z
Extra steps for a Node.js application using atlas deployed to Heroku?
3,483
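The ATLAS_URI suggestion is independent of the driver; the thread itself uses Node.js and mongoose, so the short Python sketch below only illustrates the pattern of reading the connection string from an environment variable (a Heroku config var in production, an exported variable locally).

```python
import os
from pymongo import MongoClient

# ATLAS_URI is assumed to be set as a Heroku config var; the fallback is for local runs.
uri = os.environ.get("ATLAS_URI", "mongodb://localhost:27017")
client = MongoClient(uri)

# A quick liveness check against whichever cluster the variable points at.
print(client.admin.command("ping"))
```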
https://www.mongodb.com/…f1d16a95a347.png
[ "php", "alpha" ]
[ { "code": "mongodbcomposer require mongodb/mongodb^1.9.0@alpha\nmongodb", "text": "The PHP team is happy to announce that the first alpha release of version 1.9.0 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsThis release adds support for Versioned API (which will be released with MongoDB 5.0) and using Azure and GCP keystrokes for client-side field level encryption.A complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=30928DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Library 1.9.0-alpha1 Released
2021-04-09T14:00:52.551Z
MongoDB PHP Library 1.9.0-alpha1 Released
3,080
null
[ "php", "alpha" ]
[ { "code": "MongoDB\\Driver\\Managerpecl install mongodb-1.10.0alpha1\npecl upgrade mongodb-1.10.0alpha1\n", "text": "The PHP team is happy to announce that the first alpha release of version 1.10.0 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release adds support for Versioned API (which will be released with MongoDB 5.0), using Azure and GCP keystrokes for client-side field level encryption, and a new option to disable libmongoc client persistence when creating a new MongoDB\\Driver\\Manager instance.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.10.0alpha1 Released
2021-04-09T13:59:51.154Z
MongoDB PHP Extension 1.10.0alpha1 Released
4,057
null
[ "performance", "realm-web" ]
[ { "code": "", "text": "HiMy web app is performing a poll request every 10 seconds and before each request checks if the access token is about to expire. In case the token is expired it is calling the refreshCustomData method which calls the /auto/session API to obtain a new access token.\nThis works ok for several hours but at one point I notices the request is stalled for nearly an hour and eventually times out without any failure code.\nIs there any reason for this behavior? Please note that the refresh token is still valid.Thx\nMichael", "username": "michael_schiller" }, { "code": "", "text": "Sounds like a glitch in the matrix to me.But in all seriousness, is this something that happened once or are you able to reproduce this deterministically?Another and slightly unrelated observation is that refreshing the access token is handled automatically by the Realm Web SDK, there should be no need to call the API to refresh it. If a request fails do to an expired access token it will refresh the token and replay the request.If that’s not the case, we have a bug and in that case my question would be, what “poll request” are you performing?", "username": "kraenhansen" }, { "code": "", "text": "So, it turns out it is caused only when the PC is sleeping, so it is not related to any server issue.", "username": "michael_schiller" } ]
Call to refreshCustomData on WEB SDK is stalled for a long time
2021-04-06T08:56:03.608Z
Call to refreshCustomData on WEB SDK is stalled for a long time
2,153
null
[ "queries", "node-js" ]
[ { "code": "[\n {\n \"_id\": {\n \"$oid\": \"606aa80e929a618584d2758b\"\n },\n \"id\": \"1\",\n \"tid\": \"1\",\n \"cid\": \"1\",\n \"title\": \"What is Time Complexity\",\n \"description\": \"So, the time complexity is the number of operations an algorithm performs to complete its task (considering that each operation takes the same amount of time).\",\n \"image\": \"https://i.pinimg.com/originals/af/e4/0b/afe40bd0ee0a6b0e34daacd74078391e.jpg\",\n \"author\": \"Hanry Kon\"\n }\n]\n> error: \nReferenceError: 'ObjectId' is not defined\n> trace: \nReferenceError: 'ObjectId' is not defined\n\tat exports (function.js:28:43(28))\n\tat function_wrapper.js:3:34(21)\n\tat <eval>:11:8(17)\n\tat <eval>:2:15(7)\n", "text": "After Running this :mcollection.find({“id”:“1”}).toArray();Output :I want same output based on ObjectId as Input\nlike :mcollection.find({\"_id\":ObjectId(“606aa80e929a618584d2758b”)}).toArray();Error after Running this Code:How can i do that By Using MongoDB Webhook", "username": "Its_Me" }, { "code": "import { ObjectId } from \"bson\"\n", "text": "Did you import?", "username": "steevej" }, { "code": "", "text": "Thanks for Reply I was Able to solve my problem through this Article.Syntax examples to query ObjectID in different MongoDB Products", "username": "Its_Me" }, { "code": "", "text": "Solution : mcollection.find({\"_id\":new BSON.ObjectId(“606aa80e929a618584d2758b”)}).toArray();", "username": "Its_Me" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How use ObjectId search from Webhook in MongoDB
2021-04-08T19:11:18.217Z
How use ObjectId search from Webhook in MongoDB
11,374
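The accepted fix above is Realm-function JavaScript (new BSON.ObjectId(...)); the equivalent lookup from pymongo, sketched here with assumed database and collection names, wraps the 24-character hex string the same way before matching _id.

```python
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
articles = client["test"]["articles"]  # assumed names

# A bare hex string never matches an ObjectId _id; it must be wrapped first.
doc = articles.find_one({"_id": ObjectId("606aa80e929a618584d2758b")})
print(doc)
```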
null
[ "dot-net" ]
[ { "code": " public void testDeserialization()\n {\n string jsonString = \"{ a: 1, b: 0.5710711036366942 }\";\n JsonReader reader = new JsonReader(jsonString);\n var context = BsonDeserializationContext.CreateRoot(reader);\n\n BsonDocument doc = BsonDocumentSerializer.Instance.Deserialize(context);\n Console.WriteLine(doc.ToString());\n }\n", "text": "Hi folks,I’m using c# 2.10.4 driver, and encountered data discrepancy using this codeThis simple test code will output below, there is one more ‘2’ digit in the last!\n{ “a” : 1, “b” : 0.57107110363669422 }Seems this is related to double/decimal deserialization, any idea how to resolve this discrepancy?\nThanks in advance", "username": "Jiaxing_Song" }, { "code": "", "text": "Actually I’m using BsonDocument.ToJson() extension.Console.WriteLine(doc.ToJson());This will output the same results with discrepancy", "username": "Jiaxing_Song" }, { "code": "ToString() var x = 0.5710711036366942D; \n Console.WriteLine(x.ToString(\"G17\"));\n // outputs \n // 0.57107110363669422\n string jsonString = \"{ a: 1, b: NumberDecimal('0.5710711036366942') }\";\n ...\n // output\n // { \"a\" : 1, \"b\" : NumberDecimal(\"0.5710711036366942\") }\n public class MyClass {\n public int a { get; set; }\n [BsonRepresentation(BsonType.Decimal128, AllowTruncation=true)]\n public decimal b { get; set; }\n }\n\n ... \n \n BsonDocument doc = BsonDocumentSerializer.Instance.Deserialize(context);\n MyClass foo = BsonSerializer.Deserialize<MyClass>(doc);\n Console.WriteLine(foo.ToJson());\n // output\n // { \"a\" : 1, \"b\" : NumberDecimal(\"0.5710711036366942\") }\n", "text": "Hi @Jiaxing_Song,Seems this is related to double/decimal deserialization,I don’t think this is specifically related to the MongoDB C# driver, but instead due to the binary representation of double and float numbers in C#. To elaborate, C# represents double and float numbers in binary, and unable to represent many decimal numbers accurately. Depending on the requested precision of the ToString() conversion, the binary value would be to the closest binary equivalent.To illustrate, see this simple code:any idea how to resolve this discrepancy?If you have control over the input string to be parsed, you could explicitly express the decimal format with NumberDecimal(). For example, you could useDepending on your use case, you could also try to deserialise into a class and truncate, i.e.Regards,\nWan.", "username": "wan" } ]
MongoDB C# driver deserialization introduce discrepancies
2021-02-04T13:02:14.445Z
MongoDB C# driver deserialization introduce discrepancies
4,214
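The same effect can be reproduced outside the C# driver, which supports the explanation that the extra digit comes from IEEE 754 binary doubles rather than from deserialization; in this Python sketch the expected output is taken from the thread's G17 example and should be verified locally.

```python
from decimal import Decimal
from bson.decimal128 import Decimal128

# 17 significant digits expose the closest representable binary double,
# the same thing C#'s "G17" format shows.
print(format(0.5710711036366942, ".17g"))         # expected: 0.57107110363669422

# Storing the value as Decimal128 preserves the exact decimal digits instead.
print(Decimal128(Decimal("0.5710711036366942")))  # 0.5710711036366942
```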
null
[]
[ { "code": "", "text": "Hola! I am new to Mongo DB, in my organization I have a task to analyze where the majority of the Mongo DB pricing is focused on. I have gone through our billing invoices. Though most part of it is self explanatory I am not able to grasp the part of Storage eg: They have a component called - Atlas Standard Storage - AWS with 1000000 GB hours @ $0.000208/ GB hour which leads to 208 dollars. I am sure i don’t have 1000,000 GB i.e. 1 PB in my DB. Just wanted to know how this part is calculated. This is for month February (We have couple of M30 instances running). Can someone help me out in understanding the billing structure for storage with the ‘GB hours’ concept? Thanks in Advance", "username": "Ram_Gopalan" }, { "code": "", "text": "Hi Ram,Great question: the key can be. found in the units you mentioned: “GB hours”. This is the # of GB that you’ve got in each hour, times the number of hours.As an example if you had a 100 GB volume per node on a three node replica set for a week, you would calculate the number of GB hours by multiplying 100 GB * 3 nodes * 7 days * 24 hours = 50,400 GB hoursI hope that helps\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi Andrew,Thanks so much for this explanation. Yes this helps !", "username": "Ram_Gopalan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo DB Storage Cost Structure
2021-04-05T11:38:11.338Z
Mongo DB Storage Cost Structure
2,521
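Andrew's GB-hours formula is simple enough to sanity-check in a few lines; the numbers below just restate the 100 GB / 3 node / one week example and the rate already quoted in the thread.

```python
# GB-hours = GB per node * number of nodes * number of hours
gb_per_node = 100
nodes = 3
hours = 7 * 24

gb_hours = gb_per_node * nodes * hours
print(gb_hours)                        # 50400, matching the worked example

# At the quoted Atlas Standard Storage rate of $0.000208 per GB-hour:
print(round(gb_hours * 0.000208, 2))   # approximate storage charge for that week
```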
null
[ "dot-net", "transactions" ]
[ { "code": "public class TransactionTest\n{\n private const string DatabaseName = \"PressureTest\";\n private const string CollectionName = \"Test\";\n public const string ConnectionString = \"\";\n\n public MongoClient GetMongoClient(int timeout = 5)\n {\n var clientSettings = MongoClientSettings.FromConnectionString(ConnectionString);\n clientSettings.ConnectTimeout = TimeSpan.FromSeconds(5);\n clientSettings.ServerSelectionTimeout = TimeSpan.FromSeconds(timeout);\n clientSettings.AllowInsecureTls = true;\n var mongoClient = new MongoClient(clientSettings);\n\n return mongoClient;\n }\n\n public async Task TestTransactionAsync()\n {\n var client = GetMongoClient();\n var tasks = new List<Task>();\n for (int i = 0; i < 5; ++i)\n {\n tasks.Add(DoAsync(client));\n }\n\n await Task.WhenAll(tasks);\n }\n\n private async Task DoAsync(IMongoClient mongoClient)\n {\n Console.WriteLine(mongoClient.GetHashCode());\n while (true)\n {\n var collection = mongoClient.GetDatabase(DatabaseName).GetCollection<BsonDocument>(CollectionName);\n\n var uuid1 = Guid.NewGuid().ToString(\"N\").Substring(24);\n var uuid2 = Guid.NewGuid().ToString(\"N\").Substring(24);\n try\n {\n using (var session = await mongoClient.StartSessionAsync())\n {\n session.StartTransaction();\n\n await collection.InsertOneAsync(session, new BsonDocument(\"Uuid\", uuid1));\n await collection.InsertOneAsync(session, new BsonDocument(\"Uuid\", uuid2));\n\n await session.CommitTransactionAsync();\n }\n Console.WriteLine($\"[{uuid1}] [{uuid2}]\");\n }\n catch (Exception e)\n {\n Console.WriteLine(\"$$$\" + e.Message);\n }\n }\n }\n}\n public async Task TestTransactionAsync()\n {\n var tasks = new List<Task>();\n for (int i = 0; i < 5; ++i)\n {\n var client = GetMongoClient(i + 5);\n tasks.Add(DoAsync(client));\n }\n\n await Task.WhenAll(tasks);\n }\n", "text": "MongoDB.Driver version 2.11.5, Server version: 4.2.2-entI use 5 threads to execute transactions in parallel and encounter lots of 251 errors:MongoCommandException, 251, “NoSuchTransaction”, “Command insert failed: cannot continue txnId 35 for session 618e6cd1-4db1-40ea-8b22-6386e204c36b - xxx with txnId 36”MVP code to reproduce:If we not reuse the mongoClient by changing TestTransactionAsync(): create a dedicated mongoClient for each thread, then no error happens:The above modification intentionally pass different ServerSelectionTimeout value to prevent mongoclient from reusing.Refer to: Connectingmultiple MongoClient instances created with the same settings will utilize the same connection pools underneath.The document suggests re-use mongoclient by store it in a global place. However, a singleton mongoclient leads to parallel transaction failure.What’s the right way to execute transactions in parallel?Thanks a lot!", "username": "finisky" }, { "code": "", "text": "We are also experiencing this problem, did you managed to solve it @finisky ?", "username": "Joao_Passos" }, { "code": "", "text": "In my case, the root cause is that I created a load balancer in front of mongos. However, a transaction in sharded cluster must be executed in the same mongos instance.For details: Don't Use Load Balancer In front of Mongos | Finisky Garden", "username": "finisky" } ]
C# Multi-Document Transaction Error
2021-01-05T06:26:54.544Z
C# Multi-Document Transaction Error
4,208
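Separately from the root cause identified later in the thread (a load balancer in front of mongos), the callback transaction API keeps every operation pinned to one session and retries transient errors; a pymongo sketch of that pattern follows, with the connection string assumed and the database/collection names borrowed from the thread.

```python
from pymongo import MongoClient

# Transactions require a replica set or sharded cluster; this URI is an assumption.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
collection = client["PressureTest"]["Test"]

def insert_pair(session):
    # Both inserts use the same session, so they belong to the same transaction.
    collection.insert_one({"Uuid": "aaaa"}, session=session)
    collection.insert_one({"Uuid": "bbbb"}, session=session)

with client.start_session() as session:
    session.with_transaction(insert_pair)
```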
null
[ "python", "production", "motor-driver" ]
[ { "code": "", "text": "We are pleased to announce the 2.4.0 release of Motor - MongoDB’s Asynchronous Python Driver. This release adds support for Client-Side Field-Level encryption and Python 3.9.See the changelog for a high-level summary of what is in this release or see the Motor 2.4.0 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!", "username": "William_Zhou" }, { "code": "", "text": "", "username": "system" } ]
Motor 2.4.0 Released
2021-04-08T22:31:04.719Z
Motor 2.4.0 Released
3,545
null
[]
[ { "code": "query QueryListings( $limit: Int!, $start: Int ) { listings( limit: $limit, start: $start, sort: \"likes:desc\" ) { id title street neighborhood zipcode city phones likes logo { url } slug } }", "text": "I am getting duplicate results when using sort, limit and start.1 - starting with 0\n2 - incrementing the startDo not bring the same record sought previously. \n\n\nquery QueryListings( $limit: Int!, $start: Int ) { listings( limit: $limit, start: $start, sort: \"likes:desc\" ) { id title street neighborhood zipcode city phones likes logo { url } slug } }", "username": "Rafael_Reale" }, { "code": "", "text": "Hi Rafaael,Did you figure this out? I am not quite clear if the issue is happening at the Strapi or the MongoDB layer?-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Thanks for the reply Andrew. Strapi’s support reported that it is a problem in Mongo and suggested that I move to a MYSQL database.[image]", "username": "Rafael_Reale" } ]
Results duplicate query
2021-03-29T23:55:28.692Z
Results duplicate query
3,123
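The thread never pins down a cause, but one common reason for repeated rows when paging with sort/limit/start is that the sort key (likes) is not unique, so the relative order of tied documents can differ between queries. The pymongo sketch below shows the usual mitigation, a unique tiebreaker in the sort, with all names assumed.

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")
listings = client["strapi"]["listings"]  # assumed names

# likes alone is not a total order when values repeat; adding _id makes each
# page deterministic, so skip/limit pagination stops returning duplicates.
page = (
    listings.find({}, {"title": 1, "likes": 1})
    .sort([("likes", DESCENDING), ("_id", ASCENDING)])
    .skip(20)
    .limit(10)
)
for doc in page:
    print(doc["likes"], doc.get("title"))
```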
null
[ "c-driver", "alpha" ]
[ { "code": "", "text": "Announcing 1.18.0-alpha of libbson and libmongoc, the libraries constituting the MongoDB C Driver.This is an unstable prerelease and is unsuitable for production applications.No changes since 1.17.5; release to keep pace with libmongoc’s version.Features:Bug fixes:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C driver 1.18.0-alpha released
2021-04-08T22:20:05.229Z
MongoDB C driver 1.18.0-alpha released
2,833
null
[ "atlas", "serverless" ]
[ { "code": "", "text": "Here is my scenario: My Database is on MongoDB Atlas. I need to access Atlas from Azure Function(Dynamic/Consumption Plan) and App Service. In doing so, I am facing A timeout issue. I know, that this is because I have to whitelist outbound IP addresses of Azure Function and App Service Plan in Atlas. But Azure Function(with Consumption Plan) and App Service update their outbound IP addresses during the autoscaling process. Now there are 2 solutions I can think of:So now my question is what other options do I have? Is there any way to connect them using some Authentication/Authorization approach using Azure AD or something like this? Is Federated Authentication fits for such a case?", "username": "Ahsan_Habib" }, { "code": "", "text": "Hi @Ahsan_Habib,Consider looking into azure private link :This should allow you to connect azure service through a private link connection…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Ahshan,Did you get this working? I believe you would need to use an Azure Function Premium Plan (Azure Functions Premium plan | Microsoft Learn) to take advantage of private endpoints. The alternative would be to leverage public IP (I don’t believe Azure Functions use a static public IP so you’d need to essentially add 0.0.0.0/0 to the Atlas IP Access List).Cheers\n-Andrew", "username": "Andrew_Davidson" } ]
How to connect Azure Function with MongoDB Atlas
2021-04-04T16:40:04.216Z
How to connect Azure Function with MongoDB Atlas
5,278
null
[]
[ { "code": "%%date", "text": "Is there a way to get the current date in a JSON expression? Something like %%date would be ideal. The only way I can see to do this right now would be to make a function that returns the current time, but adding a function invocation to every query role JSON expression sounds like a lot of overhead.", "username": "Andrew_Kaiser" }, { "code": "", "text": "Hi @Andrew_Kaiser - welcome to the community!I think $currentDate is what you’re looking for. Check out the official docs for more details: https://docs.mongodb.com/manual/reference/operator/update/currentDate/If it’s not, can you provide more details on what you’re trying to accomplish?", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hi Lauren,Sorry for the delayed reply. Yes, this should work correctly. Thank you!", "username": "Andrew_Kaiser" }, { "code": "", "text": "Woo hoo! ", "username": "Lauren_Schaefer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get current date/time in query role JSON expression
2021-03-29T14:37:59.092Z
Get current date/time in query role JSON expression
3,459
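For reference, this is what the $currentDate operator mentioned above does in an ordinary update; whether it is also usable inside a Realm rule expression is a separate question, so the sketch (with assumed names) only illustrates the operator itself.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
sessions = client["test"]["sessions"]  # assumed names

# Sets "lastSeen" to the server's current time as a BSON date on each call.
sessions.update_one(
    {"_id": "user-1"},
    {"$currentDate": {"lastSeen": True}},
    upsert=True,
)
print(sessions.find_one({"_id": "user-1"}))
```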
null
[ "production", "c-driver" ]
[ { "code": "", "text": "Announcing 1.17.5 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.Bug fixes:Improvements:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C driver 1.17.5 released
2021-04-08T18:57:25.216Z
MongoDB C driver 1.17.5 released
2,513
null
[ "data-modeling", "next-js" ]
[ { "code": "", "text": "Hi all,Currently I am working on a Next.js project for collectors of comics. We opted for MongoDB as our database, but since this is a crowdsourced database project I could use some pointers and tips from the experienced MongoDB community.So simply said its a crowdsourced database where users can add entries (comics) into the database, and also sell the items they have added into the database. So simply said, a user has a Spawn comic that isnt in the database yet, he can add it, and later sell this item from his account as well.I am having difficulties in going from relational databases (sql) to MongoDBs nosql and seeing the best schematic and architecture for this project in MongoDB. Anyone got any pointers, tips or suggestions?", "username": "Kevin_Parker" }, { "code": "", "text": "Hey Kevin,Welcome to the community!The first rule of thumb you’ll hear when modeling data in MongoDB is “data that is accessed together should be stored together.” So carefully consider your queries to determine the best way to model your data.It sounds like you’re going to want to have a collection named something like Comics that will contain all of the comics. Where things get tricky is how you want to handle sales and users.When working with a relational database, you’d probably have a table for comics, a table for sales, and a table for users. Then you’d probably create references between the three.When using MongoDB, you’ll want to consider how you’re going query the data. How will you view the sales? Will they be displayed on a user’s profile page or on a comic page or both? You may find yourself duplicating data–and that’s ok–especially if you won’t be updating it very often.I’ll share a few resources that will help you get started in data modeling:Free MongoDB University Course all about data modeling:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Schema Design Patterns blog series. My guess is that Extended Reference, Subset, and Computed patterns will be helpful to you as you figure out how much data you want to duplicate and if you want to pre-compute any stats.A summary of all the patterns we've looked at in this seriesI can’t resist the chance to plug my own content. Here are some anti-patterns to avoid:\nhttps://www.mongodb.com/article/schema-design-anti-pattern-summary/Finally, since you’re moving from a relational database to MongoDB, you may find this blog series helpful:\nhttps://www.mongodb.com/article/map-terms-concepts-sql-mongodb/", "username": "Lauren_Schaefer" } ]
Need some basic tips
2021-03-24T23:19:00.580Z
Need some basic tips
2,469
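To make "data that is accessed together should be stored together" concrete for the comics use case, here is one possible document shape as a hedged sketch; every field name and the choice to embed the active sale listings are assumptions to adapt, not a design prescribed by the thread.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
comics = client["collectors"]["comics"]  # assumed names

comics.insert_one({
    "title": "Spawn #1",
    "publisher": "Image Comics",
    "added_by": "user123",  # the contributor who crowdsourced the entry
    # Extended-reference style: embed only the sale fields the comic page displays,
    # and keep the full sale history in a separate collection if it grows large.
    "for_sale": [
        {"seller": "user123", "price": 150.00, "condition": "VF"},
    ],
})
```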
null
[ "queries" ]
[ { "code": "", "text": "I have got an exception with “bad operator” using $exists operator in my query.\nI have next query: {$match: {“id”: {\"$ne\": 1}, “tagGroup”: 16, “tagElement”: 16, value: /‘LOUWS^AARON’/{$exists: true}}}Error: unknown operator: $exists:: generic server error", "username": "111406" }, { "code": "", "text": "You are missing the name of the field for which you want to test for $exists.", "username": "steevej" } ]
Issue with $exists operator
2021-04-08T16:03:30.073Z
Issue with $exists operator
2,764
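Following steevej's point that $exists must hang off a field name, a corrected version of the failing $match could look like the pymongo sketch below; the collection name and the regex escaping of the caret are assumptions.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
tags = client["test"]["tags"]  # assumed name

pipeline = [
    {"$match": {
        "id": {"$ne": 1},
        "tagGroup": 16,
        "tagElement": 16,
        # Both conditions are criteria on "value"; $exists is never a standalone key.
        # The caret is escaped so it matches a literal "^" instead of anchoring.
        "value": {"$exists": True, "$regex": "LOUWS\\^AARON"},
    }},
]
for doc in tags.aggregate(pipeline):
    print(doc)
```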
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.4.5 is out and is ready for production deployment. This release contains only fixes since 4.4.4, and is a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.4.5 is released
2021-04-08T15:17:36.160Z
MongoDB 4.4.5 is released
3,203
null
[ "php", "developer-hub" ]
[ { "code": "", "text": "Hey folks - the first two articles in this PHP Quickstart are out… I’d love to get your feedback.Getting Started with MongoDB and PHP - Part 1 - SetupI’ve started with the basics of getting your environment set up… Then I continue with the basics of CRUD operations with PHP/MongoDB.Getting Started with MongoDB and PHP - Part 2 - CRUDNext, I’ll be publishing a sample application where we put these things into practice. Stay tuned for that… also let me know what you’d like to see covered in future articles!Thanks!", "username": "Michael_Lynn" }, { "code": "", "text": "Curious… what should be next in the series?Any suggestions?", "username": "Michael_Lynn" }, { "code": "redswitch/php-mongodb:8.0-1.9.0object ID", "text": "What a coincidence! Just as I was looking for a guide for using PHP with MongoDB, this was published a few days before I found it!Its a good and comprehensive introduction, thanks. I had to adjust this, to use a Docker container for the PHP (and Apache) server, rather than on a local Mac machine.I published my working / final image to Docker Hub:redswitch/php-mongodb:8.0-1.9.0(PHP 8.0 with MongoDB client 1.9.0)Can I ask how you would insert a related / linked document? 2 collections, with a 1-to-Many relationship between them.I dont know where to start. I expect I will need to query the parent collection to get the object ID, then use that as 1 of the fields / properties in an InsertOne() request?Is it possible to use a simple PHP variable containing the object ID in the same way as just a String variable?", "username": "Dan_Burt" } ]
Getting started with MongoDB and PHP... Series is out
2021-03-23T13:22:49.412Z
Getting started with MongoDB and PHP&hellip; Series is out
4,227
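Dan's follow-up about inserting a linked document in a 1-to-many relationship is usually answered exactly as he guesses: read the parent's _id, then store it as a field on the child. The series is about PHP, so the pymongo sketch here (with assumed collection and field names) only shows the shape of the two calls.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["library"]  # assumed names throughout

# Assumes the parent document already exists.
author = db.authors.find_one({"name": "Dan"}, {"_id": 1})

# The ObjectId is stored directly as a field value; no string conversion is needed.
db.books.insert_one({
    "title": "Getting Started with MongoDB",
    "author_id": author["_id"],
})

# The many side of the relationship is then just a query on that field.
for book in db.books.find({"author_id": author["_id"]}):
    print(book["title"])
```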
null
[ "production", "cxx" ]
[ { "code": "", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.6.2.Please note that this version of mongocxx requires the MongoDB C driver 1.17.0 .See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.6.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Clyde_Bazile_III" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C++11 Driver 3.6.2 Released
2020-12-01T19:45:03.122Z
MongoDB C++11 Driver 3.6.2 Released
1,766
null
[ "production", "cxx" ]
[ { "code": "", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.6.3.Please note that this version of mongocxx requires the MongoDB C driver 1.17.0 .See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.6.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Clyde_Bazile_III" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C++11 Driver 3.6.3 Released
2021-04-07T20:16:26.411Z
MongoDB C++11 Driver 3.6.3 Released
1,974
null
[ "security" ]
[ { "code": "", "text": "Hi guys, I’m trying to audit CRUD events in a specific collections and user events, but when I add a second atype to my filter it stops to register events. It doesn’t throws any error at the start the mongod process.Here is my filter:auditLog:\ndestination: file\nformat: JSON\npath: /mongodb/audit.json\nfilter: ‘{ atype: { $in: [ “createCollection”,“createDatabase”,“createIndex”,“renameCollection”,“dropCollection”,“dropDatabase”,“dropIndex”,“createUser”,“dropUser”,“dropAllUsersFromDatabase”,“updateUser”,“grantRolesToUser”,“revokeRolesFromUser”,“createRole”,“updateRole”,“dropRole”,“dropAllRolesFromDatabase”,“grantRolesToRole”,“revokeRolesFromRole”,“grantPrivilegesToRole”,“revokePrivilegesFromRole”,“shutdown” ] }, atype: “authCheck”,“param.ns”: “test.orders”, “param.command”: { $in: [ “find”, “insert”, “delete”, “update”, “findandmodify” ] } }’setParameter: { auditAuthorizationSuccess: true }What I am doing wrong?", "username": "Oscar_Cervantes" }, { "code": "mongo> var j1 = { a: 12, b: \"xyz\" }\n> j1 // returns { \"a\" : 12, \"b\" : \"xyz\" }\n\n> var j2 = { a: 761, b: \"mno\", a: 900 }\n> j2 // returns { \"a\" : 900, \"b\" : \"mno\" }\nj2aatype", "text": "Hello @Oscar_Cervantes, the filter is a JSON. A JSON with same keys (or fields) result in one key only. For example, in mongo shell try this:Note the j2 has a single key with the name a. That is exactly what is happening with your query filter - only one atype is considered.", "username": "Prasad_Saya" }, { "code": "filter: '{ atype: { $in: [ \"createCollection\", ... ,\"authCheck\"]},\n \"param.ns\": \"test.orders\", \n\t\t\t \"param.command\": { $in: [ \"find\", \"insert\", \"delete\", \"update\", \"findandmodify\" ] }'\n", "text": "Got it!, thank you very much, then should be like thisThen I shuld apply querys in expressions, right?question, is every expression an “and”?, I mean, in my example, results should accomplish “param.ns” and “param.command”Thanks", "username": "Oscar_Cervantes" }, { "code": "$and{ a: 3, b: \"apples\", c: 24.55 }{ $and: [ { a: 3 }, { b: \"apples\" }, { c: 24.55 } ] }", "text": "Hello @Oscar_Cervantes, in a query filter the $and is implicitly applied - so you don’t need to specify it explicitly. For example,{ a: 3, b: \"apples\", c: 24.55 }\nis same as\n{ $and: [ { a: 3 }, { b: \"apples\" }, { c: 24.55 } ] }", "username": "Prasad_Saya" }, { "code": "", "text": "then if I need ors they should be in the same expression?{ $or: [ { param.ns: 3 }, { param.command: “apples” }, { param.user: 24.55 } ] }", "username": "Oscar_Cervantes" }, { "code": "", "text": "{ $or: [ { param.ns: 3 }, { param.command: “apples” }, { param.user: 24.55 } ] }Yes, that is the correct usage of the $or operator.", "username": "Prasad_Saya" } ]
Audit a specific collection and all user events
2021-04-05T16:52:10.284Z
Audit a specific collection and all user events
2,841
null
[]
[ { "code": "\n{\n \"_id\": \"joe\",\n \"addresses\": \n {\n \"file\": \"16MB\"\n\n }\n }\n", "text": "Greetings,\nPlease help me understand\nThe limitation is for document.\nWhat if i have document inside the documentDo i understand that if the file will be 16MB lets say, that i cant not insert more emded documents inside the addresses?", "username": "Ukro_Ukrovic" }, { "code": "{\n \"_id\": \"joe\",\n \"addresses\": \n {\n \"file\": \"16MB\"\n }\n }\naddresses", "text": "Hello @Ukro_Ukrovic, the 16 MB limit is for the entire document - includes all fields of any data type (arrays, sub-documents (objects or embedded documents), strings, binary data, etc.).In the above document, the field addresses is of type sub-document, and it is considered as part of the main document.You can check the size of a document using the $bsonsize aggregate operator.", "username": "Prasad_Saya" }, { "code": "", "text": "thank you for info )", "username": "Ukro_Ukrovic" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
16MB limit explained
2021-04-08T09:35:29.944Z
16MB limit explained
2,899
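The reply above points at the size-checking operator; note it is spelled $bsonSize and is available from MongoDB 4.4. A minimal sketch, assuming the example document lives in a users collection (the collection name is not given in the thread):

    // Report the BSON size, in bytes, of the whole document (MongoDB 4.4+)
    db.users.aggregate([
      { $match: { _id: "joe" } },
      { $project: { sizeBytes: { $bsonSize: "$$ROOT" } } }
    ])

    // Legacy mongo shell helper for a single fetched document
    Object.bsonsize(db.users.findOne({ _id: "joe" }))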
null
[ "replication" ]
[ { "code": "", "text": "Hi, I have upgraded the MongoDB replica set, which consists of 3 members, from 4.0.11 to 4.2.5. After upgrading, startup lasts about 5 minutes. Before upgrading it was instant. It is related to oplog size, because I tested with dropping oplog on new mongo 4.2 and startup was instant. Max oplog size was 25GB, I decreased it to 5GB and the startup is still slow. Mongo db is on AWS with EBS standard disks. However mongo worked well until this upgrade. Do you have any idea what can cause slow startup? Thanks.", "username": "Danka_Ivanovic" }, { "code": "2020-04-07T16:36:19.391+0000 I STORAGE [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2020-04-07T16:36:19.391+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7279M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],\n2020-04-07T16:36:19.931+0000 I STORAGE [initandlisten] WiredTiger message [1586277379:931910][29458:0x7f2d17521b40], txn-recover: Recovering log 2439 through 2440\n2020-04-07T16:36:20.002+0000 I STORAGE [initandlisten] WiredTiger message [1586277380:2298][29458:0x7f2d17521b40], txn-recover: Recovering log 2440 through 2440\n2020-04-07T16:36:20.086+0000 I STORAGE [initandlisten] WiredTiger message [1586277380:86290][29458:0x7f2d17521b40], txn-recover: Main recovery loop: starting at 2439/15997696 to 2440/256\n2020-04-07T16:36:20.087+0000 I STORAGE [initandlisten] WiredTiger message [1586277380:87145][29458:0x7f2d17521b40], txn-recover: Recovering log 2439 through 2440\n2020-04-07T16:36:20.140+0000 I STORAGE [initandlisten] WiredTiger message [1586277380:139996][29458:0x7f2d17521b40], txn-recover: Recovering log 2440 through 2440\n2020-04-07T16:36:20.189+0000 I STORAGE [initandlisten] WiredTiger message [1586277380:189056][29458:0x7f2d17521b40], txn-recover: Set global recovery timestamp: (1586277366, 1)\n2020-04-07T16:36:20.197+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. 
Ts: Timestamp(1586277366, 1)\n2020-04-07T16:36:20.205+0000 I STORAGE [initandlisten] Starting OplogTruncaterThread local.oplog.rs\n2020-04-07T16:36:20.205+0000 I STORAGE [initandlisten] The size storer reports that the oplog contains 260431 records totaling to 5145508040 bytes\n2020-04-07T16:36:20.205+0000 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation\n2020-04-07T16:36:20.207+0000 I STORAGE [initandlisten] Sampling from the oplog between Apr 6 14:15:08:222 and Apr 7 16:36:06:1 to determine where to place markers for truncation\n2020-04-07T16:36:20.207+0000 I STORAGE [initandlisten] Taking 981 samples and assuming that each section of oplog contains approximately 2654 records totaling to 52436838 bytes\n2020-04-07T16:38:48.156+0000 I STORAGE [initandlisten] Placing a marker at optime Apr 6 14:15:16:84\n…..\n2020-04-07T16:38:48.156+0000 I STORAGE [initandlisten] Placing a marker at optime Apr 7 16:29:31:13\n2020-04-07T16:38:48.156+0000 I STORAGE [initandlisten] WiredTiger record store oplog processing took 147951ms\n2020-04-07T16:38:48.160+0000 I STORAGE [initandlisten] Timestamp monitor starting\n", "text": "Here is the part of the log related to STORAGE enigne:The problem is when WiredTiger starts to count samples from oplog. I decreased oplog size to 5GB, and still has a lag when starting, more than 2 minutes. It was above 5 minutes when max oplog size was 25GB.\nBefore upgrading to 4.2, oplog size was also 25GB, but startup was instant.", "username": "Danka_Ivanovic" }, { "code": "", "text": "I tried with changing following 3 WiredTiger default eviction parameters:\nstorage.wiredTiger.engineConfig.configString:\n“eviction_dirty_target=60,\neviction_dirty_trigger=80,eviction=(threads_min=4,threads_max=4)”Now mongo is starting immediately.\nIs it safe to set eviction_dirty_target and eviction_dirty_trigger values like this? Default is : eviction_dirty_target (default 5%) and eviction_dirty_trigger (default 20%). Thanks.", "username": "Danka_Ivanovic" }, { "code": "mongod", "text": "I have very slow startups too, but I count mine in hours, not minutes.This is because I have thousands of collections and mongod needs to open about 130k file pointers when it starts up. In reality this settles down to about 5k-10k when the database is running under normal usage, but I desperately need to reduce these insane startup times.I don’t see the configString option here: https://docs.mongodb.com/manual/reference/configuration-options/#storage-wiredtiger-optionsIs it undocumented, or has it been removed?", "username": "timw" }, { "code": "", "text": "Hi timw,What is your MongoDB version? This situation should be a little better in MongoDB >=4.0.14 and >=4.2.1 due to improvements in SERVER-25025.If your MongoDB version is above the aforementioned versions, there are further planned improvements in startup time in SERVER-43664. Please comment/upvote on the ticket if startup time is still an issue for you.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "I resolved the worst of my issue by using a more powerful server.The startup time is unworkable on a single core VPS. I was playing around with restoring backups, so skimped on the CPU. The solution for me (even for development and testing) is to use a bigger VPS.As mentioned in my linked topic, even on my live servers (which are easily powerful enough to run the application) I get a 5 minute startup time. 
I will keep an eye on your planned improvements, thank you.", "username": "timw" }, { "code": "", "text": "A post was split to a new topic: Increased startup time after upgrading from 3.6.18 to 4.0.23", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB very slow at startup after upgrade
2020-04-07T22:29:15.041Z
MongoDB very slow at startup after upgrade
7,517
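For reference, the oplog resize mentioned in the thread ("I decreased it to 5GB") can be done online on each member; a sketch, with the size argument given in megabytes (WiredTiger storage engine only):

    // Current maximum oplog size, in bytes, on this member
    db.getSiblingDB("local").oplog.rs.stats().maxSize

    // Shrink (or grow) the oplog to roughly 5 GB; repeat on every member
    db.adminCommand({ replSetResizeOplog: 1, size: 5120 })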
null
[ "swift" ]
[ { "code": "List {\n ForEach(huntlet.tasks) { task in\n TaskListEditRowView(task: task, huntlet: huntlet)\n }.onMove(perform: $huntlet.tasks.move)\n .onDelete(perform: $huntlet.tasks.remove)\n}\n.onDelete(perform: $self.showDeleteAlert)Alert()huntlet.tasks", "text": "I have a list of tasks that can be edited easily in SwiftUI:This totally works.But! Since removing this task from this list has bigger implications, I’d like to give a warning alert that deleting the task will also remove all posts that are associated with this task and allow the user to delete the task along with all of the posts, or cancel deleting the task.When attempting to do this .onDelete(perform: $self.showDeleteAlert) and then attempting to pass around some state variables and processing the decision of the user through the Alert() dialogue and buttons, there isn’t a clear way to actually remove this task from huntlet.tasks.Does anyone know a way to do this?", "username": "Kurt_Libby1" }, { "code": "performonDeleteList {\n ForEach(huntlet.tasks) { task in\n TaskListEditRowView(task: task, huntlet: huntlet)\n }.onMove(perform: $huntlet.tasks.move)\n .onDelete(perform: alertAndDelete)\n}\n\nprivate func alertAndDelete(at offsets: IndexSet) {\n // Display an alert, and if the user confirms then continue\n $huntlet.tasks.remove(atOffsets: offsets)\n}", "text": "The perform function or closure that you provide to onDelete receives the set of indexes for the item that you’re attempting to delete and so you should be able to use something like this…", "username": "Andrew_Morgan" }, { "code": " @ObservedRealmObject var huntlet: Huntlet\n @State private var showingAlert = false\n @State private var deleteRow = false\n @State private var index: IndexSet?\n \n var body: some View {\n List {\n ForEach(huntlet.tasks) { task in\n TaskListEditRowView(task: task, huntlet: huntlet)\n }.onMove(perform: $huntlet.tasks.move)\n .onDelete(perform: alertAndDelete)\n }\n .alert(isPresented: $showingAlert, content: {\n \n let button = Alert.Button.default(Text(\"Cancel\")) {}\n let deleteButton = Alert.Button.default(Text(\"Delete\")) {\n removeRow()\n }\n return Alert(title: Text(\"Delete Task\"), message: Text(\"If you delete this task, every user post associated with this task will be deleted with it as well. Are you sure you want to delete the task and all associated posts?\"), primaryButton: deleteButton, secondaryButton: button)\n })\n .environment(\\.editMode, .constant(EditMode.active))\n }\n \n private func alertAndDelete(at offsets: IndexSet) {\n self.showingAlert = true\n self.index = offsets\n }\n \n private func removeRow() {\n $huntlet.tasks.remove(atOffsets: self.index!)\n }", "text": "Thanks Andrew!In case anyone sees this in the future, this is how I added to the solution to make it work in my app:", "username": "Kurt_Libby1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SwiftUI Warn Before using .remove
2021-04-07T15:04:14.962Z
SwiftUI Warn Before using .remove
3,565
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "HiWhy I can’t download epub?Then try to download\nhttps://docs.mongodb.com/master/MongoDB-manual.epub", "username": "111404" }, { "code": "//", "text": "The link is wrong on the page, it appends a / when opening. Drop the / off the address bar.https://docs.mongodb.com/master/mongodb-manual-master.epub", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Manual Epub
2021-04-08T09:26:43.055Z
MongoDB Manual Epub
2,601
null
[ "data-modeling" ]
[ { "code": "", "text": "MongoDB uses B-Trees for their indexing, which is great unless you have a specific need for anything else. In this case, I need to use AVL trees to keep a sorted list of documents based on a key within the documents (think keeping a sorted list) as the insertion, deletion, and searching should stay within O(log n) time. Does anybody have any idea how this can be done?", "username": "Alex_Mac" }, { "code": "", "text": "Hi @Alex_Mac,Welcome to MongoDB Community.Why wouldn’t indexing the relevant fields that you use for predicates will not work for you, if thy ar the only ones which gets projected those can be considered covered queries and therefore utelize only index.Perhaps if you can share more on your data design and expected queries we might help you with your design better.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,Thank you for your response. In my current design, I am crudely clustering objects based on a similarity score. In memory, I am creating an AVL tree where my key (or sorted value) is this similarity score and the value is an _id.For MongoDB, I am considering two possibilities:Using an AVL tree, for me, can guarantee sublinear performance in queries (insert, delete, and search).Edit: Also, a key part of the AVL tree is the ability to view the parents and children of a node. AVL trees (when the key is the similarity) guarantees that the parents and children are closely related.Thank you in advance,\nAlex", "username": "Alex_Mac" }, { "code": "", "text": "@Alex_Mac,Perhaps you can module your objects to best match your access pattern.We do have several tutorials on saving tree dataThe Tree Pattern - how to model trees in a document databaseSee if any of those help youThanks", "username": "Pavel_Duchovny" } ]
Storing Unique Data Structures (AVL Tree) in MongoDB
2021-04-08T07:44:45.241Z
Storing Unique Data Structures (AVL Tree) in MongoDB
2,299
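A sketch of how the suggestions above can look in practice, assuming a hypothetical nodes collection: a plain index on the similarity score gives O(log n)-ish insert, delete and search, and a parent reference (the parent-references tree pattern from the linked tutorials) covers the parent/child navigation the in-memory AVL tree was providing.

    // One document per node; parentId implements the parent-references tree pattern
    db.nodes.insertMany([
      { _id: 1, score: 0.42, parentId: null },
      { _id: 2, score: 0.57, parentId: 1 },
      { _id: 3, score: 0.61, parentId: 1 }
    ])

    // B-tree index on the score keeps lookups, inserts and deletes logarithmic
    db.nodes.createIndex({ score: 1 })

    // Closest neighbours of a given score: one index scan upwards, one downwards
    const s = 0.55
    db.nodes.find({ score: { $gte: s } }).sort({ score: 1 }).limit(1)
    db.nodes.find({ score: { $lt: s } }).sort({ score: -1 }).limit(1)

    // Children of a node via the parent reference
    db.nodes.find({ parentId: 1 })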
null
[]
[ { "code": "{\"t\":{\"$date\":\"2021-04-08T09:45:43.547+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.551+02:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.552+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.552+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":3408,\"port\":27017,\"dbPath\":\"/media/ukro/data2/mongodb/\",\"architecture\":\"64-bit\",\"host\":\"work3\"}}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.552+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.4\",\"gitVersion\":\"8db30a63db1a9d84bdcad0c83369623f708e0397\",\"openSSLVersion\":\"OpenSSL 1.1.1d 10 Sep 2019\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"debian10\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.552+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"LinuxMint\",\"version\":\"4\"}}}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.552+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"/media/ukro/data2/mongodb/\"}}}}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.552+02:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}}}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.552+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", 
\"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2021-04-08T09:45:43.553+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n{\"t\":{\"$date\":\"2021-04-08T09:44:08.840+02:00\"},\"s\":\"W\", \"c\":\"COMMAND\", \"id\":20525, \"ctx\":\"conn11\",\"msg\":\"Failed to gather storage statistics for slow operation\",\"attr\":{\"opId\":8415,\"error\":\"lock acquire timeout\"}}\n{\"t\":{\"$date\":\"2021-04-08T09:44:08.840+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"db.col1\",\"appName\":\"MongoDB Compass\",\"command\":{\"aggregate\":\"col1\",\"pipeline\":[{\"$match\":{\"maxTimeMS\":1111111}},{\"$skip\":0},{\"$group\":{\"_id\":1,\"n\":{\"$sum\":1}}}],\"cursor\":{},\"maxTimeMS\":5000,\"lsid\":{\"id\":{\"$uuid\":\"497f25d5-1ef9-429d-bad7-f344cbb5bb6b\"}},\"$db\":\"db\"},\"planSummary\":\"COLLSCAN\",\"numYields\":5626,\"queryHash\":\"37ECF029\",\"planCacheKey\":\"37ECF029\",\"ok\":0,\"errMsg\":\"Error in $cursor stage :: caused by :: operation exceeded time limit\",\"errName\":\"MaxTimeMSExpired\",\"errCode\":50,\"reslen\":160,\"locks\":{\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":5628}},\"Global\":{\"acquireCount\":{\"r\":5628}},\"Database\":{\"acquireCount\":{\"r\":5628}},\"Collection\":{\"acquireCount\":{\"r\":5628}},\"Mutex\":{\"acquireCount\":{\"r\":2}}},\"protocol\":\"op_msg\",\"durationMillis\":5010}}\n", "text": "Greetings,\nso i’m a long time MSSQL->MYSQL user.\nI am preparing my brain and environment for migrating to mongodb.\nI put a bit of stress test and now i cant figure out how to recover from that.1.I put a script with fake name generator to fill the mounted disk with mongodb data.\n2.When i woke up, it was obviously filled (5GB) with 60+ mil documents.\nIt was not starting,so i mounted another drive 8GB, i rsync mongodb to 8GB drive\nand now its starts\n3.The problem is that in mongodb compas it was showing 0 - 20 of N/A\nAtleast i seen the data\n4.after mongodb -repair\n0 - 0 of N/A\nI don’t see any data.Please advise what is the best practice to recover from this.Thank youLOGZ:\nwhen i run repairmongodb log", "username": "Ukro_Ukrovic" }, { "code": "", "text": "Hi @Ukro_Ukrovic,It looks like you run the repair while the instance was running which is not how the repair works.Why did you need to repair the dbPath if you can resync it. Resync is the best option to repair a failed node files…Compass cannot calculate all data so its better to use a shell connection and run a count on the documents and se that the value is the expected one. 
Additionally you can run a validate command on the collection to verify its integrity…Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\"t\":{\"$date\":\"2021-04-08T10:08:11.297+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20306, \"ctx\":\"initandlisten\",\"msg\":\"Validation complete for collection. No corruption found\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuid\":\" (UUID: 8793bae9-ab3c-4baa-bd8b-456411f9361e)\"}}\n", "text": "Greetings,\nthank for your reply\nSorry for that, had no idea that repair would not do what ever is neccesary to process the repair (automaticaly stop instance and run again)\nIn robo3t it is showing the correct numbers, but i need to see it in pure GUI etc compass.\nWill try to find the validate command and verify command.btw repair:", "username": "Ukro_Ukrovic" }, { "code": "", "text": "Hi @Ukro_Ukrovic,The log line looks good so I expect data is consistent.You can use a shell connection from Compass when clicking the bar on the lower side.Additionally, you can just count all documents with aggregation tab:Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \"ns\" : \"db.col1\",\n \"nInvalidDocuments\" : 0,\n \"nrecords\" : 66207905,\n \"nIndexes\" : 1,\n \"keysPerIndex\" : {\n \"_id_\" : 66207905\n },\n \"indexDetails\" : {\n \"_id_\" : {\n \"valid\" : true\n }\n },\n \"valid\" : true,\n \"warnings\" : [],\n \"errors\" : [],\n \"extraIndexEntries\" : [],\n \"missingIndexEntries\" : [],\n \"ok\" : 1.0\n}\n", "text": "As i said, i understand that i can count it, but i need the pure view of DB for debug/fixing purposes.db.col1.validate()still:\nScreenshot from 2021-04-08 10-23-111279×836 49.4 KBAny other ideas? :> ", "username": "Ukro_Ukrovic" }, { "code": "use db;\ndb.col1.findOne();\n", "text": "The header says it has 66M documents .The UI seems to not load yet so I am not sure the “No Rows” should be taken seriously when the query still running.Can you run in mongo shell beta:Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "use db;\ndb.col1.findOne();\n{ _id: ObjectId(\"606e1b562d18931239ee1860\"),\n name: 'Kevin',\n surname: 'Walker',\n job: 'Lobbyist',\n country: 'Moldova' }\n", "text": "Error: Line 1: Unexpected identifier\nI run it from robo3tFrom compas:", "username": "Ukro_Ukrovic" }, { "code": "", "text": "So you are thinking that the compas gui gave up after some timeout?", "username": "Ukro_Ukrovic" }, { "code": "1 - 20 of N/A\n", "text": "Ok so i figured it out.\nThe problem was between chair and keyboard.\nWhen i was testing find, after that i removed it from textbox, but didn’t press find to clear the memory i guess.\nSo whenever i press refresh it was refreshing from the saved memory even if textbox has empty string.\nWhen i pressed find and then refresh, all see ok\nStill withBut its fine, atleast i see all stuff.\nThank you for your time.P.S. i want to add that i love mongodb and so looking forward to change all of our apps to mongo ", "username": "Ukro_Ukrovic" }, { "code": "", "text": "Thanks for the worm words!We love you being part of the community , keep up the good work migrating to the best database there is ;p", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
DB testing failed, pls advise
2021-04-08T07:57:26.564Z
DB testing failed, pls advise
4,111
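The counting that the thread's screenshots show can also be done from the shell; a small sketch against the db.col1 namespace used above:

    use db
    db.col1.countDocuments({})              // exact document count
    db.col1.aggregate([ { $count: "n" } ])  // the same count as an aggregation stage
    db.col1.validate()                      // integrity check, as run earlier in the thread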
null
[ "golang" ]
[ { "code": "", "text": "Hello,\nIs it possible to update different documents, provided as a Slice in one bulk-operation?From what I read, we have to set the filter on the opdateOneModel., so isn’t this a single operation?I mean, creatng a WriteModel for earch document in the slice (“batch”) I want to process, doesn’t feel right. Am I on the right path here?In other words: What’s the best way to perform updates with different values to different documents (but always the same field of these documents)", "username": "Roger_Bieri" }, { "code": "", "text": "well, I did it that way and it works quite fast\n(only using tiny data on my dev VM)", "username": "Roger_Bieri" } ]
Update many (different) docs using BulkWrite?
2021-04-08T08:46:42.865Z
Update many (different) docs using BulkWrite?
1,674
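A sketch of the pattern the thread never shows, written in mongosh syntax: one updateOne model per target document, all submitted in a single bulkWrite round trip. The Go driver builds the equivalent operation list from a slice of WriteModels; the collection and field names here are made up.

    db.items.bulkWrite([
      { updateOne: { filter: { _id: 1 }, update: { $set: { qty: 10 } } } },
      { updateOne: { filter: { _id: 2 }, update: { $set: { qty: 25 } } } },
      { updateOne: { filter: { _id: 3 }, update: { $set: { qty: 7 } } } }
    ], { ordered: false })   // unordered: independent updates may be applied in any order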
null
[]
[ { "code": "python buildscripts/scons.py install-mongod CCFLAGS=\"-g -fprofile-arcs -ftest-coverage\" LINKFLAGS=\"-fprofile-arcs\"\n", "text": "Hi community,I am trying to build MongoDB (latest version) with the goal to measure the branch coverage. I have no prior knowledge of building MongoDB, so I could use a little help. First, I wanted to ask as this has probably been done, if you can point me in the direction of any kind of documentation.Second, I already was partially successful with running:It seems to produce the correct files(.gcno) in sconf_temp, but I’m not sure how to proceed from here. The created executable doesn’t seem to create the relevant analysis files(gcda).Thanks for any help in advance", "username": "Patrick_S" }, { "code": "v4.4mastermaster.gcno.gcda.gcnosconf_temp.gcnobuild/opt.gcdabuilt/opt$ find build/opt -name \"*.gcno\" | wc -l\n341\n$ find build/opt -name \"*.gcda\" | wc -l\n317\nbuild/install/bin/wtinstall-mongodmongodwt --help.gcdainstall-mongod-g-ftest-coverageLINKFLAGS.gcda", "text": "Hi -As far as build documentation, I’m sorry to report that it is more or less lacking. We have plans to address that, but it won’t happen right away. For now, your best resource to get build system questions answered is probably to post here.Regarding your coverage build: by “latest version” do you mean v4.4 (i.e. latest stable release), or the master branch which is used for active development?Meanwhile I just tested out your build command on the master branch and it seems to work for me. I get .gcno files, and after I run a binary I get .gcda files as well.The .gcno files should exist in a lot more places than in sconf_temp though: you should find .gcno files for every object file under build/opt (that directory may vary with build options, but is correct for the build command you have provided). After you run a binary, you should also find .gcda files under built/opt:In the above case I built the target build/install/bin/wt rather than install-mongod since it is much faster to build than the whole mongod binary. I then ran wt --help to produce the .gcda files. You might experiment a bit with that target to see if you can get it working before moving on to building install-mongod. I think it will save you some time.It might also be useful if you provided details on your OS, distro version, tooolchain, etc., if you still find it isn’t working.A few other notes:I hope this helps!Thanks,\nAndrew", "username": "Andrew_Morrow" }, { "code": "masterbuild/install/bin/wt-g-ftest-coverageLINKFLAGSinstall-mongodinstall-mongodbuild/optUbuntu 18.04.5gcc 8.4.0gcov 8.4.0python 3.6.9", "text": "Thanks Adrew for the quick reply and the helpful answer. First of all, I am running the master branch. Secondly, your absolutely right about the location of the .gcno files.Now, I built the wired tiger target build/install/bin/wt and it just works with that. I get the same amount of .gcno files that you get and after running it, also the .gcda files. It didn’t matter if I put -g, it didn’t matter if I ran with -ftest-coverage in LINKFLAGS. So I rebuilt the install-mongod target because I thought I missed something, but still no .gcda files after executing mongod. Note: After building install-mongod I get around 3120 .gcno files in build/opt. 
I am now gonna look into more targets that I can build and test them.FYI, I am running the build on Ubuntu 18.04.5 with gcc 8.4.0, gcov 8.4.0(probably not used for building, but later on for the analysis tool lcov) and python 3.6.9.My follow up question would be, if I missed some steps, executing the target. Currently, I just run it from the install/bin folder directly, as the root user. Additionally, what kind of targets could I test.", "username": "Patrick_S" }, { "code": "wt.gcdamongodmongomongosbase_test", "text": "Hi @Patrick_S -So, this is a nice find. I reproduced your result that while the wt binary produces .gcda files, the mongod binary apparently does not. Thanks for taking the time to write it up and let us know.Would you please open a SERVER ticket in the MongoDB JIRA at https://jira.mongodb.org/projects/SERVER describing your findings?If you would like to continue investigating, other interesting binaries to try out include mongo, mongos, and unit tests like base_test. Please feel free to include any results you may generate for those targets in your ticket and link back to this discussion.Thanks,\nAndrew", "username": "Andrew_Morrow" }, { "code": "", "text": "@Patrick_S - Actually, hold off on that ticket. I think I know what is going on here, but I need to run a quick build to verify.", "username": "Andrew_Morrow" }, { "code": "CPPDEFINES=MONGO_GCOV.gcdamongodgcov", "text": "Hi @Patrick_S -If you add CPPDEFINES=MONGO_GCOV to your SCons invocation you will start getting .gcda files. The mongod binary goes through a non-standard exit path. That define is needed so that we can explicitly invoke the gcov related routine to dump the coverage data at exit.Thanks,\nAndrew", "username": "Andrew_Morrow" }, { "code": ".gcda", "text": "Hi Andrew,the additional SCons target made it also for me possible to produce .gcda files.Thank you very much for all the help,\nPatrick", "username": "Patrick_S" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Building MongoDB with Branch Coverage Compiler Flag
2021-03-31T20:46:47.930Z
Building MongoDB with Branch Coverage Compiler Flag
3,520
https://www.mongodb.com/…7_2_1023x515.png
[]
[ { "code": "", "text": "Hello, i have a question about the following, in my database i have save a user with his login infos as a document into the “users” collection. I want to add into this user a couple of “user questions”, the questions i want to write as a object which contains all questions. I see now during testing that my querys does update all user questions, what i want to have is that a user question gets only updated if already exist or it should be add if not exist. What i understand currently is that the databse does see my query for insert and update like a query which should update the complete questions object, but i want the query to only update the questions which already exist or that it adds the questions as a new object entry if the questions does not already exist and every questions also have one number, so maybe it could be possible to archive this, but i dont know how to write the query correctly, please take a look at my current database entry and my query:\nquestion1370×690 44.7 KB", "username": "Florian_Silbereisen" }, { "code": "", "text": "Hi @Florian_Silbereisen,If ai understand correctly you are looking for an upsert whithin an array , I think there are 2 ways :I will try to write an example for you.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "db.users2.updateOne({\"name\" : \"phil\"},{$set : {\"questions.nr1\" : 1, \"questions.question1\" : \"how am i?\", \"questions.answer1\" : \"bad\"}})\n{ acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0 }\n\n{ _id: ObjectId(\"606d6cf0d81fef690531cedf\"),\n name: 'phil',\n email: '[email protected]',\n questions: \n { nr1: 1,\n question1: 'how am i?',\n answer1: 'bad',\n nr2: 2,\n question2: 'how are you?',\n answer2: 'good' } }\n\ndb.users2.updateOne({\"name\" : \"phil\"},{$set : {\"questions.nr3\" : 3, \"questions.question3\" : \"how am i?\", \"questions.answer3\" : \"good\"}})\n{ acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0 }\n\ndb.users2.findOne()\n{ _id: ObjectId(\"606d6cf0d81fef690531cedf\"),\n name: 'phil',\n email: '[email protected]',\n questions: \n { nr1: 1,\n question1: 'how am i?',\n answer1: 'bad',\n nr2: 2,\n question2: 'how are you?',\n answer2: 'good',\n answer3: 'good',\n nr3: 3,\n question3: 'how am i?' } }\n\n\n", "text": "Hi @Florian_Silbereisen,So I noticed that you are using questions as an object and not array of objects. 
So my previous comment is irrelevant.In that case a simple update will update or insert new fields:Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "thank you i will check your example soon, currently i have create somethink with 2 querys where the first query try a $set and if the matchedcount from the query results is 0 then i have code to make a $addtoset query to add then new questions, but i think your example if it does the same work with only one query will be better, or?", "username": "Florian_Silbereisen" }, { "code": "", "text": "@Florian_Silbereisen,addToSet is only if questions is an array , but its not…", "username": "Pavel_Duchovny" }, { "code": "", "text": "I have build afterward a new construct where i use a array of objects, in my current codeing its works with AddtoSet and 2 database querys but i want to try also your solution, because i will find it better if i can do both taks with only one query.", "username": "Florian_Silbereisen" }, { "code": "", "text": "Hi @Florian_Silbereisen,Using an array and not naming convention of a nested object makes more sense here. If fields are named the same its eaiser to index them and operate objects.So I will suggest you try to use a 2 command approach rather than having a bad structure.Best regards,\nPavel", "username": "Pavel_Duchovny" } ]
Insert or update into an object with one query
2021-04-06T12:34:10.060Z
Insert or update into an object with one query
8,481
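A sketch of the "2 command approach" recommended at the end of the thread, with questions stored as an array of objects; the element shape (nr, question, answer) is an assumption for illustration.

    // Attempt 1: update the answer of the question with nr 3, if it already exists
    const res = db.users2.updateOne(
      { name: "phil", "questions.nr": 3 },
      { $set: { "questions.$.answer": "good" } }
    )

    // Attempt 2: nothing matched, so append the question instead
    if (res.matchedCount === 0) {
      db.users2.updateOne(
        { name: "phil" },
        { $push: { questions: { nr: 3, question: "how am i?", answer: "good" } } }
      )
    }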
null
[ "queries" ]
[ { "code": "", "text": "I have a db with a collection of images with no references and the size has increased to about 160GB.\nNow I am not able to delete any records as any operation times out. I have multiple instances of files, which can be queried. But the delete many and any other operation times out.Pleas help", "username": "Chandra_Bhagavatula" }, { "code": "", "text": "May be it is doing collscan\nAre you using index on the fields from your query.Check explain planCheck this link", "username": "Ramachandra_Tummala" } ]
Unable to delete Stale mongo data
2021-04-06T15:46:54.538Z
Unable to delete Stale mongo data
1,879
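A sketch of the index-plus-batching approach hinted at in the reply, assuming a hypothetical images collection and a status value that marks stale documents:

    // Support the delete filter with an index so it is not a collection scan
    db.images.createIndex({ status: 1 })

    // Confirm the plan uses the index (look for IXSCAN rather than COLLSCAN)
    db.images.find({ status: "stale" }).explain("executionStats")

    // Delete in modest batches so no single operation runs long enough to time out
    let deleted
    do {
      const ids = db.images.find({ status: "stale" }, { _id: 1 }).limit(1000).toArray().map(d => d._id)
      deleted = ids.length ? db.images.deleteMany({ _id: { $in: ids } }).deletedCount : 0
    } while (deleted > 0)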
null
[ "production", "cxx" ]
[ { "code": "", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.6.1.Please note that this version of mongocxx requires the MongoDB C driver 1.17.0 .See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.6.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C++11 Driver 3.6.1 Released
2020-11-04T00:16:35.111Z
MongoDB C++11 Driver 3.6.1 Released
1,820
https://www.mongodb.com/…c980589e69d4.png
[]
[ { "code": "", "text": "Hi,I 'm very new to MongoDB Charts and enjoying it.But I’m experiencing and issue and while have searched for the same issue but haven’t found it.My problem is that while my value (Y-axis) over time (X-axis) line graph looks right the time labels are jumbled up. This doesn’t happen if I render the same data as a text table. Have I done something daft?Thanks for reading this. Here’s a screenshot.Tonypossible_bug_zoomed_Screenshot from 2021-04-06 20-51-53848×394 41.8 KB", "username": "Tony_Walsh" }, { "code": "", "text": "Ah I think the line graph is in ascending value while the text chart is in ascending date. Hopefully I’ll find out how to fix this and remove this post Again I’m new to Charts.", "username": "Tony_Walsh" }, { "code": "", "text": "Solved it. Needed to select CATEGORY rather than VALUE for the X-axis. Still writing this got my to focus on the problem and may help some other newbie Graph n ow looks good!", "username": "Tony_Walsh" }, { "code": "", "text": "Great stuff! Note that it only defaulted to the Value sort because your X values are strings. If they were dates, it would always show them chronologically, and also unlock additional options such as Binning.Even if you can’t change your data to use the correct Date type, you can still convert the field type directly within Charts. Just click the … button on the field in the left panel and choose Convert Type.HTH\nTom", "username": "tomhollander" }, { "code": "", "text": "Hi Tom,Thanks for that explanation. I’m almost there now I’d just like the graph to show the oldest on the left not the newest of the the last 50 readings. Any suggestions?If I reverse it I get the first 50 readings and if I don’t limit to 50 my graph is too big.I tried writing a query to limit the data to the last 50 but that didn’t work for me.Kind regards,Tonyreversed_graph_Screenshot from 2021-04-07 19-59-49596×504 53 KB", "username": "Tony_Walsh" }, { "code": "", "text": "Hi @Tony_Walsh -Again, the solution is to use the a Date type on your X axis, either by modifying your data or using the Convert Type option on your existing string field.Once you do that, you will be able to use the Filters tab to create a filter on your last field, which could be something like Last 1 Day (or whatever time period you want).Tom", "username": "tomhollander" }, { "code": "", "text": "Thanks Tom. Wow! There’s a lot of power lurking in filter/custom. I even manged to apply Irish Summer Time and remove the year.I’ve learned a lot thanks to your help the last two days and can now show and promote these graphs to others.", "username": "Tony_Walsh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Charts dates (tod) displayed incorrectly on X-axis for line graphs but OK for text tables
2021-04-06T19:50:28.211Z
Charts dates (tod) displayed incorrectly on X-axis for line graphs but OK for text tables
2,842
null
[]
[ { "code": "", "text": "I’ve created a bar chart but it has more than 50 bars.I’d like to filter only the top 10 bars, or create an ‘Others’ category.If I try to add a [{ “$limit”: 10 }] filter, it affects only the raw documents, not the bars.Does anybody know how to do it?", "username": "Roberto_de_Oliveira" }, { "code": "", "text": "Hi @Roberto_de_Oliveira!In most cases you can achieve this by using the “Limit Results” option shown on the field card mapped to the category axis. Does this work for you?image1139×754 69.2 KBTom", "username": "tomhollander" }, { "code": "", "text": "Jezz!!! How did I miss that?! Thank you very much!", "username": "Roberto_de_Oliveira" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to Filter Top N results in MongoDB Charts
2021-04-05T17:46:45.150Z
How to Filter Top N results in MongoDB Charts
3,007
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that addresses some issues reported since 2.12.1 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.12.2%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.", "username": "Dmitry_Lukyanov" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver 2.12.2 Released
2021-04-07T20:23:34.241Z
.NET Driver 2.12.2 Released
2,128
null
[ "production", "php" ]
[ { "code": "Cursor::current()key()pecl install mongodb-1.9.1\npecl upgrade mongodb-1.9.1\n", "text": "The PHP team is happy to announce that version 1.9.1 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release fixes return values for Cursor::current() and key() when the cursor’s position is invalid. It also addresses an issue where the PHP version information reported in the client metadata handshake could be truncated. The bundled version of libbson and libmongoc has also been updated to 1.17.4.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "jmikola" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.9.1 Released
2021-04-07T17:55:49.861Z
MongoDB PHP Extension 1.9.1 Released
3,362
null
[ "data-modeling", "anti-patterns" ]
[ { "code": "", "text": "I know that a document have something like 16 MB limit, if i want to create for every forum topic of my website a new collection and into that collection i save all posting of a topic, then i would have maybe after some time a lot of collections in the database, will that be a good practise or how would you recommend to slove this, is there a limmit how many collections you can create or is it for any other reasons not recommend to code like that?", "username": "Florian_Silbereisen" }, { "code": "", "text": "You might find the following interesting.https://www.mongodb.com/article/schema-design-anti-pattern-massive-number-collections/", "username": "steevej" }, { "code": "", "text": "thank you i will check that, so it sounds like it is not good idea to create a collections for something like every new forum topic.", "username": "Florian_Silbereisen" } ]
How many collections can I add to a database
2021-04-06T21:57:20.253Z
How many collections can I add to a database
3,243
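A sketch of the alternative the linked anti-pattern article recommends: a single posts collection for all topics, with the topic carried as a field and a compound index for the per-topic reads. All names here are illustrative.

    // One posts collection for every topic, instead of one collection per topic
    db.posts.insertOne({
      topicId: ObjectId("605c3a0f2f8fb814c8a7a2b1"),   // hypothetical topic reference
      author: "florian",
      body: "First post in this topic",
      createdAt: new Date()
    })

    // Compound index makes "all posts of a topic, newest first" an index-backed query
    db.posts.createIndex({ topicId: 1, createdAt: -1 })
    db.posts.find({ topicId: ObjectId("605c3a0f2f8fb814c8a7a2b1") }).sort({ createdAt: -1 })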
null
[ "queries" ]
[ { "code": "exports.search = async (req, res) => {\n try {\n //await aggregated query on mongodb\n const searchRequest = await Product.aggregate([\n {\n '$search': {\n 'autocomplete': {\n 'query': `${req.query.productName}`,\n 'path': 'productName',\n 'fuzzy': {\n 'maxEdits': 2,\n 'prefixLength': 3\n }\n }\n\n }\n }]).toArray();\n\n //send result of search query from mongodb\n res.send(searchRequest);\n }\n catch (error) {\n...\n};\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"productName\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 3,\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n", "text": "Dear membersI’m a mongodb (and backend) beginner and try to create an autocomplete search. Unfortunately, I always get back an empty array in my console. In another thread the reason was that the user had renamed the index. This is not the case with me.Here’s my code://controller for search query with autocomplete function (get request)This is my index:I have already tried different variants (e.g. with or without quotes etc.), but I just can’t find a solution. Maybe someone has an idea? Thanks in advance for the help.", "username": "dj_ch" }, { "code": "productName{\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 3,\n \"type\": \"autocomplete\"\n }\nproductName", "text": "@dj_ch Welcome to the community forum. Beginners totally welcome!I don’t know exactly what’s going on yet.Could you try three steps for me?Paste a sample document from your collection so I can take a look.Remove fuzzy to isolate this issue.Change your index definition for productName to this:In other words, eliminate the document type definition for productName to isolate the issue. Looking forward to getting autocomplete to work for you.", "username": "Marcus" }, { "code": "1.\t_id: 60678678756c434034a2f562\n2.\tproductName: \"Dragon Fruit\"\n3.\tproductCategory: \"Früchte & Gemüse\"\n4.\tlastModificationDate: 2021-04-02T21:02:48.427+00:00\n5.\t__v: 0\n try {\n const searchRequest = await Product.aggregate([\n {\n $search: {\n 'autocomplete': {\n 'query': `${req.query.productName}`,\n 'path': 'productName',\n /*fuzzy: {\n maxEdits: 2\n prefixLength: 3\n }*/\n }\n }\n }])//.toArray()\n\n //send result of search query from mongodb\n res.send(searchRequest);\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"productName\": [\n {\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 3,\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\n", "text": "Dear Marcus,\nthank you so much for your quick answer! Here’s a sample document from my collection:I’ve removed fuzzy, it now looks this way:exports.search = async (req, res) => {Further I’ve changed the JSON of my index accordingly:Unfortunately I still get the same result (empty array). Btw I commented out the toArray() because I got an error that toArray() is not a function.", "username": "dj_ch" }, { "code": "$searchdefault'$search': {\n 'autocomplete': {\n 'query': 'Drago',\n 'path': 'productName'\n }\n\n }\n", "text": "Can you try to build this query in the aggregation pipeline builder in Atlas to confirm it is not your application code:This will help us to isolate this issue. 
See the parenthetical in step 2 also, in case you should have an index name in your original query.", "username": "Marcus" }, { "code": "try {\n //await aggregated query on mongodb\n const searchRequest = await Product.aggregate([\n {\n '$search': {\n 'index': \"<index name>\", // optional, defaults to \"default\"\n 'autocomplete': {\n 'query': `${req.query.productName}`,\n 'path': 'productName',\n 'fuzzy': {\n 'maxEdits': 2,\n 'prefixLength': 3\n }\n }\n\n }\n }]).toArray();\n", "text": "Hi!I just want to emphasize what Marcus says above for step 2. Unless your index is named “default,” make sure to include it in your $search stage.So in your code listed above, it would look like this:Hope this helps.Karen", "username": "Karen_Huaulme" }, { "code": "", "text": "Dear Marcus and Karen\nThanks for the tip with the aggregation pipeline builder! I have now created a new index with a specific name and tested everything. I copied out the code from the aggregation pipeline builder and now it works! I honestly still do not know where the error was exactly. But I am very happy and thankful that I can use autocomplete now. Thank you very much for the help!\nDaniela", "username": "dj_ch" } ]
$search autocomplete returns an empty array
2021-04-03T16:56:05.683Z
$search autocomplete returns an empty array
3,977
null
[ "migration" ]
[ { "code": "", "text": "I initiated a live migration over the weekend and everything seems fine. I extended the cutover window once and then forgot about it and missed actually clicking the “Cut Over” button. Based on the help docs, pressing “Cut Over” does this:These steps sound internal to the live migration tool but also don’t sound like they would impact my application if they got missed. Should I be worried about anything?", "username": "Steven_Chan" }, { "code": "", "text": "Hi @Steven_Chan,Welcome to the community!Regarding your question in the title:what happens if if you miss the “Cut Over” window?As per the Live Migrate documentation, If the 72 hour period passes, Atlas stops synchronizing with the source cluster. This would be the same case if you had extended the timer and let the new time expire also.Should I be worried about anything?The sync destination would not have any changes made to it from the sync source from the time the cut over timer expired.Hope this answers your questions.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks Jason! So it’s pretty much as though I hit the “Cut Over” button at the point when the window passed? For instance, nothing gets rolled back right? I gather the resultant error message at next login is just scarier than the true situation since it reports that the migration failed and that’s what got me worried.", "username": "Steven_Chan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Live Migration - what happens if you miss the “Cut Over” window?
2021-04-07T09:48:30.475Z
Live Migration - what happens if you miss the “Cut Over” window?
3,748
null
[ "vscode" ]
[ { "code": "", "text": "The VS Code extension is my tool of choice for developing aggregation pipelines. I am experiencing two problems. Can someone advise on how to fix these?Whenever I drag to select a bit of code, the editor ‘inserts’ a line asking me if I want to execute the selected code. It causes all the code below to shift down a line and makes selecting code very irritating.The editor does not sense and redline JSON syntax errors. To work around this, I switch the file association to “JavaScript” whenever I am seriously editing a pipeline. Then I switch the file association back to “MongoDB” just before testing/executing the pipeline. This is inconvenient.Anybody know workarounds for these? Is there some Preference or Setting I’ve missed?", "username": "Peter_Winer" }, { "code": "", "text": "Hi @Peter_Winer, thank you for your feedback and for sharing your frustrations. It’s really valuable for us to know how you use playgrounds in VS Code and where we can improve the experience.For 1, we currently don’t have a setting to disable partial execution support. Let us take a look and see if it’s something we can easily add. Out of curiosity, is partial execution a functionality you see yourself using in some situations?About 2, we need to check the language grammar we are using for playground. It’s possible that for some reason it only checks for valid JS but not for valid JSON. Could you share a screenshot of where you’d expect to see syntax errors?Lastly, would you mind adding these as suggestions in our feedback portal?", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Hi, thanks for the rapid reply.I don’t really see myself using the partial execution feature. I understand it can be useful, and I sometimes use similar features in other tools. For example, there is a RegEx extension where this is quite valuable.Here is a little snip that should display an error:If I switch to JavaScript file association, the error is shown as a redline:… and it’s also logged in the Problems display:I would like to have this capability without having to change the file association for .mongodb files.Thanks - Peter.", "username": "Peter_Winer" }, { "code": "", "text": "@Peter_WinerI totally agree about better JSON parsing as a baseline, especially while the intellisense support gets ramped up.", "username": "NeilM" }, { "code": "", "text": "@Peter_Winer thank you for clarifying. I captured your suggestion for the syntax checking here: Add syntax checking for JS and JSON – MongoDB Feedback Engine.Regarding the code lenses for partial execution, we are currently evaluating what’s the best way to allow customizing this behavior so users who don’t need this feature don’t have that jumpy behavior.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "@Massimiliano_Marcon - Thank you so much. Looking forward to the solution.FYI, I’ve tried various tools for building aggregation pipelines. The VS Code extension is the best. One additional feature would make it truly awesome:Provide a way to run an aggregation pipeline one step at a time, just like MongoDB Compass and Studio 3T. IMO, that’s the only missing feature.Occasionally, I will run an aggregation in MongoDB Compass, just to see such intermediate results. 
Otherwise, I always use the VS Code extension.Thanks - Peter.", "username": "Peter_Winer" }, { "code": "", "text": "@Peter_WinerTo make that feature really useful, it would make sense for the parser to clearly recognize the stages in the pipeline.Since in reality, I would like to be able run say stage 1 and 2, while working on stage 3, without having to comment out stage 3, to re look at output from stage 1 + 2.However because of the way I format my, I end having to format my pipleline code like so: -{\n, {$facet}So I can easily comment out the $facet stage for instance. Maybe the aggregation pipleline, needs a optional parameter, where you enter the stages you want to run {1,2,5} ", "username": "NeilM" }, { "code": "", "text": "Another, related question. This one is very specific.When I launch mongo from within the VS Code extension, it uses a connection string that sets “w=majority”. As a result, I can’t run explain() on my aggregation pipelines. Mongo won’t allow this for pipelines with write concerns.Where is this connection string stored, so I can access it and edit it?Thanks - Peter.", "username": "Peter_Winer" }, { "code": "mongow=majority", "text": "I am almost certain that the connection string the extension passes to mongo is the same connection string you used when you added the connection to the extension. You don’t have a way to edit the connection string of an existing connection, but you can copy it and recreate the connection without w=majority.", "username": "Massimiliano_Marcon" } ]
Love the VS Code extension but
2021-03-01T10:08:29.065Z
Love the VS Code extension but
4,391
null
[ "crud", "mongodb-shell" ]
[ { "code": "", "text": "I am using insertMany() to add data to a collection which works successfully - However, some underlying mechanism is messing up the types. When I try to use $inc on one of the documents added, it says nonnumeric type (athough the type is numberInt)What’s most strange about this situation is that if I manually update the fields, the problem is solved. If I don’t manualy update, they don’t appear in the search results… But I update, change nothing, and voila.For context, the documents have a field “status” which is set to 2, ($numberInt) - I have a cron constantly looking for documents with {“status”: 2} that updates status to 6 (and this never fails, except in this case, where even though status is 2 it never changes unless I ‘update’)When I try to update in the shell, it fails with “Cannot apply $inc to a value of non-numeric type. {_id: ObjectId(’ excluded ')} has the field ‘status’ of non-numeric type object”\" but even on the webapp I can see that the status field is properly set to numberIntthanks!\n", "username": "Beulr_Team" }, { "code": "mongo", "text": "Hello @Beulr_Team, welcome to the MongoDB Community forum!These are your questions or concerns:Please include samples of documents and the code that is producing (or not producing) the expected results, for each of the cases. How does the cron job does the updates - include the code the job runs.Some general information about using number data types in MongoDB. By default, when using with mongo shell, a number is stored as type double. Application programs may have different data types as default (for example, Java programming language has an int as the default number type).", "username": "Prasad_Saya" } ]
Bizarre issue with insertMany() CLI command
2021-04-07T11:25:12.631Z
Bizarre issue with insertMany() CLI command
1,477
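The error quoted above ("field 'status' of non-numeric type object") means the value was stored as a sub-document rather than a number, which typically happens when pre-serialized extended JSON such as {"$numberInt": "2"} is inserted literally. A diagnostic sketch, using a hypothetical jobs collection:

    // Documents whose status is not stored as a BSON number (int, long, double or decimal)
    db.jobs.find({ status: { $not: { $type: "number" } } })

    // Inspect what was actually stored
    db.jobs.aggregate([ { $project: { status: 1, statusType: { $type: "$status" } } } ])

    // Correct: insert real numbers (or NumberInt) so $inc works later
    db.jobs.insertMany([
      { name: "a", status: 2 },
      { name: "b", status: NumberInt(2) }
    ])

    // Broken shape: this would store a sub-document, not a number, and $inc would fail on it
    // db.jobs.insertOne({ name: "c", status: { "$numberInt": "2" } })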
null
[]
[ { "code": "", "text": "Is there any problem if i will use same mongo default port 27017 in all my replica set members on different machines with bind ip’is ?", "username": "Nanuka_Zedginidze" }, { "code": "", "text": "If nodes are different , then no issue to use same default port.", "username": "ROHIT_KHURANA" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replica set on different machines with the same mongo port
2021-04-07T12:03:34.391Z
Replica set on different machines with the same mongo port
2,090
null
[ "installation" ]
[ { "code": "root@raspberrypi:~ # uname -a\nLinux raspberrypi 5.10.17-v8+ #1403 SMP PREEMPT Mon Feb 22 11:37:54 GMT 2021 aarch64 GNU/Linux\nroot@raspberrypi:~ # \nroot@raspberrypi:~ # mongo --version\nMongoDB shell version v4.4.4\nBuild Info: {\n \"version\": \"4.4.4\",\n \"gitVersion\": \"8db30a63db1a9d84bdcad0c83369623f708e0397\",\n \"openSSLVersion\": \"OpenSSL 1.1.1d 10 Sep 2019\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu1804\",\n \"distarch\": \"aarch64\",\n \"target_arch\": \"aarch64\"\n }\n}\nroot@raspberrypi:~ # \nroot@raspberrypi:~# cat /home/mongodb/mongo-One.cfg\nstorage:\n dbPath: /mnt/mongoDB-One/DB_Data\n journal:\n enabled: true\nnet:\n #bindIp: 192.168.10.114\n port: 22330\nsystemLog:\n destination: file\n path: /mnt/mongoDB-One/DB_Data/mongod.log\n logAppend: true\nreplication:\n replSetName: mngoRepSet\nroot@raspberrypi:~# \nroot@raspberrypi:~# cat /lib/systemd/system/mongod-One.service\n[Unit]\nDescription=MongoDB Database Server\nDocumentation=https://docs.mongodb.org/manual\nAfter=network-online.target\nWants=network-online.target\nRequires=network-online.target\n\n[Service]\nUser=mongodb\nGroup=mongodb\nEnvironmentFile=-/etc/default/mongod\nExecStart=/usr/bin/mongod --config /home/mongodb/mongo-One.cfg\nPIDFile=/var/run/mongodb/mongod.pid\n# file size\nLimitFSIZE=infinity\n# cpu time\nLimitCPU=infinity\n# virtual memory size\nLimitAS=infinity\n# open files\nLimitNOFILE=64000\n# processes/threads\nLimitNPROC=64000\n# locked memory\nLimitMEMLOCK=infinity\n# total threads (user+kernel)\nTasksMax=infinity\nTasksAccounting=false\nType=idle\n\n\n# Recommended limits for mongod as specified in\n# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings\n\n[Install]\nWantedBy=multi-user.target\nroot@raspberrypi:~# \n{\"t\":{\"$date\":\"2021-03-20T12:51:15.862+09:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Cannot assign requested address\"}}}root@raspberrypi:~ # systemctl start mongod-One", "text": "I am having a problem with starting mongoDB on a Rasberry Pi with systemd, using Raspberry Pi OS 64 bits.This is my environment:And this is the configuration file for the mongod server:This is the systemd service file:In this configuration the server starts as expected (note that the bindIp line in the server configuration is commented out). But troubles come when I remove the comment on the bindIp line.I then get this error in the logs:{\"t\":{\"$date\":\"2021-03-20T12:51:15.862+09:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Cannot assign requested address\"}}}It seems to show a network issue, but I have done everything I can think of in the systemd service file to make sure the network would be ready when starting the service. Can someone see something I am missing?I can also confirm that after failing at boot time, the service can be manually started using:root@raspberrypi:~ # systemctl start mongod-One", "username": "Michel_Bouchet" }, { "code": "", "text": "The IP isn’t there yet to bind to.A systemd/init issue. There seems to be quite a few hits on google regarding this, so there is bound to be one that solves this.Are you addressing more than 1 ip/interface on this host? 
If not, I would just bind_ip_all.", "username": "chris" }, { "code": "", "text": "No, I am not addressing more than one ip/interface on this host. I used bind_ip_all as you suggested and it works. Since I don’t know yet what it implies to use this instead of what I was doing before, I cannot tell if this is a good solution in the long term. But at least in the short term, it solves my problem. The server starts at boot time and then I can initiate a replica set and use MongoDB normally.", "username": "Michel_Bouchet" }, { "code": "", "text": "The thing to be aware of is that any interface added to the system, or any IP added to an existing interface, will be listening on the mongod port too. If adequate system and/or database protections are in place, this really isn’t an issue.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trouble launching MongoDB on a Raspberry Pi with systemd
2021-04-06T13:50:37.074Z
Trouble launching MongoDB on a Raspberry Pi with systemd
2,707
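For the Raspberry Pi thread above, the root cause is that at boot mongod tries to bind 10.10.24.17 before the interface actually carries that address, which is what the "Cannot assign requested address" log line means. The sketch below is a minimal, hedged illustration of that failure mode in Python: it attempts the same bind mongod would perform. The IP and port are the values from the thread and are assumptions about the reader's setup.

```python
import socket

def can_bind(ip: str, port: int) -> bool:
    """Try to bind a TCP socket to ip:port, the same check mongod performs
    when it starts listening on a configured bindIp."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((ip, port))
        return True
    except OSError as exc:
        # errno 99 (EADDRNOTAVAIL) corresponds to the "Cannot assign requested
        # address" seen in the mongod log when the interface does not yet have
        # the IP, e.g. early in boot before the network is fully up.
        print(f"bind failed: {exc}")
        return False
    finally:
        sock.close()

print(can_bind("10.10.24.17", 22330))  # values taken from the thread
```

If binding to all interfaces is too broad, an alternative (depending on the distribution's network stack) is to delay the unit until the specific address exists, for example with a pre-start check like the one above or a network-wait service; bindIpAll simply sidesteps the ordering problem, as the thread concludes.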
null
[ "mongodb-shell" ]
[ { "code": "", "text": "I was testing the date data.\nBut I found something strange in the results of the new date().\nI wonder why this result came out.replica:PRIMARY> date = new Date()\nISODate(“2021-04-07T06:28:04.624Z”) // current time => rightreplica:PRIMARY> date = new Date(2000,1,1)\nISODate(“2000-01-31T15:00:00Z”) // 2000/01/01 => wrongI don’t think this is a mongodb problem, but I think it is my lack of knowledge about js. But I want to know the reason.", "username": "Kim_Hakseon" }, { "code": "Date()Date()Date1day0monthIndex0111monthIndexnew Date(“2000-01-01”)\nISODate(“2000-01-01T00:00:00Z”)\nnew Date(“2000-01-01T00:00+11:00”)\nISODate(“1999-12-31T13:00:00Z”)\n", "text": "Hi @Kim_Hakseon,You may wish to refer to this Date() constructor documentation.Please see the following information when passing through Individual date and time component values as parameters into the Date() constructor:Given at least a year and month, this form of Date() returns a Date object whose component values (year, month, day, hour, minute, second, and millisecond) all come from the following parameters. Any missing fields are given the lowest possible value ( 1 for day and 0 for every other component). The parameter values are all evaluated against the local time zone, rather than UTC.In addition to this, the monthIndex is an Integer value representing the month, beginning with 0 for January to 11 for December. In your example, you have passed in a value of 1 for the monthIndex.I would recommend using the ISO Date string format instead:Note: this constructor variation assumes UTC +0 (aka Z for Zulu time) for the date string.A UTC offset can be included in the string, eg:Hope this helps!Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks to you @Jason_Tran.\nI learned the mechanism for Date().Jan is 0 in Date()…And the reason why day is different is because Mongodb uses UTC and JS uses KST, which is the area where I am (Korea).\n※ KST = UTC + 9hSo if you add 9 hours to the result, you get the desired result.replica:PRIMARY> date = new Date(2000,1,1)\nISODate(“2000-01-31T15:00:00Z”) + 9hours\n=> ISODate(“2000-02-01T00:00:00Z”)", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using new Date() in MongoDB shell
2021-04-07T06:35:39.828Z
Using new Date() in MongoDB shell
8,461
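The behaviour discussed in the Date() thread is specific to the JavaScript Date constructor: a 0-based monthIndex and local-time interpretation of the component form. When inserting dates from a driver instead of the shell, the safest habit is to be explicit about UTC. The following is a small, hedged pymongo sketch; the database and collection names are made up, and it assumes a locally reachable mongod.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient()                      # adjust the URI for your deployment
coll = client["test"]["dates_demo"]         # hypothetical database/collection

# Python months are 1-based (unlike the JS monthIndex). Attaching an explicit
# UTC timezone avoids the local-time shift the thread ran into (KST = UTC+9).
jan_first_2000 = datetime(2000, 1, 1, tzinfo=timezone.utc)
coll.insert_one({"when": jan_first_2000})

doc = coll.find_one()
# PyMongo stores BSON dates as UTC and, by default, returns them as naive
# datetimes already expressed in UTC.
print(doc["when"])  # 2000-01-01 00:00:00
```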
null
[ "aggregation", "python" ]
[ { "code": "userAccess = [ \"deserunt sint cupidata\" ,\"dolore pariatur aliqua\",\"eu magna cupidatat dolore\",\"amet irure nulla\"]\n\ncursor = coll.name.aggregate(\n [\n \n { \"$redact\": {\n \"$cond\": {\n \"if\": { '$gt': \n [ \n { \"$size\": \n { \"$setIntersection\": \n [ \"$tags\", userAccess ] \n } }, 0 ] \n },\n then: \"$$DESCEND\",\n \"else\": \"$$PRUNE\"\n }\n }\n }\n ]\n)\n\nfor x in cursor:\n print(x)\n", "text": "All,\nI am very new to mongodb and python. I tried to write a redact query but having issue. It works in Mongo Shell but not in python. Can someone help? TIA", "username": "Tony_Tran" }, { "code": "", "text": "Hi @Tony_TranA quick approach to getting valid code for an aggregation is to use MongoDB’s Compass tool where you can input the Mongo Shell syntax and then export a specific language’s code, say Python. This is probably the quickest and easiest way to debug your current problem.In general, this type of broader question outside of a relevant lesson should be asked in the Drivers & ODM category so you can more expose for your question as questions in this category are meant to be related to lessons and exercises within M220P.Hope this helps.\nEoin", "username": "Eoin_Brazil" } ]
How to construct a redact query in Python
2021-04-06T11:28:31.888Z
How to construct a redact query in Python
1,964
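The answer in this thread points to Compass's export-to-language feature but never spells out why the pipeline from the question runs in the mongo shell and not in Python: the shell accepts bare `then:` and `else:` keys, while Python requires every key to be a quoted string (and `else` is a Python keyword, so it cannot even appear as a bare name). Below is a hedged rewrite of the same pipeline; the collection handle is a stand-in, since the original `coll.name` target is not shown in full.

```python
from pymongo import MongoClient

client = MongoClient()                 # adjust URI/credentials for your deployment
coll = client["mydb"]["name"]          # stand-in for the coll.name used in the question

user_access = ["deserunt sint cupidata", "dolore pariatur aliqua",
               "eu magna cupidatat dolore", "amet irure nulla"]

pipeline = [
    {"$redact": {
        "$cond": {
            "if": {"$gt": [
                {"$size": {"$setIntersection": ["$tags", user_access]}},
                0,
            ]},
            "then": "$$DESCEND",   # quoted: the bare `then:` only works in the shell
            "else": "$$PRUNE",     # `else` is a Python keyword, so it must be a string key
        }
    }}
]

for doc in coll.aggregate(pipeline):
    print(doc)
```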
null
[ "database-tools" ]
[ { "code": "mongodumpmongodump --uri mongodb+srv://<USERNAME>:<PASSWORD>@sandbox.4izby.mongodb.net/sample_mflixmongodump --host=\"Sandbox/sandbox-shard-00-00.4izby.mongodb.net:27017,sandbox-shard-00-01.4izby.mongodb.net:27017,sandbox-shard-00-02.4izby.mongodb.net:27017\" --db=\"sample_mflix\" --collection=\"movies\"2021-04-06T13:38:46.923+0800\tFailed: can't create session: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: sandbox-shard-00-00.4izby.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(sandbox-shard-00-00.4izby.mongodb.net:27017[-181]) incomplete read of message header: EOF }, { Addr: sandbox-shard-00-01.4izby.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(sandbox-shard-00-01.4izby.mongodb.net:27017[-182]) incomplete read of message header: EOF }, { Addr: sandbox-shard-00-02.4izby.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(sandbox-shard-00-02.4izby.mongodb.net:27017[-180]) incomplete read of message header: EOF }, ] }", "text": "…and if yes, what should I do?I am learning how to use mongodump to export data from a collection. I tried two ways:I got the following error:\n|2021-04-06T13:54:17.235+0800|error parsing command line options: error parsing uri: lookup sandbox.4izby.mongodb.net on 127.0.0.53:53: cannot unmarshal DNS message|\n|—|—|\n|2021-04-06T13:54:17.235+0800|try ‘mongodump --help’ for more information|I got the following error:\n2021-04-06T13:38:46.923+0800\tFailed: can't create session: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: sandbox-shard-00-00.4izby.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(sandbox-shard-00-00.4izby.mongodb.net:27017[-181]) incomplete read of message header: EOF }, { Addr: sandbox-shard-00-01.4izby.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(sandbox-shard-00-01.4izby.mongodb.net:27017[-182]) incomplete read of message header: EOF }, { Addr: sandbox-shard-00-02.4izby.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(sandbox-shard-00-02.4izby.mongodb.net:27017[-180]) incomplete read of message header: EOF }, ] }I looked around the forum and found that some people encountered similar problems due to their ISP blocking port 27017. How do I check if that is the case?", "username": "nisajthani" }, { "code": "curl http://portquiz.net:27017\nPort 27017 test successful!\nYour IP: 172.12.34.123\n", "text": "Hi @nisajthani,Thanks for sharing the commands being used and the errors returned.I looked around the forum and found that some people encountered similar problems due to their ISP blocking port 2701. How do I check if that is the case?I assume meant 27017 here as your commands used state port 27017. However, please correct me if I am wrong.One way can be to use the http://portquiz.net:27017 website. 
From the client attempting to connect to the Atlas cluster, you can either visit the above website and if it loads it would indicate that you can connect to port 27017.Alternatively, you can run:If successful, you should receive response similar to the following:Hope this helps.Best Regards,\nJason", "username": "Jason_Tran" }, { "code": "mongodump", "text": "I have used the curl command and have confirmed my ISP is not blocking the port number. Unfortunately, that means I once again have no clue why mongodump isn’t working.But since main the question has been answered, I’ll mark the topic as ‘Solved’. Is it a good idea if I post my problem in new topic?", "username": "nisajthani" }, { "code": "mongodump --uri \"mongodb://USERNAME:[email protected]:27017,sandbox-shard-00-01.4izby.mongodb.net:27017,sandbox-shard-00-02.4izby.mongodb.net:27017/?replicaSet=REPLICASETNAME&authSource=admin\" --ssl --db DBNAMESandbox", "text": "But since main the question has been answered, I’ll mark the topic as ‘Solved’. Is it a good idea if I post my problem in new topic?Thanks for marking that as the solution in regards to the port testing question.Are you able to try the following:mongodump --uri \"mongodb://USERNAME:[email protected]:27017,sandbox-shard-00-01.4izby.mongodb.net:27017,sandbox-shard-00-02.4izby.mongodb.net:27017/?replicaSet=REPLICASETNAME&authSource=admin\" --ssl --db DBNAMEYou’ll need to replace USERNAME, PASSWORD, REPLICASETNAME and optionally DBNAME.Atlas requires TLS/SSL connections for all Atlas clusters which is why you may have received the connection failure error through the second method.I can see you have used Sandbox as your replicaSet name when performing the mongodump via your second method. To get the correct replicaSet name for the Atlas cluster to be used in my above example, you can go through the connect modal in Atlas for your cluster and choose mongoshell, from there you can change the version to 3.4 and you will find the replicaSet value as shown in the example below:\n\nimage795×811 70.6 KB\nAlso, you may experience this error if the client is running Ubuntu 18.04 as noted on the documentation:cannot unmarshal DNS messageIf you are still getting any errors, please send the full command run as well as the error output. Please redact any credentials before doing so.Hope this helps,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you. Your suggestion worked! Yes, my client is indeed an Ubuntu 18.04. I will read the documentation more thoroughly after this.", "username": "nisajthani" }, { "code": "", "text": "Glad to hear it worked ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to check if my ISP is blocking port 27017
2021-04-06T05:59:33.724Z
How to check if my ISP is blocking port 27017
6,533
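For the port-blocking question above, the curl test can also be scripted on clients without curl installed. A minimal sketch using only the Python standard library, assuming portquiz.net is still up and accepting connections on arbitrary ports:

```python
import socket

def outbound_port_open(port: int, host: str = "portquiz.net", timeout: float = 5.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"connection to {host}:{port} failed: {exc}")
        return False

print(outbound_port_open(27017))
```

A successful connection only proves the outbound port is not blocked; as the rest of the thread shows, an Atlas connection can still fail for other reasons (missing TLS flags, DNS SRV resolution issues on Ubuntu 18.04, wrong replica set name).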
null
[ "replication", "configuration" ]
[ { "code": "[root@mongo01 ~]# cat /etc/mongod.conf\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1,10.10.24.17 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\nthis is 2 node config:\n[root@mongo03 ~]# cat /etc/mongod.conf\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1,10.10.24.18 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n#\n##replication:\nreplication:\n replSetName: \"replicaset01\"\n127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1 localhost localhost.localdomain localhost6 localhost6.localdomain6\n10.10.24.18 mongo03.qarva.info mongo03\n10.10.24.17 mongo01.qarva.info mongo01\n2 node /etc/host output:\n127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1 localhost localhost.localdomain localhost6 localhost6.localdomain6\n10.10.24.18 mongo03.qarva.info mongo03\n10.10.24.17 mongo01.qarva.info mongo01.\nrs.add(\"mongo03\")\nrs.add(\"mongo03:27017\")\nrs.add({host: \"mongo03.darva.info:27017\"})\n", "text": "Hi, when configur replica set, while adding second member get this error: Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2. I have two machines in vmware configured, communication between them is configured. this is 1 node config:1 node /etc/host output:i am trying to add second node by this command:none above is working , what can i do?", "username": "Nanuka_Zedginidze" }, { "code": "", "text": "The error message tells you what to do. You got Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2. 
You cannot have localhost and non-localhost in same replica set.", "username": "steevej" }, { "code": "", "text": "Hi @Nanuka_Zedginidze,rs.initiate( {\n_id : “replicaset01”,\nmembers: [\n{ _id: 0, host: “mongo01.qarva.info:27017” },\n{ _id: 1, host: “mongo03.qarva.info:27017” }\n]\n})", "username": "ROHIT_KHURANA" }, { "code": "", "text": "ROHIT_KHURANA\nIf you scroll down you can see replication parameter in that config fileI suspect default configuration binding to localhost or some hostname/IP resolution issuesCan you show rs.conf() output\nDid your try with IP?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Ramachandra_Tummala,Replication parameter is showing for mongo03(10.10.24.18) I didn’t find in first host mongo01 as output is of cat command.So just want to make sure that @Nanuka_Zedginidze added these settings on both nodes", "username": "ROHIT_KHURANA" }, { "code": "", "text": "Hi, I have removed 127.0.0.1 and instead i have added just ip’s on both side like that:\n1 node: cat/etc/mongod.confnet:\nport: 27017\nbindIp: 10.10.24.17 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\ncat/etc/hosts output:\n##127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4\n#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6\n10.10.24.17mongo01.qarva.info mongo01\n10.10.24.18 mongo02.qarva.info mongo02\nand the problem solved. by the way when connecting to mongo, corect is to connect mongo shell with\n$mongo 10.10.24.17", "username": "Nanuka_Zedginidze" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2
2021-04-05T10:42:54.207Z
Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2
10,779
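The replica-set thread above resolves the "all host names must be localhost references, or none must be" error by using only resolvable non-localhost addresses. The same initiation can be driven from a script; here is a hedged pymongo sketch that reuses the replica set name, ports and FQDNs from the posts, which are assumptions about the reader's own machines. It also assumes a PyMongo version recent enough to support the directConnection URI option.

```python
from pymongo import MongoClient

# Connect directly to a single member; it must be reachable on its non-localhost address.
client = MongoClient("mongodb://10.10.24.17:27017/?directConnection=true")

config = {
    "_id": "replicaset01",  # must match replSetName in mongod.conf on every member
    "members": [
        {"_id": 0, "host": "mongo01.qarva.info:27017"},
        {"_id": 1, "host": "mongo03.qarva.info:27017"},
    ],
}

# Use all resolvable host names (as here) or all localhost entries - never a mix.
print(client.admin.command("replSetInitiate", config))
```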
null
[ "mongodb-shell", "security", "atlas" ]
[ { "code": "use admin\nswitched to db admin\nMongoDB Enterprise atlas-feuibs-shard-0:PRIMARY> db.createUser({user:'root', pwd:'1234', roles: ['root']})\nuncaught exception: Error: couldn't add user: (Unauthorized) not authorized on admin to execute command { createUser: \"root\", pwd: \"1234\", roles: [root], digestPassword: true, writeConcern: { w: \"majority\", wtimeout: 600000.0 }, lsid: { id: {4 [171 148 62 18 97 202 76 208 154 77 242 106 20 185 219 71]} }, $clusterTime: { clusterTime: {1617362319 8}, signature: { hash: {0 [4 101 176 140 52 204 125 92 233 30 123 196 177 26 148 123 7 241 173 182]}, keyId: 6942616266624466944.000000 } }, $db: \"admin\" } :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1386:11\n@(shell):1:1\nmongod -f /etc/mongod.conf\nuncaught exception: ReferenceError: mongod is not defined :\n@(shell):1:1\nMongoDB Enterprise atlas-feuibs-shard-0:PRIMARY> db.getUser('m001-student')\n{\n\t\"_id\" : \"admin.m001-student\",\n\t\"user\" : \"m001-student\",\n\t\"db\" : \"admin\",\n\t\"roles\" : [\n\t\t{\n\t\t\t\"role\" : \"readWriteAnyDatabase\",\n\t\t\t\"db\" : \"admin\"\n\t\t}\n\t]\n}\nMongoDB Enterprise atlas-feuibs-shard-0:PRIMARY> db.auth('m103-student')\nEnter password: \nError: Authentication failed.\n0\nuse rutas\nswitched to db rutas\n\nMongoDB Enterprise atlas-feuibs-shard-0:PRIMARY> db.createUser({user: 'dios', pwd: 'divine', roles: ['root']})\nuncaught exception: Error: couldn't add user: CMD_NOT_ALLOWED: createUser :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1386:11\n@(shell):1:1\n", "text": "im in mongo shell traying to create the 1st superuser with all permissions;I can’t access config file:Created another db called ‘rutas’ but no go:any ideas how to solve this?\nthanks !!!", "username": "Leo_Aramburu" }, { "code": "", "text": "You cannot manipulate users from the shell when connected to Atlas. You have to use the Atlas GUI or API.See https://docs.atlas.mongodb.com/security-add-mongodb-users/", "username": "steevej" }, { "code": "", "text": "excellent thanks !!!", "username": "Leo_Aramburu" }, { "code": "", "text": "Hi @Leo_Aramburu,I hope you got the solution to your problem.\nIf you still have any questions feel free to reach out.Thanks\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "thanks !!!\ndidnt solve the problem but its ok", "username": "Leo_Aramburu" }, { "code": "", "text": "From the mongo shell prompt, we can see that he is connected to an Atlas cluster. There is no configuration file to share. Database users cannot be manipulated with the shell. It is not a problem. It is a security feature of Atlas.", "username": "steevej" }, { "code": "mongo", "text": "Hi @Leo_Aramburu,As @steevej noted, Atlas database users have to be managed via the Atlas UI or API. You cannot manage Atlas database users via the mongo shell. If you’re still having trouble creating a new user, can you provide more information on the steps you are taking and any error messages or outcomes?Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Good to know thanks !!!regards ,\nLeo.", "username": "Leo_Aramburu" }, { "code": "", "text": "", "username": "SourabhBagrecha" } ]
Can't create a root user from mongo shell
2021-04-02T12:46:47.573Z
Can’t create a root user from mongo shell
14,132
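Since Atlas database users cannot be created from the shell, the programmatic alternative to the Atlas UI is the Atlas Admin API. The sketch below uses the requests library against the v1.0 databaseUsers endpoint; the project ID, API key pair, username, password and role are all placeholders to replace with your own values, and the exact fields should be checked against the current Atlas API documentation.

```python
import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "<project-id>"          # Atlas project (group) ID - placeholder
PUBLIC_KEY = "<public-api-key>"    # programmatic API key pair - placeholders
PRIVATE_KEY = "<private-api-key>"

payload = {
    "databaseName": "admin",       # Atlas database users authenticate against admin
    "username": "appUser",
    "password": "changeMe123",
    "roles": [{"roleName": "readWriteAnyDatabase", "databaseName": "admin"}],
}

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/databaseUsers",
    json=payload,
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
)
print(resp.status_code, resp.json())
```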
null
[ "api" ]
[ { "code": "", "text": "Good day Mongo people!!\nYup, I’m still super new to this and while I’m in and out between different non-Mongo work I have a question I’m sure you’ve had at some point. (not in the search here).Because of the brilliance of the product I’m curious if it ever came up or was even ever a possibility to return an object or data back from an async Web API call?Essentially I’m curious if there is ANY way to create a new document and get back the new ID without having to query again with a filter on it’s other attributes?In a perfect world the Response would have a document with it. Probably impossible but for you smart people what’s the best strategy otherwise? Multiple query’s?Thanks in advance!!\nCPTP.S. Likely the wrong category…sorry :-/", "username": "Colin_Poon_Tip" }, { "code": "_idacknowledgedtruefalseinsertedId_idcollection.insertManycollection.insert", "text": "Essentially I’m curious if there is ANY way to create a new document and get back the new ID without having to query again with a filter on it’s other attributes?Hello @Colin_Poon_Tip, in general, MongoDB collection’s insert methods return data about the newly inserted document - the _id value; for example collection.insertOne documentation says:Returns: A document containing:It is similar for other insert methods like collection.insertMany and collection.insert.Please post the code you are using to perform the insert operation. Also, include the versions of MongoDB and the driver your application is using.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks for the response Prasad…Well, I have this Asynchronous Web API with a simple:public async Task CreateTransaction(Transaction transIn)\n{\nawait _transactions.InsertOneAsync(transIn);\n}\nWhich will return a success or fail Http response, I’m sure you’re aware of. So, it’s not a synchronous process but happy to hear it.", "username": "Colin_Poon_Tip" }, { "code": "", "text": "Thinking a little out the box (my box that is), maybe the solutions is an in the odds game?In real world terms the odds of a duplicate key (_Id) are nearly impossible based on the makeup of an ObjectID structure:Maybe “A” solution is to generate an ObjectId on the client side and insert it with the document to be inserted. Then I’d have that ID when the Async request is processed and on success it’s reasonable to assume that’s the ID inserted. Giving me a single function call and an ID I can track and do other things with during the life of the application.I believe the Mongo documents states a duplicate error would be thrown or some HTTP Error code, so I could trap and react that.Am I WAY off best practice? probably :-oAzure functions don’t support synchronous processing last I heard, or they at least discourage it for performance and blocking issues.", "username": "Colin_Poon_Tip" }, { "code": "Task task = CreateTransaction(transIn);\ntask.Wait()\nConsole.WriteLine(transIn); // transIn will have the newly created document _id property", "text": "public async Task CreateTransaction(Transaction transIn)\n{\nawait _transactions.InsertOneAsync(transIn);\n}", "username": "Prasad_Saya" }, { "code": "", "text": "G’dang!! Well I proved out my first strategy, but I like yours WAY better.I’m trying!! 
\nGreatly appreciated.\nC", "username": "Colin_Poon_Tip" }, { "code": "_idstatic async Task CreateDocument(IMongoCollection<BsonDocument> collection, BsonDocument bsondoc) {\n await collection.InsertOneAsync(bsondoc);\n}\n\nstatic void Main(string[] args) {\n var client = new MongoClient();\n var collection = client.GetDatabase(\"test\").GetCollection<BsonDocument>(\"test\");\n var doc = \"{ 'title' : 'Tom Sawyer', 'author' : 'Mark Twain' }\";\n var bson = BsonDocument.Parse(doc);\n var task = CreateDocument(collection, bson);\n task.Wait();\n Console.WriteLine(bson); // { \"_id\" : ObjectId(\"606d4665183671c4c1343c38\"), \"title\" : \"Tom Sawyer\", \"author\" : \"Mark Twain\" }", "text": "And, an example to see the newly generated _id:", "username": "Prasad_Saya" }, { "code": "Task task = CreateTransaction(transIn);\ntask.Wait()\n", "text": "Not sure that actually works. that Function is hosted on Azure. So it’s an async HTTP Post to that azure function. I could be wrong but testing didn’t return anything in the id field.", "username": "Colin_Poon_Tip" }, { "code": "collection.InsertOneAsync_id", "text": "Hello @Colin_Poon_Tip, I am not familiar with the workings of Azure and how the code fits into your application. I only showed how the MongoDB C# / .NET driver’s collection.InsertOneAsync method works and how to get the newly inserted _id value after the method’s run.", "username": "Prasad_Saya" }, { "code": "", "text": "Well, I appreciate the responses. I have a lot to learn/cover but it all helps.\nI’m not sure technically what the difference is necessarily other than a separation of application spaces. Azure being out on the net vs local processing. Perhaps that’s by design…OR I’m just missing something, which is entirely possible.I’ll keep digging, but while I know I can do it in the method I discussed I’d rather have a separation from the application and the Mongo backend. Hence the client neutral WebAPI.We’ll see. I’m just testing the functional ability’s I have to work with as it drives the design.Best regards,\nC", "username": "Colin_Poon_Tip" }, { "code": "createpublic async Task<ObjectId> Create(Car car)\n{\n await _cars.InsertOneAsync(car);\n return car.Id;\n}\n", "text": "@Colin_Poon_Tip, see the following post’s create method - may be this example works for you:When you think about database providers for ASP NET Core apps, you probably think about Entity Framework Core (EF Core), which handles…\nReading time: 6 min read\n", "username": "Prasad_Saya" } ]
WebAPI and returning an object or ID on insert
2021-04-06T20:52:08.004Z
WebAPI and returning an object or ID on insert
18,010
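The WebAPI thread above debates two ways to know the _id of a newly inserted document: read it back from the driver after the insert, or generate the ObjectId client-side before sending the request. Both are shown below in a hedged Python sketch (the collection name and document shape are made up; the thread's Azure Function hosting is not reproduced here).

```python
from bson import ObjectId
from pymongo import MongoClient

coll = MongoClient()["test"]["transactions"]   # hypothetical collection

# Strategy 1: let the driver assign the _id and read it from the insert result.
result = coll.insert_one({"amount": 42})
print(result.inserted_id)                      # ObjectId generated for this document

# Strategy 2: generate the ObjectId up front (the "client-side id" approach from
# the thread) so the caller already knows it before the request completes.
pre_generated = ObjectId()
coll.insert_one({"_id": pre_generated, "amount": 7})
print(pre_generated)
```

ObjectIds embed a timestamp, per-process random value and counter, so client-side generation is the normal pattern when the application needs the id before the round trip finishes; a duplicate-key error remains the safety net if a collision ever occurred.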
https://www.mongodb.com/…e_2_1024x394.png
[ "node-js", "crud" ]
[ { "code": "", "text": "I am using the node.js framework from this tutorial: Node.js MongoDB Get StartedI have a strange problem which i dont understand, i have make a picture for better understanding:\nupdate question1600×616 38.8 KB\nIf i just try to update another object in this array it does not work, but if i update the first object it does work, so please does somebody know why, is the mongo client which i use not good or does my query have a error?", "username": "Florian_Silbereisen" }, { "code": "", "text": "i have find out that the problem was that i use a array of array objects my structur was wrong in the database, i have change it and now it work", "username": "Florian_Silbereisen" }, { "code": "", "text": "Hi Florian!So glad you found the solution!!Karen", "username": "Karen_Huaulme" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trying to update an object in an array, why doesn't it work?
2021-04-06T19:55:34.930Z
Trying to update an object in an array, why doesn't it work?
3,808
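The array-update thread ends with "I changed the structure and now it works" without showing the query. For reference, the two usual ways to update one element inside an array are the positional $ operator and arrayFilters; the hedged pymongo sketch below uses a made-up document shape, since the original schema is not shown.

```python
from pymongo import MongoClient

coll = MongoClient()["test"]["carts"]   # hypothetical collection and document shape
coll.insert_one({"user": "flo",
                 "items": [{"name": "a", "qty": 1}, {"name": "b", "qty": 1}]})

# Positional $ operator: updates the first array element matched by the query filter.
coll.update_one({"user": "flo", "items.name": "b"},
                {"$set": {"items.$.qty": 5}})

# arrayFilters: updates every element that satisfies the named filter,
# independent of which element the query matched.
coll.update_one({"user": "flo"},
                {"$set": {"items.$[elem].qty": 9}},
                array_filters=[{"elem.name": "a"}])

print(coll.find_one({"user": "flo"}))
```

Nested arrays of arrays (the structure the original poster abandoned) are exactly where these operators get awkward, which is consistent with the fix being a schema change rather than a different query.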
null
[ "kafka-connector" ]
[ { "code": "", "text": "Team,We are using MongoDB kafka connector in an ec2 instance which runs in standalone mode and connects to AWS confluent instance.When we try to create source connector which send data from collection to kafka topic in AWS it throws ‘Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1066826 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.’ Exception.We have already configured the max.message.bytes in kafka topic to 8388608 bytes but still facing the exception. Are we missing any config. I have read that we need to add max.request.size config with apporpriate value, in this scenario where can we make this change in AWS kafka ?", "username": "vinay_murarishetty" }, { "code": "", "text": "If you are using Confluent Cloud, did you consider using the Atlas source connector?", "username": "Robert_Walters" }, { "code": "max.request.sizeproducer.consumer.RecordTooLargeException", "text": "Hi,With regard to your issue there are the following aspects to consider:if you want to support larger records than the default settings you have to make config changes not only on the broker side but also for the producer that sends these recordsin your case we are talking about a kafka source connector scenario which means that there some default kafka connect worker settings for the underlying producer configurations in place. you can provide overrides for these settings e.g. to change max.request.size accordingly.The official docs state the following about this: “For configuration of the producers used by Kafka source tasks and the consumers used by Kafka sink tasks, the same parameters can be used but need to be prefixed with producer. and consumer. respectively.”Using this approach you should be able to reconfigure your source connector to make use of the override and thereby get rid of the RecordTooLargeException.Hope this helps!", "username": "hpgrahsl" } ]
Unrecoverable exception from producer send callback. RecordTooLargeException
2021-03-10T06:29:21.886Z
Unrecoverable exception from producer send callback. RecordTooLargeException
4,622
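A concrete way to apply the producer override mentioned in the last answer of the Kafka thread is to include it in the connector configuration submitted to the Kafka Connect REST API. The sketch below is a hedged Python example: the Connect URL, connector name, connection string and namespace are placeholders, and it assumes the Connect worker permits client overrides (connector.client.config.override.policy=All), without which the producer.override.* setting is rejected.

```python
import requests

CONNECT_URL = "http://localhost:8083"   # Kafka Connect REST endpoint - placeholder

connector = {
    "name": "mongo-source-large-docs",  # hypothetical connector name
    "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://<user>:<password>@<host>:27017",
        "database": "mydb",
        "collection": "mycoll",
        # Raise the per-request size limit of the producer embedded in the source
        # task; the topic's max.message.bytes must also allow the larger record.
        "producer.override.max.request.size": "8388608",
    },
}

resp = requests.post(f"{CONNECT_URL}/connectors", json=connector)
print(resp.status_code, resp.json())
```

Raising the broker-side max.message.bytes alone, as the original poster did, is not enough: the RecordTooLargeException in this scenario comes from the producer inside the Connect worker, so the limit has to be raised there as well.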