null
[ "app-services-user-auth", "app-services-data-access" ]
[ { "code": "let creds = Credentials.emailPassword(email: username, password: password)\napp.login(credentials: creds) { result in\nlet client = app.currentUser!.mongoClient(\"some-client-db-atlas\")\nlet database = client.database(named: \"my-database\")\nlet collection = database.collection(withName: \"MyCoolCollection\")\nlet whichPartition = \"CoolPartition\"\ncollection.find(filter: [\"_partitionKey\": AnyBSON(whichPartition)], { (result) in\nlet myData = [\n (\"key0\", AnyBSON(stringLiteral: \"value0\")),\n (\"key1\", AnyBSON(stringLiteral: \"value1\"))\n]\nlet myDoc = Document(uniqueKeysWithValues: myData)\ncollection.insertOne([myDoc], { results in\n print(results)\n})\nlet username = \"UserA\"\nlet password = \"kaijas09ajadjjad\"\nlet creds = Credentials.emailPassword(email: username, password: password)\napp.login(credentials: creds, { result in\n print(result)\n})\n", "text": "I am working with MongoDB Remote Access - iOS SDK to retrieve and (attempt to) insert documents.At the moment when we authenticate using the standard MongoDB Realm authenticationwe can retrieve data using the remote access functions like thisAnd the data is fetched and returns correctly.However, inserting/writing isn’t workingresults in an errorfailure(Error Domain=realm::app::ServiceError Code=-1 “insert not permitted”It’s possible (?) that we need to create a separate user in the Realm Console Atlas->Database Access and Network Access sections based on this post and then try a separate login/authBut we get an invalid email/password error.I see that inserting is not support on Data Lake but that’s not applicable to our situation.What’s the correct process to make this work so we can insert?", "username": "Jay" }, { "code": "", "text": "Hey Jay!That “insert not permitted” error most likely means that your server-side rules aren’t set up to allow the write. I’d start by checking your rules to make sure they’re defined correctly.Note that you won’t need to create db users in Atlas for anything in Realm - those users are totally separate from the email/password auth provider. 
Realm creates db users for each app and uses them behind the scenes to handle your app’s requests.", "username": "nlarew" }, { "code": "", "text": "Thank you @nlarewThis is an existing MongoDB Realm user that has full read/write privileges and can read and write to our App’s database collection successfully.Again, we can read via Remote Access, but it only fails when attempting a write.There are no Rules defined for the app either; Sync Permissions for read and write are both set to true.Any suggestions as where to look?", "username": "Jay" }, { "code": "", "text": "Hi @Jay, you mention that there are no Rules defined for the Realm app – you should have some set up so to enable Realm users to read/write this collection.Are you also seeing errors in the Realm logs?", "username": "Andrew_Morgan" }, { "code": "Error:\ninsert not permitted : could not validate document: (root): _id is required\nStack Trace:\n\nFunctionError: insert not permitted at <eval>:16:4(4)\nDetails:\n{\n \"serviceAction\": \"insertOne\",\n \"serviceName\": \"mongodb-atlas\",\n \"serviceType\": \"mongodb-atlas\"\n}\n{\n \"arguments\": [\n {\n \"database\": \"my-database\",\n \"collection\": \"TaskClass\",\n \"document\": {\n \"status\": \"Open\",\n \"name\": \"Inserted Task\",\n \"_partitionKey\": \"Task Tracker\"\n }\n }\n ],\n \"name\": \"insertOne\",\n \"service\": \"mongodb-atlas\"\n} \nlet taskName = AnyBSON(stringLiteral: \"Inserted Task\")\nlet status = AnyBSON(stringLiteral: \"Open\")\nlet partition = AnyBSON(stringLiteral: Constants.REALM_PARTITION_VALUE)\nlet taskDict = (\"name\", taskName )\nlet statusDict = (\"status\", status)\nlet partitionDict = (\"_partitionKey\", partition)\nlet myTaskDoc = Document(dictionaryLiteral: taskDict, statusDict, partitionDict)\ncollection.insertOne(myTaskDoc, { result in\n print(result)\n})\n", "text": "@Andrew_MorganThere are no rules in the console: Realm->App->Rules. But again, the user can read and write with no issues from the SDK. They can read this using Remote Access but not write.There is an error but the message isn’t really clear.We are using insertOne, which, according to the docs, if _id is missing, one will be generated and returned. Here’s the object construction in Swift", "username": "Jay" }, { "code": "_id", "text": "I confess that I haven’t yet worked with this feature of the Realm SDK, but it could well be that its behavior doesn’t exactly match the documented Swift driver behavior as it passes through the Realm service rather than connecting directly to Atlas.Have you tried adding the _id attribute to confirm if that’s the issue?Is there a schema defined in the Realm app for this collection?", "username": "Andrew_Morgan" }, { "code": " //your Realm App Id. Console->Realm->Click app, AppID at top\n let app = App(id: \"xxxx-yyyy\") \n\n //this is the default name. 
aka the same name as the Service Name or Linked Cluster name\n // found in Console->Realm->click your app->Linked Data Sources in left column\n let client = app.currentUser!.mongoClient(\"mongodb-atlas\") \n\n //found in Console->Realm->Click app, AppID->Schema in left column\n let database = client.database(named: \"track-tracker-database\") \n\n //also found in Schema section for whatever database you're using\n let collection = database.collection(withName: \"TaskClass\") \n\n //the data to be written must be in a <String, AnyBSON> format so here are the values\n // we're going to write\n let _id = AnyBSON(ObjectId.generate()) //generates the required objectID for the objects\n let taskName = AnyBSON(stringLiteral: \"Inserted Task\") //a value for your objects properties\n let status = AnyBSON(stringLiteral: \"Open\") //another value for your objects properties\n let partition = AnyBSON(stringLiteral: \"my_partiton\") //whatever partition (realm) you want the object to be inserted into\n\n //these are the keys and associated values to store with the object in a <String, AnyBSON> format\n let idDict = (\"_id\", _id)\n let taskDict = (\"name\", taskName )\n let statusDict = (\"status\", status)\n let partitionDict = (\"_partitionKey\", partition)\n\n //note that inserting data **requires** a populated Document object\n let myTaskDoc = Document(dictionaryLiteral: idDict, taskDict, statusDict, partitionDict)\n\n collection.insertOne(myTaskDoc, { result in\n print(result)\n }) \n", "text": "@Andrew_MorganAnd there we have it - success. Here’s the low down for future readersSee Document and insertOne for more readingThe .insertOne print’s the following to console indicating successsuccess(RealmSwift.AnyBSON.objectId(609195b56312471d6b083fe6))", "username": "Jay" } ]
MongoDB Remote Access - iOS SDK insert
2021-05-02T16:25:53.050Z
MongoDB Remote Access - iOS SDK insert
4,924
null
[ "python", "connecting" ]
[ { "code": "ServerSelectionTimeoutError: cluster0-shard-00-02.wapdt.mongodb.net:27017: [SSL: \nCERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate \n(_ssl.c:1045),cluster0-shard-00-01.wapdt.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] \ncertificate verify failed: unable to get local issuer certificate (_ssl.c:1045),cluster0-shard-00- \n00.wapdt.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to \nget local issuer certificate (_ssl.c:1045), Timeout: 30s, Topology Description: <TopologyDescription id: \n6090ddc81b3959247ae64e09, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription \n('cluster0-shard-00-00.wapdt.mongodb.net', 27017) server_type: Unknown, rtt: None, \nerror=AutoReconnect('cluster0-shard-00-00.wapdt.mongodb.net:27017: [SSL: \nCERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate \n(_ssl.c:1045)')>, <ServerDescription ('cluster0-shard-00-01.wapdt.mongodb.net', 27017) server_type: \nUnknown, rtt: None, error=AutoReconnect('cluster0-shard-00-01.wapdt.mongodb.net:27017: [SSL: \nCERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate \n(_ssl.c:1045)')>, <ServerDescription ('cluster0-shard-00-02.wapdt.mongodb.net', 27017) server_type: \nUnknown, rtt: None, error=AutoReconnect('cluster0-shard-00-02.wapdt.mongodb.net:27017: [SSL: \nCERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate \n(_ssl.c:1045)')>]>\nuri = 'mongodb+srv://xxx:[email protected]/shopalot? \nretryWrites=true&w=majority'\nclient = MongoClient(uri, tls=True, tlsCAFile=certifi.where())\n", "text": "Hi,\nI’m having an issue connecting to my Atlas cluster in a Jupyter notebook. When I run a query, I receive the following traceback:This is what my MongoClient initialization looks like:I’ve tried all possible steps at TLS/SSL and PyMongo — PyMongo 4.3.3 documentation (including running the “Install Certificates.command”) and ensured that my IP address is whitelisted in Atlas. I also tried setting tlsAllowInvalidCertificates to True and I somehow still get the same error.I am able to connect to my cluster through MongoDB Compass but I still do need to find a way to be able to run my Jupyter notebook.What am I missing here?Thank you for any help you can offer!", "username": "Sahil_S_Railkar" }, { "code": "ssl=True,ssl_cert_reqs='CERT_NONE'\nclient = pymongo.MongoClient(\"mongodb+srv://<username>:<password>@xxx.yyyy.mongodb.net/myFirstDatabase?retryWrites=true&w=majority\")", "text": "Did you try connecting without specifying a certificate?or any options, like Atlas example shows:", "username": "Asya_Kamsky" }, { "code": "", "text": "Interestingly, when I ran my queries today/just now, everything began to work. I’m able to connect with the options I mentioned in the post, without specifying a certificate, and without any options. Not sure what changed overnight, but thank you so much for taking the time to provide some options for me!", "username": "Sahil_S_Railkar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot connect to Atlas cluster on MacOS
2021-05-04T07:35:07.300Z
Cannot connect to Atlas cluster on MacOS
2,307
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that addresses some issues reported since 2.12.2 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.12.3%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.", "username": "James_Kovacs" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver 2.12.3 Released
2021-05-04T18:32:23.967Z
.NET Driver 2.12.3 Released
1,813
null
[]
[ { "code": "", "text": "Hello,Retention policies (deletion/anonymization) to be applied on our PRD mongodB . Data can’t be kept forever.Collections to be in scope of the deletion: (where personal data = yes).P.S. please tell me if further info needed.Thanks", "username": "Haytham_Mostafa" }, { "code": "", "text": "Hi @Haytham_Mostafa and welcome in the MongoDB Community,Usually TTL indexes (Time To Live) are an easy wait to enforce an automated deletion to enforce a retention policy.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
GDPR - Retention policies-deletion
2021-05-04T14:27:35.906Z
GDPR - Retention policies-deletion
2,011
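A TTL index like the one suggested above can be created directly from the mongo shell. The sketch below uses an assumed collection name, date field, and two-year retention window purely to illustrate the syntax; note that the TTL monitor removes expired documents in a background pass roughly once a minute, so deletion is not instantaneous.

```javascript
// Assumed names: documents in customer_activity are removed automatically
// once their createdAt value is more than 730 days in the past.
db.customer_activity.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 730 * 24 * 60 * 60 }
)
```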
null
[ "transactions" ]
[ { "code": "", "text": "Hello, after adding transaction support, I noticed a lot of write conflicts since my app does a lot of concurrent updates on the same document.This is by design, as it’s a chat application and multiple users can, for example, update a “group chat” at the same time by writing a lot of messages concurrently.Is there some way to avoid write conflicts by telling Mongo to queue operations and wait for the write lock to be released instead of throwing a write conflict exception? Or how would you handle these issues?", "username": "stefan_fai" }, { "code": "", "text": "Welcome to the MongoDB Community @stefan_fai,It ls always hard to predict outcomes without seeing actual code. What I can say is that concurrent writes to any single document were and are atomic even before we announced support for ACID transactions. So you can just fire those writes at the single doc for the chat and have them update just fine.", "username": "Joe_Drumgoole" }, { "code": "Query query = new Query();\nquery.addCriteria(Criteria.where(\"conversationId\").is(conversation.getId()));\nquery.addCriteria(Criteria.where(\"ownerId\").is(receiver.getId()));\n\nUpdate update = new Update();\nupdate.inc(\"unreadCount\", 1);\n\nmongoTemplate.updateFirst(query, update, Contact.class);\n", "text": "Thanks for the reply, it’s true that before introducing transactions, concurrent updates on the same document worked without problems, however it looks like I’m running into the following issue:https://docs.mongodb.com/manual/core/transactions-production-consideration/#in-progress-transactions-and-write-conflicts.If a transaction is in progress and a write outside the transaction modifies a document that an operation in the transaction later tries to modify, the transaction aborts because of a write conflictI’m not sure how to handle this case, since I might have dozens of concurrent transactions writing to the same document.This is an example that produces write conflicts if run in parallel:", "username": "stefan_fai" }, { "code": "", "text": "Just remove the transaction and let the database resolve the writes in the order they come in? Transactions really come into their own when updates must be atomic across multiple documents. In that case without a transaction the updates to two documents may not be atomic without a transaction.", "username": "Joe_Drumgoole" }, { "code": "", "text": "That was just a snippet of my code, I actually do update multiple documents in different collections as part of the transaction… I guess I’ll need to remove the transaction anyway?", "username": "stefan_fai" }, { "code": "", "text": "If the transaction is required you should leave it in. Then if their are write-conflicts you will have to retry.", "username": "Joe_Drumgoole" } ]
Recommended way to handle write conflicts inside transactions?
2021-04-27T10:41:32.384Z
Recommended way to handle write conflicts inside transactions?
17,081
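As the last reply notes, write conflicts raised inside a transaction have to be retried by the application. The Node.js sketch below shows one way to get that retry behaviour: the driver's withTransaction() helper re-runs the callback when the server labels an error TransientTransactionError, which is how write conflicts inside transactions are reported. The collection and field names mirror the chat example but are assumptions, not code from the thread.

```javascript
// Sketch only: assumes `client` is a connected MongoClient and that
// conversationId / receiverId are provided by the application.
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    const db = client.db('chat');
    await db.collection('contacts').updateOne(
      { conversationId: conversationId, ownerId: receiverId },
      { $inc: { unreadCount: 1 } },
      { session }
    );
    await db.collection('conversations').updateOne(
      { _id: conversationId },
      { $set: { lastMessageAt: new Date() } },
      { session }
    );
  });
} finally {
  await session.endSession();
}
```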
null
[]
[ { "code": "", "text": "Hi Team,May i know the location for Source RPM for the mongo releases? We would like to see if we can create a binary from the source RPM by adding a specific diff.Thanks\nVenkataraman", "username": "venkataraman_r" }, { "code": "", "text": "Not SRPM but spec files for RPM:master/rpmThe MongoDB Database. Contribute to mongodb/mongo development by creating an account on GitHub.", "username": "chris" }, { "code": "", "text": "Hi @venkataraman_r -We don’t currently offer SRPMs from which you can build the release RPMs. We have plans to improve our package creation story in the future, but for now it is a custom scripted process. If you would like to create a ticket in jira.mongodb.com (under the SERVER project) requesting this feature, we can consider including SPRM production when we schedule the work to improve our packaging story.Also, if you are referring to the enterprise release, it won’t be possible for you to build from source, since we do not make the enterprise source code available. So, even with SRPMs, you would be limited to building the community version.If a community build is acceptable, you can easily build tarballs of the releases. If you would like instructions on how to do so, please follow up in this thread and we can discuss further.Thanks,\nAndrew", "username": "Andrew_Morrow" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo SRPM location
2021-05-03T22:33:54.895Z
Mongo SRPM location
3,079
null
[ "app-services-cli" ]
[ { "code": "", "text": "Hi,I am using the 2.0beta realm-cli and trying to pull the app.trying this:\nrealm-cli pull --remote myapp-rqpmyand I get:\nexport failed: must specify --remote or run command from inside a Realm app directoryI’ve already logged in. Any idea what’s missing?Thanks…", "username": "donut" }, { "code": "realm-cli version 2.0.0-beta.4myapp-rqpmyrealm-cli", "text": "Hi @donut, I’ve just tried this with realm-cli version 2.0.0-beta.4.The only way I’ve found to get that error if the realm-app-id provided doesn’t match an existing Realm app from the project where I created the API key.Is myapp-rqpmy an existing Realm app in the same project as the API key you used to log into realm-cli?", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to pull with realm-cli
2021-05-03T21:27:45.913Z
Unable to pull with realm-cli
3,395
null
[ "queries", "atlas-device-sync" ]
[ { "code": "", "text": "To fully understand my case consider an app like Instagram, is it possible do develop a similar app (in IOS - Swift) using Realm SDK synchronized with Atlas?Ive read a lot of info and tutorials about Realm (task tracker app too), but I think Im missing something or its impossible to make a similar app with it. Realm let user access data through a partition key but in a social app most of data is accessible to everyone (maybe only private infos like mail or phone number are not shared), so my doubt is that each time I call a sync function with a partition key to retrieve all posts it will download all posts present in atlas (since they all have the same partition value) and at the same time will merge them into my local Realm, you can imagine its impracticable.Is this a case where choosing an own backend framework suits better or am I missing something?", "username": "Marco_Vastolo" }, { "code": "", "text": "MongoDB Realm can create an app like Instagram (or Twitter, or Facebook or…)most of data is accessible to everyoneYes, and there is public and private data in Realm as well.so my doubt is that each time I call a sync function with a partition key to retrieve all posts it will download all posts present in atlasThat’s correct. If you want to see the posts, they need to be downloaded. However, there are a number of ways to limit the number of posts including by partitionKey (maybe posts with a certain ‘tag’) or posts from some user (limited by uid) or even by date (using the date as a partition key).There are also server side functions and remote access that can limit the data delivered to your app.Reading Partition Atlas Data into Realms is a great starting point - it shows how to take the large amount of data stored by your application and present it into smaller chunks needed by individual clientsWrite some code and give it try. Start small, and expand the app as you gain confidence and familiarity with Realm.", "username": "Jay" }, { "code": "", "text": "@Marco_Vastolo as Jay suggests partitioning is “key” here.I’ve just published this article on Realm partitioning strategies, which includes a lot of examples: https://www.mongodb.com/how-to/realm-partitioning-strategies/", "username": "Andrew_Morgan" } ]
Realm SDK vs own backend
2021-05-03T20:30:37.042Z
Realm SDK vs own backend
1,853
null
[ "performance", "field-encryption" ]
[ { "code": "", "text": "Hi,I am using the Automatic Client-Side Field Level Encryption feature on a MongoDB Atlas cluster. I am able to encrypt and decrypt my documents. However, I am facing a performance issue. For every write I make using the connector with autoEncryption turned on, even if the document I am saving doesn’t have any encrypted fields it takes 0.45 s (twice the time it takes without the autoEncryption mode). I don’t understand why it is taking so long when the document I am saving is not performing any encryption or decryption computation.When I have many concurrent writes (20+) it just doesn’t work.Is someone having the same issue?Thanks!", "username": "Emilio_Lopez" }, { "code": "", "text": "HiI am facing the same problem. Did you find a solution?", "username": "Valerio_Como" } ]
Automatic Client-Side Field Encryption mongocryptd performance issues
2020-07-17T23:52:50.876Z
Automatic Client-Side Field Encryption mongocryptd performance issues
2,360
null
[ "monitoring" ]
[ { "code": "", "text": "I refactored my data model in order to reduce document size, and thus managed to gain about 20% in “size” per document. However looking at “storage size” the collection containing documents in the new structure weighs more than the original one!!Even for a single document: old structure collection has size 30814 and storage-size 24576 while new structure collection has size 24864 and storage-size 28672I am using WiredTiger + zlib, on MongoDB v4.2.13Example documents (old, then new structure, I had to take out most of the data for it these to fit into this message, there should be 440 subkeys instead of 10 in each subsection):{ “_id” : { “pi” : 1, “rn” : “1”, “vi” : “17917f26055000000000” }, “_class” : “R”, “ai” : { “AC” : 137, “MQRankSum” : 0.0, “filt” : “PASS”, “MQ” : 60.0, “AF” : 0.173, “InbreedingCoeff” : 0.8784, “MLEAC” : 144, “BaseQRankSum” : 1.75, “ExcessHet” : -0.0, “MLEAF” : 0.181, “DP” : 6018, “ReadPosRankSum” : 0.508, “AN” : 794, “FS” : 0.0, “QD” : 29.26, “SOR” : 0.723, “qual” : 39861.1, “ClippingRankSum” : 0.0 }, “ka” : [ “G”, “A” ], “rp” : { “ch” : “chr1”, “ss” : { “$numberLong” : “45229” } }, “sp” : { “1” : { “gt” : “0/0”, “ai” : { “AD” : “6,0”, “GQ” : 18, “DP” : 6, “PL” : “0,18,218” } }, “2” : { “gt” : “0/0”, “ai” : { “AD” : “6,0”, “GQ” : 15, “DP” : 6, “PL” : “0,15,225” } }, “3” : { “gt” : “0/0”, “ai” : { “AD” : “12,0”, “GQ” : 33, “DP” : 12, “PL” : “0,33,495” } }, “4” : { “gt” : “0/0”, “ai” : { “AD” : “16,0”, “GQ” : 48, “DP” : 16, “PL” : “0,48,569” } }, “5” : { “gt” : “1/1”, “ai” : { “AD” : “0,32”, “GQ” : 96, “DP” : 32, “PL” : “1085,96,0” } }, “6” : { “gt” : “0/0”, “ai” : { “AD” : “6,0”, “GQ” : 15, “DP” : 6, “PL” : “0,15,225” } }, “7” : { “gt” : “0/0”, “ai” : { “AD” : “7,0”, “GQ” : 21, “DP” : 7, “PL” : “0,21,240” } }, “8” : { “gt” : “0/0”, “ai” : { “AD” : “14,0”, “GQ” : 39, “DP” : 14, “PL” : “0,39,585” } }, “9” : { “gt” : “0/0”, “ai” : { “AD” : “8,0”, “GQ” : 24, “DP” : 8, “PL” : “0,24,307” } }, “10” : { “gt” : “0/0”, “ai” : { “AD” : “12,0”, “GQ” : 33, “DP” : 12, “PL” : “0,33,495” } } }, “ty” : “SNP”, “v” : { “$numberLong” : “0” } }{ “_id” : { “pi” : 1, “rn” : “1”, “vi” : “179138f0da6000000000” }, “M” : { “AD” : { “1” : “6,0”, “2” : “6,0”, “3” : “12,0”, “4” : “16,0”, “5” : “0,32”, “6” : “6,0”, “7” : “7,0”, “8” : “14,0”, “9” : “8,0”, “10” : “12,0” }, “GQ” : { “1” : 18, “2” : 15, “3” : 33, “4” : 48, “5” : 96, “6” : 15, “7” : 21, “8” : 39, “9” : 24, “10” : 33 }, “DP” : { “1” : 6, “2” : 6, “3” : 12, “4” : 16, “5” : 32, “6” : 6, “7” : 7, “8” : 14, “9” : 8, “10” : 12 }, “PL” : { “1” : “0,18,218”, “2” : “0,15,225”, “3” : “0,33,495”, “4” : “0,48,569”, “5” : “1085,96,0”, “6” : “0,15,225”, “7” : “0,21,240”, “8” : “0,39,585”, “9” : “0,24,307”, “10” : “0,33,495” } }, “_class” : “R”, “a” : [ “G”, “A” ], “g” : { “1” : “0/0”, “2” : “0/0”, “3” : “0/0”, “4” : “0/0”, “5” : “1/1”, “6” : “0/0”, “7” : “0/0”, “8” : “0/0”, “9” : “0/0”, “10” : “0/0” }, “i” : { “AC” : 137, “MQRankSum” : 0.0, “filt” : “PASS”, “MQ” : 60.0, “AF” : 0.173, “InbreedingCoeff” : 0.8784, “MLEAC” : 144, “BaseQRankSum” : 1.75, “ExcessHet” : -0.0, “MLEAF” : 0.181, “DP” : 6018, “ReadPosRankSum” : 0.508, “AN” : 794, “FS” : 0.0, “QD” : 29.26, “SOR” : 0.723, “qual” : 39861.1, “ClippingRankSum” : 0.0 }, “p” : { “0” : { “ch” : “chr1”, “ss” : { “$numberLong” : “45229” } } }, “t” : “SNP” }", "username": "Guilhem_SEMPERE" }, { "code": "", "text": "See https://docs.mongodb.com/manual/faq/storage/#how-do-i-reclaim-disk-space-in-wiredtiger-I am pretty sure that it is the same for documents that shrinks in size. 
It is more efficient to keep the unused space allocated to the file. However, with smaller documents more of them fit in RAM, so you have a larger effective working set.There are ways to claim back the unused space from within a file; the link above indicates how.", "username": "steevej" } ]
Inconsistency between size and storage size
2021-05-03T14:44:47.962Z
Inconsistency between size and storage size
1,648
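As the linked FAQ explains, WiredTiger keeps space freed by smaller documents inside the collection file for later reuse, which is why the storage size does not shrink. If the space really needs to be returned to the operating system, the compact command is the usual tool. The collection name below is a placeholder, and depending on the server version compact can block operations on the database while it runs, so schedule it for a quiet period.

```javascript
// Rewrites the collection's data files and releases unneeded disk space
// back to the OS on WiredTiger ("variants" is a placeholder name).
db.runCommand({ compact: "variants" })
```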
https://www.mongodb.com/…64c6d24b9183.png
[ "php" ]
[ { "code": "", "text": "Is PHP worth learning in 2021? What new applications are you building with PHP… using MongoDB, or not.image600×600 5.25 KBThe market shows a healthy position for PHP still. It’s a fun, relatively easy language to learn and with MongoDB’s PHP driver, you have all the flexibility, ease of use and performance of nosql.Would love to hear about how you’re using PHP today.", "username": "Michael_Lynn" }, { "code": "", "text": "Dear PHP users,I’m also curious to know more about your PHP usage, so thought it is time to try out polls .\n0\nvoters\nOne of these choices is not like the others, but all have had many books written :). If you’re using an older version of PHP, what is keeping you from upgrading?\n0\nvoters\nRegards,\nStennie", "username": "Stennie_X" } ]
What are you doing with PHP in 2021?
2021-03-19T19:19:09.759Z
What are you doing with PHP in 2021?
2,592
null
[ "node-js", "next-js" ]
[ { "code": "3:57:32 PM: info - Creating an optimized production build...\n3:57:50 PM: Failed to compile.\n3:57:50 PM: \n3:57:50 PM: ModuleNotFoundError: Module not found: Error: Can't resolve 'mongodb-client-encryption' in '/opt/build/repo/node_modules/mongodb/lib'\n3:57:50 PM: > Build error occurred\n3:57:50 PM: Error: > Build failed because of webpack errors\n3:57:50 PM: at /opt/build/repo/node_modules/next/dist/build/index.js:17:924\n3:57:50 PM: at runMicrotasks (<anonymous>)\n3:57:50 PM: at processTicksAndRejections (internal/process/task_queues.js:97:5)\n3:57:50 PM: at async Span.traceAsyncFn (/opt/build/repo/node_modules/next/dist/telemetry/trace/trace.js:5:584)\n{\n \"name\": \"tangonext\",\n \"version\": \"0.1.0\",\n \"private\": true,\n \"scripts\": {\n \"dev\": \"next dev\",\n \"build\": \"next build\",\n \"start\": \"next start\"\n },\n \"dependencies\": {\n \"@auth0/nextjs-auth0\": \"^1.3.0\",\n \"@prisma/client\": \"^2.21.2\",\n \"mongodb\": \"^3.6.6\",\n \"next\": \"10.1.3\",\n \"next-auth\": \"^3.18.0\",\n \"react\": \"17.0.2\",\n \"react-dom\": \"17.0.2\",\n \"styled-components\": \"^5.2.3\",\n \"typeorm\": \"^0.2.32\"\n },\n \"devDependencies\": {\n \"netlify-plugin-cache-nextjs\": \"^1.6.1\"\n }\n}\n", "text": "2021-05-01T22:00:00ZHi everyone,while trying to deploy my Next.js app in Netlify, I keep getting the following error:ModuleNotFoundError: Module not found: Error: Can’t resolve ‘mongodb-client-encryption’ in ‘/opt/build/repo/node_modules/mongodb/lib’Do you know how can I solve it?Thank you for any help !This is how my package.json looks now:I just tried as Build Commmand: CI=’ ’ npm run build - but it didn’t work", "username": "Matias_F" }, { "code": "", "text": "So, after digging around I found this page: https://www.mongodb.com/how-to/client-side-field-level-encryption-csfle-mongodb-node/Which provides vast information regarding to cliente side encryption.\nI installed the package that was missing, and got it to work.I hope this fix will be durable have a good day!", "username": "Matias_F" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Module not found when deploying in NextJs
2021-05-02T12:35:41.751Z
Module not found when deploying in NextJs
11,055
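For future readers: the module named in the build error is an optional dependency of the Node.js driver that is only needed for client-side field level encryption, but webpack still tries to resolve the driver's require of it during the Next.js build. The fix the author describes amounts to installing it explicitly; excluding the module in the webpack configuration is an alternative.

```
npm install mongodb-client-encryption
```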
null
[]
[ { "code": "ordered", "text": "I understand that with other operations such as insertMany there’s an option for ordered meaning that the inserts are performed in the order specified but its unclear how the update processes behaves. Especially in the event that more than one update to the same document is part of the list of updates.Is it possible that one update wins over the other every time? Or would it be seemingly random?We’re running into an issue where we have multiple updates being applied to documents, with a fix soon coming to address that. I was just unsure how Mongo itself would handle those updates assuming one or more documents are being updated multiple times during the updateMany command.What I would assume would be the case is that one update might be processed before the other (assuming they aren’t ordered) and then the update that modifies the document LAST is the state of the document going forward. Essentially “overwriting” the update from before (we aren’t doing $addToSet or anything - just property values) so a change to property x to value y might be changed to value z.Am I on the right track here?", "username": "Wyatt_Baggett" }, { "code": "", "text": "Hi @Wyatt_BaggettWelcome to MongoDB community.You are on the right track so each update is atomic but in a batch the last update wins.However, it will affect locking performance and result in increased writeConflicts witch will slow your write rate.You might be able to control the updates better with bulk.find().updateOne() updating values only once per document. To do this build the array command during your processing and execute when a definitive unique amount of updates accumulated…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
UpdateMany Behavior With Multiple Updates To Same Document
2021-05-03T17:48:32.867Z
UpdateMany Behavior With Multiple Updates To Same Document
2,913
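A sketch of the batched approach suggested in the reply, using the Node.js driver: accumulate one net change per document in application code, then submit the whole batch with a single bulkWrite so each _id is written only once. The map name, counter field, and collection name are assumptions for illustration.

```javascript
const { ObjectId } = require('mongodb');

// pendingIncrements: Map of _id hex string -> accumulated delta, built
// while processing so each document appears at most once per batch.
const ops = [...pendingIncrements].map(([id, delta]) => ({
  updateOne: {
    filter: { _id: new ObjectId(id) },
    update: { $inc: { counter: delta } }
  }
}));

if (ops.length > 0) {
  await db.collection('documents').bulkWrite(ops, { ordered: false });
}
```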
null
[]
[ { "code": "", "text": "Aggregate below is very slow (30 minutes). Any tips?{\n“command”: {\n“aggregate”: “contacts”,\n“pipeline”: [\n{\n“$match”: {\n“isDeleted”: false,\n“tenant_id”: {\n“$oid”: “5ec2a723a73af34fd5964c93”\n},\n“$or”: [\n{\n“emails”: {\n“$exists”: true,\n“$not”: {\n“$size”: 0\n}\n}\n},\n{\n“cellphones”: {\n“$exists”: true,\n“$not”: {\n“$size”: 0\n}\n}\n}\n]\n}\n},\n{\n“$lookup”: {\n“from”: “events”,\n“let”: {\n“cId”: “$_id”\n},\n“pipeline”: [\n{\n“$match”: {\n“$expr”: {\n“$and”: [\n{\n“$eq”: [\n“$contact_id”,\n“$$cId”\n]\n}\n]\n}\n}\n},\n{\n“$group”: {\n“_id”: “$contact_id”,\n“60900d520548ab3d4f3c657f”: {\n“$sum”: {\n“$cond”: [\n{\n“$and”: [\n{\n“$eq”: [\n“$channel”,\n“form”\n]\n},\n{\n“$eq”: [\n“$event”,\n“submitted”\n]\n},\n{\n“$eq”: [\n“$form_id”,\n{\n“$oid”: “6000194aae91a5ea3fa9975b”\n}\n]\n}\n]\n},\n1,\n0\n]\n}\n}\n}\n}\n],\n“as”: “events”\n}\n},\n{\n“$unwind”: {\n“path”: “$events”,\n“preserveNullAndEmptyArrays”: true\n}\n},\n{\n“$addFields”: {\n“60900d520548ab3d4f3c657f”: “$events.60900d520548ab3d4f3c657f”\n}\n},\n{\n“$project”: {\n“events”: 0\n}\n},\n{\n“$lookup”: {\n“from”: “events”,\n“let”: {\n“cId”: “$_id”\n},\n“pipeline”: [\n{\n“$match”: {\n“$expr”: {\n“$and”: [\n{\n“$eq”: [\n“$contact_id”,\n“$$cId”\n]\n}\n]\n}\n}\n},\n{\n“$group”: {\n“_id”: “$contact_id”,\n“60900d520548ab80b13c657e”: {\n“$sum”: {\n“$cond”: [\n{\n“$and”: [\n{\n“$eq”: [\n“$channel”,\n“form”\n]\n},\n{\n“$eq”: [\n“$event”,\n“submitted”\n]\n},\n{\n“$eq”: [\n“$form_id”,\n{\n“$oid”: “604f5890d5460d89d93949df”\n}\n]\n}\n]\n},\n1,\n0\n]\n}\n}\n}\n}\n],\n“as”: “events”\n}\n},\n{\n“$unwind”: {\n“path”: “$events”,\n“preserveNullAndEmptyArrays”: true\n}\n},\n{\n“$addFields”: {\n“60900d520548ab80b13c657e”: “$events.60900d520548ab80b13c657e”\n}\n},\n{\n“$project”: {\n“events”: 0\n}\n},\n{\n“$lookup”: {\n“from”: “events”,\n“let”: {\n“cId”: “$_id”\n},\n“pipeline”: [\n{\n“$match”: {\n“$expr”: {\n“$and”: [\n{\n“$eq”: [\n“$contact_id”,\n“$$cId”\n]\n}\n]\n}\n}\n},\n{\n“$group”: {\n“_id”: “$contact_id”,\n“60900d520548ab3cd73c657d”: {\n“$sum”: {\n“$cond”: [\n{\n“$and”: [\n{\n“$eq”: [\n“$channel”,\n“form”\n]\n},\n{\n“$eq”: [\n“$event”,\n“submitted”\n]\n},\n{\n“$eq”: [\n“$form_id”,\n{\n“$oid”: “604a0401f4ac575280617d95”\n}\n]\n}\n]\n},\n1,\n0\n]\n}\n}\n}\n}\n],\n“as”: “events”\n}\n},\n{\n“$unwind”: {\n“path”: “$events”,\n“preserveNullAndEmptyArrays”: true\n}\n},\n{\n“$addFields”: {\n“60900d520548ab3cd73c657d”: “$events.60900d520548ab3cd73c657d”\n}\n},\n{\n“$project”: {\n“events”: 0\n}\n},\n{\n“$lookup”: {\n“from”: “events”,\n“let”: {\n“cId”: “$_id”\n},\n“pipeline”: [\n{\n“$match”: {\n“$expr”: {\n“$and”: [\n{\n“$eq”: [\n“$contact_id”,\n“$$cId”\n]\n}\n]\n}\n}\n},\n{\n“$group”: {\n“_id”: “$contact_id”,\n“60900d520548abc18b3c657c”: {\n“$sum”: {\n“$cond”: [\n{\n“$and”: [\n{\n“$eq”: [\n“$channel”,\n“form”\n]\n},\n{\n“$eq”: [\n“$event”,\n“submitted”\n]\n},\n{\n“$eq”: [\n“$form_id”,\n{\n“$oid”: “607f0e6c21f8060d785195cd”\n}\n]\n}\n]\n},\n1,\n0\n]\n}\n}\n}\n}\n],\n“as”: “events”\n}\n},\n{\n“$unwind”: {\n“path”: “$events”,\n“preserveNullAndEmptyArrays”: true\n}\n},\n{\n“$addFields”: {\n“60900d520548abc18b3c657c”: “$events.60900d520548abc18b3c657c”\n}\n},\n{\n“$project”: {\n“events”: 0\n}\n},\n{\n“$match”: {\n“$or”: [\n{\n“60900d520548ab3d4f3c657f”: {\n“$gte”: 1\n}\n},\n{\n“60900d520548ab80b13c657e”: {\n“$gte”: 1\n}\n},\n{\n“60900d520548ab3cd73c657d”: {\n“$gte”: 1\n}\n},\n{\n“60900d520548abc18b3c657c”: {\n“$gte”: 1\n}\n}\n]\n}\n},\n{\n“$project”: {\n“_id”: 1\n}\n},\n{\n“$count”: “count”\n},\n{\n“$unwind”: 
“$count”\n}\n],\n“allowDiskUse”: true,\n“cursor”: {},\n“lsid”: {\n“id”: {\n“$binary”: “5iRNOC0dSTaKQySLPvB5uQ==”,\n“$type”: “03”\n}\n},\n“$clusterTime”: {\n“clusterTime”: {\n“$timestamp”: {\n“t”: 1620056396,\n“i”: 6\n}\n},\n“signature”: {\n“hash”: {\n“$binary”: “PHh4eHh4eD4=”,\n“$type”: “00”\n},\n“keyId”: {\n“$numberLong”: “6902231878846119939”\n}\n}\n},\n“$db”: “production”\n},\n“planSummary”: [\n{\n“IXSCAN”: {\n“isDeleted”: 1,\n“tenant_id”: 1,\n“cookies”: 1\n}\n}\n],\n“numYields”: 1286555,\n“queryHash”: “B03447D9”,\n“planCacheKey”: “C5207446”,\n“ok”: 0,\n“errMsg”: “Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected”,\n“errName”: “ClientDisconnect”,\n“errCode”: 279,\n“reslen”: 311,\n“locks”: {\n“ReplicationStateTransition”: {\n“acquireCount”: {\n“w”: 4124187\n}\n},\n“Global”: {\n“acquireCount”: {\n“r”: 4124187\n}\n},\n“Database”: {\n“acquireCount”: {\n“r”: 4124187\n}\n},\n“Collection”: {\n“acquireCount”: {\n“r”: 4124188\n}\n},\n“Mutex”: {\n“acquireCount”: {\n“r”: 2837632\n}\n}\n},\n“protocol”: “op_msg”,\n“millis”: 1800325\n}", "username": "Admin_MlabsPages_mLa" }, { "code": "tenant_id : 1,\nIsDeleted : 1,\nemails : 1,\ncellphones : 1,\n_id : 1\n", "text": "Hi @Admin_MlabsPages_mLa,This aggregation is very complex and have lots of inefficient operators:I don’t know why you expect it to be performant or why do you need such a complex aggregation.Usually it indicates on a normalised schema design not fitted for MongoDB where you have to join many relationships to get to an application like document.Unfortunately, the only recommendations I might have is to have to do an index onAnd index any lookup related fields, but I don’t expect a dramatic change Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Slow Aggregate for large collection
2021-05-03T18:50:00.704Z
Slow Aggregate for large collection
2,329
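The index suggestions from the reply, written out as shell commands. The field order shown is one reasonable choice based on the fields listed above and should be confirmed with explain() against the real workload.

```javascript
// Supports the initial $match on the contacts collection.
db.contacts.createIndex(
  { tenant_id: 1, isDeleted: 1, emails: 1, cellphones: 1, _id: 1 }
)

// Each $lookup sub-pipeline matches events on contact_id, so an index
// on the joined collection avoids scanning events once per contact.
db.events.createIndex({ contact_id: 1 })
```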
null
[ "aggregation" ]
[ { "code": "{ \"_id\" : ObjectId(\"602018db89877fd06f61d2f2\"), \"from_stop_I\" : 1908, \"to_stop_I\" : 1491, \"dep_time\" : ISODate(\"2016-09-05T07:40:00.000Z\"), \"arr_time\" : ISODate(\"2016-09-05T07:41:00.000Z\"), \"route_type\" : 3, \"trip_I\" : 18, \"seq\" : 37, \"route_I\" : 104} \n{ \"_id\" : ObjectId(\"602018db89877fd06f61d2f3\"), \"from_stop_I\" : 1491, \"to_stop_I\" : 1500, \"dep_time\" : ISODate(\"2016-09-05T07:41:00.000Z\"), \"arr_time\" : ISODate(\"2016-09-05T07:44:00.000Z\"), \"route_type\" : 3, \"trip_I\" : 18, \"seq\" : 38, \"route_I\" : 104}\n{ \"_id\" : ObjectId(\"602018dc89877fd06f61d721\"), \"from_stop_I\" : 1156, \"to_stop_I\" : 1158, \"dep_time\" : ISODate(\"2016-09-05T08:06:00.000Z\"), \"arr_time\" : ISODate(\"2016-09-05T08:06:00.000Z\"), \"route_type\" : 3, \"trip_I\" : 72, \"seq\" : 1, \"route_I\" : 104}\n{ \"_id\" : ObjectId(\"602018dc89877fd06f61d722\"), \"from_stop_I\" : 1158, \"to_stop_I\" : 1160, \"dep_time\" : ISODate(\"2016-09-05T08:06:00.000Z\"), \"arr_time\" : ISODate(\"2016-09-05T08:07:00.000Z\"), \"route_type\" : 3, \"trip_I\" : 72, \"seq\" : 2, \"route_I\" : 104}\n{ \"_id\" : ObjectId(\"602018dc89877fd06f61d746\"), \"from_stop_I\" : 1491, \"to_stop_I\" : 1500, \"dep_time\" : ISODate(\"2016-09-05T08:27:00.000Z\"), \"arr_time\" : ISODate(\"2016-09-05T08:30:00.000Z\"), \"route_type\" : 3, \"trip_I\" : 72, \"seq\" : 38, \"route_I\" : 104}\n{ \"_id\" : ObjectId(\"6020193c89877fd06f639dec\"), \"from_stop_I\" : 1156, \"to_stop_I\" : 1158, \"dep_time\" : ISODate(\"2016-09-05T23:20:00.000Z\"), \"arr_time\" : ISODate(\"2016-09-05T23:20:00.000Z\"), \"route_type\" : 3, \"trip_I\" : 6972, \"seq\" : 1, \"route_I\" : 104}\n{ \"_id\" : ObjectId(\"6020193c89877fd06f639ded\"), \"from_stop_I\" : 1158, \"to_stop_I\" : 1160, \"dep_time\" : ISODate(\"2016-09-05T23:20:00.000Z\"), \"arr_time\" : ISODate(\"2016-09-05T23:21:00.000Z\"), \"route_type\" : 3, \"trip_I\" : 6972, \"seq\" : 2, \"route_I\" : 104}\n{ \"_id\" : ObjectId(\"6020193c89877fd06f639dee\"), \"from_stop_I\" : 1160, \"to_stop_I\" : 1162, \"dep_time\" : ISODate(\"2016-09-05T23:21:00.000Z\"), \"arr_time\" : ISODate(\"2016-09-05T23:21:00.000Z\"), \"route_type\" : 3, \"trip_I\" : 6972, \"seq\" : 3, \"route_I\" : 104}\nfunction mapFunction () {\n var key = this.route_I;\n var value = { totalTrips: this.trip_I,totalRegistros:1 };\n emit( key, value );\n};\n\nfunction reduceFunction (key, trips) {\n var reducedObject = { totalTrips: 0,totalRegistros:0 };\n trips.forEach(function(value) {\n reducedObject.totalTrips ++;\n reducedObject.totalRegistros += value.totalRegistros;\n })\n ;\n return reducedObject;\n};\n\nfunction finalizeFunction(key, reducedObject) { \n \n//???\n return reducedObject;\n};\n\ndb.routes.mapReduce(\n mapFunction,\n reduceFunction,\n {\n out: \"routes_out\",\n finalize: finalizeFunction \n }\n)\n", "text": "Hi, I’m trying to calculate the number of trips (trip_I) that each route (route_I) has with MapReduce function, but doesn’t work. They also ask me to use a finalize function but I have not been able to figure out how. This is the data:This is the code:The output should be 3 trips for route 104. 
Thanks", "username": "Johanna_Valenzuela_S" }, { "code": "", "text": "Hi @Johanna_Valenzuela_S,Welcome to MongoDB community.Why not to use an aggregation $group stage and count the trips?https://docs.mongodb.com/manual/reference/operator/aggregation/sum/#use-in--group-stageThis is the recommended way.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny , the professor asks me to use mapreduce only. With $group I know how do that, but it has to be with mapreduce ", "username": "Johanna_Valenzuela_S" }, { "code": "", "text": "Hi @Johanna_Valenzuela_S,I see. Well there are some examples on the map reduce command including a sum per customer.This sounds exactly what you need suming 1 per document:\nhttps://docs.mongodb.com/manual/reference/command/mapReduce/#return-the-total-price-per-customerThanks\nPavel", "username": "Pavel_Duchovny" } ]
Problems with mapreduce function to generate counts MongoDB JavaScript
2021-05-02T20:32:18.689Z
Problems with mapreduce function to generate counts MongoDB JavaScript
1,982
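Since the assignment requires mapReduce with a finalize step, one hedged way to count the distinct trips (trip_I) per route (route_I) is sketched below: reduce merges the trip ids seen so far, and finalize turns that set into a count, which is exactly the kind of post-processing finalize exists for. On the sample data this produces 3 trips for route 104.

```javascript
var mapFn = function () {
  // One emit per document, carrying the trip id of that segment.
  emit(this.route_I, { trips: [this.trip_I] });
};

var reduceFn = function (key, values) {
  // Merge trip ids without duplicates. Reduce can run repeatedly, so its
  // output must have the same shape as the values produced by map.
  var seen = {};
  values.forEach(function (v) {
    v.trips.forEach(function (t) { seen[t] = true; });
  });
  return { trips: Object.keys(seen) };
};

var finalizeFn = function (key, reduced) {
  // Convert the set of distinct trip ids into the final count.
  return { totalTrips: reduced.trips.length };
};

db.routes.mapReduce(mapFn, reduceFn, { out: "routes_out", finalize: finalizeFn });
```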
null
[ "kafka-connector" ]
[ { "code": "live-bckp-mongo-sk-campaigns", "text": "I’m struggling with planning my data bootstrap because I may lose some data. I was thinking make a two step bootstrap, something like this:1- Load existing mongo data at D-1 (D = Go Live Date, in this case, D-1 means the exact day before go live date) using a connector with copy.existing = true\n2 - After all data is loaded, I drop the connector.\n3 - At D-0 I want start collection live data, so i create a new connector without copy.existing property\n- It’s assumed that between D-1 and D-0 users worked on application normally.Using this strategy, my expectation was that i could able to collect the between before i dropped the first connector and the one created a day later. Unfortunately that wasn’t happened.So, i tried a new test.1 - Created a connector without copy.existing.\n2 - Made some changes to db and validated successfully that data flowed to Kafka.\n3 - Then i shutdown my KSQLDB Server (i use the KSQLDB embedded Kafka Connect)\n4 - Made some changes in some db collections\n5 - Started up again KSQLDB Server\n6 - After connector was working again, i noticed that the data from step 4 wasnt recoved (data changes after restart was ok)So, my question is. Do Kafka Source Connector manages downtimes or is some property in my connector config that i’m missing?CREATE SOURCE CONNECTOR live-bckp-mongo-sk-campaigns WITH (\n“connector.class” = ‘com.mongodb.kafka.connect.MongoSourceConnector’,\n“connection.uri” = ‘{uri}’,\n“database” = ‘{db}’,\n“collection” = ‘{collection}’,\n“topic.prefix” = ‘{myprefix}’,\n“change.stream.full.document” = ‘updateLookup’,\n“output.format.value” = ‘schema’,\n“output.schema.value”= ‘{myschema}’\n);", "username": "Andre_Almeida1" }, { "code": "", "text": "Why drop the connector itself? Why not just stop it? The Source connector uses MongoDB Change streams under the covers. When you start the connector for the first time it will capture the _id of the event (the resume token) and store this in one of two different locations depending on the topology of the Kafka Connect environment:As events come in on the source the latest resume token is kept up to date in these locations.Now if you stop the connector and time goes by upon restarting the connector, it will read the last resume token and read events from this location all the way to the current event and effectively catch up to the current events.When you delete the connector, you are deleting the resume token so when you recreate the connector and start it , it doesn’t know the past.", "username": "Robert_Walters" }, { "code": "", "text": "Do we have any documentation around how to deploy MongoDb Connector in distributed mode in kubernetes?", "username": "Gaurav_Danani" } ]
Kafka Source Connector - Recovering from down time
2020-12-16T18:48:08.842Z
Kafka Source Connector - Recovering from down time
3,866
null
[ "compass" ]
[ { "code": "", "text": "Hi,I have installed mongoDB compass GUI (MSI installer). After completing the installation i clicked MongoDB compass , the system shows blank screen(black).Kindly guide me thro this to open databasesSethu", "username": "Sethuraman_V" }, { "code": "", "text": "What is the version of the Compass you are trying? Try downloading the latest version 1.21.2 (I have a copy and it works fine on Windows 7).", "username": "Prasad_Saya" }, { "code": "", "text": "same version i downloaded.\n\nimage1366×768 36.2 KB\n", "username": "Sethuraman_V" }, { "code": "", "text": "I am facing the same problem. I have installed the latest version of mongodb compass 1.25.0 and mongodb of ion 4.2.12. When I am opening mongodb compass it is showing black screen.", "username": "BUSHRA_NIKHAT" }, { "code": "", "text": "Are you using Compass in a Citrix environment?", "username": "Felicia_Hsieh" } ]
MongoDB Compass blank screen on Windows 7
2020-05-18T06:23:37.939Z
MongoDB Compass blank screen on Windows 7
3,757
null
[ "swift" ]
[ { "code": "", "text": "I just have a lot confusion about what I need to create an IOS app Using MongoDB running on AWS, basically im building a social app but I have a lot of confusion in my mind about these topics:1 - Setup MongoDB on Xcode - What do I need to setup mongo in my project? In official documents there’s a Realm configuration, but my app is only on cloud so why should I need Realm that is a local database?2 Back End services - I see tutorials (very confusing) using Vapor, Express and Node JS, MongoKitten and other tools but I don’t get what do these tools do, can someone provide me a clear description about what I need and how to set up mongo on Xcode?3 Authetication - Id like to use Firebase Authentication in my project in order to login users and get their tokens given me by Firebase, then I’d put these tokens in a collections called “Users” on mongoDB in order to authenticate users to mongoDB too, is it possible?If needed I’d pay an expert who dedidates me sometime to clarify these topics, I want to get this knowledge, so please help me.", "username": "Marco_Vastolo" }, { "code": "", "text": "Hi @Marco_Vastolo,Welcome to MongoDB communityThe obsolutly best way to build swift apps with MongoDB on aws is with Atlas and Realm apps + Realm sdk:https://docs.mongodb.com/realm/tutorial/ios-swift/Realm allows custom function authontication so you might work with firebase api, however, there are many included auth providers like google,apple and Facebook so you might skip firebase .Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel, how can I connect an Xcode project Mongo DB atlas using MongoSwift driver?", "username": "Marco_Vastolo" }, { "code": "", "text": "@Marco_Vastolo here you go:https://docs.mongodb.com/drivers/swift/#connect-to-mongodb-atlasBut I really recommend exploring the swift realm sdk it will boost your Development in orders of magnitude.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Marco_Vastolo, I’m one of the developers of MongoSwift.The Swift driver would be useful if you’d like to write a traditional backend application in Swift. This might be useful if, for example, you’d like your iOS app to send requests to an HTTP server which handles database interaction. From your backend you can use MongoSwift to connect to a MongoDB deployment.You could use a web framework such as Vapor to implement the HTTP server. We have an example project available here that uses the driver with Vapor which you could use as a template for writing such a backend.That said, given you are building an iOS app I agree it is worth exploring what Realm supports as well to figure out what best suits your use case.", "username": "kmahar" } ]
MongoDB - Swift integration
2021-04-05T13:41:00.265Z
MongoDB - Swift integration
3,928
null
[ "dot-net" ]
[ { "code": "BsonSerializer.LookupSerializer<T>()BsonSerializer.cs, BsonSerializerRegistry.cs (_cache.TryAdd(), _cache.GetOrAdd())IBsonSerializationProviderDateTimeOffsetSerializerBsonDateTimeDateTimeOffsetBsonType.DateTime (BsonDateTime)class Test\n{\n [BsonRepresentation(BsonType.DateTime)]\n public DateTimeOffset ts { get; set; }\n}\nvar col = db.GetCollection<Test>(typeof(Test).Name);\nBsonSerializer.RegisterSerializer<DateTimeOffset>(new MyDateTimeOffsetSerializer());\nif(BsonSerializer.LookupSerializer<DateTimeOffset>() == null)\n BsonSerializer.RegisterSerializer<DateTimeOffset>(new MyDateTimeOffsetSerializer());\nRegisterSerializer", "text": "It’s a little bit complicated story, so I put my suggestion at top. I’ve gone through some part of the source on github so… If you’re interested in detail, please go on and read.Please don’t register a serializer for a type so automatically, like BsonSerializer.LookupSerializer<T>() is invoked.If #1 is somehow a must, please provide a way that I can replace the serializer with another one.\nRef: BsonSerializer.cs, BsonSerializerRegistry.cs (_cache.TryAdd(), _cache.GetOrAdd())Can I use IBsonSerializationProvider to replace the existing one with mine for a type? (I didn’t try it this way)Please make DateTimeOffsetSerializer stronger. It has timezone and so BsonDateTime and MongoDb date as well. I think it is better than DateTime in a C# class. And you really should consider to have it supports the conversion.I’m using NET 5 with C# Driver 2.12.2The issue all came from the datatype mapping from DateTimeOffset to BsonType.DateTime (BsonDateTime). I have a simplified class like:and I would like to have my code like following in order to get a typed-collection connection:I knew it won’t work and will get the exception says:‘DateTime is not a valid representation for a DateTimeOffsetSerializer.’Then I made my own serializer to do it and register it at very beginning of my code. Everything works perfect as I expected.Today, after review to this line, I felt it might be safer to check the existence before register it. so I modified it become:Boom, code crashes on the line of RegisterSerializer, all the time. And the exception says:There is already a serializer registered for type DateTimeOffset", "username": "jsnhsu" }, { "code": "DateTimeOffsetBsonSerializerBsonSerializer.LookupSerializer<T>()IBsonSerializationProviderDateTimeOffsetSerializerPrimitiveSerializationProviderDateTimeOffsetPrimitiveSerializationProviderBsonSerializer.RegisterSerializer<T>(IBsonSerializer<T> serializer)IBsonSerializationProviderBsonSerializer.RegisterSerializationProvider(IBsonSerializationProvider provider)IBsonSerializationProviderIBsonSerializationProviderIBsonSerializationProviderIBsonSerializationProviderDateTimeOffsetSerializerDateTimeOffsetDateTimeDateTimeOffsetBsonDateTimeDateTimeOffsetDateTimeDateTimeKindDateTimeDateTimeOffsetDateTimeKindDateTimeOffsetBsonDateTimeBsonDateTimeDateTimeDateTime.KindDateTimeOffsetDateTimeOffsetBsonDateTimeBsonDateTimeDateTimeOffsetSerializerDateTimeOffsetDateTimeDateTimeOffsetDateTimeOffsetBsonDateTime", "text": "Hi, Jason,Thank you for reaching out to us about the issue with DateTimeOffset and the .NET/C# driver.This is expected behaviour from BsonSerializer. When an unregistered type is looked up via BsonSerializer.LookupSerializer<T>() we walk the list of registered IBsonSerializationProvider instances looking for one that implements a serializer for the requested type. 
DateTimeOffsetSerializer is returned from PrimitiveSerializationProvider. So the first time that you lookup a serializer for DateTimeOffset, we don’t find one registered and register the one from PrimitiveSerializationProvider.If you want to override this behaviour and provide your own serializer for a type, you can register one at startup either via BsonSerializer.RegisterSerializer<T>(IBsonSerializer<T> serializer) or you can include it in a custom IBsonSerializationProvider registered via BsonSerializer.RegisterSerializationProvider(IBsonSerializationProvider provider). Note that IBsonSerializationProvider instances are consulted in reverse order of registration - e.g. last one first - so that users are able to customize serialization behaviour as needed.Hopefully that explains why #1 is required. In theory we could provide a way to determine if any IBsonSerializationProvider has a serializer for a particular type by adding additional query methods to the interface, but this would be a breaking change. As well there are catch-all IBsonSerializationProvider implementations that allow us to return a generic serializer if a custom one hasn’t been provided. So in practice it is very uncommon to not find any serializer for a particular type. The intent of the design is to override/customize serialization during application startup.Regarding question #2, we don’t provide a mechanism to swap serializers as this can lead to race conditions where different serializers could be used to serializer/deserialize instances. The intent of the design is to register all serializers or serialization providers during application startup and that the mapping of type to serializer instance remains stable throughout the application lifetime.Moving onto question #3, yes, you can implement your own IBsonSerializationProvider to override the built-in DateTimeOffsetSerializer. Since you will register the provider last during your application initialization, your custom provider will be used to find a serializer for DateTimeOffset.Switching gears a bit for question #4, this is challenging due to the representation mismatch between DateTime, DateTimeOffset, and BsonDateTime. I agree with you that .NET should have used DateTimeOffset for its date-time representation from the start, but Microsoft didn’t. DateTime originally stored the date and time without any timezone information. DateTimeKind was retrofitted later to differentiate between local and UTC. But DateTime is used extensively in .NET code for better or worse. DateTimeOffset includes timezone information, which removes a lot of ambiguity and is more nuanced than the DateTimeKind fix. So I understand your desire to use DateTimeOffset in your applications.Now how does this relate to MongoDB? MongoDB stores date-time instances as BsonDateTime, which is a 64-bit integer representing the number of milliseconds since the Unix epoch (January 1, 1970) in UTC. (See the BSON spec for details.) There is no timezone information as all date-times are converted to UTC for storage. Although you can use BsonDateTime in your applications, it is more natural to use DateTime. If the DateTime.Kind is local, then we convert to UTC based on the current timezone of the client before storing it to the database. If it is UTC already, we do not perform the conversion.How does this relate to DateTimeOffset? If we were to store DateTimeOffset as BsonDateTime, we would lose the timezone information as we store BsonDateTime as a simple int64 in UTC. 
The DateTimeOffsetSerializer serializes values as an array (default), document, or string, which allows us to store the timezone information along with the date-time value itself. You could implement your own custom serializer for DateTimeOffset values and make whatever assumptions about the timezone is appropriate for your application and thus only store the DateTime portion of DateTimeOffset, but we cannot make those simplifying assumptions in a generic way that would work for all users of our driver.Hopefully this provides some clarity on why serialization behaves the way it does and why we cannot automatically serialize DateTimeOffset values into simple BsonDateTime values in the database. Please let us know if you have any additional questions.Sincerely,\nJames", "username": "James_Kovacs" } ]
BsonSerializer issue
2021-04-30T17:34:31.318Z
BsonSerializer issue
8,936
null
[ "queries" ]
[ { "code": "", "text": "Hello, I’m pretty new at working with mongoDB. I’m using it as my database in a small javascript project. I’m looking for a solution to something I’d like to do.I have a cluster full of userdata, with several properties in each. What would be the best way to retrieve (preferably as an array?) every “id” (not _id) field content, but only where “propertyA” is bigger than “propertyB”. These two properties are both stored as numbers, of course. I’d like to have a list of their id so I can perform an action on all of them in my software, namely to increment propertyB by a number.If this is something not possible or too complex, getting an array list of EVERY “id” property would be useful aswell.Thanks in advance!", "username": "Lord_Wasabi" }, { "code": "var idObjArr = db.collection.find( { $expr: { $gt: [ \"$propB\", \"$propA\" ] } }, { _id: 0, id: 1 } ).toArray()\nvar idArr = []\nidObjArr.forEach(doc => idArr.push(doc.id))\nvar idArr = db.collection.distinct(\"id\", { $expr: { $gt: [ \"$propB\", \"$propA\" ] } } )", "text": "Hello @Lord_Wasabi, here are couple of ways to get the list of id values as an array from your collection based upon the condition:Or, you can use this approach; this one only retruns distinct (unique) id values.var idArr = db.collection.distinct(\"id\", { $expr: { $gt: [ \"$propB\", \"$propA\" ] } } )", "username": "Prasad_Saya" }, { "code": " let idObjArr = client.db.userdata.find( { $expr: { $gt: [ \"$currentHP\", \"$maxHP\" ] } }, { _id: 0, id: 1 } )\n let idArr = []\n idObjArr.forEach(doc => idArr.push(doc.id))\n message.channel.send(idArr);\n", "text": "Hello, thank you for your answer. I tried using your solution like this:However I’m getting an “Cannot send empty message” here, when trying to print out the array. Seems like nothing is being collected?I accessed my database like how I usually do. “currentHP” and “maxHP” are the properties. Let me know if I missed something!", "username": "Lord_Wasabi" }, { "code": "idObjArr.forEach(doc => idArr.push(doc.id))console.log(idArr)idObjArr.forEach(doc => idArr.push(doc.id))let count = client.db.userdata.find( { $expr: { $gt: [ \"$currentHP\", \"$maxHP\" ] } }, { _id: 0, id: 1 } ).count()\nconsole.log(\"Count: \", count);", "text": "idObjArr.forEach(doc => idArr.push(doc.id))Please do a console.log(idArr) after the statement idObjArr.forEach(doc => idArr.push(doc.id)) and see what is there in the array.Also, you can check if any documents are returned from the query itself:", "username": "Prasad_Saya" }, { "code": "", "text": "The first one returns [ ], aka an empty array and the count one returns 0", "username": "Lord_Wasabi" }, { "code": "mongotest{ id: 1, currentHP: 100, maxHP: 200 },\n{ id: 2, currentHP: 130, maxHP: 45 }\nlet objs = db.test.find( { $expr: { $gt: [ \"$currentHP\", \"$maxHP\" ] } }, { _id: 0, id: 1 } )\nlet idArr = []\nobjs.forEach(doc => idArr.push(doc.id))\nprintjson(idArr) // this returns [ 2 ]", "text": "count one returns 0That means there are no matching documents, hence the count zero. The query is correct, for example you can try this in the mongo shell:Take a test collection with two documents:And the code:", "username": "Prasad_Saya" }, { "code": "", "text": "That shouldn’t be the case, as for example my document already should meet the requirements of the query. 
Probably should have mentioned that these properties are nested, not sure if it matters(At the bottom)", "username": "Lord_Wasabi" }, { "code": "stats{ $gt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] }", "text": "my document already should meet the requirements of the query.It does meet the query filter requirement. But, the two fields are in an embedded document stats. So, you need to refer to the two fields as follows in your query (I omitted the remaining parts of the query for brevity):{ $gt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] }", "username": "Prasad_Saya" }, { "code": " let idObjArr = await client.db.userdata.find( { $expr: { $gt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] } }, { \n _id: 0, id: 1 } )\n let idArr = []\n idObjArr.forEach(doc => idArr.push(doc.id))\n console.log(idArr)", "text": "The code below still returns an empty array and count 0, unfortunately", "username": "Lord_Wasabi" }, { "code": "", "text": "The code below still returns an empty array and count 0, unfortunatelyWhat are the input documents you are working with? Can you sample couple of them?", "username": "Prasad_Saya" }, { "code": "", "text": "The picture above is an example. The project is related to a Discord server, people are saved to the database by their Discord ID (“id” field) and have several properties attached to them. Not everyone has the “CurrentHP” and “MaxHP” properties, but most of them do and everyone who does, has it in the format seen on the picture above. There are other properties in the “stats” object too", "username": "Lord_Wasabi" }, { "code": "currentHP100maxHP109currentHPmaxHPid: 2{ id: 1, stats: { currentHP: 100, maxHP: 200 } },\n{ id: 2, stats: { currentHP: 130, maxHP: 45 } }", "text": "The picture above is an example.In the image the currentHP (100) is less than the maxHP (109) - so it will not select the document. The document will get selected when the currentHP is greater then the maxHP. For example, in the following two documents only the document with id: 2 will be selected:", "username": "Prasad_Saya" }, { "code": "", "text": "I see, I probably explained it badly, however the opposite is what I’m trying to achieve. I am trying to get every document’s “id” property, where currentHP < maxHPI tried switching the two property values in the query around, but that still resulted in an empty array.", "username": "Lord_Wasabi" }, { "code": "$lt{ $lt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] }", "text": "where currentHP < maxHPYou can use the following filter with the $lt (less than comparison operator):{ $lt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] }", "username": "Prasad_Saya" }, { "code": "client.db.userdata.find( { $expr: { $lt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] } }, { _id: 0, id: 1 } )", "text": "The code below still returns an empty array. I seriously feel like I’m missing something here, lol", "username": "Lord_Wasabi" }, { "code": "", "text": "The code below still returns an empty array. I seriously feel like I’m missing something here,May be. I see you are using the MongoDB NodeJS driver to work with he database data. 
Can you write a query to count he number of documents in the collection and tell me about the result.", "username": "Prasad_Saya" }, { "code": "client.db.userdata.find( { } ).count()", "text": "The code below returns 202", "username": "Lord_Wasabi" }, { "code": "{ $expr: { $lt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] } }", "text": "@Lord_Wasabi, then the filter should work with the sample document in the image you had attached in the earlier comment:{ $expr: { $lt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] } }", "username": "Prasad_Saya" }, { "code": " let idObjArr = client.db.userdata.find( { $expr: { $lt: [ \"$stats.currentHP\", \"$stats.maxHP\" ] } }, { _id: 0, \n id: 1 } )\n let idArr = []\n idObjArr.forEach(doc => idArr.push(doc.id))\n console.log(idArr)\n", "text": "But it does not Here is everything I have:My full document:My code:My results:The results when I’m looking up every document in the cluster:", "username": "Lord_Wasabi" }, { "code": "", "text": "See the following documents with examples from MongoDB NodeJS Driver manual if you are writing the code properly:", "username": "Prasad_Saya" } ]
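If matching documents do exist, one common cause of an empty array with the Node.js driver is that the cursor's asynchronous methods are never awaited, so the array is printed before any documents arrive. A sketch of the same query written with await/toArray and the driver's projection option, assuming the standard MongoDB Node.js driver and the collection/field names used in this thread:

```js
// `client` is assumed to be an already-connected MongoClient
const collection = client.db().collection("userdata");

const docs = await collection
  .find(
    { $expr: { $lt: ["$stats.currentHP", "$stats.maxHP"] } },
    { projection: { _id: 0, id: 1 } }
  )
  .toArray();

// the Discord ids of everyone whose currentHP is below their maxHP
const idArr = docs.map((doc) => doc.id);
console.log(idArr);
```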
Getting a list of a specific data
2021-04-30T11:01:26.206Z
Getting a list of a specific data
12,066
null
[ "upgrading" ]
[ { "code": "sudo dnf update\n\n Problem 1: cannot install the best update candidate for package mongodb-org-database-tools-extra-4.4.4-1.el8.x86_64\n - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-4.4.5-1.el8.x86_64\n Problem 2: package mongodb-org-tools-4.4.5-1.el8.x86_64 requires mongodb-org-database-tools-extra = 4.4.5, but none of the providers can be installed\n - cannot install the best update candidate for package mongodb-org-tools-4.4.4-1.el8.x86_64\n - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-4.4.5-1.el8.x86_64\n Problem 3: package mongodb-org-4.4.5-1.el8.x86_64 requires mongodb-org-tools = 4.4.5, but none of the providers can be installed\n - package mongodb-org-tools-4.4.5-1.el8.x86_64 requires mongodb-org-database-tools-extra = 4.4.5, but none of the providers can be installed\n - cannot install the best update candidate for package mongodb-org-4.4.4-1.el8.x86_64\n - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-4.4.5-1.el8.x86_64\n Problem 4: problem with installed package mongodb-org-4.4.4-1.el8.x86_64\n - package mongodb-org-4.4.4-1.el8.x86_64 requires mongodb-org-mongos = 4.4.4, but none of the providers can be installed\n - package mongodb-org-4.4.5-1.el8.x86_64 requires mongodb-org-tools = 4.4.5, but none of the providers can be installed\n - cannot install both mongodb-org-mongos-4.4.5-1.el8.x86_64 and mongodb-org-mongos-4.4.4-1.el8.x86_64\n - package mongodb-org-tools-4.4.5-1.el8.x86_64 requires mongodb-org-database-tools-extra = 4.4.5, but none of the providers can be installed\n - cannot install the best update candidate for package mongodb-org-mongos-4.4.4-1.el8.x86_64\n - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-4.4.5-1.el8.x86_64\n==============================================================================================================================================================================================================================================\n Package Architecture Version Repository Size\n==============================================================================================================================================================================================================================================\nSkipping packages with conflicts:\n(add '--best --allowerasing' to command line to force their upgrade):\n mongodb-org-mongos x86_64 4.4.5-1.el8 mongodb-org 17 M\nSkipping packages with broken dependencies:\n mongodb-org x86_64 4.4.5-1.el8 mongodb-org 11 k\n mongodb-org-database-tools-extra x86_64 4.4.5-1.el8 mongodb-org 23 k\n mongodb-org-tools x86_64 4.4.5-1.el8 mongodb-org 11 k\n\nTransaction Summary\n==============================================================================================================================================================================================================================================\nSkip 4 Packages\n\nNothing to do.\nComplete!\nsudo rpm -e mongodb-org-database-tools-extra-4.4.4-1.el8.x86_64 mongodb-org-tools-4.4.4-1.el8.x86_64\n...\nsudo dnf update \nkeybase 36 kB/s | 3.3 kB 00:00 \nDependencies resolved.\n==============================================================================================================================================================================================================================================\n Package Architecture Version Repository 
Size\n==============================================================================================================================================================================================================================================\nUpgrading:\n mongodb-org-mongos x86_64 4.4.5-1.el8 mongodb-org 17 M\n mongodb-org-server x86_64 4.4.5-1.el8 mongodb-org 22 M\n mongodb-org-shell x86_64 4.4.5-1.el8 mongodb-org 14 M\n\nTransaction Summary\n==============================================================================================================================================================================================================================================\nUpgrade 3 Packages\n...\nsudo dnf install -y mongodb-org\nLast metadata expiration check: 0:00:19 ago on Tue 20 Apr 2021 10:27:49 AM EDT.\nDependencies resolved.\n\n Problem: package mongodb-org-4.4.5-1.el8.x86_64 requires mongodb-org-tools = 4.4.5, but none of the providers can be installed\n - package mongodb-org-tools-4.4.5-1.el8.x86_64 requires mongodb-org-database-tools-extra = 4.4.5, but none of the providers can be installed\n - cannot install the best candidate for the job\n - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-4.4.5-1.el8.x86_64\n==============================================================================================================================================================================================================================================\n Package Architecture Version Repository Size\n==============================================================================================================================================================================================================================================\nInstalling:\n mongodb-org x86_64 4.4.4-1.el8 mongodb-org 10 k\nInstalling dependencies:\n mongodb-org-database-tools-extra x86_64 4.4.4-1.el8 mongodb-org 20 k\n mongodb-org-tools x86_64 4.4.4-1.el8 mongodb-org 10 k\nDowngrading:\n mongodb-org-mongos x86_64 4.4.4-1.el8 mongodb-org 22 M\n mongodb-org-server x86_64 4.4.4-1.el8 mongodb-org 28 M\n mongodb-org-shell x86_64 4.4.4-1.el8 mongodb-org 18 M\nSkipping packages with broken dependencies:\n mongodb-org x86_64 4.4.5-1.el8 mongodb-org 11 k\n mongodb-org-database-tools-extra x86_64 4.4.5-1.el8 mongodb-org 23 k\n mongodb-org-tools x86_64 4.4.5-1.el8 mongodb-org 11 k\n\nTransaction Summary\n==============================================================================================================================================================================================================================================\nInstall 3 Packages\nDowngrade 3 Packages\nSkip 3 Packages\n", "text": "Tried using DNF to update from 4.4.4 to 4.4.5 from the repo.mongodb.org repository, but dependencies are not met.Also tried removing packages and upgrading manually, but they are downgraded on the next update:", "username": "Matt_B" }, { "code": "exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools,mongodb-org-database-tools-extra", "text": "Hi !\nFor now, I think you should exclude this upgrade in your dnf.conf :exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools,mongodb-org-database-tools-extra", "username": "mediaklan" }, { "code": "", "text": "I tried to exclude these packages but still no luck. 
Every time when I run “sudo dnf update -y” then I get below output.Last metadata expiration check: 0:03:54 ago on Sunday 02 May 2021 07:41:00 AM.\nDependencies resolved.Problem 1: cannot install the best update candidate for package mongodb-org-database-tools-extra-4.4.4-1.el8.x86_64Skip 2 PackagesNothing to do.\nComplete!", "username": "Ninad_Kulkarni" }, { "code": "", "text": "Downgrade your packages to 4.4.4-1.el8, then exclude the packages.\nThis is what I’m doing for now.\nNow, keep in mind that it doesn’t solve anything, we’re just hidding from updates that we can’t get, for now.", "username": "mediaklan" } ]
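Until the repository's 4.4.5 dependency issue is fixed, the workaround the thread converges on — stay on the installed 4.4.4 packages and hide the broken update — can be expressed roughly as follows (the glob and the downgrade step are a sketch; adjust to the packages actually installed):

```sh
# Fall back to the installed 4.4.4 set if dnf already half-upgraded anything
sudo dnf downgrade 'mongodb-org*'

# Skip the broken 4.4.5 packages for a single update run
sudo dnf update --exclude='mongodb-org*'

# Or make it permanent in /etc/dnf/dnf.conf under [main]:
#   exclude=mongodb-org*
```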
Fedora 33 update to 4.4.5 dependency problem
2021-04-20T14:48:31.650Z
Fedora 33 update to 4.4.5 dependency problem
5,298
null
[ "installation" ]
[ { "code": "sudo systemctl enable mongod && sudo systemctl start mongodFailed to execute operation: No such file or directory sudo systemctl status mongod [ec2-user@ip-172-31-20-12 ~]$ sudo systemctl status mongod\n\n● mongod.service\n Loaded: not-found (Reason: No such file or directory)\n Active: failed (Result: exit-code) since Thu 2021-04-15 17:32:06 UTC; 2 weeks 3 days ago\n Main PID: 23759 (code=exited, status=0/SUCCESS)\n\nApr 15 17:32:06 ip-172-31-20-12.eu-west-3.compute.internal systemd[1]: Starting MongoDB Database Server...\nApr 15 17:32:06 ip-172-31-20-12.eu-west-3.compute.internal systemd[2190]: Failed at step EXEC spawning /usr/bin/mongod: No such file or directory\nApr 15 17:32:06 ip-172-31-20-12.eu-west-3.compute.internal systemd[1]: mongod.service: control process exited, code=exited status=203\nApr 15 17:32:06 ip-172-31-20-12.eu-west-3.compute.internal systemd[1]: Failed to start MongoDB Database Server.\nApr 15 17:32:06 ip-172-31-20-12.eu-west-3.compute.internal systemd[1]: Unit mongod.service entered failed state.\nApr 15 17:32:06 ip-172-31-20-12.eu-west-3.compute.internal systemd[1]: mongod.service failed. \n", "text": "i want to install rocket chat on amazon linux 2 i need mongodb i followed all the steps but when i enter this command sudo systemctl enable mongod && sudo systemctl start mongod here is the message that appears Failed to execute operation: No such file or directory when i lunch sudo systemctl status mongod i gethelp me please", "username": "Armel_KOBLAN" }, { "code": "", "text": "If the file /usr/bin/mongod does not exist then the most likely reason is that the package for mongo is not installed. Alternatively, the file might be elsewhere, like /usr/sbin/mongod, /bin/mongod or /sbin/mongod. Adjust the systemd configuration file accordingly.", "username": "steevej" }, { "code": " / usr / bin / mongod/etc/systemd/system/mongodb.service", "text": "I have the / usr / bin / mongod directory I think I know the problem I need the contents of the file /etc/systemd/system/mongodb.service", "username": "Armel_KOBLAN" } ]
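The key line in that status output is `Failed at step EXEC spawning /usr/bin/mongod: No such file or directory`: systemd found the unit file but not the mongod binary itself, which usually means the server package is missing or only partially installed. A quick way to confirm and repair, assuming the standard mongodb-org packages from the MongoDB yum repository on Amazon Linux 2:

```sh
# Is the binary actually where the unit file expects it?
ls -l /usr/bin/mongod

# If not, (re)install the server package
sudo yum install -y mongodb-org          # or: sudo yum reinstall -y mongodb-org-server

# Reload systemd and start the service
sudo systemctl daemon-reload
sudo systemctl enable --now mongod
```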
Enable and start mongodb on rocketchat
2021-05-03T09:10:33.702Z
Enable and start mongodb on rocketchat
5,430
null
[ "java", "crud" ]
[ { "code": "", "text": "Hi All,I’m facing the following task:My question is: what is the best way to accomplish this task ? Obviously, we can’t get 300M+ document at once, is there a streaming capability in mongo-java that allows my application process the documents returned from the server as a stream.Another question is what is the most effective way to update the document after it’s retrieved and amended ? Even better, is there a way that the above task is done at the server side (so we don’t have to get, amend, and put back) ? This is similar to a simple update query (where we can “set” the value we want, and send it at once, the update is done at the server).Comments, Suggestions, Questions are welcome.Thanks,\nTuan", "username": "Tuan_Dinh" }, { "code": "", "text": "Hello @Tuan_Dinh,To update large set of documents, use Bulk Write operations. The Java client (uses MongoDB Java Driver) code you write and execute submits the update query to the server, the update happens on the server and you get the result (status, number of documents updated, etc.) back at the Java client.Bulk updates send large set of updates calls as a single request; i.e., one request to the server and get one result of the update from the server. It is efficient as there is minimum network usage to get to the server. All the work happens at the database server.You can use Updates with Aggregation Pipeline for complex string manipulation and update operation.", "username": "Prasad_Saya" }, { "code": "token\t{\n\t\tname: \"John\",\n\t\ttoken : \"part1-part2-part3\" \n\t},\n\t\t{\n\t\tname: \"Peter\",\n\t\ttoken : \"part1-part2-part3\" \n\t},\n\t\t{\n\t\tname: \"Jack\",\n\t\ttoken : \"part1-part2-part3\" \n\t}\n\t....\n]\nALL\t{\n\t\tname: \"John\",\n\t\ttoken : \"sha256Hash(part1-part3)\" \n\t},\n\t\t{\n\t\tname: \"Peter\",\n\t\ttoken : \"sha256Hash(part1-part3)\" \n\t},\n\t\t{\n\t\tname: \"Jack\",\n\t\ttoken : \"sha256Hash(part1-part3)\" \n\t}\n\t....\n]\nBulkWriteUpdateOnewriteBulkWriteUpdateOnefilterQuery()", "text": "Hi @Prasad_Saya,Thank for the reply, but can you elaborate this a bit ? I’m familiar with BulkWrite operation but still can’t get the details right. Consider the following example:Let’s say we have current collection as below:Each document has a token field that is comprised of 3 parts.The task is to update ALL document so that for each document we update the token by the logic:Desired:How exactly would the BulkWrite look like ? Let’s say we choose UpdateOne as the write operation in the BulkWrite. UpdateOne requires a filter (to find the document to update, typically a Query() object but there’s no criteria here as we want to update every single document), and then the “update” part, how to set the value that we are after (with hasing)? Also, there’s another important constraint as it’s a large data set with 300M+ documents. Even if we use BulkWrite, it can’t finish the operation in ONE command (one round trip) can’t it ?Appreciate the follow-up.Regards,\nTuan", "username": "Tuan_Dinh" }, { "code": "", "text": "Hello @Tuan_Dinh,Also, there’s another important constraint as it’s a large data set with 300M+ documents. Even if we use BulkWrite, it can’t finish the operation in ONE command (one round trip) can’t it ?I guess, you have to plan your own operations. Please see this note on how bulk writes are batched: Bulk Write - Execution of Operations.The number of operations in each group cannot exceed the value of the maxWriteBatchSize of the database. As of MongoDB 3.6, this value is 100,000. 
This value is shown in the isMaster.maxWriteBatchSize field.This limit prevents issues with oversized error messages. If a group exceeds this limit, the client driver divides the group into smaller groups with counts less than or equal to the value of the limit. For example, with the maxWriteBatchSize value of 100,000, if the queue consists of 200,000 operations, the driver creates 2 groups, each with 100,000 operations.The task is to update ALL document so that for each document we update the token by the logic:\n1 Remove part2 from the token (as a string)\n2 Perforn a sha256 hash, then\n3 Update the token with that value.The update method has features which allow splitting the three part string token into two required tokens (see Update with Aggreagtion Pipelein); which in turn can be used to get the sha256 hash. Creating this sha256 hash is external to MongoDB functionality.MongoDB v4.4 and higher has the $function operator to create a JavaScript function to calculate the sha256 hash within the update operation. This means, with MongoDB v4.4 and JavaScript, you can perform the update operation for each document on the server side.If your MongoDB version is prior to 4.4, then you have to figure some way within your application code to perform the first two steps to generate a value, then perform the update operation to update the token with the generated value.How exactly would the BulkWrite look like ?Please try some code, and post for any improvements, corrections and suggestions. To start with, you can assume there are a small number (for example 3) of documents to update, and try.Since you had mentioned about using MongoDB Java Driver in the initial post, see this Java Driver Tutorials - Write Operations - Bulk Writes for guidance.", "username": "Prasad_Saya" } ]
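A rough sketch of the batched approach described above with the synchronous Java driver: stream documents with a cursor, rebuild each token in the application (drop part2, SHA-256 hash the remainder), and submit UpdateOne models through bulkWrite in chunks. Field names, the 1,000-document batch size, and the assumption that the token parts themselves contain no hyphens are all illustrative.

```java
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.model.*;
import org.bson.Document;

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class TokenRehash {

    // Hex-encoded SHA-256 of the input string
    static String sha256Hex(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(input.getBytes(StandardCharsets.UTF_8))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    static void rehashTokens(MongoCollection<Document> coll) throws Exception {
        List<WriteModel<Document>> batch = new ArrayList<>(1000);
        try (MongoCursor<Document> cursor =
                 coll.find().projection(Projections.include("token"))
                     .batchSize(1000).iterator()) {
            while (cursor.hasNext()) {
                Document doc = cursor.next();
                String[] parts = doc.getString("token").split("-"); // part1-part2-part3
                String newToken = sha256Hex(parts[0] + "-" + parts[2]); // drop part2
                batch.add(new UpdateOneModel<>(
                        Filters.eq("_id", doc.get("_id")),
                        Updates.set("token", newToken)));
                if (batch.size() == 1000) {              // one round trip per batch
                    coll.bulkWrite(batch, new BulkWriteOptions().ordered(false));
                    batch.clear();
                }
            }
        }
        if (!batch.isEmpty()) {
            coll.bulkWrite(batch, new BulkWriteOptions().ordered(false));
        }
    }
}
```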
Best way to update a field for all documents in (large) DB
2021-04-28T00:49:57.422Z
Best way to update a field for all documents in (large) DB
13,370
null
[ "python", "monitoring" ]
[ { "code": "", "text": "I’ve been searching for well over an hour and I can’t seem to find a source showing me how to get those statistics with pymongo, anyone know how I would get it?I’ve found mongostats and mongotop however I can’t find anything that explains how I would integrate this", "username": "Tenshi_Bot" }, { "code": "", "text": "Hi @Tenshi_Bot,Welcome to MongoDB community.I am not sure there is something built in on driver side, but you can use logging in your python code or a 3rd party library and wrap mongo calls.Further you can corollate with the mongo logs…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Ahh, alright thank you for the quick reply!", "username": "Tenshi_Bot" } ]
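One driver-level option is PyMongo's command monitoring API, which reports the round-trip duration of every command the client sends, so read/write latency can be logged without touching the query code; a minimal sketch (logging format and connection string are just examples):

```python
import logging
from pymongo import MongoClient, monitoring

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mongo-commands")

class CommandTimer(monitoring.CommandListener):
    def started(self, event):
        pass  # event.command_name and event.command are available here if needed

    def succeeded(self, event):
        # duration_micros is the latency measured by the driver for this command
        log.info("%s took %.1f ms", event.command_name, event.duration_micros / 1000.0)

    def failed(self, event):
        log.warning("%s failed after %.1f ms: %s",
                    event.command_name, event.duration_micros / 1000.0, event.failure)

client = MongoClient("mongodb://localhost:27017", event_listeners=[CommandTimer()])
client.test.things.insert_one({"x": 1})   # the insert's latency is logged automatically
```

Server-side counters (opcounters, operation latency histograms) are also available through the serverStatus command if aggregate numbers are enough.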
Statistics like write, latency data with pymongo?
2021-05-03T02:21:55.615Z
Statistics like write, latency data with pymongo?
2,356
null
[]
[ { "code": "", "text": "I don’t have any technical education background. I am MBA post-graduate. I am looking to move into tech field by learning mongodb and other MERN. I need suggestions and advises on how to go ahead. I already completed M001 from mongoDB university. I also started learning react. Please take some time to answer me. All people here are either experienced or looking to be one, thought your inputs help me in my journey.Thank You", "username": "Ajay_kumar_Gaddam" }, { "code": "", "text": " Welcome to the MongoDB Community @Ajay_kumar_Gaddam!Congrats on completing M001! That’s a great starting point to learn more about MongoDB.It is difficult to recommend learning resources without a clear end goal, but it sounds like your current focus might be a good fit for the Developer Learning Path on MongoDB University. M001 is the first step in that learning path.Some other resources that might be interesting for you:Articles mentioning React on the MongoDB Developer Hub.Tutorial to build a Task Tracker application using React and the MongoDB Realm GraphQL API.Building with Patterns: A Summary - MongoDB schema design patterns.A Summary of Schema Design Anti-Patterns and How to Spot Them - a complement to the design patterns link above.Free courses on React - general React resources recommended on the ReactJS site (not specific to MongoDB).If you can provide more specific ideas on how you prefer to learn (videos, courses, articles, building sample apps, … ) and what topics would be of interest to you, the community here may be able to provide more relevant suggestions.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I will definitely look into the resource you shared. Thank You Very Much", "username": "Ajay_kumar_Gaddam" }, { "code": "", "text": "Hi @Ajay_kumar_Gaddam!If you qualify for the GitHub Student Developer Pack, you can also access our student offer: MongoDB Student Pack. The MongoDB University On Demand license and free certification could be valuable to you! With the On Demand license the course dates are lifted and you can follow the courses in your own time, at your own pace.Let me know if you have any questions! ", "username": "Lieke_Boon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Need suggestions on learning MongoDB for non-tech (MBA)
2021-04-30T13:24:27.374Z
Need suggestions on learning MongoDB for non-tech (MBA)
5,040
null
[ "golang" ]
[ { "code": "", "text": "Hi,\nHow do we indicate which version of the go driver to downloadthe command provided in the documentation is downloading the latest driver (1.5.1):go get go.mongodb.org/mongo-driver/mongoHow can I download an earlier version ?thanks", "username": "Dror_Mikdash1" }, { "code": "go mod init \ngo get -d -v go.mongodb.org/[email protected]\ngo build \ngo.mod", "text": "Hi @Dror_Mikdash1, and welcome to the forums!How do we indicate which version of the go driver to downloadIf you have Golang version 1.11+, you can use go mod. For example within your project directory you could perform:The first command above will create a new go.mod file in the project that act as a dependency manifest.Regards,\nWan.", "username": "wan" } ]
How can we indicate a specific version of the go driver to download
2021-04-29T10:34:01.487Z
How can we indicate a specific version of the go driver to download
2,484
null
[ "replication", "connecting", "golang" ]
[ { "code": "", "text": "Hi all, wondering if anybody can help diagnose an issue connecting to a replica set using the go driver.Environment\nA replica set running as a pod in k8s cluster on port 27017 with replica set name and --bind_ip_all option. The mongo images are mongo:4.4.5 and the go driver is v1.5.1.Issue\nWhen I try to execute client.Ping or run “replSetGetStatus” command by db.RunCommand in my go program, the program will return \"server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.244.12.108:27017, Type: RSGhost, Average RTT: 1407516 }, ] }”. But I can connect to the replica set using mongo client and run command,such as rs.status(). The URI used in my program and mongo client are same as \"mongodb://serviceip:27017/.Thanks", "username": "yi_liu" }, { "code": "directConnection=trueDirectClientOptionsuri := \"mongodb://user:password@host/?directConnection=true\"\nclient, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))\n\n// OR\n\nuri := \"mongodb://user:password@host\"\nopts := options.Client().ApplyURI(uri).SetDirect(true)\nclient, err := mongo.Connect(ctx, opts)\n", "text": "Hi @yi_liu,Per the Server Discovery and Monitoring specification, an uninitialized member of a replica set is represented via the RSGhost server type. Such servers are not considered selectable for operations, so attempting to execute a command will fail with a server selection error.To work around this issue in the Go Driver, you can specify that you would like to create a direct connection to the node by either appending directConnection=true to the URI or setting the Direct option in ClientOptions:Note that this will connect only to that specific node, so if there are other members in the replica set, they will not be targetted for operations.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "Hi @Divjot, thanks for your reply.Direct option fix server selection error,but there are some other errors. If I using ‘localhost’ or ‘127.0.0.1’ as host in URI, the error is ‘(NotYetInitialized) no replset config has been receivedresult’. If I using network card ip address, the error is ‘(NotPrimaryOrSecondary) node is not in primary or recovering state’.", "username": "yi_liu" }, { "code": "pingreplsetGetStatus", "text": "@yi_liu Based on the error messages, it seems that an uninitialized replica set member cannot be used to execute commands like ping and replsetGetStatus. You may have to run the replSetInitiate command to actually initialize the node.– Divjot", "username": "Divjot_Arora" } ]
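The RSGhost type in that error means the member has never received a replica set configuration, so no driver option will make it selectable for normal operations; the set has to be initiated once. That is usually done from a shell connected directly to the pod (host names below are placeholders for your k8s service/pod address, and the set name must match the --replSet value the pod was started with):

```js
// mongo/mongosh connected directly to the single member:
rs.initiate({
  _id: "rs0",
  members: [{ _id: 0, host: "my-mongo-service:27017" }]
})

rs.status()   // should report the member as PRIMARY shortly afterwards
```

After that, the Go driver's ordinary (non-direct) connection string should select the primary without the workaround.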
Go driver, can't connect to an uninitialized replica set
2021-04-27T08:30:58.710Z
Go driver, can't connect to an uninitialized replica set
7,154
null
[ "app-services-user-auth", "next-js" ]
[ { "code": "", "text": "I am using nextjs for frontend and apolloclient for fetching graphlq on realm, How can I save the login user and use it on apolloclient for sending request like mutations or queries.", "username": "Aaron_Parducho" }, { "code": "", "text": "Hey Aaron, welcome to the forums! We have a guide in the docs that covers how to use Apollo client to connect to Realm GraphQL from your react app. It’s not built specifically in Next.js but should get you most if not all of the way. There’s even a fully-functional codesandbox app (linked on the page) that you can check out.", "username": "nlarew" }, { "code": "", "text": "Yes thank you for replying. I just want to ask how can I assure that i sent my request on behalf of the authenticated user not the anonymous user base on the example in vercel nextjs with realm-web", "username": "Aaron_Parducho" }, { "code": "", "text": "I saw that the example here was only for the anonymous user next.js/examples/with-realm-web at canary · vercel/next.js · GitHub. I want to know how can I possibly make it for the login user.", "username": "Aaron_Parducho" }, { "code": "", "text": "Gotcha! Our web tutorial shows how to authenticate with other types of users - it’s pretty much the same but the tutorial code shows how to integrate it into a larger app with user sign in. If you don’t want to run through the whole tutorial, check out the GraphQL step and its corresponding code in the tutorial project.", "username": "nlarew" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
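A condensed sketch of what the linked tutorial does: authenticate with email/password through the Realm Web SDK instead of anonymous credentials, then attach that user's access token to every GraphQL request so Apollo never acts as an anonymous user. The app ID, endpoint URL, and token-refresh handling are simplified placeholders here — follow the guide for the production-ready version.

```js
import * as Realm from "realm-web";
import { ApolloClient, HttpLink, InMemoryCache } from "@apollo/client";

const APP_ID = "<your-realm-app-id>";
const app = new Realm.App({ id: APP_ID });

// Log the real user in (instead of Realm.Credentials.anonymous())
export async function logIn(email, password) {
  return app.logIn(Realm.Credentials.emailPassword(email, password));
}

export const client = new ApolloClient({
  link: new HttpLink({
    uri: `https://realm.mongodb.com/api/client/v2.0/app/${APP_ID}/graphql`,
    fetch: async (uri, options) => {
      if (!app.currentUser) throw new Error("Log in before running queries");
      await app.currentUser.refreshCustomData(); // also refreshes the access token
      options.headers.Authorization = `Bearer ${app.currentUser.accessToken}`;
      return fetch(uri, options);
    },
  }),
  cache: new InMemoryCache(),
});
```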
Need to send request on behalf of the login user
2021-05-03T00:08:05.529Z
Need to send request on behalf of the login user
3,704
null
[ "crud" ]
[ { "code": "", "text": "Hi AllIs there a way to find all documents in A collection with a field of typeA and update/change this so that field is of typeBsomething likedb.collection.updateMany({field: {$exists: true}, field: {isOfTypeA} }, {$set: {field : typeB } } )Thanks in advance", "username": "Barry_Fawthrop" }, { "code": "$type$match$project", "text": "Hey Barry, welcome to the forums! To check if a field has a certain type you can use the $type query operator.Updating the values once you’ve found them is just a bit trickier. Broadly you have two options:Run the query, update the values to the new type in your code, and then run some database update operations to save the new values. This can definitely work but has a few pieces to juggle.Use an aggregation pipeline to run the query and update the values in a single command. This is definitely the cleaner option so I recommend it! You’d use $type inside of a $match stage to find the documents and then you’d use $convert (or one of its shorthand helpers like $toInt) to do the actual conversion inside of a $project stage.Hopefully this helps! Please let us know if you have any trouble ", "username": "nlarew" } ]
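Since MongoDB 4.2, the second option can even be collapsed into a single updateMany with an aggregation-pipeline update, so both the type check and the conversion happen server-side; a sketch assuming the change is string-to-int (collection name, field name, and target type are placeholders):

```js
db.myCollection.updateMany(
  { field: { $type: "string" } },               // only documents still holding typeA
  [ { $set: { field: { $toInt: "$field" } } } ]  // pipeline update converts in place
)
```

$convert with onError/onNull is the more defensive variant if some values might not be convertible.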
Realm Validation
2020-07-24T19:04:27.247Z
Realm Validation
2,314
null
[]
[ { "code": "systemctl start mongod● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: core-dump) since Sun 2021-05-02 18:04:11 CEST; 39min ago\n Docs: https://docs.mongodb.org/manual\n Process: 21721 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=dumped, signal=ABRT)\n Main PID: 21721 (code=dumped, signal=ABRT)\n\nMay 02 18:04:10 Hakim systemd[1]: Started MongoDB Database Server.\nMay 02 18:04:11 Hakim systemd[1]: mongod.service: Main process exited, code=dumped, status=6/ABRT\nMay 02 18:04:11 Hakim systemd[1]: mongod.service: Failed with result 'core-dump'.\nulimitjournalctl -xesystemctl start mongod-- The unit UNIT has successfully entered the 'dead' state.\nMay 02 18:48:18 Hakim sudo[23430]: pam_unix(sudo:session): session closed for user root\nMay 02 18:48:22 Hakim sudo[23436]: hakim : TTY=pts/3 ; PWD=/etc/apt/sources.list.d ; USER=root ; COMMAND=/bin/systemctl start mongod\nMay 02 18:48:22 Hakim sudo[23436]: pam_unix(sudo:session): session opened for user root by hakim(uid=0)\nMay 02 18:48:22 Hakim systemd[1]: Started MongoDB Database Server.\n-- Subject: A start job for unit mongod.service has finished successfully\n-- Defined-By: systemd\n-- Support: http://www.ubuntu.com/support\n-- \n-- A start job for unit mongod.service has finished successfully.\n-- \n-- The job identifier is 2687.\nMay 02 18:48:22 Hakim sudo[23436]: pam_unix(sudo:session): session closed for user root\nMay 02 18:48:23 Hakim systemd[1]: mongod.service: Main process exited, code=dumped, status=6/ABRT\n-- Subject: Unit process exited\n-- Defined-By: systemd\n-- Support: http://www.ubuntu.com/support\n-- \n-- An ExecStart= process belonging to unit mongod.service has exited.\n-- \n-- The process' exit code is 'dumped' and its exit status is 6.\nMay 02 18:48:23 Hakim systemd[1]: mongod.service: Failed with result 'core-dump'.\n-- Subject: Unit failed\n-- Defined-By: systemd\n-- Support: http://www.ubuntu.com/support\n-- \n-- The unit mongod.service has entered the 'failed' state with result 'core-dump'.\nMay 02 18:48:24 Hakim sudo[23462]: hakim : TTY=pts/4 ; PWD=/home/hakim/repos/webapp ; USER=root ; COMMAND=/bin/journalctl -xe\nMay 02 18:48:24 Hakim sudo[23462]: pam_unix(sudo:session): session opened for user root by hakim(uid=0)\n", "text": "Hi everyone,I’ve installed Mongod v4.4.5 on Ubuntu 20.04.2 LTS but the service cannot be launched.First off, systemctl start mongod shows the following output:Is this due to the ulimit restrictions mentioned in recommended-ulimit-settings?Below is the relevant output from journalctl -xe after starting the service with systemctl start mongod:", "username": "Hakim_Benoudjit" }, { "code": "", "text": "Please share the content of your configuration file:/etc/mongod.confMost likely you did not create the required directories.", "username": "steevej" }, { "code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## 
Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:", "text": "Most likely you did not create the required directories.Please find it below (I haven’t modified it as I’m not familiar with Mongodb yet):", "username": "Hakim_Benoudjit" }, { "code": "ls -ld /var/lib/mongodb /var/log/mongodb/", "text": "Share output ofls -ld /var/lib/mongodb /var/log/mongodb/", "username": "steevej" }, { "code": "apt purge", "text": "I looks like I got it fixed. All I had to do was to remove the database and log files as mentioned in the docs after deleting Mongodb with apt purge, and then reinstalling it from scratch.I think it might’ve been caused by an old installation of Mongodb that created the databases (and I didn’t bother deleting them when Mongodb was removed).\nThanks for your help.", "username": "Hakim_Benoudjit" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
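For anyone landing here with the same crash, the clean-up-and-reinstall that resolved it corresponds roughly to these commands from the Ubuntu uninstall/install docs — note they permanently delete any existing databases, so dump anything you need first:

```sh
sudo systemctl stop mongod
sudo apt-get purge mongodb-org*
sudo rm -r /var/log/mongodb /var/lib/mongodb   # old data files from the previous install

sudo apt-get install -y mongodb-org
sudo systemctl enable --now mongod
```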
Cannot launch Mongod v4.4.5 on Ubuntu 20.04.2 LTS
2021-05-02T16:50:15.925Z
Cannot launch Mongod v4.4.5 on Ubuntu 20.04.2 LTS
12,394
null
[ "queries", "data-modeling", "indexes" ]
[ { "code": "", "text": "Hi,\nI have a requirement of maintaining similarity. For e.g. I have a collections of articles on various topics. For each document in my collection, I have an array of similar/related articles. Now I need to write a query that will - given an article ID, fetch all the documents from the same collection where the article ID is within the array of similar articles for the query parameter. Currently I can think of only 2 solutions:Is there a more efficient way to a. get everything in single find/aggregation AND b. avoid creating separate collection?", "username": "Vitthal_Kulkarni" }, { "code": "", "text": "You are welcome to MongoDB Community @Vitthal_KulkarniThere are lots of options for your data modelling. I will suggest this algorithm:A simple specific example will have help to improve your idea.Hope this help.\nI am currently undergoing training at MongoDB online University. I recommend you do the same.", "username": "Saeed_Bello" } ]
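If each article embeds an array of related article ids, option (a) is already a single query: matching a scalar against an array field matches any element, and a multikey index keeps it cheap, so no second collection is needed. A sketch with assumed field names:

```js
// Example document shape:
// { _id: ..., articleId: 17, title: "...", similarArticles: [3, 17, 42], score: 5 }

// All articles that list article 17 among their similar/related articles:
db.articles.find({ similarArticles: 17 })

// Multikey index so the lookup does not scan the collection:
db.articles.createIndex({ similarArticles: 1 })

// The follow-up action (increment a field on every match) in one statement:
db.articles.updateMany({ similarArticles: 17 }, { $inc: { score: 1 } })
```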
How to create a self join OR is it better to structure collections differently?
2021-05-01T08:18:14.013Z
How to create a self join OR is it better to structure collections differently?
2,887
null
[]
[ { "code": "", "text": "Hi there,I have built a NodeJS app on my CentOS 7.\nThis app is running as a service.Each certain time, my data are deleted.I daily execute that app to insert data into my collection. I don’t think my app does something against my data.I access to those remote data via Compass from my local computer.Why could it happen?Best regards.", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "Have you enabled access control on your DB?Please check this link.Similar issue reported by another user", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Why doesn’t MongoDB enable access control by default? If MongoDB vainglories of its security, why doesn’t it implement by default?Why doesn’t MongoDB mention that it will delete my data if I don’t implement any secure access mechanism?Data are important for us and at any doc MongoDB doesn’t warn us about this important mechanism of deleting of data.", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "Awful tutorial for telling us how to secure our database and so that MongoDB doesn’t delete our data (!).\nClearly, it is not intended for novice people who want to use MongoDB along with their apps.\nWhen a beginner understands and knows how to configure MongoDB only with your tutorial, then your doc is OK; meanwhile, you should improve to be understandable and human readable for beginner people.", "username": "ABELARDO_GONZALEZ" }, { "code": "mongo[initandlisten] ** WARNING: Access control is not enabled for the database.\n[initandlisten] ** Read and write access to data and configuration is unrestricted.\n", "text": "Hi @ABELARDO_GONZALEZ,I’m sorry to hear your deployment was not properly secured. Can you provide some more information on the specific version of MongoDB server you installed and a link to the tutorial you followed?MongoDB 3.6+ only binds to localhost by default and there are multiple reminders about securing your deployment including:Enabling remote access to a deployment requires additional manual steps such as binding to non-localhost IP addresses. Defaults have been improved since earlier versions of the MongoDB server, but if you are installing on-premises software there is always some responsibility for fully securing your environment including exposure to remote network access.MongoDB is a distributed database, so security configuration needs to be coordinated with all of the members of a cluster. There are multiple security mechanisms for administrators to choose from, and they have different configurations. This is more straightforward to configure using a managed service (for example MongoDB Atlas) where the management software has control over deploying and configuring the cluster and can enforce best practices like access control, network encryption, and firewall restrictions.It is also important to note that security is only one aspect of production deployments. Backup, monitoring, and environment tuning should also be considered. The Operations Checklist and Production Notes in the MongoDB documentation include some considerations to avoid issues with your production MongoDB deployment.If you have suggestions on how we can further improve the default server configuration to avoid missteps, any ideas would be appreciated. We have a public Feedback site for feature suggestions and product ideas, and you can also share thoughts in forum discussion.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi there,Without a doubt, I understand that MongoDB, Inc.? 
is a big company full of brilliant minds available to think how to improve this product.I understand you highlight the importance of my suggestions in order to improve your product and make money with them.My ideas will make better your product. Feel free to reward them. Thanks.My data were deleted because your fault and on top of that you want to receive a goodness.I should at least be paid for my ideas, isn’t it?Sorry for my rudeness but I need money to pay my bills. If you consider to pay my ideas to improve your product, you will always be welcome. It’s a win-win strategy.Regards,\nAbelardo.", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "I have built a NodeJS app on my CentOS 7.andmy data are deleted.howeverI don’t think my app does something against my data.If you wrote it, then you must know for sure. We can help you be sure if you share the code. On which media do you store the data. If on tmpfs or RAM disk then it may be normal that you lose the data at reboot. You need to share more details before you can blame MongoDB. Post a screenshot of Compass and before and after the delete. Sharing the configuration file you use for your service will also be helpful help you figure what went wrong with your installation.Another thing that could happen is that by inexperience you created capped collection as https://docs.mongodb.com/manual/core/capped-collections/", "username": "steevej" }, { "code": "", "text": "", "username": "Stennie_X" } ]
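For completeness, the core of what the linked checklist asks for is an administrative user plus authorization turned on; a minimal sketch with placeholder credentials:

```js
// In the shell, before enabling auth:
use admin
db.createUser({
  user: "admin",
  pwd: "a-strong-password",
  roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
})
```

After creating the user, set `security.authorization: enabled` in /etc/mongod.conf, restart mongod, and keep `bindIp` restricted (or the port firewalled) so the database is never reachable from the open internet without credentials.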
Data automatically deleted
2021-04-29T12:26:43.573Z
Data automatically deleted
9,445
null
[ "data-modeling" ]
[ { "code": "{\n \"firstName\": \"John\",\n \"lastName\": \"Doe\",\n \"emergencyContact: [\n {\n \"contactName\": \"John Smith\",\n \"contactNo\": \"123456788\",\n \"relationship\": \"Friend\"\n }\n ]\n}\n", "text": "What would be the best approach for this scenario?I have a property emergency contact but I don’t need the data every time I query the user because it will only be available on the profile page should I separate the data on its own collection?? or embed it on the user?", "username": "Christian_Angelo_15065" }, { "code": "db.users.findOne( \n { firstName: \"John\", lastName: \"Doe\" }, \n { emergencyContact: 0, _id: 0 } \n)\n{ emergencyContact: 0, _id: 0 }emergencyContact: 0firstNamelastName{ firstName: \"John\", lastName: \"Doe\" }", "text": "Hello @Christian_Angelo_15065, welcome to the MongoDB Community forum!It is correct to have the emergency contact field within the user document - that is embedded within the document. When you query the user document use a projection to retrieve the required fields from the user document and exclude other fields. For example, the following query retrieves the user’s first name and last name fields only:In the above query the { emergencyContact: 0, _id: 0 } is the projection. The emergencyContact: 0 specifies the query to exclude the field from the query output. See db.collection.findOne() - Projection for more details.This query will return just the two fields firstName and lastName (and excludes the remaining fields) from the result document.The output will be:{ firstName: \"John\", lastName: \"Doe\" }", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you for your recommendation this is very helpful. ", "username": "Christian_Angelo_15065" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Schema design pattern
2021-05-01T02:28:21.951Z
Schema design pattern
2,321
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Want to use mongo cli to autologin to my server using connection string from debian buster.Tried to put login info in home directory file~/.mongorc.jswith content\ndb = connect(‘localhost:27017/’);\ndb.auth(’’, ‘’);but mongo cli binary is not parsing this info,If anyone can help me understand, please do reply what am I doing wrong here???", "username": "Naresh_Bansal" }, { "code": "mongo", "text": "By default mongo shell connects to localhost:27017 with no credentials. It means you have nothing to do in order to do what you want to do.See https://docs.mongodb.com/manual/mongo/In particular You can run mongo shell without any command-line options to connect to a MongoDB instance running on your localhost with default port 27017:", "username": "steevej" } ]
Mongo cli not parsing .mongojs.rc file
2021-04-30T05:41:27.578Z
Mongo cli not parsing .mongojs.rc file
2,323
https://www.mongodb.com/…7c_2_1024x99.png
[]
[ { "code": "", "text": "The setup is in the local systemI have Mongodb server running in Windows2nd I have my C code running in virtual box3rd I have mongodb shell ‘mongosh’ to test queries and operations directly to my database in virtual box.But it is showing Network Error. I want to store the data in mongodb in windows by using C code that is in RHEL8 in virtualbox.mongosh1683×163 11.6 KB", "username": "Ayush_Bhat" }, { "code": "", "text": "The IP address of your Windows host is not the same as the IP address of your virtual machine. In addition localhost on your virtual machine refers to the virtual machine. The localhost of Windows refers to the Windows host.When you connect from the virtual machine, you must specify the IP address of your Windows host.", "username": "steevej" } ]
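For reference, the two pieces that usually have to change in this layout are sketched below; 10.0.2.2 is VirtualBox's default NAT alias for the host and is only an assumption — with a bridged or host-only adapter, substitute the Windows host's actual IP from `ipconfig`:

```sh
# Inside the VM: connect to the Windows host, not to the VM's own localhost
mongosh "mongodb://10.0.2.2:27017"
```

On the Windows side, mongod must also listen on a non-loopback interface (net.bindIp in mongod.cfg) and TCP 27017 must be allowed through the Windows firewall, otherwise the connection is refused even with the right address.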
Network Error while connecting to local mongodb server
2021-05-01T15:16:14.744Z
Network Error while connecting to local mongodb server
4,257
null
[]
[ { "code": "", "text": "Does mongodb work with the new Apple M1 Silicon macs? The question also applies for MongoDB Compass.Would love to know if there’s a public roadmap or issue tracker for this.Thank you!", "username": "Edrich_Chua" }, { "code": "", "text": "Welcome to the community forums @Edrich_Chua!Apple Silicon (aka ARM64 on macOS) is not a Supported Platform yet for MongoDB server or database tools.Some relevant issues to watch in the Jira issue tracker are:Although there appear to be some required upstream dependency updates to get native M1 support implemented (for example, MozJS for the MongoDB server and Electron for Compass), it should be possible to run current Intel binaries using the M1’s Rosetta Translation Environment. However, this is not an environment that we have thoroughly tested with MongoDB products yet and there will be some performance overhead for Rosetta translation.The generally available M1 Macs have only been out for a few weeks now, but I expect platform support to improve as they become more widely adopted and tested.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" }, { "code": "", "text": "", "username": "Stennie_X" }, { "code": "", "text": "I would like to reiterate the prior question (Support for Apple M1 Silicon), in order to see if there might be an update on the following question:\"Does mongodb work with the new Apple M1 Silicon macs? The question also applies for MongoDB Compass.Would love to know if there’s a public roadmap or issue tracker for this.Thank you!\"Thanks Again,", "username": "Gregory_Baker" }, { "code": "mongod", "text": " Welcome to the MongoDB Community @Gregory_Baker!The relevant public issues to watch for native Apple Silicon support are still as per my answer above. You can subscribe to updates by Watching issues in the MongoDB Jira issue tracker (which uses the same Single Sign-On login as the forums).As I mentioned, you can run Intel binaries using the M1’s Rosetta Translation environment but this isn’t an officially supported or tested platform for MongoDB applications yet. Intel applications will generally work fine in Rosetta, but you may encounter unexpected issues.So far MongoDB Server (4.4.5) & Compass (1.26.1) appear to be working fine via Rosetta for my limited development and testing experiments. I currently do more of my development on an Intel MacBook because I have run into some annoying issues with some of my other Intel binaries that haven’t been updated yet. Older Electron apps tend to crash, and I noticed the Electron site has a warning about significant performance degradation with Rosetta 2.If you’re trying to decide whether to use a Mac with Apple Silicon as your sole computing environment, you might find isapplesiliconready.com handy for checking community feedback on some common applications.If you are concerned about possible performance or compatibility issues with an Intel mongod server process running via Rosetta, connecting to an externally hosted server or service like MongoDB Atlas is a good alternative.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Support for Apple M1 Silicon?
2020-12-02T09:23:43.518Z
Support for Apple M1 Silicon?
38,196
https://www.mongodb.com/…e3146f9cca59.png
[ "aggregation" ]
[ { "code": "stati: [\n { id: 1, str: 'ordered' },\n { id: 2, str: 'packed' },\n { id: 3, str: 'shipped' },\n]\n\norders: [\n { id: 100, status: 3, strDate: '2021-03-01', items: [] },\n { id: 101, status: 2, strDate: '2021-04-01', items: [] },\n { id: 102, status: 1, strDate: '2021-04-01', items: [] },\n]\n\nresult: [\n { id: 100, status: 3, strDate: '2021-03-01', items: [], strStatus: 'shipped' },\n { id: 101, status: 2, strDate: '2021-04-01', items: [], strStatus: 'packed' },\n { id: 102, status: 1, strDate: '2021-04-01', items: [], strStatus: 'ordered' },\n]\n", "text": "How can I use aggregate $lookup to include a single value from another collection as (root) object?Thanks,\nbluepuma", "username": "blue_puma" }, { "code": "$lookup$unwinddb.orders.aggregate({\n $lookup: {\n from: \"stati\",\n let: {\n status: \"$status\"\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\n \"$id\",\n \"$$status\"\n ]\n }\n }\n },\n {\n $project: {\n str: 1\n }\n }\n ],\n as: \"_strStatus\"\n }\n},\n{\n $unwind: {\n path: \"$_strStatus\",\n preserveNullAndEmptyArrays: false\n }\n},\n{\n $set: {\n strStatus: \"$_strStatus.str\"\n }\n},\n{\n $project: {\n _strStatus: 0\n }\n})\n", "text": "I found a working solution, but I am wondering if it really has to be so complicated.It seems very expensive to $lookup and $unwind an array instead of just integrating the single field.", "username": "blue_puma" }, { "code": "$lookup/$unwind$lookup/$unwind", "text": "$lookup/$unwind works, but seems to me expensive from a computational perspective.MongoDB PlaygroundIs there a way to fetch a field value from a 1-to-1 relationship without the $lookup/$unwind combination?", "username": "blue_puma" }, { "code": "$lookup$lookupstatistatusid$setstr$arrayElemAtstrStatus.strdb.orders.aggregate([\n {\n $lookup: {\n from: \"stati\",\n localField: \"status\",\n foreignField: \"id\",\n as: \"strStatus\"\n }\n },\n {\n $set: {\n strStatus: { $arrayElemAt: [\"$strStatus.str\", 0] }\n }\n }\n])\n", "text": "Hello @blue_puma Welcome to MongoDB Community Forum,You can use $lookup without pipeline,", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to add single field using $lookup?
2021-04-29T09:58:24.212Z
How to add single field using $lookup?
42,615
null
[ "aggregation", "queries", "python" ]
[ { "code": "myd = mergedCollection.find(myquery).sort(\"Price\")\nprint(\"MY D: \"+str(myd))\nshoes = myd[0][\"theAssociatedShoes\"]\nprint(\"Shoes: \"+ str(shoes))\nMY D: <pymongo.cursor.Cursor object at 0x05859538>\nShoes: [{'Title': 'Nike Cosmic Unity \"Amalgam\"', 'Price': 160, 'Currency': 'USD', 'Picture': 'https://static.nike.com/a/images/t_default/3bca4f51-f2e4-4948-a665-27e03eea4ddd/cosmic-unity-amalgam-basketball-shoe-nDHKr4.png', 'Link': 'nike.com/t/cosmic-unity-amalgam-basketball-shoe-nDHKr4/DA6725-500', 'Brand': 'nike'}, {'Title': 'Ultraboost 21 Shoes', 'Price': 180, 'Currency': ' USD', 'Picture': 'https://assets.adidas.com/images/w_280,h_280,f_auto,q_auto:sensitive/3728ddf5b7dc4a2ca3e3ac7d0106c5a1_9366/ultraboost-21-shoes.jpg', 'Link': 'adidas.com/us/ultraboost-21-shoes/FY0350.html', 'Brand': 'adidas'}, {'Title': 'Fresh', 'Price': 129, 'Currency': ' USD', 'Picture': 'https://nb.scene7.com/is/image/NB/m880f11_nb_02_i?$pdpflexf2$&wid=440&hei=440', 'Link': 'newbalance.com/pd/fresh-foam-880v11/M880V11-33418.html', 'Brand': 'newbalance'}, {'Title': 'Jordan Delta Breathe', 'Price': 130, 'Currency': 'USD', 'Picture': 'https://static.nike.com/a/images/t_default/b54eef6b-6dd5-4c07-9b09-901ab9d7b01a/jordan-delta-breathe-mens-shoe-2ggX3h.png', 'Link': 'nike.com/t/jordan-delta-breathe-mens-shoe-2ggX3h/CW0783-901', 'Brand': 'jordan'},...]\nmyd = mergedCollection.find(myquery)[0][\"theAssociatedShoes\"].sort(\"Price\")\nmyd = mergedCollection.find(myquery).sort(\"theAssociatedShoes.Price\", -1)\n", "text": "I have the following code:With the output:How come the Shoes are not sorted by price here? I have also tried using this code:But that throws a syntax error. I’ve also tried this solution to no avail.", "username": "lilg263" }, { "code": "find()sort()$unwindtheAssociatedShoesPrice$grouptheAssociatedShoes_iddb.collection.aggregate([\n { $unwind: \"$theAssociatedShoes\" },\n { $sort: { \"theAssociatedShoes.Price\": -1 } },\n { $group: { _id: \"$_id\", theAssociatedShoes: { $push: \"$theAssociatedShoes\" } } }\n]);\n", "text": "Hello @lilg263,find() with sort() method can not sort embedded document, you have to use aggregation pipeline, something like,", "username": "turivishal" } ]
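The same pipeline expressed in PyMongo — which is what the question is running — with the stages as plain Python dicts; names mirror the thread, and aggregate() returns a cursor you iterate just like find():

```python
pipeline = [
    {"$match": myquery},                                   # same filter used with find()
    {"$unwind": "$theAssociatedShoes"},
    {"$sort": {"theAssociatedShoes.Price": 1}},            # 1 ascending, -1 descending
    {"$group": {
        "_id": "$_id",
        "theAssociatedShoes": {"$push": "$theAssociatedShoes"},
    }},
]

for doc in mergedCollection.aggregate(pipeline):
    print(doc["theAssociatedShoes"])                       # shoes now sorted by Price
```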
How to query and sort nested mongodb information in python?
2021-05-01T03:40:47.037Z
How to query and sort nested mongodb information in python?
4,539
null
[]
[ { "code": "", "text": "I am not sure if “the community” thinks that posting a link to MongoDB Playground (with all data and queries ready to work with) is spam. It’s probably a piece of software bot that feels posting a URL is spam Maybe admins should consider a whitelist for sites like MongoDB Playground, I think the tool is very supportive for questions here. MAybe even add an extra field for such URLs.“You’ve reached the maximum number of topics a new user can create on their first day. Please wait 12 hours before trying again.” Oh great! Cheers\nbluepuma", "username": "blue_puma" }, { "code": "", "text": "Hi @blue_puma, Welcome to the MongoDB Community and thank you for the feedback!Flags are a combination of user-submitted flags as well as system heuristics. Unusual activity from a brand new user (for example, posting a high velocity of topics or links to a new site) will be flagged for review by our team of site moderators.The notification you received is rare, but is triggered by posting too many links to a new site in a short period of time (which is a very typical spammer activity). The wording in this notification hasn’t been customised from the default yet, but I’ll add a follow-up action for us to improve the messaging.Speed bumps for low trust level users allow the community time to catch up on your questions and give new users some time to learn more about the community (which will also increase your trust level and privileges). We have been relaxing some checks as the community grows and can self-moderate more effectively, but the starting point has been to err in favour of not allowing spammers easy access to annoy our community.Unfortunately we have continuous attempts from spammers, and there is an ongoing game of adjusting technical measures to get the right balance of spam detection. We’re trying to minimise the need for traffic light CAPTCHAs and other more extreme measures that frustrate our real audience.When moderators review flags, we consider false detection and adjust settings appropriately. This includes possibility of adding domains to our allow or reject lists and considering adjusting other activity thresholds. Since you are the first new user to post so many links to the MongoDB Playground in a short period of time, we had not added the site to an allow list yet – but I did so when handling your flagged posts.As you spend more time contributing positively in the forums, you will earn higher trust levels and have fewer speed bumps. The next level (Sprout) can be earned in as little as 10 minutes and removes the cap on posts & replies. Congrats @blue_puma – you’ve already earned that trust level .Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Your post was flagged as spam: the community feels it is an advertisement
2021-04-30T10:18:38.408Z
Your post was flagged as spam: the community feels it is an advertisement
4,478
https://www.mongodb.com/…8fa39385fd3.jpeg
[ "atlas-functions" ]
[ { "code": "Cannot find module 'module-name'", "text": "Hello I have been working with functions and have set up a dependecies archive as specified here: https://docs.mongodb.com/realm/functions/upload-external-dependencies/I followed the instructions to import them from here:https://docs.mongodb.com/realm/functions/import-external-dependencies/\nFor some reason, the .tar file did not work so I had to use .zip.\nThe modules show up on the UI, however importing them yields the error Cannot find module 'module-name'.I have tried both the UI upload and through realm-cli.Here are some screenshots:\n16-02-2021 08-18-13 a.m.1138×610 66.3 KB\n", "username": "pescoboza" }, { "code": "exports = function () {\n const ajv = require(\"ajv\");\n console.log(\"Made it this far\");\n console.log(JSON.stringify(ajv, null, 4));\n \n return \"All good\";\n}\ngznpm install ajv\ntar -czf node_modules.tar.gz node_modules\n", "text": "I just tested it out using this function and it worked for me:I created the gz file with:Just to check – did you “REVIEW & DEPLOY” after uploading the modules?image1074×131 11.9 KB", "username": "Andrew_Morgan" }, { "code": "exports = async function getTimezones() {\n\n const moment = require(\"moment\");\n \n console.log('moment found???');\n};\n", "text": "I am having the same issue as the original poster.\nI create a .zip file, uploaded it manually, and the modules appear in the UI.When I try to require it I get a ‘not found’ error. Please help.I am not able to upload a screenshot (permission denied), but yes, the modules exist in the UI, yes the app is deployed.My code is simply:with the error:\nCannot find module ‘moment’trace:\nFunctionError: Cannot find module ‘moment’\nat require ()\nat getTimezones$ (function.js:6:18)\nat call ()\nat tryCatch (:55:37)\nat invoke (:281:22)\nat :107:16\nat call ()\nat tryCatch (:55:37)\nat invoke (:145:20)\nat :180:11\nat :3:6\nat callInvokeWithMethodAndArg ()\nat enqueue (:202:13)\nat :107:16\nat :226:9\nat getTimezones (function.js:4:11)\nat apply ()\nat function_wrapper.js:3:1\nat :11:1", "username": "Eve_Ragins" }, { "code": "@npm install --only=prod", "text": "Update: I followed the exactly how Andrew wrote them, and that seemed to work. This was beyond frustrating – I have a full page of notes I tried that didn’t work.For future people, the difference between this “working” and not is either:", "username": "Eve_Ragins" }, { "code": "tar.gz--only=prod", "text": "Thanks for letting us know, Eve. Sorry to hear that you had a difficult experience getting function dependencies working. I work on the Realm docs team, and I would be happy to update the existing import docs if you have any other information in your notes. 
I’d definitely like to keep other users from experiencing this level of frustration in the future!In particular, I’m interested in hearing more about what the “something internal” that went wrong with your tar.gz was, and why --only=prod seemed to make a difference for you.", "username": "Nathan_Contino" }, { "code": "exports = function(arg){\n const { nanoid } = require('nanoid')\n return \"work\"\n};\n> ran on Fri Mar 19 2021 01:52:44 GMT+0300 (GMT+03:00)\n> took 261.423531ms\n> error: \nfailed to eval source for module 'nanoid': node_modules/nanoid/index.cjs: Line 9:7 Unexpected identifier (and 16 more errors)\n> trace: \nFunctionError: failed to eval source for module 'nanoid': node_modules/nanoid/index.cjs: Line 9:7 Unexpected identifier (and 16 more errors)\n at require (<native code>)\n at exports (function.js:2:24)\n at apply (<native code>)\n at function_wrapper.js:2:13\n at <anonymous>:11:1\n\n", "text": "i followed the same way but i have same errornpm i chance\nnpm i nanoid\ntar -czf node_modules.tar.gz node_modulesin functionresult:", "username": "Akifcan_Kara" }, { "code": "", "text": "I had the same issue when i packaged all the files in node_modules individualy. When i packaged only the root node_modules file it was working.image1366×768 112 KB", "username": "Nemanja_Trivic" }, { "code": "", "text": "I have a weirder issue.\nWhen I run the function individually, either from a client SDK or the web its works and the dependency is being imported and used.\nWhen the function runs as part of a Trigger (Auth trigger for example) the dependency is not found and function fails.From the logs:\nCannot find module ‘myModule’Note:\nThe app is released to the Store and was working fine for the past 2-3 months. Something happened this month and it stopped working on with Triggers.", "username": "Georges_Jamous" }, { "code": "", "text": "Hi @Georges_Jamous – are you able to share your function and I’ll try it out?", "username": "Andrew_Morgan" }, { "code": "exports = async function({ user }) {\n const builder = require('myPackage');\n const userEmail = user.data.email;\n const userObjectId = BSON.ObjectId(user.id);\n const userProfulePartition = builder.partitionForUserProfile({ userId: user.id })\n ...clipped\n return { ok: 1 };\n};\nasync function myFunction({ ...clipped }) {\n ...clipped\n}\n \"can_evaluate\": {},\n \"name\": \"afterSignup\",\n \"private\": true\n}\ntar -czf node_modules.tar.gz node_modules", "text": "Unfortunately its a bit complex, however its a standard function that Trigger after Signup.–\nConfig:Dependencies were uploaded using the UI and compressed using tar -czf node_modules.tar.gz node_modulesI have two packages that are listed in the UI, which is correct.Hope this helps", "username": "Georges_Jamous" }, { "code": "exports = async function(input) {\n return await context.functions.execute(\"afterSignup\", input);\n};\n", "text": "@Andrew_Morgan I have also tried something I thought maybe could overcome the issue until the problem is known.Since by calling the function explicitly without a trigger it works, I thought I would replace the Trigger function with a proxy function that would in turn call my original function.Like thisBut no luck, it did not work either.", "username": "Georges_Jamous" }, { "code": "", "text": "Try to import package in the proxy function instead of called function.", "username": "Nemanja_Trivic" }, { "code": "npm i -s lodash\ntar -czf node_modules.tar.gz node_modules\ntryDependencyexports = function(arg){\n const cloneDeep = 
require(\"lodash/cloneDeep\");\n var original = { name: \"Deep\" };\n var copy = cloneDeep(original);\n copy.name = \"John\";\n console.log(`original: ${original.name}`);\n console.log(`copy: ${copy.name}`);\n return (original != copy);\n};\n", "text": "I just tried running a function with a dependency from a database trigger and it worked.This is how I built the dependency file:I then uploaded it to my Realm app.image847×331 25.5 KBI created the tryDependency function:I then register that function against a database trigger and update the collection to make it fire.I get the correct results written to the logs:image806×266 16.2 KB", "username": "Andrew_Morgan" }, { "code": "", "text": "@Nemanja_Trivic so that was the initial problem the proxy is trying to solve. Not having the import in the Trigger. Anyhow, I have just tried it. It still not work.@Andrew_Morgan mmm that is weird. The thing is that I have done the exact steps. And it’s not that it is not working generally. It is, it’s only failing on a Trigger (Auth, don’t know about Database)\nCan you think of any reason why it wouldn’t?–From my side. I have provided a quick fix around this issue for now. and I will try to create a fresh App next week to try it like you did – just to cover my bases.", "username": "Georges_Jamous" }, { "code": "exports = function(authEvent) {\n const stripeConfiguration = context.values.get('stripe');\n const stripe = require('stripe')(stripeConfiguration.token);\n \n const customer = stripe.customers.create({\n name: \"testing\",\n email: \"[email protected]\",\n metadata: {\n _partition: context.user.id\n }\n });\n \n return;\n};\n", "text": "same problem here.database triggers work well with external libraries.just auth login and create failing.im using e-mail/password method on graphql headers request.\nPOST …/graphql header: {email: [email protected], password: yyyy}this is my function code:log message: Cannot find module ‘stripe’.\ntitle1424×659 29.6 KB\nany idea?", "username": "Bob_Dylan" }, { "code": "", "text": "@Bob_Dylan @Georges_Jamous I just called my same trigger from a login trigger — and sure enough, I get the dependency error when it runs. I’ll investigate some more.", "username": "Andrew_Morgan" }, { "code": "", "text": "In the meantime, one workaround would be to have your auth trigger insert a document into a “notifications” collection. There would be a database “insert” trigger on the “notifications” collection which did the actual work (including the use of your dependency) and optionally delete the notification document.", "username": "Andrew_Morgan" }, { "code": "", "text": "The engineering team has found the bug, and a fix should be deployed early next week.Thanks to @Bob_Dylan, @Eve_Ragins, @Akifcan_Kara, @Nemanja_Trivic, and @Georges_Jamous for raising this and helping to narrow down the problem!Cheers, Andrew.", "username": "Andrew_Morgan" } ]
Realm Functions importing dependencies: Cannot find module 'module-name'
2021-02-16T19:21:06.624Z
Realm Functions importing dependencies: Cannot find module 'module-name'
8,405
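For the interim workaround described in the thread above (an auth trigger that only records a notification, with a database insert trigger doing the dependency-heavy work), here is a minimal sketch. The database and collection names, and the lodash require, are illustrative assumptions rather than part of the original answer.

```js
// Auth trigger function: avoids require() entirely, so the dependency bug is not hit.
exports = async function (authEvent) {
  const notifications = context.services
    .get("mongodb-atlas")
    .db("myDatabase")              // hypothetical database name
    .collection("notifications");  // hypothetical collection name

  await notifications.insertOne({ userId: authEvent.user.id, createdAt: new Date() });
};

// Separate function, registered on a database insert trigger for "notifications":
// external dependencies resolve correctly in this context.
exports = async function (changeEvent) {
  const cloneDeep = require("lodash/cloneDeep"); // any uploaded dependency
  const notification = changeEvent.fullDocument;
  // ... perform the real post-signup work for notification.userId here ...
};
```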
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi guysI need to create a Q&A db with the Q and A as separate entities (this is what I am thinking now, please feel free to suggest alternatives). Since this is a NoSQL DB what is the best way to correlate the records and how should I store them? Editing the Qs and As could happen frequently and keeping 2-3 previous versions of each will be needed.\nAny pointers to primers and articles or even books that can educate me in this direction are more than welcome\nThank you\nLUUpdate: of course I started googling and while I will be getting answers here I am reading https://docs.mongodb.com/manual/applications/data-models-relationships/\nUpdate2: Ok it seems that the embedded documents is the way to go here, and not separate entities", "username": "Last_Unicorn" }, { "code": "", "text": "Hi @Last_Unicorn! Thanks for posting (and inadvertently answering) your question!I’m glad that you found an answer and came back to share it here so that others may benefit too Don’t hesitate to come back and ask any additional questions as you’re composing your questions and answers via embedded documents!", "username": "yo_adrienne" }, { "code": "", "text": "Hi @yo_adrienne yes I will definitely have more Qs\nThanks for confirming that I got it right", "username": "Last_Unicorn" } ]
Newbie, need help with basic data modeling, Q&A db
2021-04-30T14:12:09.538Z
Newbie, need help with basic data modeling, Q&A db
1,757
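Since the thread above settles on embedded documents without showing one, here is a small illustrative sketch. The collection name, field names, and the idea of capping the revision history with $slice are assumptions made for the example, not something prescribed in the thread.

```js
// A question document embedding its answers plus a short revision history
db.questions.insertOne({
  title: "How should I model a Q&A site?",
  body: "Current wording of the question...",
  revisions: [
    { body: "Previous wording...", editedAt: new Date("2021-04-01") }
  ],
  answers: [
    { body: "First answer...", revisions: [], createdAt: new Date() }
  ]
});

// Editing the question while keeping only the 3 most recent previous versions
db.questions.updateOne(
  { _id: questionId }, // placeholder for the document's _id
  {
    $set: { body: "New wording..." },
    $push: {
      revisions: {
        $each: [{ body: "Current wording...", editedAt: new Date() }],
        $slice: -3
      }
    }
  }
);
```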
null
[ "dot-net", "unity" ]
[ { "code": "", "text": "Hello guys!Im trying to connect my MongoDB to Unity WebGL with a C# script.Im getting an incompatibility error for the MongoDB drivers saying “Is the assembly missing or incompatible with the current platform?”Does this mean I cant use MongoDB with Unity for HTML5 ?Thank you!", "username": "Bruno_Mataloto" }, { "code": "", "text": "Hi @Bruno_Mataloto,It depends on how you’re trying to use MongoDB with Unity.Right now the Realm SDK for Unity does not support WebGL builds. However, if you’re trying to make HTTP requests from your Unity game to some backend, that will work fine.Just to be clear, you shouldn’t be using the C# driver for Unity because that driver is intended for backend applications, where as a Unity game is considered a frontend application.Hopefully that helps.Best,", "username": "nraboy" }, { "code": "", "text": "Hello, thank you!I found webwooks and I am now able to access the data with a HTTP post request.I just have a question about the webwook function. I wanted to return only a single value from the findOne() resulting document, the problem is that I always get the EJSON format and cant find a way to access the valueMy file is as follows :_id:“sensor.parents_bedroom_temp”\n_msgid:“94cfd84e.1db068”\npayload:“19.83”\ntopic:\"\"And I just wanted the payload value, Im using this :var doc = context.services.get(“mongodb-atlas”).db(\"…\").collection(\"…\").findOne({ _id :arg1},{\"_id\":0,“payload”:1});I also tried to access doc.payload but it returns undefined", "username": "Bruno_Mataloto" }, { "code": "JSON.stringify", "text": "Hi @Bruno_Mataloto,By default a Realm Function will return EJSON. You can get around this by wrapping the object in a JSON.stringify before you return it.You can learn more about it at this particular spot in the documentation:Hopefully that helps.Best,", "username": "nraboy" } ]
Unity WebGL C# error
2021-04-29T11:35:08.860Z
Unity WebGL C# error
4,952
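For reference, a minimal sketch of what the webhook discussed above could look like once the result is stringified. The database, collection, and query-parameter names are placeholders, not the poster's actual configuration.

```js
// HTTP service webhook: return only the payload field as plain JSON
exports = async function (payload, response) {
  const id = payload.query.id; // e.g. ?id=sensor.parents_bedroom_temp
  const collection = context.services
    .get("mongodb-atlas")
    .db("my-database")
    .collection("my-collection");

  const doc = await collection.findOne({ _id: id }, { _id: 0, payload: 1 });

  response.setStatusCode(200);
  // Without JSON.stringify the function would return EJSON
  response.setBody(JSON.stringify(doc ? doc.payload : null));
};
```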
null
[]
[ { "code": "", "text": "I have a mac running the latest version of Big Sur. I installed mongodb-community using homebrew, and it has been working ok for a week or so. All of a sudden, I find that mongod keeps quitting. I’ve tried uninstall mongodb-community and re installing it, but the same thing happens. When I run brew services list, it indicates that it’s an error. Could the database have been damaged? Are there tools that can check that? I was using it during a long run yesterday using the python package sacred, which records the results of experiments in a mongo database. I have trace backs, and logs. I don’t see a way to attach them to my post, if that would be helpful.", "username": "Victor_Miller" }, { "code": "", "text": "A followup: I did a mongod --repair. Unfortunately, even though it was running with journaling, it said that the repair failed. Is there any way to at least get back some of the data? In any case, how do I get mongo to start working again? If I move all of the database files to another directory, will mongo recreate new files in an empty directory?", "username": "Victor_Miller" }, { "code": "", "text": "So, I blew away the files (after copying them) in the directory /usr/local/var/mongdb and restarted mongod with brew services start mongod. Things seemed to be ok for quite a while. I’ve had a long running job using the python package sacred which writes to the mongo database. Just now I got a notice that mongod quit. Below is the trace back. Any idea what could be wrong?Here’s the beginning of the report from the system:Process: mongod [96082]\nPath: /usr/local/Cellar/mongodb-community/4.4.4/bin/mongod\nIdentifier: mongod\nVersion: 0\nCode Type: X86-64 (Native)\nParent Process: ??? [1]\nResponsible: mongod [96082]\nUser ID: 501Date/Time: 2021-04-22 18:17:06.676 -0400\nOS Version: macOS 11.2.3 (20D91)\nReport Version: 12\nAnonymous UUID: 4556CAA5-CE53-CD6B-2E13-8E5DB5751AF1Sleep/Wake UUID: D3D30E4E-51D4-46D7-B536-DA2E49542AAETime Awake Since Boot: 47000 seconds\nTime Since Wake: 38000 secondsSystem Integrity Protection: enabledCrashed Thread: 14 WTJournalFlusherException Type: EXC_CRASH (SIGABRT)\nException Codes: 0x0000000000000000, 0x0000000000000000\nException Note: EXC_CORPSE_NOTIFYApplication Specific Information:\nabort() called", "username": "Victor_Miller" }, { "code": "", "text": "Now I’ve been able to restart mongod, but looking at the logs it says about port 27017 “socket already in use”.", "username": "Victor_Miller" }, { "code": "", "text": "Itt means another mongod already running on port 27017\nYou can check by ps -ef|grep mongodHow did you start your mongod\nFrom services or from command line?Before starting mongod just run mongo\nIf you are able to connect means a mongod is running on default port 27017", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks. I tried ps -ef | egrep mongod – no hits. I start it via brew services. brew services list said that nothing was running. When I ran mongo it said that nothing was running. Eventually I rebooted (I needed to do that anyway) and I was then able to start things ok. I don’t know why this keeps happening.", "username": "Victor_Miller" }, { "code": "", "text": "My problems bring up another question. What are the best practices for backing up the database. 
Since it looks like mongo is fragile enough for an error to corrupt the database so that it can’t be repaired (this happened to me), what do people do to backup the current state?", "username": "Victor_Miller" }, { "code": "", "text": "Hi @Victor_Miller welcome to the community!For details on using brew to install MongoDB, please see Install MongoDB Community Edition on macOS. The page includes the list of important files that may be useful.Any data corruption issue is a serious issue, so if you don’t mind, please open a ticket in the SERVER project describing what you experienced in detail, along with reproduction steps.Regarding best practices in backup, it’s more of a case-by-case basis and what you need. Personally I do a database dump using mongodump for my local (development) instance once in a while, and I also use Atlas free tier as another data store that I sync with my local instance reasonably frequently. My Atlas use is so that my data is available even if I accidentally spill coffee on my laptop Best regards\nKevin", "username": "kevinadi" } ]
Mongod keeps quitting on mac running Big Sur
2021-04-22T12:30:24.101Z
Mongod keeps quitting on mac running Big Sur
4,873
https://www.mongodb.com/…0_2_1024x213.png
[ "replication", "monitoring" ]
[ { "code": "p002-rs:SECONDARY> rs.conf()\n{\n\t\t\"_id\" : \"p002-rs\",\n\t\t\"version\" : 15,\n\t\t\"term\" : 12,\n\t\t\"protocolVersion\" : NumberLong(1),\n\t\t\"writeConcernMajorityJournalDefault\" : true,\n\t\t\"members\" : [\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 0,\n\t\t\t\t\t\t\"host\" : \"p002xdmn000:27017\",\n\t\t\t\t\t\t\"arbiterOnly\" : false,\n\t\t\t\t\t\t\"buildIndexes\" : true,\n\t\t\t\t\t\t\"hidden\" : false,\n\t\t\t\t\t\t\"priority\" : 2,\n\t\t\t\t\t\t\"tags\" : {\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\t\t\t\"votes\" : 1\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 1,\n\t\t\t\t\t\t\"host\" : \"p002xdmn001:27017\",\n\t\t\t\t\t\t\"arbiterOnly\" : false,\n\t\t\t\t\t\t\"buildIndexes\" : true,\n\t\t\t\t\t\t\"hidden\" : false,\n\t\t\t\t\t\t\"priority\" : 1,\n\t\t\t\t\t\t\"tags\" : {\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\t\t\t\"votes\" : 1\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 2,\n\t\t\t\t\t\t\"host\" : \"p002xdmn002:27017\",\n\t\t\t\t\t\t\"arbiterOnly\" : true,\n\t\t\t\t\t\t\"buildIndexes\" : true,\n\t\t\t\t\t\t\"hidden\" : false,\n\t\t\t\t\t\t\"priority\" : 0,\n\t\t\t\t\t\t\"tags\" : {\n\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\t\t\t\"votes\" : 1\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 5,\n\t\t\t\t\t\t\"host\" : \"p002xdmg000:27017\",\n\t\t\t\t\t\t\"arbiterOnly\" : false,\n\t\t\t\t\t\t\"buildIndexes\" : true,\n\t\t\t\t\t\t\"hidden\" : false,\n\t\t\t\t\t\t\"priority\" : 0,\n\t\t\t\t\t\t\"tags\" : {\n\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\t\t\t\"votes\" : 0\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 7,\n\t\t\t\t\t\t\"host\" : \"p002xdmg200:27017\",\n\t\t\t\t\t\t\"arbiterOnly\" : false,\n\t\t\t\t\t\t\"buildIndexes\" : true,\n\t\t\t\t\t\t\"hidden\" : false,\n\t\t\t\t\t\t\"priority\" : 0,\n\t\t\t\t\t\t\"tags\" : {\n\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\t\t\t\"votes\" : 0\n\t\t\t\t}\n\t\t],\n\t\t\"settings\" : {\n\t\t\t\t\"chainingAllowed\" : true,\n\t\t\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\t\t\"heartbeatTimeoutSecs\" : 10,\n\t\t\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\t\t\"catchUpTimeoutMillis\" : -1,\n\t\t\t\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\t\t\t\"getLastErrorModes\" : {\n\n\t\t\t\t},\n\t\t\t\t\"getLastErrorDefaults\" : {\n\t\t\t\t\t\t\"w\" : 1,\n\t\t\t\t\t\t\"wtimeout\" : 0\n\t\t\t\t},\n\t\t\t\t\"replicaSetId\" : ObjectId(\"5e7888456f011b6b9bbbd2f1\")\n\t\t}\n}\np002-rs:SECONDARY> rs.status()\n{\n\t\t\"set\" : \"p002-rs\",\n\t\t\"date\" : ISODate(\"2021-04-20T03:38:55.185Z\"),\n\t\t\"myState\" : 2,\n\t\t\"term\" : NumberLong(12),\n\t\t\"syncSourceHost\" : \"p002xdmn001:27017\",\n\t\t\"syncSourceId\" : 1,\n\t\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\t\"majorityVoteCount\" : 2,\n\t\t\"writeMajorityCount\" : 2,\n\t\t\"votingMembersCount\" : 3,\n\t\t\"writableVotingMembersCount\" : 2,\n\t\t\"optimes\" : {\n\t\t\t\t\"lastCommittedOpTime\" : {\n\t\t\t\t\t\t\"ts\" : Timestamp(1618889935, 1650),\n\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t},\n\t\t\t\t\"lastCommittedWallTime\" : ISODate(\"2021-04-20T03:38:55.111Z\"),\n\t\t\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\t\t\t\"ts\" : Timestamp(1618889935, 1650),\n\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t},\n\t\t\t\t\"readConcernMajorityWallTime\" : ISODate(\"2021-04-20T03:38:55.111Z\"),\n\t\t\t\t\"appliedOpTime\" : {\n\t\t\t\t\t\t\"ts\" : Timestamp(1618889935, 1650),\n\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t},\n\t\t\t\t\"durableOpTime\" 
: {\n\t\t\t\t\t\t\"ts\" : Timestamp(1618889935, 1650),\n\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t},\n\t\t\t\t\"lastAppliedWallTime\" : ISODate(\"2021-04-20T03:38:55.111Z\"),\n\t\t\t\t\"lastDurableWallTime\" : ISODate(\"2021-04-20T03:38:55.111Z\")\n\t\t},\n\t\t\"members\" : [\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 0,\n\t\t\t\t\t\t\"name\" : \"p002xdmn000:27017\",\n\t\t\t\t\t\t\"health\" : 1,\n\t\t\t\t\t\t\"state\" : 1,\n\t\t\t\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\t\t\t\"uptime\" : 171609,\n\t\t\t\t\t\t\"optime\" : {\n\t\t\t\t\t\t\t\t\"ts\" : Timestamp(1618889933, 1809),\n\t\t\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"optimeDurable\" : {\n\t\t\t\t\t\t\t\t\"ts\" : Timestamp(1618889933, 1809),\n\t\t\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"optimeDate\" : ISODate(\"2021-04-20T03:38:53Z\"),\n\t\t\t\t\t\t\"optimeDurableDate\" : ISODate(\"2021-04-20T03:38:53Z\"),\n\t\t\t\t\t\t\"lastHeartbeat\" : ISODate(\"2021-04-20T03:38:54.035Z\"),\n\t\t\t\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2021-04-20T03:38:54.696Z\"),\n\t\t\t\t\t\t\"pingMs\" : NumberLong(11),\n\t\t\t\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\t\t\t\"syncSourceId\" : -1,\n\t\t\t\t\t\t\"infoMessage\" : \"\",\n\t\t\t\t\t\t\"electionTime\" : Timestamp(1617942386, 166),\n\t\t\t\t\t\t\"electionDate\" : ISODate(\"2021-04-09T04:26:26Z\"),\n\t\t\t\t\t\t\"configVersion\" : 15,\n\t\t\t\t\t\t\"configTerm\" : 12\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 1,\n\t\t\t\t\t\t\"name\" : \"p002xdmn001:27017\",\n\t\t\t\t\t\t\"health\" : 1,\n\t\t\t\t\t\t\"state\" : 2,\n\t\t\t\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\t\t\t\"uptime\" : 171609,\n\t\t\t\t\t\t\"optime\" : {\n\t\t\t\t\t\t\t\t\"ts\" : Timestamp(1618889933, 1809),\n\t\t\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"optimeDurable\" : {\n\t\t\t\t\t\t\t\t\"ts\" : Timestamp(1618889933, 1809),\n\t\t\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"optimeDate\" : ISODate(\"2021-04-20T03:38:53Z\"),\n\t\t\t\t\t\t\"optimeDurableDate\" : ISODate(\"2021-04-20T03:38:53Z\"),\n\t\t\t\t\t\t\"lastHeartbeat\" : ISODate(\"2021-04-20T03:38:54.447Z\"),\n\t\t\t\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2021-04-20T03:38:54.312Z\"),\n\t\t\t\t\t\t\"pingMs\" : NumberLong(11),\n\t\t\t\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\t\t\t\"syncSourceHost\" : \"p002xdmn000:27017\",\n\t\t\t\t\t\t\"syncSourceId\" : 0,\n\t\t\t\t\t\t\"infoMessage\" : \"\",\n\t\t\t\t\t\t\"configVersion\" : 15,\n\t\t\t\t\t\t\"configTerm\" : 12\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 2,\n\t\t\t\t\t\t\"name\" : \"p002xdmn002:27017\",\n\t\t\t\t\t\t\"health\" : 1,\n\t\t\t\t\t\t\"state\" : 7,\n\t\t\t\t\t\t\"stateStr\" : \"ARBITER\",\n\t\t\t\t\t\t\"uptime\" : 171609,\n\t\t\t\t\t\t\"lastHeartbeat\" : ISODate(\"2021-04-20T03:38:54.471Z\"),\n\t\t\t\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2021-04-20T03:38:54.249Z\"),\n\t\t\t\t\t\t\"pingMs\" : NumberLong(11),\n\t\t\t\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\t\t\t\"syncSourceId\" : -1,\n\t\t\t\t\t\t\"infoMessage\" : \"\",\n\t\t\t\t\t\t\"configVersion\" : 15,\n\t\t\t\t\t\t\"configTerm\" : 12\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 5,\n\t\t\t\t\t\t\"name\" : \"p002xdmg000:27017\",\n\t\t\t\t\t\t\"health\" : 1,\n\t\t\t\t\t\t\"state\" : 2,\n\t\t\t\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\t\t\t\"uptime\" : 171619,\n\t\t\t\t\t\t\"optime\" : {\n\t\t\t\t\t\t\t\t\"ts\" : Timestamp(1618889935, 1650),\n\t\t\t\t\t\t\t\t\"t\" : 
NumberLong(12)\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"optimeDate\" : ISODate(\"2021-04-20T03:38:55Z\"),\n\t\t\t\t\t\t\"syncSourceHost\" : \"p002xdmn001:27017\",\n\t\t\t\t\t\t\"syncSourceId\" : 1,\n\t\t\t\t\t\t\"infoMessage\" : \"\",\n\t\t\t\t\t\t\"configVersion\" : 15,\n\t\t\t\t\t\t\"configTerm\" : 12,\n\t\t\t\t\t\t\"self\" : true,\n\t\t\t\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\t\"_id\" : 7,\n\t\t\t\t\t\t\"name\" : \"p002xdmg200:27017\",\n\t\t\t\t\t\t\"health\" : 1,\n\t\t\t\t\t\t\"state\" : 2,\n\t\t\t\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\t\t\t\"uptime\" : 171608,\n\t\t\t\t\t\t\"optime\" : {\n\t\t\t\t\t\t\t\t\"ts\" : Timestamp(1618889933, 1809),\n\t\t\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"optimeDurable\" : {\n\t\t\t\t\t\t\t\t\"ts\" : Timestamp(1618889933, 1809),\n\t\t\t\t\t\t\t\t\"t\" : NumberLong(12)\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"optimeDate\" : ISODate(\"2021-04-20T03:38:53Z\"),\n\t\t\t\t\t\t\"optimeDurableDate\" : ISODate(\"2021-04-20T03:38:53Z\"),\n\t\t\t\t\t\t\"lastHeartbeat\" : ISODate(\"2021-04-20T03:38:54.251Z\"),\n\t\t\t\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2021-04-20T03:38:54.453Z\"),\n\t\t\t\t\t\t\"pingMs\" : NumberLong(91),\n\t\t\t\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\t\t\t\"syncSourceHost\" : \"p002xdmn001:27017\",\n\t\t\t\t\t\t\"syncSourceId\" : 1,\n\t\t\t\t\t\t\"infoMessage\" : \"\",\n\t\t\t\t\t\t\"configVersion\" : 15,\n\t\t\t\t\t\t\"configTerm\" : 12\n\t\t\t\t}\n\t\t],\n\t\t\"ok\" : 1,\n\t\t\"$clusterTime\" : {\n\t\t\t\t\"clusterTime\" : Timestamp(1618889935, 1650),\n\t\t\t\t\"signature\" : {\n\t\t\t\t\t\t\"hash\" : BinData(0,\"UCCI1C7YAqH3J4A0yw8lEYYudWk=\"),\n\t\t\t\t\t\t\"keyId\" : NumberLong(\"6906426962382684228\")\n\t\t\t\t}\n\t\t},\n\t\t\"operationTime\" : Timestamp(1618889935, 1650)\n}\n", "text": "Hello,We are having a pretty specific usecase of mongodb replication. There’s a very small set of data of around 1-2GB that is updated a lot at peak periods and gets replicated.We use 3 sites:\nsite 1: PSA cluster, 2 data bearing voting members\nsite 2: Secondary replica, non voting, non electable, RTT from site 1 is 10ms\nsite 3: Secondary replica, non voting, non electable, RTT from site 1 is 100msThe HW specs do not differ between sites 2 & 3:\n6 CPU cores + 5 GB RAM. Performant SSD drives for data storage.The oplog size is set to 10GB.p002-rs:SECONDARY> rs.printReplicationInfo()\nconfigured oplog size: 10240MB\nlog length start to end: 8533secs (2.37hrs)\noplog first event time: Tue Apr 20 2021 01:13:50 GMT+0000 (UTC)\noplog last event time: Tue Apr 20 2021 03:36:03 GMT+0000 (UTC)\nnow: Tue Apr 20 2021 03:36:04 GMT+0000 (UTC)WriteConcern is not in use (since PSA arch is in effect).The intention is to deliver frequently updating data without delays to sites 2 and 3 from site 1.We observe no issues with site 2 whatsoever. But for site 3 a replication lag appears from time to time that may last up to 1 hour, continuously growing. At the very same time, there are absolutely no signs of troubles on site 2. Worth noting that site 2 instance is also actively serving requests at the same time, while site 3 is not very loaded by clients.We face different cases and cannot reliably correlate it with load patterns. 
Sometimes with TPS reaching 8k ops there may be no lag at all, at other times with about 4k ops we observed the lag grow.We investigated the effect of slow locking queries on Primary side and fixed them, so now the DB profiling does not produce any queries slower than 1s.I would like to know how to understand the cause of the lag whilst it is increasing and piling up? Is there any common approach to apply to troubleshooting this?Some cluster related details:p002xdmn* - site 1\np002xdmg000 - site 2\np002xdmg200 - site 3rs.conf:rs.status:A picture of the increasing lag from monitoring from the last occurrence:\n\nimage1824×381 44.3 KB\n", "username": "Artem_Meshcheryakov" }, { "code": "", "text": "Often, hidden secondaries are used for analytical purpose. As such, you might have different indexes on the lagging secondary. The replication is slower since it has to do more work to update the extra indexes.", "username": "steevej" }, { "code": "", "text": "Hi Steeve, thank you for response!\nThis is not a hidden secondary, and the configuration between two secondaries is identical, and same data is replicated to both of them. So extra indexes cannot exist there.", "username": "Artem_Meshcheryakov" }, { "code": "", "text": "A comment worth sharing: at yet another occasion it was confirmed that restarting the mongodb service on the lagging replica helps to remove the lag immediately:", "username": "Artem_Meshcheryakov" }, { "code": "", "text": "Hi @Artem_Meshcheryakov welcome to the community!Sometimes when there is lag in a secondary, the usual suspects are:Do you think that the issue can be caused by one of the reasons listed above? Since you mentioned that the other secondary do not have this issue and it’s not serving reads, could you try to stop reading for a while on the problematic secondary to check if the problem reoccurs? At least it will confirm or deny if serving reads is the cause.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin!\nAppreciate your response!\nLet me clarify few doubts: the replicas on 2 sites are fully identical, there is no difference in their hardware. Read workloads are out of the question, because we have observed the same lag issue even with reads completely off. For now the lagging replica is under much less read load than the non-lagging one. Burst credit is also not the case.\nSo this leaves us with point 5 and, certainly, network connection is obviously in focus in the investigation. However, we have not yet found any proof for that. The connectivity between replication source and the replica is established through IPSec tunnel + Cloud Provider network, which can be considered quite stable. But we don’t remove the network as the possible origin of the issues.", "username": "Artem_Meshcheryakov" }, { "code": "", "text": "Hi @Artem_MeshcheryakovI have one more thought about this: replication lag was typically caused because the secondary cannot process the incoming writes fast enough, so work gets queued up (~3 minutes worth of work, according to your graph). From my limited experience, the most common cause was slow disk, the second most common cause was because the node was also busy doing something else.As you mentioned that all the nodes are using equal hardware and the other secondary does not have this problem, something is evidently different about this problematic node. You might want to check the node’s CPU/disk/memory load e.g. 
using iostat when it’s lagging, and compare it against the primary and the normal secondary.Additionally, if you’re not using MongoDB 4.4, there are some performance improvements with the new streaming replication feature in MongoDB 4.4 that may be able to help with your issue.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi ,Thank you for the suggestions! At the moment of the lag the activity on the replica when it comes to cpu/disk actually drops, so it looks more like a locking issue. But I don’t understand how it can be related to the distance. It’s a curious combination of factors.\nWe are on MongoDB 4.4. and streaming replication already. I have a thought to try to switch it to the batch mode to see if it gives any difference (to possibly prove it’s some streaming replication issue), but have not done this yet.", "username": "Artem_Meshcheryakov" } ]
How to find the cause of replication lag on a particular secondary?
2021-04-20T03:46:26.370Z
How to find the cause of replication lag on a particular secondary?
4,350
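One way to put a number on the lag while it is happening, alongside the iostat comparison suggested in the thread, is to compute each member's distance from the primary in the shell. This is a generic sketch, not specific to the cluster discussed above.

```js
// Print each data-bearing member's lag behind the primary, in seconds
const status = rs.status();
const primary = status.members.find(m => m.stateStr === "PRIMARY");

status.members
  .filter(m => m.stateStr !== "ARBITER")
  .forEach(m => {
    const lagMs = primary.optimeDate - m.optimeDate;
    print(`${m.name} (${m.stateStr}): ${(lagMs / 1000).toFixed(1)}s behind primary`);
  });
```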
null
[ "replication" ]
[ { "code": "python3 -c \"\n> import pymongo\n> client = pymongo.MongoClient('mongodb://127.0.0.1:27017')\n> config = {'_id': 'rs0', 'members': [{'_id': 0, 'host': '127.0.0.1:27017'}]}\n> client.admin.command('replSetInitiate', config)\n> \"\nTraceback (most recent call last):\n File \"<string>\", line 5, in <module>\n File \"/usr/lib/python3/dist-packages/pymongo/database.py\", line 740, in command\n codec_options, session=session, **kwargs)\n File \"/usr/lib/python3/dist-packages/pymongo/database.py\", line 637, in _command\n client=self.__client)\n File \"/usr/lib/python3/dist-packages/pymongo/pool.py\", line 694, in command\n exhaust_allowed=exhaust_allowed)\n File \"/usr/lib/python3/dist-packages/pymongo/network.py\", line 162, in command\n parse_write_concern_error=parse_write_concern_error)\n File \"/usr/lib/python3/dist-packages/pymongo/helpers.py\", line 168, in _check_command_response\n max_wire_version)\npymongo.errors.OperationFailure: cluster time cannot be advanced beyond its maximum value, full error: {'operationTime': Timestamp(0, 0), 'ok': 0.0, 'errmsg': 'cluster time cannot be advanced beyond its maximum value', 'code': 40482, 'codeName': 'Location40482', '$clusterTime': {'clusterTime': Timestamp(0, 0), 'signature': {'hash': b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', 'keyId': 0}}}\nstatic const uint32_t kMaxSignedInt = ((1U << 31) - 1);\nbool lessThanOrEqualToMaxPossibleTime(LogicalTime time, uint64_t nTicks) {\n return time.asTimestamp().getSecs() <= LogicalClock::kMaxSignedInt &&\n time.asTimestamp().getInc() <= (LogicalClock::kMaxSignedInt - nTicks);\n}\n2 ** 31 / (3600 * 24 * 365) == 68 years from 1970 / 1970 + 68 == 2038 ", "text": "Hello,\nI have noticed a mongodb bug. If I set my system time to 2070 and I set the mongodb cluster replicaSet configuration, mongod crash:From mongo source it seems the max clock value is defined as belowWhich means: 2 ** 31 / (3600 * 24 * 365) == 68 years from 1970 / 1970 + 68 == 2038 It seems linked to this bug https://jira.mongodb.org/browse/SERVER-36870, which was raised in 2018. Do you know if there has been any update on this topic ?Yours sincerely,Alexis", "username": "Alexis_Doussot" }, { "code": "", "text": "Hi @Alexis_Doussot,Welcome to the MongoDB Community Forums… This is a known problem for all systems using 32-bit integers for measuring time and as you can see in the JIRA, we’re aware of the issue and the ticket has been assigned to a team’s queue. You can also keep track of further comments right there on the ticket.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Mongodb replicaset cluster maximum logicalTime => 2038 year max
2021-04-29T17:22:13.163Z
Mongodb replicaset cluster maximum logicalTime => 2038 year max
2,642
null
[]
[ { "code": "", "text": "Hi! I’m using MongoDB (local) and MongoAtlas to store the logs of an application. I don’t understand why, in the same period of time my system stores more entries in Altas than in the local mongoDB version. I don’t know if it could be due to I’ve enable TLS1.3 (https://docs.mongodb.com/kafka-connector/master/kafka-configure-ssl/) for the local one…but I find it very strange. Ie. in 3 minutes my system saves 30072 documents in the local version and 39041 in atlas…What could be the reason? thanks in advance", "username": "Laura_Fernandez" }, { "code": "", "text": "Hi Laura,That sounds really odd and I can’t imagine it could have anything to do with a TLS setting.Is it possible you didn’t drive the same exact set of writes to both places?\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Thanks Andrew! Yes it possible, because I capturing system events and sending them to Mongo, but I always get much more entries in Atlas than in the local version. I don’t have a cluster this last one…could be that the reason?\nThanks again", "username": "Laura_Fernandez" } ]
MongoDB (local) vs MongoDB Atlas
2021-04-27T17:16:24.350Z
MongoDB (local) vs MongoDB Atlas
3,049
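A quick way to test Andrew's suggestion that the two deployments may simply not receive identical writes is to count documents for the same time window in both. This sketch assumes the log documents carry a timestamp field (called ts here) and that the Atlas URI is available in an environment variable; both are assumptions, not details from the thread.

```js
const { MongoClient } = require("mongodb");

async function compareCounts() {
  const local = await MongoClient.connect("mongodb://127.0.0.1:27017");
  const atlas = await MongoClient.connect(process.env.ATLAS_URI);

  const window = {
    ts: { $gte: new Date("2021-04-27T17:00:00Z"), $lt: new Date("2021-04-27T17:03:00Z") }
  };

  const localCount = await local.db("logs").collection("entries").countDocuments(window);
  const atlasCount = await atlas.db("logs").collection("entries").countDocuments(window);
  console.log({ localCount, atlasCount });

  await local.close();
  await atlas.close();
}

compareCounts().catch(console.error);
```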
null
[ "dot-net" ]
[ { "code": "", "text": "I am getting these error or exception while adding many records in MongoDb using AddManyAsync() method.\nhow to resolve these?\nwhat is the reason?Error Message\nServer returned node is recovering error (code = 11602, codeName\"InterruptedDueToReplStateChange\").\nMongoDB.Driver.MongoNodeIsRecoveringException: Server returned node is recovering error (code = 11602, codeName = “InterruptedDueToReplStateChange”).\nat MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.ExecuteAsync[TResult](IRetryableWriteOperation`1 operation, RetryableWriteContext context, CancellationToken cancellationToken)", "username": "Graeme_Henderson" }, { "code": "", "text": "Hi @Graeme_Henderson, thanks for posting your question Do you mind posting the code snippet that gives you this error and some brief context on what you are trying to achieve? Thanks!", "username": "yo_adrienne" } ]
MongoDB driver Exception
2021-04-29T01:53:21.464Z
MongoDB driver Exception
3,627
null
[]
[ { "code": "products.insertOne()const express = require('express');\nconst app = express();\nconst morgan = require('morgan');\nconst bodyParser = require('body-parser');\nconst mongoose = require('mongoose');\n\nconst productRoutes = require('./api/routes/products');\nconst orderRoutes = require('./api/routes/orders');\n\nmongoose.connect('mongodb://User:' + process.env.MONGO_ATLAS_PASSWORD + '@cluster0-shard-00-00.y7nnq.mongodb.net:27017,cluster0-shard-00-01.y7nnq.mongodb.net:27017,cluster0-shard-00-02.y7nnq.mongodb.net:27017/MyDatabaseName?ssl=true&replicaSet=atlas-oyeawl-shard-0&authSource=admin&retryWrites=true&w=majority', {\nuseNewUrlParser: true,\nuseUnifiedTopology: true, \nuseCreateIndex: true\n});\n\napp.use(morgan('dev'));\napp.use(bodyParser.urlencoded({extended: false}));\napp.use(bodyParser.json());\n\napp.use((req, res, next) => {\nres.header('Access-Control-Allow-Origin', '*');\nres.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept, Authorization');\nif (req.method == 'OPTIONS') {\n res.header('Access-Control-Allow-Methods', 'PUT, POST, PATCH, DELETE, GET');\n return res.status(200).json({});\n}\nnext();\n});\n\n// routes which handle requests\napp.use('/products', productRoutes);\napp.use('/orders', orderRoutes);\n\napp.use((req, res, next) => {\nconst error = new Error('Uh oh! 404 not found error.');\nerror.status(404);\nnext(error);\n});\n\napp.use((error, req, res, next) => {\nres.status(error.status || 500);\nres.json({\n error: {\n message: error.message\n }\n});\n});\n\nmodule.exports = app;\nconst express = require('express');\nconst router = express.Router();\n\nconst mongoose = require('mongoose');\nconst Product = require('../models/products');\n\nrouter.get('/', (req, res, next) => {\nres.status(200).json({\n message: 'Handling GET requests to /products'\n});\n});\n\nrouter.post('/', (req, res, next) => {\nconst product = new Product({\n _id: new mongoose.Types.ObjectId(),\n name: req.body.name,\n price: req.body.price\n});\nproduct\n.save()\n.then(result => {\n console.log(result);\n})\n .catch(err => console.log(err));\nres.status(201).json({\n message: 'Handling POST requests to /products',\n createdProduct: product\n});\n});\nmodule.exports = router;\n", "text": "I’m trying to connect my API to my MongoDB Atlas database but cannot seem to do it. When I submit a POST request to /products I receive a 201 response, but after a couple of seconds get the following error:\nMongooseError: Operation products.insertOne() buffering timed out after 10000msHere’s my code for the connection:And here’s my code that defines the POST request:Any ideas on how to fix this/connect to my database? (I know that in my URI string it says MyDatabse, in my actual code I have it set to my database name)", "username": "Christian_Cox" }, { "code": "", "text": "Hi Christian,Did you ensure that the source IP of your app is on the Atlas IP Access List?Cheers\n-Andrew", "username": "Andrew_Davidson" } ]
Cannot connect to database
2021-04-28T11:53:29.733Z
Cannot connect to database
2,728
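While checking the IP access list point raised above, it can help to surface the underlying connection error instead of the later "buffering timed out" message. A small, hedged adjustment to the connection code (uri stands for the connection string already shown in the thread):

```js
mongoose
  .connect(uri, { useNewUrlParser: true, useUnifiedTopology: true, useCreateIndex: true })
  .then(() => console.log("Connected to Atlas"))
  .catch(err => console.error("Initial connection failed:", err.message));

// Also log errors that occur after the initial connection is established
mongoose.connection.on("error", err => console.error("Connection error:", err.message));
```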
null
[]
[ { "code": "orders: [\n { id: 100, status: \"shipped\", options: [{ returned: true }] },\n { id: 101, status: \"packed\", options: [{ quick: true }] },\n { id: 102, status: \"ordered\" }\n]\n\ndesired result: [\n { id: 100, status: \"returned\", options: [{ returned: true }] }, // <- updated status\n { id: 101, status: \"packed\", options: [{ quick: true }] },\n { id: 102, status: \"ordered\" }\n]\n\nset$cond$set {\n status: { options: {$elemMatch: {returned: true}}}\n}\n", "text": "How can I use aggregate to update a field if a field with value exists in an array?I can set an additional field, but I have not managed a conditional update, tried with $set and $cond but it would not work together.MongoDB PlaygroundThanks,\nbluepuma", "username": "blue_puma" }, { "code": "db.orders.aggregate([\n {\n $set: {\n returned: \"$options.returned\"\n }\n },\n {\n $unwind: {\n path: \"$returned\",\n preserveNullAndEmptyArrays: true\n }\n }\n])\nreturned: true", "text": "How to move from array to field:But how can I update the status field if returned: true?", "username": "blue_puma" }, { "code": "$unwind$project: {\n returned: { \n $cond: [\n { $eq: [ '$options.returned', true ] }, \n true, \n false\n ]\n }\n}\nInvalid $project :: caused by :: Cannot use expression other than $meta in exclusion projection", "text": "The $unwind to get the additional field sounds expensive, I tried to use project:But that gives me an error Invalid $project :: caused by :: Cannot use expression other than $meta in exclusion projection.", "username": "blue_puma" } ]
How to conditionally update a field in an aggregation if a field with a certain value exists in an array?
2021-04-29T11:18:02.105Z
How to conditionally update a field in an aggregation if a field with a certain value exists in an array?
5,605
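The thread above ends without an answer. One possible approach, sketched here against the example documents rather than confirmed in the thread, is to keep everything in a single $set stage and use $cond with $in over the array of option values:

```js
db.orders.aggregate([
  {
    $set: {
      status: {
        $cond: [
          // true when any element of options has returned: true
          { $in: [true, { $ifNull: ["$options.returned", []] }] },
          "returned",
          "$status"
        ]
      }
    }
  }
]);
```

On MongoDB 4.2+ the same stage can also be passed to updateMany as an update pipeline if the goal is to persist the change rather than just project it.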
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.14-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.13. The next stable release 4.2.14 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.14-rc0 is released
2021-04-29T19:01:53.205Z
MongoDB 4.2.14-rc0 is released
2,429
https://www.mongodb.com/…8_2_1024x196.png
[ "indexes" ]
[ { "code": "", "text": "Hi,I have a question regarding two indices which were created for one of our collection on MongoDB Atlas.\nBoth are regular compound indices which are structured as follows:(1) structureNode_date_1\n(2) kind_1_structureNode_1_posGroup_1_source_1_date_1Type of fields is as follows:What confuses me is the following observation:\nThe size of index (1) is greater than the size of index (2). (see screenshot attached)\nBoth indices are fully built and (2) contains all fields that are available in (1).\nThus, I expect index (2) to be at least the size of index (1).Q: Why is it that index (1) is larger than index (2)?Is this some kind of compression issue?\nHas the unique flag on index (2) any influence on index size?Best,\nMartinatlas_index_size_comparison2091×401 52.3 KB", "username": "MartinLoeper" }, { "code": "", "text": "@Sinan_Birbalta and I discovered today that there is indeed a different index format version (8 vs. 12) [1] in use when comparing both of our indices. I think the different format might explain our observation, but anyway… maybe someone here in the community forums knows better?![1] mongo/wiredtiger_index.cpp at 5bbadc66ed462aed3cc4f5635c5003da6171c25d · mongodb/mongo · GitHub", "username": "MartinLoeper" } ]
Compound Index Size shrinks when adding more fields
2021-04-28T13:02:54.666Z
Compound Index Size shrinks when adding more fields
2,151
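To compare the two indexes along the lines discussed above, the per-index storage details (including the WiredTiger format version) can be read from collStats. This is a generic sketch: the collection name is a placeholder and the exact shape of the indexDetails output can differ between server versions.

```js
const stats = db.getCollection("myCollection").stats({ indexDetails: true });

Object.keys(stats.indexDetails).forEach(name => {
  const detail = stats.indexDetails[name];
  print(
    name,
    "size:", stats.indexSizes[name],
    "formatVersion:", detail.metadata && detail.metadata.formatVersion
  );
});
```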
null
[ "aggregation" ]
[ { "code": "orders: [\n { id: 100, status: 3, strDate: '2021-03-01', items: [], strStatus: 'shipped' },\n { id: 101, status: 2, strDate: '2021-04-01', items: [], strStatus: 'packed' },\n { id: 102, status: 1, strDate: '2021-04-01', items: [], strStatus: 'ordered' },\n]\n\nresult: [\n {\n _id: '2021-03-01', orders: [\n { _id: 'shipped', count: 1 }\n ]\n },\n {\n _id: '2021-04-01', orders: [\n { _id: 'packed', count: 1 },\n { _id: 'ordered', count: 1 }\n ]\n },\n]\n", "text": "How can I use aggregate to create a two level aggregation?Thanks,\nbluepuma", "username": "blue_puma" }, { "code": "$group", "text": "Hello @blue_puma,You can use the aggregate’s $group stage to get the desired result; see $group - Pivot Data Example .", "username": "Prasad_Saya" }, { "code": "\n// aggregate count on 3 levels\n\nitems: [\n { id: 1, level1: 'A', level2: 'A1', level3: 'A1x' },\n { id: 2, level1: 'A', level2: 'A1', level3: 'A1x' },\n { id: 3, level1: 'A', level2: 'A2', level3: 'A1x' },\n { id: 4, level1: 'B', level2: 'B1', level3: 'B1x' },\n { id: 5, level1: 'B', level2: 'B1', level3: 'B1x' },\n { id: 6, level1: 'B', level2: 'B1', level3: 'B1y' },\n]\n\nresults: [\n {\n id: 'A',\n ag: [\n {\n id: 'A1',\n ag: [\n {\n id: 'A1x',\n count: 2\n }\n ]\n },\n {\n id: 'A2',\n ag: [\n {\n id: 'A1x',\n count: 1\n }\n ]\n }\n ]\n },\n {\n id: 'B',\n ag: [\n {\n id: 'B1',\n ag: [\n {\n id: 'B1x',\n count: 2\n },\n {\n id: 'B1y',\n count: 1\n }\n ]\n },\n ]\n },\n]\n\nresults: [\n {\n id: 'A',\n count: 3,\n ag: [\n {\n id: 'A1',\n count: 2,\n ag: [\n {\n id: 'A1x',\n count: 2\n },\n ],\n },\n {\n id: 'A2',\n count: 1,\n ag: [\n {\n id: 'A1x',\n count: 1\n }\n ],\n }\n ],\n },\n {\n id: 'B',\n count: 3,\n ag: [\n {\n id: 'B1',\n count: 3,\n ag: [\n {\n id: 'B1x',\n count: 2\n },\n {\n id: 'B1y',\n count: 1\n }\n ],\n },\n ],\n },\n]\n", "text": "Thanks, I still can not wrap my head around it. Is there a more generic approach or “formula” how to handle this?Even better would be a count on every level ", "username": "blue_puma" }, { "code": "", "text": "Thanks, I still can not wrap my head around it. Is there a more generic approach or “formula” how to handle this?Multiple levels of aggregation require applying the grouping multiple times. Here is an example of such a query:", "username": "Prasad_Saya" }, { "code": "db.collection.aggregate([\n {\n $group: {\n _id: {\n level1: \"$level1\",\n level2: \"$level2\",\n level3: \"$level3\"\n },\n count: {\n $sum: 1\n }\n }\n },\n {\n $group: {\n _id: {\n level1: \"$_id.level1\",\n level2: \"$_id.level2\"\n },\n count: {\n $sum: \"$count\"\n },\n data: {\n $push: {\n _id: \"$_id.level3\",\n count: \"$count\",\n data: \"$data\"\n }\n }\n }\n },\n {\n $group: {\n _id: \"$_id.level1\",\n count: {\n $sum: \"$count\"\n },\n data: {\n $push: {\n _id: \"$_id.level2\",\n count: \"$count\",\n data: \"$data\"\n }\n }\n }\n }\n])\n", "text": "Awesome @Prasad_Saya, thanks a lot. That was exactly what I was looking for.Here is the adapted solution:MongoDB Playground", "username": "blue_puma" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to aggregate count on three levels?
2021-04-29T10:01:01.173Z
How to aggregate count on three levels?
3,852
null
[]
[ { "code": "logOutremoveUserapp.currentUser.logOut()app.currentUser.remove()", "text": "In the client SDKs, what’s the difference between the logOut function and the removeUser functions?In Swift, those functions are app.currentUser.logOut() and app.currentUser.remove().According to the doc, the remove function “logs out and destroys the session related to this user”. I’ve been using logOut until now, but I had problems with the session after that (getting invalidSession errors), so I’m guessing I need to use remove instead. But now I’m curious, in what case would you want to use logOut? In which case you don’t want to destroy the session of the logged out user?Thanks for the clarifications.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "@Jean-Baptiste_Beau The iOS docs need to get updated but they will follow the same API and design as the other SDKs, see Android here: https://docs.mongodb.com/realm/sdk/android/advanced-guides/multi-user-applications/logOut() removes the user from the cache - so they will need to logIn again to the server to re-authenticate. remove() removes the user AND removes any data/realms downloaded/opened by the user.", "username": "Ian_Ward" }, { "code": "invalidSession", "text": "@Ian_Ward Thanks for the answer. Following up on that, I’m wondering what I should do after an invalidSession error. I’ve created a new topic:Could you please have a look? Thanks a lot.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Difference between logOut and removeUser
2021-04-29T14:41:41.202Z
Difference between logOut and removeUser
5,475
null
[]
[ { "code": "", "text": "Hi,\nI am receiving the above error in the DB logs.It is as seen above including the missing name so:“SASL GSSAPI authentication failed for on $external from client [IP Address omitted] ProtocolError: SASL(-1): generic failure: SSPI: AcceptSecurityContext: The token supplied to the function is invalid\\r\\n\\r”,The IP address is of a box which has about 60 services on it and probably other unregistered apps that can hit mongoI have just upgraded from Mongo 3.6 to Mongo 4.0.23 and this error started to appear. From various other analytics I know that the majority of clients can connect to the replica set. I need help to pin down which process is causing the error,What can I do to narrow this down?", "username": "Colin_Dooley" }, { "code": " \"2021-04-29T15:31:45.527+0100 I COMMAND [conn14814] command admin.$cmd command: buildInfo { buildinfo: true, $readPreference: { mode: \\\"secondaryPreferred\\\" }, $db: \\\"admin\\\" } numYields:0 reslen:1300 locks:{} protocol:op_query 11ms\\r\", \n\n \"2021-04-29T15:31:45.529+0100 D COMMAND [conn14814] run command $external.$cmd { saslStart: 1, mechanism: \\\"GSSAPI\\\", payload: \\\"xxx\\\", $readPreference: { mode: \\\"secondaryPreferred\\\" }, $db: \\\"$external\\\" }\\r\", \n\n \"2021-04-29T15:31:45.529+0100 D ACCESS [conn14814] SSPI principal name: [REDACTED]\", \n\n \"2021-04-29T15:31:45.531+0100 E ACCESS [conn14814] SSPI: AcceptSecurityContext: The token supplied to the function is invalid\\\\r\\\\n\\r\", \n\n \"2021-04-29T15:31:45.534+0100 D ACCESS [conn14814] Was not able to acquire authorization username from Cyrus SASL. Falling back to authentication name.\\r\", \n\n \"2021-04-29T15:31:45.535+0100 E ACCESS [conn14814] Was not able to acquire principal id from Cyrus SASL: -6\\r\", \n\n \"2021-04-29T15:31:45.537+0100 I ACCESS [conn14814] SASL GSSAPI authentication failed for on $external from client [IP ADDRESS] ; ProtocolError: SASL(-1): generic failure: SSPI: AcceptSecurityContext: The token supplied to the function is invalid\\\\r\\\\n\\r\", \n \n \"2021-04-29T15:31:45.539+0100 D ACCESS [conn14814] Was not able to acquire authorization username from Cyrus SASL. Falling back to authentication name.\\r\", \n\n \"2021-04-29T15:31:45.541+0100 E ACCESS [conn14814] Was not able to acquire principal id from Cyrus SASL: -6\\r\", \n\n \"2021-04-29T15:31:45.544+0100 D - [conn14814] User Assertion: AuthenticationFailed: Authentication failed. 
src\\\\mongo\\\\db\\\\auth\\\\sasl_commands.cpp 300\\r\", \n\n \"2021-04-29T15:31:45.579+0100 D COMMAND [conn14814] assertion while executing command 'saslStart' on database '$external' with arguments '{ saslStart: 1, mechanism: \\\"GSSAPI\\\", payload: \\\"xxx\\\", $readPreference: { mode: \\\"secondaryPreferred\\\" }, $db: \\\"$external\\\" }': AuthenticationFailed: Authentication failed.\\r\", \n\n \"2021-04-29T15:31:45.579+0100 I COMMAND [conn14814] command $external.$cmd command: saslStart { saslStart: 1, mechanism: \\\"GSSAPI\\\", payload: \\\"xxx\\\", $readPreference: { mode: \\\"secondaryPreferred\\\" }, $db: \\\"$external\\\" } numYields:0 ok:0 errMsg:\\\"Authentication failed.\\\" errName:AuthenticationFailed errCode:18 reslen:258 locks:{} protocol:op_query 50ms\\r\", \n\n \"2021-04-29T15:31:45.581+0100 D NETWORK [conn14814] Session from [IP ADDRESS] encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer\\r\", \n\n \"2021-04-29T15:31:45.581+0100 I NETWORK [conn14814] end connection [IP ADDRESS] (554 connections now open)\\r\", \n \n \"2021-04-29T15:31:45.582+0100 D NETWORK [conn14814] Cancelling outstanding I/O operations on connection to [IP ADDRESS]\\r\",", "text": "Here are the connections from the log:\nThis replica set is hosted via Windows using Windows Services\nWindows Server 2012 R2 Standard", "username": "Colin_Dooley" } ]
SASL GSSAPI authentication failed for on $external from client
2021-04-29T14:46:16.915Z
SASL GSSAPI authentication failed for on $external from client
2,973
null
[ "node-js" ]
[ { "code": "const connectionClient = new MongoClient(configs.db_url_live, { useUnifiedTopology: true })\nlet dbConnection;\nlet connectedDb;\n\n(async () => {\n\ndbConnection = await connectionClient.connect();\nconnectedDb = dbConnection.db(configs.db_name);\nconsole.log(‘Connectd to DB from garageServer');\n\n})();\nmodule.exports=connectedDb;import * as all from “./index”;Cannot use import statement outside a module", "text": "How do I share my connected DB instance to my other files.For example I open a DB connection in my main index.js file and instead of re-opening the connection again & again in other files, I want to share this instance of connected db to my other filesFor Example, after the import statements, I write the following in my index fileNow I use this statement at the end of my filemodule.exports=connectedDb;And in my file where I need that instance, I use this statementimport * as all from “./index”;But there are two problem in my this approach\nFirst of all I get this error in my other file where I am requiring the connected DB instanceCannot use import statement outside a moduleSecondly, even I somehow get rid of this error, the main problem is,When importing my instance from index file, it is undefined, because it is not connected initially but export statement is already executed.How do I make this happen.\nThanks in advance to helping hands", "username": "HSD_Qatar" }, { "code": "const { MongoClient } = require('mongodb');\nconst config = require('./config');\nconst Users = require('./Users');\nconst conf = config.get('mongodb');\n\nclass MongoBot {\n constructor() {\n const url = `mongodb://${conf.hosts.join(',')}`;\n\n this.client = new MongoClient(url, conf.opts);\n }\n async init() {\n await this.client.connect();\n console.log('connected');\n\n this.db = this.client.db(conf.db);\n this.Users = new Users(this.db);\n }\n}\n\nmodule.exports = new MongoBot();\nrequire()", "text": "Hi @HSD_Qatar, there are many possible ways to pass the connection instance between files. One of the cleanest and very modern approach is as follows:and you can require() the same into another file.You can learn more about this here.In case you have any doubts, please feel free to reach out to us.Thanks.\nSourabh Bagrecha,\nCurriculum Services Engineer", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Share connected DB instance to other files
2021-04-29T07:06:29.593Z
Share connected DB instance to other files
4,913
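A short usage sketch for the pattern shown in the accepted answer above; the file name ./mongo and the products collection are illustrative assumptions:

```js
// app.js: initialize once at startup, then reuse the shared instance everywhere
const mongo = require("./mongo"); // the module that exports `new MongoBot()`

async function start() {
  await mongo.init(); // connects and caches the db handle
  const products = mongo.db.collection("products");
  console.log("product count:", await products.countDocuments());
}

start().catch(console.error);
```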
null
[ "atlas-functions" ]
[ { "code": "", "text": "Hello,While trying to download reports with the amazon sellers partner api we encountered an issue. While downloading the file an error occured:FunctionError: ‘crypto’ module: buffer is not a supported output encodingWe tried to reproduce the error locally by using the same node version like Realm but could not reproduce it.We are using the ‘amazon-sp-api’ npm package. The problem seems to occur inside the package when calling:\nlet decipher = crypto.createDecipheriv(\n‘aes-256-cbc’,\nBuffer.from(details.encryptionDetails.key, ‘base64’),\nBuffer.from(details.encryptionDetails.initializationVector, ‘base64’)\n);Since it is the only place where the crypto library is used inside the call. On function side we just call\nawait sellingPartner.download(res, { json: true });With the correct parameters. The call worked locally with the same node version (10.18.1).\nNow we have no idea what is happening there, since inside the built-in support documentation every from crypto that is needed is marked as supported.Best Regards,\nDaniel", "username": "Daniel_Bebber" }, { "code": "", "text": "When calling the function without a trycatch we got the following stacktrace:‘crypto’ module: buffer is not a supported output encodingtrace:\nFunctionError: ‘crypto’ module: buffer is not a supported output encoding\nat github.com/10gen/stitch/function/execution/vm.gojaFunc.func1 (native)\nat update (:193:37(32))\nat download$ (node_modules/amazon-sp-api/lib/SellingPartner.js:829:64(187))\nat tryCatch (:55:50(9))\nat invoke (:281:30(112))\nat :107:28(6)\nat tryCatch (:55:50(9))\nat invoke (:145:28(15))\nat :155:19(7)", "username": "Daniel_Bebber" }, { "code": "", "text": "We managed to specify the issue to the line inside the amazon-sp-api:let decrypted_buffer = Buffer.concat([decipher.update(encrypted_buffer), decipher.final()]);The update function is the root cause. The function is returning a buffer by default which makes sense. But MongDB backend is not allowing as output encoding buffer somehow? This is strange since buffers are explicitly allowed by the docu:\nhttps://nodejs.org/api/crypto.html#crypto_decipher_update_data_inputencoding_outputencodingExperimenting with other output encodings has shown that only buffer as output encoding is an issue here.", "username": "Daniel_Bebber" }, { "code": "", "text": "We also have encountered an issue when using the csvtojson npm package. Is it possible that there are problems with using Buffers in MongoDB Realm Functions?", "username": "Daniel_Bebber" } ]
[Bug] FunctionError: 'crypto' module: buffer is not a supported output encoding
2021-04-29T10:35:12.570Z
[Bug] FunctionError: 'crypto' module: buffer is not a supported output encoding
2,309
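Based on the thread's own observation that only the default Buffer output encoding fails, one possible workaround (an assumption, not an official fix) is to ask decipher.update/final for a string encoding and rebuild the Buffer afterwards. The key and IV arguments are placeholders:

```js
const crypto = require("crypto");

function decrypt(encryptedBuffer, keyBase64, ivBase64) {
  const decipher = crypto.createDecipheriv(
    "aes-256-cbc",
    Buffer.from(keyBase64, "base64"),
    Buffer.from(ivBase64, "base64")
  );

  // Request hex output instead of the default Buffer, which is what
  // triggers the error in this environment; hex chunks concatenate safely.
  const part1 = decipher.update(encryptedBuffer, undefined, "hex");
  const part2 = decipher.final("hex");
  return Buffer.from(part1 + part2, "hex");
}
```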
null
[ "php" ]
[ { "code": "<input type=\"hidden\" id=\"roundDate\" name=\"roundDate\" value=\"<?php echo $round['roundDate']->toString(); ?>\"><?php echo $round['roundDate']->format(\"d M Y\"); ?>$_POST$_SESSION", "text": "A query returns objects, which I then want to loop through to create HTML content. I have a problem converting / using MongoDB BSON Date items…I have tried:<input type=\"hidden\" id=\"roundDate\" name=\"roundDate\" value=\"<?php echo $round['roundDate']->toString(); ?>\">I can display the same field on the page (literally the line above in my PHP source code):<?php echo $round['roundDate']->format(\"d M Y\"); ?>Essentially, I am using this structure to pass it through to the next page with an HTML form, via $_POST, which is then stored in $_SESSION for any further access / use.Hope someone can correct my ways or spot the error!", "username": "Dan_Burt" }, { "code": "$round['roundDate']MongoDB\\BSON\\UTCDateTimetoStringformat$round['roundDate']DateTimetoDateTimeformatW3C<input type=\"hidden\" id=\"roundDate\" name=\"roundDate\" value=\"<?php echo $round['roundDate']->format(\\DateTimeInterface::W3C); ?>\">\n", "text": "What type is $round['roundDate']? I’m asking because the MongoDB\\BSON\\UTCDateTime class does not have a toString method, which is what your first call uses. The second call to format makes me think that $round['roundDate'] would be a DateTime instance retrieved using the toDateTime method in our BSON class. If so, using format with the W3C format string should do the trick:", "username": "Andreas_Braun" }, { "code": "cursorcursorarrayscursor$x = 0;\nforeach ($cursor as $comp) {\n \n $y = 0;\n foreach ($comp['compRounds'] as $round) {\n ..\n $comps[$x]['compRounds'][$y]['roundDate'] = $round['roundDate']->toDateTime();\n \n $z = 0;\n foreach ($round['courses'] as $course) {\n ..\n \n $z++;\n }\n $y++;\n }\n $x++;\n}\n", "text": "Thanks @Andreas_BraunAt the top of my PHP page, I call out to a separate function for the MongoDB query. This function retrieves the query as a cursor, then loops through the cursor, with sub-loops for embedded arrays within the cursor. This was done specifically so that all of the stored variables were converted to PHP-native data types, such as this DateTime. Here is the conversion of this to a PHP DateTime object:Is this a correct pattern / way of processing?My requirement though is to allow me to save this same date into another collection, so its embedded there. So if I convert 1 way, I expect I will need to reverse it before performing the MongoDB insert command?", "username": "Dan_Burt" }, { "code": "toDateTime()", "text": "The driver will convert them back then write the BSON objects to the database. I’d suggest either using an ODM that takes care of this mapping for you, or always assume to receive BSON instances and work with those (e.g. call toDateTime() on it, then format the date).", "username": "Andreas_Braun" } ]
PHP returned MongoDB Date to HTML Form Hidden field
2021-04-27T21:19:25.952Z
PHP returned MongoDB Date to HTML Form Hidden field
3,318
null
[ "replication" ]
[ { "code": "", "text": "N00B here - I have a three-node cluster - primary, secondary and arbiter (voting only, no data). The secondary has fallen out of sync with the primary and I cannot catch it up. It appears the oplog size is not large enough. If I understand things correctly, I need to extend the oplog size on secondary before primary (with DB restarts). My actual question is: IF primary fails before the above steps are taken (assuming they are correct), with secondary being out of sync, what happens? Am I better off at this point removing the secondary from the cluster so that if the primary fails, the now-defunct secondary does not try to take over?Thanks in advance.", "username": "Justin_Sayre" }, { "code": "db.printReplicationInfo()SECONDARYrs.status()SECONDARY", "text": "Hi @Justin_Sayre welcome to the community!If you’re just starting on your MongoDB journey, I highly recommend the free courses available at the MongoDB University. They cover materials from beginners to more advanced topics.Having said that, I will go ahead and just jump into the deep end with your questions The secondary has fallen out of sync with the primary and I cannot catch it up. It appears the oplog size is not large enough.To determine your oplog’s length in time, you can use db.printReplicationInfo(). Note that the interpretation of this command assumes a steady state of writing, so it could show less time if your cluster is busy, or show more time if it’s less busy.Once a secondary fell off the oplog, the only way to recover it is to do a resync.If I understand things correctly, I need to extend the oplog size on secondary before primary (with DB restarts)That scenario is assuming the secondary is still functioning. I believe it is not in your case. Thus the procedure outlined in Change the Size of the Oplog doesn’t really apply to you since you’re gonna rebuild a new secondary anyway (with an appropriately sized oplog from the start). At this point, you just need to resize the primary’s oplog.Am I better off at this point removing the secondary from the cluster so that if the primary fails, the now-defunct secondary does not try to take over?A defunct secondary will never try to take over (marked by its status of anything other than SECONDARY in the rs.status() output). Only when a node having a status of SECONDARY will it be able to take over as primary, since that status means that it’s up, ready to take over, and is following the primary’s write closely.In your situation, I would assess what is the appropriate oplog size for your workload. This can be done using simulations of your production workload, so that this situation doesn’t repeat itself in the future.I would also consider deploying a primary-secondary-secondary setup instead of primary-secondary-arbiter. Having two secondaries vastly increase the chances of the whole set having zero downtime in the face of failures. It will also help with zero downtime maintenance, since with two secondaries you can do an effective rolling maintenance/upgrade.Another point is, if your current setup have a different hardware spec between primary/secondaries, I would consider making them all the same, since in a replica set, all nodes have an equal chance of becoming primary (assuming default setup of voting/priority). 
Unless you have very specific needs and reasons, I would not change the default voting/priority settings, since it will interfere with High Availability guarantees a replica set gives you, and making failure scenarios more complex.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hey Kevin, thanks SO much for replying. I am definitely going to check out the course. Unfortunately, I inherited a production setup running on 3.2, so there’s a lot of work to be done - and I have learned a lot, just since posting. The oplog size is about 30 minutes or 1GB in size, and the DB is well over 35GB. I have already tried the resync procedures noted here and both methods (deleting the db files/directory and copying the files from PRIMARY) failed. At this point, I assume that my only recourse is to take PRIMARY offline, resize its oplog, and then bring it back up. Assuming that works, then will SECONDARY’s oplog get the new size, or do I need to resize it in the .conf settings?Thanks again!", "username": "Justin_Sayre" }, { "code": "", "text": "Hi @Justin_SayreAt this point, I assume that my only recourse is to take PRIMARY offline, resize its oplog, and then bring it back up. Assuming that works, then will SECONDARY’s oplog get the new size, or do I need to resize it in the .conf settings?Yes for v3.2 that is correct v3.6+ this is an online operation.As your secondary is not synced, yes update it’s conf this is configured per replica. As for your primary follow the procedure:", "username": "chris" }, { "code": "", "text": "Sorry about my formatting - this is basically my first time doing this.In case anyone runs into this problem and this topic, I successfully re-synched my SECONDARY by taking the following steps:DB stats:\n“collections” : 44,\n“objects” : 3564420,\n“avgObjSize” : 28231.381419136913,\n“dataSize” : 100628500558,\n“storageSize” : 36791537664,\n“numExtents” : 0,\n“indexes” : 64,\n“indexSize” : 117452800Please note - this cluster is three nodes, with PRIMARY, SECONDARY and ARBITER. I understand that this may not be best-practice, but it is what I inherited.", "username": "Justin_Sayre" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Failed secondary replica can't sync
2021-04-22T19:15:42.370Z
Failed secondary replica can’t sync
4,697
null
[ "python", "crud" ]
[ { "code": "", "text": "Folks:I have a MongoDB database application with a Python Flask front end using pymongo. I am having difficulty updating only the embedded document within a collection. I have no issues updating the entire collection from an HTML form.For example, I have no issues with:db.polymers.update_one({ “_id”: polymer_id}, {“$set”: {“name”: name, “composition”: {“monomer01”: monomer01, “monomer02”: monomer02, “mw”: 10000 }, … “comments”: comments}})However, trying to update only one element within the embedded document named “composition” doesn’t update as expected. And there isn’t even an error message. Here is the statement:db.polymers.update_one({“_id”: polymer_id},{“$set”: {“composition”: {“mw”: 15000}}})I would appreciate any help! Thanks!Mike.", "username": "Michael_Redlich" }, { "code": "db.polymers.update_one({\"_id\": polymer_id},{\"$set\": {“composition.mw”: 15000}})\n", "text": "Hi @Michael_Redlich, welcome back again to our forums. We’ve restructured these slightly since your last visit.In general, this question might be better asked in one of the forums related to the Drivers or to Working with Data topic.That said, it looks like there might be a quick solution to your problem.db.polymers.update_one({“_id”: polymer_id},{“$set”: {“composition”: {“mw”: 15000}}})In terms of updating a single element in array, you should use dot notation which allows the updating of a specific field in an embedded document or position within an array in your document. Checkout the $set page in the MongoDB Manual -Have you tried using dot notation and an update_one statement of the format “composition.mv”?Welcome back and hope this helps - if you have any questions or feedback related to M220P we’d love to hear it here, in this topic of our forums.Kindest regards,\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "Hi Eoin:Thanks for the update on how I should submit questions related to mine. Thanks, too, for the suggested fix. I tried it, but unfortunately, the problem still exists. My other Python methods don’t behave this way.I used the mongo shell to update this manually and all worked well. Interestingly, using the dot notation in the shell updated the element in the embedded document. Using the additional curly braces, however, updated the element and wiped out the remaining elements in the embedded document.", "username": "Michael_Redlich" }, { "code": "$set{\"$set\": {“composition”: {“mw”: 15000}}}\ncomposition{“mw”: 15000}composition{\"$set\": {“composition.mw”: 15000}}\nmwcomposition15000compositionmwfilterupdate_one", "text": "Hi @Michael_Redlich,I can’t say why this would work in the shell and not with PyMongo, but I can explain the difference between the two $set commands:This command says “Set composition to this object {“mw”: 15000}.” As you’ve seen, this will wipe out any other values stored in the composition field, because you’re not updating it, you’re replacing it with a new subdocument.This command reads as “update the value of the mw field within the composition field to 15000.” As you’ve seen, it doesn’t affect any other values within composition, it just updates mw.There’s no reason this should work for you in the shell, but not with PyMongo. I hate to suggest it, but I suspect the difference is either a typo in your exact update command, an incorrect filter being provided to update_one, or maybe executing the query on the wrong collection? 
I’ve done all of these things, so I’m only suggesting them as things to think about!", "username": "Mark_Smith" }, { "code": "", "text": "Hi Mark:Thanks for all the info. I certainly appreciate it! And, yes, I’ve been bitten by typos in the past as well. I’m certain everything is correct, but I will quadruple check!All the best,Mike.", "username": "Michael_Redlich" }, { "code": "", "text": "Hi Mark:Just to close out this issue, I discovered that I didn’t wrap the polymer_id variable with ObjectId(). Problem solved!Thanks to everyone who chimed-in on this issue!Mike.", "username": "Michael_Redlich" } ]
Pymongo update_one() issue with embedded documents
2021-03-23T19:05:56.875Z
Pymongo update_one() issue with embedded documents
18,059
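A minimal PyMongo sketch of the two fixes that settled the update_one thread above — dot notation so only the nested field changes, and wrapping the id string in ObjectId so the filter matches. The connection string, database name, and the placeholder hex id are assumptions; the collection and field names follow the thread.

from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["polymer_db"]                          # assumed database name
polymers = db["polymers"]

# In the Flask app this value arrives from the form as a plain string.
polymer_id_str = "0123456789abcdef01234567"        # placeholder 24-char hex

# Dot notation updates composition.mw in place without replacing the rest of
# the embedded "composition" document; ObjectId() makes the filter comparable
# to the stored _id.
result = polymers.update_one(
    {"_id": ObjectId(polymer_id_str)},
    {"$set": {"composition.mw": 15000}},
)
print(result.matched_count, result.modified_count)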
null
[ "aggregation", "queries" ]
[ { "code": "[\n { \"firstName\": \"A\" },\n { \"firstName\": \"b\" },\n { \"firstName\": \"C\" },\n { \"firstName\": \"d\" },\n { \"firstName\": \"e\" },\n { \"firstName\": \"A\" }\n]\n[\n { \"firstName\": \"A\" },\n { \"firstName\": \"A\" },\n { \"firstName\": \"b\" },\n { \"firstName\": \"C\" },\n { \"firstName\": \"d\" },\n { \"firstName\": \"e\" }\n]\ndb.collection.aggregate([{ \"$sort\": { \"firstName\": 1 } }]);\n[\n { \"firstName\": \"A\" },\n { \"firstName\": \"A\" },\n { \"firstName\": \"C\" },\n { \"firstName\": \"b\" },\n { \"firstName\": \"d\" },\n { \"firstName\": \"e\" }\n]\n", "text": "Strange behaviour of sorting on string field,Sample Documents:Expected: I want sort it in Ascending order, exact like:Query:But it gives sorting on the base of string case like upper case comes first and then lower case.\nAbove query gives:Is there any way to manage this situation?I know i can add one more stage before sort stage to convert string in upper case or lower case to get order in sequence.", "username": "turivishal" }, { "code": "caseLeveldb.test.find().sort({ firstName: 1 }).collation({ locale: \"en\", caseLevel: true })", "text": "Hello @turivishal, you can use collation and specify the caseLevel option to get the desired result. For example,db.test.find().sort({ firstName: 1 }).collation({ locale: \"en\", caseLevel: true })For aggregate method you can specify the collation option.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you @Prasad_Saya,Helpful Document:", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to sort a string field in alphabetical order and not by case?
2021-04-29T06:07:54.087Z
How to sort a string field in alphabetical order and not by case?
23,931
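The caseLevel collation from the accepted answer above, sketched in PyMongo for both find() and aggregate(); the connection string and namespace are assumptions, while the field name and collation options come from the thread.

from pymongo import MongoClient
from pymongo.collation import Collation

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
coll = client["test"]["people"]                      # assumed namespace

case_insensitive = Collation(locale="en", caseLevel=True)

# find(): letters group alphabetically rather than upper case first,
# as described in the accepted answer above.
for doc in coll.find().sort("firstName", 1).collation(case_insensitive):
    print(doc["firstName"])

# aggregate(): the same collation is passed as a command option.
pipeline = [{"$sort": {"firstName": 1}}]
for doc in coll.aggregate(pipeline, collation=case_insensitive):
    print(doc["firstName"])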
null
[ "queries" ]
[ { "code": "", "text": "I have a large MongoDB of word documents and PDFs which I would like to save from the DB to my computer, retaining the directory structure if possible.The website who stored these files and folders provided a MongoDB dump. I am completely new to this and am learning as I go. I have installed the MongoDB on Ubuntu, used Mongorestore to add the dumped files (fs.chunks.bson, fs.files etc.) to this local database and I can see the collection in Compass.I would appreciate any thoughts on how to get the Docs/PDFs etc. onto my hard disk so I can open and view them in their respective applications. Retaining the folder struction as it was on the website is vital, if that is possible.Thanks.", "username": "GreenLeaf" }, { "code": "mongofiles", "text": "Hello @GreenLeaf, welcome to the MongoDB Community forum!The data you had restored into the local (your computer) MongoDB is stored as GridFS collections. GridFS is a way of storing large files in MongoDB database. To get your DOC/PDF documents from the database you need to use the GridFS tools as specified here: Use GridFS. You can work with mongofiles command-line tool or use GridFS API of a programming language (like Python, Java, NodeJS, etc.) with its respective MongoDB Driver.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you for your response. I have figured out a way to save files to disk from a GridFS bucket using Studio 3T, but this writes 10,174 files (the DOC/PDF files) to one folder without retaining the folder structure.Is there a way, using mongofiles or Studio 3T (or another way), to save the files to disk while maintaining the folder structure? If it is relevant, there is a collection in the DB called _hierarchy.Like I said before, I am learning as I go and any simple explanations or example code would be greatly appreciated.", "username": "GreenLeaf" }, { "code": "files", "text": "_hierarchyThis collection is not part of the GridFS collections. In case you have the directory structures somehow stored in this collection then, you need to figure to use it to build the directories. What does the document look like in this hierarchy collection? Also, the GridFS files collection has some meta data stored in it - you can query and see if directory information is in it.Edit Add:This Stack Overflow post says that “GridFS does not store files as a structure like file system hierarchy.”: Save file to GridFS with given path", "username": "Prasad_Saya" }, { "code": "", "text": "Okay. Thank you. I will go through the other collection and metadata in the hope of being able to rebuilt the folder structure.", "username": "GreenLeaf" } ]
Retrieve Word Docs from MongoDB
2021-04-28T10:13:24.536Z
Retrieve Word Docs from MongoDB
3,469
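A rough PyMongo/GridFS sketch of writing the restored files out to disk, following the GridFS API route suggested above. The database name and output directory are assumptions, and rebuilding the original folder hierarchy still depends on whatever path data the dump kept (for example in each file's metadata or the separate "_hierarchy" collection mentioned in the thread).

import os
import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local restore
db = client["restored_db"]                           # assumed database name
fs = gridfs.GridFS(db)                               # default "fs" bucket (fs.files / fs.chunks)

out_root = "exported_files"                          # assumed output directory

for grid_out in fs.find():
    # Use the stored filename when present, otherwise fall back to the file _id.
    name = grid_out.filename or f"{grid_out._id}.bin"
    dest = os.path.join(out_root, name)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as f:
        f.write(grid_out.read())
    # grid_out.metadata (if any) is where per-file path details would live.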
https://www.mongodb.com/…c124dcdc69d.jpeg
[ "atlas-device-sync" ]
[ { "code": "", "text": "We are in development mode, and it’s really just a few devices + compass connecting to Atlas, yet we persistently see 40-80+ connections at a time. We do our best to shut down our app when we are testing (and I believe that disconnects from Realm properly as it gets properly reflected in the logs), but we are still wondering what is happening on the back end.Is there any documentation that we can look at to help understand what’s going on, and as importantly, is there a way we can clean up / maintain the number of connections?\nimage988×318 17.2 KB\n", "username": "Roger_Cheng" }, { "code": "", "text": "I’ve this exact same question while developing my app using Realm Sync. I’d love to see someone from Realm team throw some light on how Atlas counts connections for Realm Sync.", "username": "siddharth_kamaria" } ]
How many connections are typical on Atlas in relationship to Realm
2021-04-21T10:38:46.933Z
How many connections are typical on Atlas in relationship to Realm
1,609
null
[ "node-js", "crud" ]
[ { "code": "req.end(function (res) {\n\n if (res.error) throw new Error(res.error);\n\n var games = res.body.response;\n\n for(var j=0;j<games.length;j++){\n\n var currentGame = games[j];\n\n Game.find({game_id: currentGame.fixture.id})\n\n .then(game_check=>{\n\n console.log(currentGame.fixture.id)\n\n if(game_check==\"\"){ // The Game is not Exist mean create a new Game and push it to his league\n\n console.log(currentGame.fixture.id)\n\n var league=currentGame.league.name\n\n var league_id=currentGame.league.id\n\n var game_id=currentGame.fixture.id\n\n var homeTeam=currentGame.teams.home.name\n\n var awayTeam=currentGame.teams.away.name\n\n var dateGame=currentGame.fixture.date\n\n const new_game = new Game({\n\n league,\n\n league_id,\n\n game_id,\n\n homeTeam,\n\n awayTeam,\n\n dateGame\n\n });\n\n new_game\n\n .save()\n\n .catch((err) => {\n\n console.log(err);\n\n })\n\n \n\n League.findByIdAndUpdate(\n\n leagueID,\n\n {\n\n $push: { upcoming: new_game },\n\n },\n\n {\n\n new: true,\n\n })\n\n .exec((err, result) => {\n\n if (err) {\n\n return res.status(422).json({ error: err });\n\n } else {\n\n }\n\n });\n\n \n\n }\n\n else{\n\n console.log(\"already exist\"+currentGame.fixture.id);\n\n } \n\n \n\n })}\n\n});\n", "text": "Hey,\nI work with API that refreshing every 10 sec, and check if we got new Games.\nif we have a new Game I need to create a Game and save it to MongoDB and push the to his league.\nI don’t succeed to find to way to check if the game exists or not…\nplsss help mehere’s my code :", "username": "nir_avraham" }, { "code": " Game.find", "text": "Hi @nir_avraham,Welcome to MongoDB community.I think Game.find will not result in empty string for a non found result, but rather an undifiend value or not even going to then section at all…Why wouldn’t you use a count operation or await to check if promise was resolved to an object…Thanks\nPavel", "username": "Pavel_Duchovny" } ]
How to check if a value exists or not in a collection (Node.js)
2021-04-26T08:18:45.669Z
How to check if a value exists or not in a collection (Node.js)
7,001
null
[]
[ { "code": "", "text": "Since adding frozen objects, I have gotten a few out of memory errors in production. I have read the issues on GitHub and tried to follow the suggestions there Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=io.realm Code=9 \"mmap() failed: Cannot allocate memory size: 1207959552 offset: 0\" · Issue #6469 · realm/realm-swift · GitHub. I was definitely keeping frozen references too long while doing write transactions, something I have now adressed.But, it still makes me wonder. I currently don’t use “compactOnLaunch” for my local realm. Should this always be used? What is the default setting, does it never compact if this setting isn’t specified?The docs here: https://docs.mongodb.com/realm/sdk/ios/advanced-guides/compacting/ says that I should experiment with compaction to see what setting to use. I don’t mind adding a setting, but how do I know what is a good setting? I assume that compaction is done on first access in the same thread. Can I detect that this is needed before executing the compaction? Does compaction need to be run on a background thread?", "username": "Simon_Persson" }, { "code": "let config = Realm.Configuration(shouldCompactOnLaunch: { totalBytes, usedBytes in\n // totalBytes refers to the size of the file on disk in bytes (data + free space)\n // usedBytes refers to the number of bytes used by data in the file\n // Compact if the file is over 100MB in size and less than 50% 'used'\n let oneHundredMB = 100 * 1024 * 1024\n return (totalBytes > oneHundredMB) && (Double(usedBytes) / Double(totalBytes)) < 0.5\n})\ndo {\n // Realm is compacted on the first open if the configuration block conditions were met.\n let realm = try Realm(configuration: config)\n} catch {\n // handle error compacting or opening Realm\n}", "text": "@Simon_Persson Yes in production I would always have a compactOnLaunch callback setup. Compact is never run if this callback is not setup. A good setting is a tradeoff between how often you want to do it, since it will block the app startUp because you cannot open the Realm while it is being compacted, and how large you are okay with the file growing, but I think the example given in the docs is a good starting point of 50% free space. I would set the max file size something comparable to your average realm state size - so if you have an average realm state size of 10MB then set the max file size to 20MB or 40MB.You can see in the below code snippet that the callback only returns true if the file size is greater than 100MB and there is 50% free space. If those conditions are not satisfied then compaction is not run.", "username": "Ian_Ward" }, { "code": "", "text": "Thanks! I have been running the app for years without compacting😮 I think maybe this could be a bit more clear in the docs. It is listed in the advanced section.Is there a way to detect if the Realm should be compacted? Should this be done on a background thread?", "username": "Simon_Persson" }, { "code": "totalBytesusedBytes", "text": "using asyncOpen() will automatically compact on a background thread and of course you could do it manually too. 
If you take a look at the code snippet you can see that totalBytes and usedBytes are passed into the compact callback as arguments - you can use this to see how fragmented your realm file is and then make a determination on whether to compact.", "username": "Ian_Ward" }, { "code": "", "text": "Thanks, but I am not sure AsyncOpen alone will solve this.If I always use asyncOpen on the first open, then I will always compact the realm and increase startup time, if it is an expensive operation. I only want to do this when needed right?Is it safe to do compaction on the main thread? Or is it better to always spin off a separate thread and then using the normal try Realm(configuration:config) and let this complete before my normal realm usage?", "username": "Simon_Persson" }, { "code": "", "text": "Hmm… I just changed the configuration so that the local configuration has a compaction callback. I noticed that the callback is called multiple times for the same configuration. I assume it will do this whenever there is no cached version? In practice I guess that the first open would take care of the compaction and that the following callbacks will return false, but it means that I don’t have full control over when compaction is done.Then I guess I can’t guarantee that the compaction is done on a background thread. But I assume that it safe to do on the main thread as well? If not, should I use a separate configuration for the first open to guarantee that compaction only happens on first open?", "username": "Simon_Persson" }, { "code": "totalBytesusedBytes", "text": "using asyncOpen() will automatically compact on a background thread and of course you could do it manually too. If you take a look at the code snippet you can see that totalBytes and usedBytes are passed into the compact callback as arguments - you can use this to see how fragmented your realm file is and then make a determination on whether to compact.This is confusing:Will asyncOpen() automatically compact on a background threadORWill asyncOpen() automatically compact on a background thread ONLY IF compactOnLaunch = YES", "username": "Duncan_Groenewald" }, { "code": "", "text": "Sorry I should have been more clear - if you use asyncOpen AND compactOnLaunch is set then it is automatically done in the background. We never compact unless compactOnLaunch is explicitly set by the developer", "username": "Ian_Ward" }, { "code": "", "text": "FIY, I simply added the shouldCompactOnLaunch and changed the thresholds based on the recommendations here. I haven’t had any complaints so far from users, so I assume it is working. The crashes related to the database growing in size is gone now that I am using compaction and I am being more careful with frozen realms and write transactions.", "username": "Simon_Persson" }, { "code": "", "text": "Sorry I should have been more clear - if you use asyncOpen AND compactOnLaunch is set then it is automatically done in the background. We never compact unless compactOnLaunch is explicitly set by the developerCan you confirm compactOnLaunch works for synced and non-synced Realms ?", "username": "Duncan_Groenewald" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Database growing in size. Recommendations for compaction?
2021-04-20T09:54:32.090Z
Database growing in size. Recommendations for compaction?
3,382
null
[]
[ { "code": "", "text": "Hello there, i am using apache beam which uses splitVector command on mongodb.\nbut when i try to use it with atlas i get this error :\npymongo.errors.OperationFailure: not authorized on 67f4a620c3d5 to execute command { splitVector: “67f4a620c3d5.618725a7906b”, keyPattern: { _id: 1 }…is there a way i can grant permissions to a user to use splitVector ?", "username": "ali_ihsan_erdem" }, { "code": "", "text": "Hi @ali_ihsan_erdem welcome to the community!Unfortunately the SplitVector command is not available in Atlas. However, the Apache Beam team recognizes this issue and provided a workaround: apache_beam.io.mongodbio module read from MongoDB. In short, use the bucket_auto option instead.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can I use splitVector on Atlas?
2021-04-26T11:36:12.872Z
Can I use splitVector on Atlas?
2,485
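A hedged Apache Beam (Python) sketch of the bucket_auto workaround pointed to above. The URI, database, and collection names are placeholders, and the exact ReadFromMongoDB arguments should be checked against the apache_beam.io.mongodbio docs for the Beam version in use.

import apache_beam as beam
from apache_beam.io.mongodbio import ReadFromMongoDB

with beam.Pipeline() as p:
    _ = (
        p
        | "ReadFromAtlas" >> ReadFromMongoDB(
            uri="mongodb+srv://user:password@cluster.example.mongodb.net",  # placeholder URI
            db="sample_db",        # placeholder database
            coll="sample_coll",    # placeholder collection
            bucket_auto=True,      # split with $bucketAuto instead of splitVector
        )
        | "PrintIds" >> beam.Map(lambda doc: print(doc.get("_id")))
    )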
null
[]
[ { "code": "sudo yum install -y mongodb-orgLoaded plugins: extras_suggestions, langpacks, priorities, update-motd\nThere are no enabled repos.\n Run \"yum repolist all\" to see the repos you have.\n To enable Red Hat Subscription Management repositories:\n subscription-manager repos --enable <repo>\n To enable custom repositories:\n yum-config-manager --enable <repo>\n", "text": "when i run sudo yum install -y mongodb-org on amazon linux 2 here is the answer i getwho has a solution please", "username": "Armel_KOBLAN" }, { "code": "", "text": "Hi @Armel_KOBLAN welcome to the community!For Amazon Linux, please follow the steps in Install MongoDB Community Edition on Amazon Linux.If that procedure fails, please post the failing command and the error message it outputs.Best regards\nKevin", "username": "kevinadi" } ]
Mongodb install on Amazon linux 2
2021-04-26T13:53:55.712Z
Mongodb install on Amazon linux 2
2,245
null
[ "sharding", "indexes" ]
[ { "code": "", "text": "when create two indexes on a collection:\ndb.createIndex({“name”: “hashed”})\ndb.createIndex({“name”: 1}, {unique:true})\nwhat is the behavior of this collection?\nActually, original problem is this:\nfirst create an empty shard collection by : sh.shardCollection(“db”, {“name”: “hashed”})\nwhich would create a hashed index on db with the field name, but also I want the field name to be unique\nso I create another index by : db.createIndex({“name”:1}, {unique:true})\nwhat’s the behavior of the collection?How the data of this collection arranged in shards, is it hashed or is it ranged?\n(dont focos on the code, I omit the collection name , it’s just a simple example)", "username": "11115" }, { "code": "({“name”: “hashed”})({“name”: 1}, {unique:true})", "text": "Hi @11115Having a hashed and unique index on the same field is a valid strategy to have uniqueness enforced on a sharded collection, since:So using your example, you can shard the collection using ({“name”: “hashed”}) as the shard key to allow for semi-random distribution of documents across shards. However if you also would like to enforce that “name” should be unique, creating a unique index ({“name”: 1}, {unique:true}) should do that for you.This is mentioned briefly in Hashed Index: unique constraint.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Hashed and ranged index on same field
2021-04-27T06:54:17.129Z
Hashed and ranged index on same field
1,810
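A small PyMongo sketch of the index pair described in the answer above — a hashed index to back the hashed shard key plus a separate ascending unique index on the same field. The namespace is an assumption; sharding the collection itself (sh.shardCollection with {"name": "hashed"}) would still be done as in the question.

from pymongo import MongoClient, ASCENDING, HASHED

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
coll = client["db"]["coll"]                          # assumed namespace

# Hashed index used by the hashed shard key (hashed indexes cannot be unique).
coll.create_index([("name", HASHED)])

# Separate range index carrying the unique constraint on the same field.
coll.create_index([("name", ASCENDING)], unique=True)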
null
[ "dot-net" ]
[ { "code": "", "text": "I have 4 Web Applications .AspCore on server MochaHost that working fine with Realm Legacy.\nI have written a new Web Application with New Mongo-DBRealm (library Realm 10.1.2) that works fine.\nYesterday all my Web Apps (both legacy and new mongo-db realm) are very very slow without having changed any line of code!What could be the problem? Have you made any changes on the server side? Or is it a problem with my MochaHost hosting?Thanks\nLuigi", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "It is suspicious that both the Legacy Cloud and the new MongoDB Realm apps have started experiencing slowness at the same time - those services are hosted on completely different infrastructure and have virtually zero influence on one another.That being said, I would recommend creating a support ticket as that will ensure much more prompt response times and will allow the TSE team to cross-reference the symptoms with similar issues if such have been reported.", "username": "nirinchev" }, { "code": "", "text": "I confirm.\nIt was a hosting provider issue.PS: new MongoDb Realm is great!", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
GetRealm & GetRealmAsync are too slow from yesterday
2021-04-27T21:26:38.958Z
GetRealm &amp; GetRealmAsync are too slow from yesterday
1,414
null
[ "swift" ]
[ { "code": "AnyView ().environment(\\.realmConfiguration, Realm.Configuration(...))@ObservedResults(Object.Self) var objectsextension Realm {\n\nstatic var IAMRealm: Realm? {\n\nlet configuration = Realm.defaultConfig\n\ndo {\n\nlet realm = try Realm(configuration: configuration)\n\nreturn realm\n\n} catch {\n\nos_log(.error, \"Error opening realm: \\(error.localizedDescription)\")\n\nreturn nil\n\n}\n\n}\n\n**static** **var** defaultConfig: Realm.Configuration {\n\n**return** Realm.Configuration(schemaVersion: 1)\n\n}\n\n**static** **func** setDefaultConfig(){\n\nRealm.Configuration.defaultConfiguration = Realm.defaultConfig\n\n}\n\n}\n\n**private** **struct** RealmConfigurationKey: EnvironmentKey {\n\n**static** **let** defaultValue = Realm.defaultConfig\n\n}\n\n**extension** EnvironmentValues {\n\n**var** realmConfiguration: Realm.Configuration {\n\n**get** { **self** [RealmConfigurationKey. **self** ] }\n\n**set** { **self** [RealmConfigurationKey. **self** ] = newValue }\n\n}\n\n}\n@main\nstruct RealmApp: SwiftUI.App {\n @NSApplicationDelegateAdaptor(AppDelegate.self) var appDelegate\n \n var body: some Scene {\n initialise()\n return WindowGroup {\n ContentView()\n }\n .windowStyle(HiddenTitleBarWindowStyle())\n .windowToolbarStyle(UnifiedCompactWindowToolbarStyle(showsTitle: true))\n }\n func initialise() {\n Realm.Configuration.defaultConfiguration = Realm.defaultConfig\n }\n}\n", "text": "According to the example hereyou simply pass in the Realm.Configuration to the View like soAnyView ().environment(\\.realmConfiguration, Realm.Configuration(...))And within the View create a variable to access the realm objects\n@ObservedResults(Object.Self) var objectsshould use the realm configuration being passed in but this does not seem to work.I have the following:The only way I could get it to work was by calling the following from the SwiftUI.App prior to returning the main window.Am I doing something wrong when using the .environment(.realmConfiguration, …) option ?Apologies for all the **s - how can I stop that happening when pasting in from Xcode ?", "username": "Duncan_Groenewald" }, { "code": "@ObservedResults@ObservedResults(DailyScrum. self ) var scrums\nif let scrums = scrums {\n ForEach(scrums) { scrum in\n NavigationLink(destination: DetailView(scrum: scrum)) {\n CardView(scrum: scrum)\n }\n .listRowBackground(scrum.color)\n }\n}\n**", "text": "Hi @Duncan_Groenewald, you shouldn’t need to manually inject anything into the SwiftUI environment for your view to be able to access the default Realm using @ObservedResults. You can take a look at this article to see how I updated Apple’s Scrumdinger app to store data in the default Realm.In the top-level view I include this line…and I can then work with the results:To get rid of the ** I copy from Xcode, paste into VS Code and then copy it again to paste here.", "username": "Andrew_Morgan" }, { "code": "", "text": "I’ll remember the VSCode tip ! thanks,I want to change the default realm configuration - with schema changes I need to pass in a schema version number so that any migration will be performed. Is there another way to handle schema changes ?BTW it tries to open with a default schema version 0 when the version needs to be 1.", "username": "Duncan_Groenewald" }, { "code": "", "text": "Sorry - read your question too quickly! 
I haven’t yet experimented with multiple schema versions (in my plans for the next few months), but hopefully, someone else can chip in.", "username": "Andrew_Morgan" }, { "code": "", "text": "You should be able to set the defaultConfiguration from your AppDelegate class. You also should not have your own EnvironmentValues extension.", "username": "Jason_Flax" }, { "code": "LocalOnlyContentView()\n .environment(\\.realmConfiguration, Realm.Configuration( /* ... */ ))\n", "text": "How exactly are you meant to set the defaultConfiguration in AppDelegate ? SwiftUI apps don’t usually have an AppDelegate.And how does that correspond with the example provided in the docs here https://docs.mongodb.com/realm/sdk/ios/integrations/swiftui/where the example provided isNo mention of needing anything in AppDelegate.", "username": "Duncan_Groenewald" }, { "code": "LocalOnlyContentView()\n .environment(\\.realmConfiguration, Realm.Configuration( /* ... */ ))\nLocalOnlyContentViewLocalOnlyContentView@ObservedResults@ObservedResults(DailyScrum.self, configuration: Realm.Configuration(...))", "text": "You don’t need to do anything in AppDelegate, but if you are trying to set the default Realm configuration before any Views are displayed, it would be the simplest way to do it.Your sampleshould work. If the Realm.Configuration is not correctly being used in the LocalOnlyContentView, then that would be a bug on our end. What does LocalOnlyContentView look like?As a side note, if your configuration is static, you can pass it into the @ObservedResults property wrapper directly:@ObservedResults(DailyScrum.self, configuration: Realm.Configuration(...))", "username": "Jason_Flax" }, { "code": "struct CategoryBrowserView: View {\n @ObservedResults(CategoryNode.self, filter: NSPredicate(format: \"parent == nil\")) var topLevelCategories\n @ObservedObject var model = ModelController.shared\n @ObservedObject var fileService = FileController.shared\n \n @State private var searchTerm: String = \"\"\n \n let iconSize: CGFloat = 11\n \n var projectsCategory: CategoryNode? {\n return topLevelCategories.filter(\"name == %@\", TopLevelCategoryNames.projects.rawValue).first\n }\n var eventsCategory: CategoryNode? {\n return topLevelCategories.filter(\"name == %@\", TopLevelCategoryNames.events.rawValue).first\n }\n var locationsCategory: CategoryNode? {\n return topLevelCategories.filter(\"name == %@\", TopLevelCategoryNames.locations.rawValue).first\n }\n\n...\n...", "text": "My view is shown below (partially). There are no other references to any realm objects. However this would not be the first time the realm is opened from the main thread.", "username": "Duncan_Groenewald" }, { "code": " @ObservedObject var model = ModelController.shared\n @ObservedObject var fileService = FileController.shared\n", "text": "What are these?What error are you getting that implies that the correct configuration isn’t being used?", "username": "Jason_Flax" }, { "code": "", "text": "The are Swift classes containing UI state information and another one containing UI data like array of thumbnail images etc. One of them has some variables holding references to some other realm objects and realm results sets.The error is saying the file cannot be opened because the schema version 0 is less than the current schema version 1. 
As you can see from my initial post we are working with version 1.I can try adding it to the @ObservableResults() constructor to see if that works.", "username": "Duncan_Groenewald" }, { "code": "struct ContentView: View {\n\n /// Realm initialisation\n @State var isRealmInitialised: Bool = false\n @State var initialisationMessage: String = \"Loading database, please wait\"\n \n var body: some View {\n NavigationView {\n \n if isRealmInitialised {\n \n // The main App screens\n SidebarPanel()\n \n MainView()\n \n AdjustmentsPanel()\n\n } else {\n\n // Screens to display while initialising Realm\n // First time we open Realm there may be long running migrations\n // So show the user something...\n\n // Left panel\n Text(\"\")\n\n // Center panel\n VStack {\n Spacer()\n ProgressView()\n Text(initialisationMessage)\n .foregroundColor(Color.secondaryLabel)\n Spacer()\n }.onAppear(perform: {\n self.initialiseDatabase()\n })\n\n // Right panel\n Text(\"\")\n } \n }\n .frame(maxWidth: .infinity, maxHeight: .infinity)\n \n }\n \n func initialiseDatabase() {\n let startTime = Date()\n \n Realm.asyncInitialise(completion: {result, message in\n\n // Show for a minimum of 1 second or things look messy\n let elapsedTime = Date().timeIntervalSince(startTime)\n let delay = max(0, 1.0 - elapsedTime)\n\n DispatchQueue.main.asyncAfter(deadline: .now() + delay) {\n self.isInitialised = result\n self.initialisationMessage = message\n }\n })\n }\n}\nextension Realm {\n static var IAMRealm: Realm? {\n let configuration = Realm.defaultConfig\n do {\n let realm = try Realm(configuration: configuration)\n return realm\n } catch {\n os_log(.error, \"Error opening realm: \\(error.localizedDescription)\")\n return nil\n }\n }\n // Set this if there are database changes and a new version is required\n static let schemaVersion: UInt64 = 1\n \n static var defaultConfig: Realm.Configuration {\n \n var config = Realm.Configuration(schemaVersion: Realm.schemaVersion)\n \n // Code to perform any required migration\n // Note this will not be called for Synced Realms\n config.migrationBlock = { migration, oldSchemaVersion in\n \n if oldSchemaVersion < Realm.schemaVersion {\n \n os_log(\"Realm migration from version \\(oldSchemaVersion) to \\(Realm.schemaVersion)\")\n \n }\n return\n }\n \n return config\n }\n /// Sets the global default realm configuration for subsequent calls to open a new Realm\n static func setDefaultConfig(){\n Realm.Configuration.defaultConfiguration = Realm.defaultConfig\n }\n \n /// Open the realm the first time on a background thread so that any migrations will not block the main thread\n /// There are two kinds of migrations that could happen:\n /// 1. For local only Realms any schema change will trigger a migration and you should handle any custom migrations that may be required. Realm will handle simple schema changes.\n /// 2. 
For Synced Realms and For Local Realms a new version of the SDK may migrate the database to a new storage format in which case the thread opening the Realm for the first time will block\n /// while this is being performed.\n ///\n static func asyncInitialise(completion: @escaping (Bool, String)->Void) {\n \n DispatchQueue.global().async {\n // Set the default configuration so that if there are schema changes then the correct migrations\n // will be performed\n // You may also wish to perform compaction here.\n Realm.Configuration.defaultConfiguration = Realm.defaultConfig\n \n let configuration = Realm.defaultConfig\n do {\n let _ = try Realm(configuration: configuration)\n completion(true, \"Database initialisation completed.\")\n } catch {\n completion(false, \"Error opening realm: \\(error.localizedDescription)\")\n }\n }\n }\n}", "text": "BTW here is a simplified version of what I do to when launching any Realm App, regardless of whether it is local or synced. The docs I have seen from Realm/MongoDB don’t do a very good job of explaining this or providing working examples that include dealing with SDK upgrades, schema version changes etc… Hope someone finds this useful.The problem is that if the SDK version is updating the database format then it will block whatever thread you are opening the Realm on the first time. So given we have no idea when this might happen it is safer to always open on a background thread the first time and once the first open has completed then either handle any errors and display to the user or continue opening the rest of the app and use Realm as normal.Realm extension to provide a custom async initialisation. Note that asyncOpen() will BLOCK indefinitely if there is no network connection since it will want to download the initial database before returning. As a consequence in general this should only be used the first time the user is logging in to the Cloud Realm.", "username": "Duncan_Groenewald" }, { "code": "let app = App(id: YOUR_REALM_APP_ID)\n// Log in...\nlet user = app.currentUser\nlet partitionValue = \"some partition value\"\nvar configuration = user!.configuration(partitionValue: partitionValue)\nRealm.asyncOpen(configuration: configuration) { result in\n switch result {\n case .failure(let error):\n print(\"Failed to open realm: \\(error.localizedDescription)\")\n // handle error\n case .success(let realm):\n print(\"Successfully opened realm: \\(realm)\")\n // Use realm\n }\n}\nRealm.asyncInitialise(completion: {result, message in\n // Show for a minimum of 1 second or things look messy\n let elapsedTime = Date().timeIntervalSince(startTime)\n let delay = max(0, 1.0 - elapsedTime)\n\n DispatchQueue.main.asyncAfter(deadline: .now() + delay) {\n", "text": "Thanks for the code!There is quite a bit of discussion (and confusion) about how to initially interact with Realm, and there are significant differences in that operation between a local only and a sync.The problem is that if the SDK version is updating the database format then it will block whatever thread you are opening the Realm on the first time.There are no current plans to alter the database format so that’s not something to be concerned about. 
The only significant change was when Realm became MongoDB Realm and the database needed to be changed to support Atlas NoSQL storage.As far as blocking the background thread Realm is running on, it’s not really a problem - it’s by design, and if the pattern presented in the guide Sync Changes Between Devices - iOS SDK is followed, it’s not an issue. The following code prepares and executes the connection and sync’s the data. It’s all done on a background thread so the UI is not tied up and in the .success block, you know it’s ready to go.whether it is local or syncedIf you are using a local only realm, none of that is needed as there is no synchronization - the data is always present.providing working examples that include dealing with SDK upgrades, schema version changes etcI believe this is (finally) in the works (right, Realmers?). FYI and you may know this - Sync’d realms have no migrations. Additive changes are simple and fully supported, just update your object in code and you’re set. Destructive changes require a deletion of local data and re-sync with the server (per the above code). Local only Realms fully support migrations which is laid out in the guide Schema Versions & MigrationsOh and this…Generally speaking, attempts to ‘work around’ an asynchronous call usually ends up in weird and intermittent operation. I would advise against that; work with async functions - let them drive the boat and then take action when they are done with their task within their closure. Trying to ‘guess’ at when an async function will complete is like trying to catch a bullet in the dark with a pair of pliers.A little off topic but I thought I would include this:During development, where you’re constantly changing object properties, stick with local only realm. It will save you a ton of time and avoids having to stop and restart the Realm server etc. Once you get the objects to a happy place, then implement sync’ing.", "username": "Jay" }, { "code": "", "text": "I think you may be referring to a different database format change - I am referring to the realm-core change - and there have been quite a few with SDK upgrades - where the local realm file gets upgraded to the new file format. This blocks the main thread while it is being performed if you open the realm on the main thread since the call to Realm() does not return until the upgrade has completed. Similarly if there is a schema change and a database migration needs to be performed this call to Realm() will also block while the migration is being performed if it is called on the main thread.If you choose to use asyncOpen{} for sync realms then this will never call the callback if there is no network connection - so won’t work if your users may be offline. Should only be used for the first initialisation or for specific use cases where the network is available.And WRT the ‘work around’ an async call - that code does not work around an async call - it is just making sure the UI does not flash a screen for a fraction of a second if the initial call to Realm() returns immediately by ensuring the screen is being displayed for at least 1 second before disappearing.And lastly wrt to development with a local realm - that’s fine but the docs or some suggestions in posts here indicate that with a synced realm you should perform updates to objects on a background thread because of potential to block the main thread while the update is performed. 
Whereas there have been a few comments that one should avoid opening a realm on the main thread and on a background thread(s) if it is a local realm.If you are planning to use sync in the future then your client app needs to take into consideration the effect of performing updates to objects on the main thread in a synced environment. In general we assume there app might be synced in future and code accordingly.So far we have not had any issues with opening a local realm on the main thread and on background threads. it would be good to have a definitive document in your RealmSwift guide dealing with these issues rather than us having to rely on posts on the forum.Anyway don’t take this as negative comment - RealmSwift is a game changer - just wish Apple would buy it or licence it and replace Core Data !!", "username": "Duncan_Groenewald" } ]
How do you change the default configuration for @ObservedResults()?
2021-04-27T00:41:09.709Z
How do you change the default configuration for @ObservedResults()?
5,158
null
[ "queries", "data-modeling" ]
[ { "code": "", "text": "Hi !\nI wanted some help in creating a composite key.\nI want to create a composite key which is interchangeable, what I mean is if I have 2 keys as\n{\nkey1: value1,\nkey2: value2\n}\nThe composite key should be created such that only unique connection is formed between them.\nBut the value can be interchanged.\nif key1 = ‘data1’ and key2 = ‘data2’ this is also equal to key1=‘data2’ and key2 = ‘data1’.I am new to the forum and didn’t know where is the right place to ask this. Please let me know if anyone can help me with this.Thank you!", "username": "Aisha_Deshmukh" }, { "code": "{\nkey : [ { \"k\" : \"key1\" , \"v\" : \"data1\" }, {\"k\" : \"key2\", \"v\" : \"data2\"}]\n}\n\n...\n\n{\nkey : [ { \"k\" : \"key1\" , \"v\" : \"data2\" }, {\"k\" : \"key2\", \"v\" : \"data1\"}]\n}\n{ 'key.k' : 1, 'key.v' : 1}", "text": "Hi @Aisha_Deshmukh,Not sure I fully understand the underlying requirements but what I have in mind is the following:This way you can unique index { 'key.k' : 1, 'key.v' : 1}.Now when you search just search key.v : … And it will find docs with that value even if the keys are interchangeable…Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Create Composite Key with interchangeable values
2021-04-27T10:09:03.228Z
Create Composite Key with interchangeable values
3,324
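A PyMongo sketch of the key/value shape suggested in the reply above, with the unique compound index on key.k / key.v and a lookup by value that matches whichever slot holds it. The collection name and sample values are illustrative only.

from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
coll = client["test"]["pairs"]                       # assumed namespace

# Unique compound index over the embedded k/v entries, as in the reply.
coll.create_index([("key.k", ASCENDING), ("key.v", ASCENDING)], unique=True)

# Store the pair as an array of {k, v} entries.
coll.insert_one({
    "key": [
        {"k": "key1", "v": "data1"},
        {"k": "key2", "v": "data2"},
    ]
})

# Searching on key.v alone finds "data1" whether it sits under key1 or key2.
print(coll.find_one({"key.v": "data1"}))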
null
[ "aggregation", "performance" ]
[ { "code": "db.Alerts.aggregate([{\n \"$match\": {\n \"status\": {\n \"$ne\": -1\n },\n \"type\": 4\n }\n}, {\n \"$lookup\": {\n \"localField\": \"alertTypeId\",\n \"from\": \"AlertTypes\",\n \"foreignField\": \"_id\",\n \"as\": \"alertTypeRel\"\n }\n}, {\n \"$project\": {\n \"title\": 1,\n \"type\": 1,\n \"alertTypeId\": 1,\n \"alertTypeRel.alertTypeName\": 1,\n \"priority\": 1,\n \"message\": 1,\n \"status\": 1,\n \"startDate\": 1,\n \"createdAt\": 1,\n \"createdBy\": 1,\n \"validUntil\": 1,\n \"errorFlag\": 1,\n \"extApiId\": 1,\n \"errorMessage\": 1,\n \"autoPublish\": 1,\n \"statusChangedBy\": 1\n }\n},{\n \"$sort\": {\n \"status\": 1,\n \"createdAt\": -1\n }\n}, {\n \"$group\": {\n \"_id\": null,\n \"count\": {\n \"$sum\": 1\n },\n \"results\": {\n \"$push\": \"$ROOT\"\n }\n }\n}, {\n \"$project\": {\n \"total\": \"$count\",\n \"_id\": 0,\n \"results\": {\n \"$slice\": [\"$results\", 0, 10]\n }\n }\n}], {\n \"collation\": {\n \"locale\": \"en\",\n \"strength\": 2\n },\n \"allowDiskUse\": true,\n \"cursor\": {}\n}).pretty();\n{\n \"v\" : 2,\n \"key\" : {\n \"status\" : 1,\n \"createdAt\" : -1\n },\n \"name\" : \"status_1_createdAt_-1\"\n}\nfacetuncaught exception: Error: command failed: {\n \"ok\" : 0,\n \"errmsg\" : \"$push used too much memory and cannot spill to disk. Memory limit: 104857600 bytes\",\n \"code\" : 146,\n \"codeName\" : \"ExceededMemoryLimit\"\n} : aggregate failed :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\ndoassert@src/mongo/shell/assert.js:18:14\n_assertCommandWorked@src/mongo/shell/assert.js:639:17\nassert.commandWorked@src/mongo/shell/assert.js:729:16\nDB.prototype._runAggregate@src/mongo/shell/db.js:266:5\nDBCollection.prototype.aggregate@src/mongo/shell/collection.js:1058:12\n@(shell):1:1\n", "text": "There are 2 collections: Alerts & AlertTypes . The Alerts collection have a field called: alertTypeId which is the lookup/foreign key of the AlertTypes collection.I need to optimize the following query where I fetch the data from the Alerts collection along with the AlertType Name by joining the corresponding collection.I used the aggregate function as follows:I have indexed the fields as well. for egs:There are 1250543 & 117 records in the Alerts & AlertTypes collections respectively. I tried the facet query as well but it took more than 5 mins to execute. 
The first query throws the folloring error:db.Alerts.explain(“executionStats”) output is included in the following link: Dropbox - Mongo-explain-stats.docx - Simplify your lifeCan anyone help me?Thanks", "username": "Sanjay_Kumar_N_S" }, { "code": "", "text": "Welcome to the community @Sanjay_Kumar_N_S,May you please provide samples of your documents?", "username": "Imad_Bouteraa" }, { "code": " {\n \"$facet\":{\n \"total\":[\n {\n \"$count\":\"count\"\n }\n ],\n \"result\":[\n {\n \"$limit\":10\n }\n ]\n }\n },\n {\n \"$addFields\":{\n \"total\":{\n \"$first\":\"$total.count\"\n }\n }\n }{\n \"$group\": {\n \"_id\": null,\n \"count\": {\n \"$sum\": 1\n },\n \"results\": {\n \"$push\": \"$ROOT\"\n }\n }\n}, {\n \"$project\": {\n \"total\": \"$count\",\n \"_id\": 0,\n \"results\": {\n \"$slice\": [\"$results\", 0, 10]\n }", "text": "Have you tried this facet + addFieldsinstead of", "username": "Imad_Bouteraa" }, { "code": "db.Alerts.aggregate([\n {\n \"$match\": {\n \"status\": { \"$ne\": -1 },\n \"type\": 4\n }\n }, \n {\n \"$sort\": {\n \"status\": 1,\n \"createdAt\": -1\n }\n },\n {\n $facet: {\n result: [\n { $skip: 0 },\n { $limit: 10 },\n {\n \"$lookup\": {\n \"localField\": \"alertTypeId\",\n \"from\": \"AlertTypes\",\n \"foreignField\": \"_id\",\n \"as\": \"alertTypeRel\"\n }\n },\n {\n \"$project\": {\n \"title\": 1,\n \"type\": 1,\n \"alertTypeId\": 1,\n \"alertTypeRel.alertTypeName\": 1,\n \"priority\": 1,\n \"message\": 1,\n \"status\": 1,\n \"startDate\": 1,\n \"createdAt\": 1,\n \"createdBy\": 1,\n \"validUntil\": 1,\n \"errorFlag\": 1,\n \"extApiId\": 1,\n \"errorMessage\": 1,\n \"autoPublish\": 1,\n \"statusChangedBy\": 1\n }\n }\n ],\n count: [{ $count: \"total\" }]\n }\n } \n], \n{\n \"collation\": {\n \"locale\": \"en\",\n \"strength\": 2\n },\n \"allowDiskUse\": true,\n \"cursor\": {}\n})\n.pretty();\n", "text": "I tried the facet as follows:", "username": "Sanjay_Kumar_N_S" }, { "code": "{\n\t\"_id\" : ObjectId(\"598d5746d11eb54eb7da2t50\"),\n\t\"title\" : \"Security alert\",\n\t\"alertTypeId\" : ObjectId(\"598a43345d5f673eb152d180\"),\n\t\"message\" : \"Under investigation\",\n\t\"priority\" : \"medium\",\n\t\"status\" : -1,\n\t\"alertLocations\" : [\n\t\t{\n\t\t\t\"data\" : {\n\t\t\t\t\"type\" : \"Point\",\n\t\t\t\t\"coordinates\" : [\n\t\t\t\t\t51.519812,\n\t\t\t\t\t-0.093933\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"radius\" : 7000,\n\t\t\t\"dataPolygon\" : {\n\t\t\t\t\"type\" : \"Polygon\",\n\t\t\t\t\"coordinates\" : [\n\t\t\t\t\t[\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t51.519812,\n\t\t\t\t\t\t\t-0.031050930111633495\n\t\t\t\t\t\t],\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t51.525448705006035,\n\t\t\t\t\t\t\t-0.03130407448443756\n\t\t\t\t\t\t],\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t51.53104002671351,\n\t\t\t\t\t\t\t-0.032061469436875226\n\t\t\t\t\t\t],\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t51.50308305277655,\n\t\t\t\t\t\t\t-0.03331701688130983\n\t\t\t\t\t\t],\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t51.50858397328649,\n\t\t\t\t\t\t\t-0.032061469436875226\n\t\t\t\t\t\t],\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t51.51417529499397,\n\t\t\t\t\t\t\t-0.03130407448443756\n\t\t\t\t\t\t],\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t51.519812,\n\t\t\t\t\t\t\t-0.031050930111633495\n\t\t\t\t\t\t]\n\t\t\t\t\t]\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t],\n\t\"createdAt\" : ISODate(\"2017-08-11T08:13:58.869Z\"),\n\t\"extraMessage\" : \"Other theft\",\n\t\"startDate\" : ISODate(\"2017-08-11T19:08:00Z\"),\n\t\"validUntil\" : null,\n\t\"createdBy\" : ObjectId(\"5885b186db6df92d3ada7777\"),\n\t\"statusChangedBy\" : ObjectId(\"5885b186db6df92d3ada7777\"),\n\t\"type\" : 
2\n}\n{\n\t\"_id\" : ObjectId(\"598a43345d5f673eb152d180\"),\n\t\"alertTypeName\" : \"Amber Alert\",\n\t\"description\" : \"Amber Alert, keep your eyes open\",\n\t\"status\" : 1,\n\t\"createdBy\" : ObjectId(\"5885b186db6df92d3ada7777\"),\n\t\"createdAt\" : ISODate(\"2017-08-08T23:03:16.657Z\"),\n\t\"isSpecialType\" : -1\n}\n", "text": "Sample:Alerts:AlertTypes:", "username": "Sanjay_Kumar_N_S" }, { "code": "\"ExceededMemoryLimit\"", "text": "does the facet version issues \"ExceededMemoryLimit\" too?", "username": "Imad_Bouteraa" }, { "code": "", "text": "No facet query executed successfully, but it took more than 5 mins to get the result.", "username": "Sanjay_Kumar_N_S" }, { "code": "$lookupdb.Alerts.aggregate([\n {\n \"$match\": {\n \"status\": { \"$ne\": -1 },\n \"type\": 4\n }\n }, \n {\n \"$sort\": {\n \"status\": 1,\n \"createdAt\": -1\n }\n },\n { $limit: 10 },\n {\n \"$lookup\": {\n \"localField\": \"alertTypeId\",\n \"from\": \"AlertTypes\",\n \"foreignField\": \"_id\",\n \"as\": \"alertTypeRel\"\n }\n },\n {\n \"$project\": {\n \"title\": 1,\n \"type\": 1,\n \"alertTypeId\": 1,\n \"alertTypeRel.alertTypeName\": 1,\n \"priority\": 1,\n \"message\": 1,\n \"status\": 1,\n \"startDate\": 1,\n \"createdAt\": 1,\n \"createdBy\": 1,\n \"validUntil\": 1,\n \"errorFlag\": 1,\n \"extApiId\": 1,\n \"errorMessage\": 1,\n \"autoPublish\": 1,\n \"statusChangedBy\": 1\n }\n } \n], \n{\n \"collation\": {\n \"locale\": \"en\",\n \"strength\": 2\n },\n\"cursor\": {}\n})\nexplain: true", "text": "How long does it take if you forget about getting total count and just get the first ten documents and corresponding $lookup?In other words, how long does this pipeline take?This pipeline should be fast if it’s using appropriate indexes - I suspect that maybe you don’t have an appropriate index here? Note that when you specify collation the pipeline can only use indexes with appropriate collations. If you specify explain: true option then you can see whether an index is being used.Asya", "username": "Asya_Kamsky" }, { "code": "{\n \"v\" : 2,\n \"key\" : {\n \"status\" : 1,\n \"createdAt\" : -1\n },\n \"name\" : \"status_1_createdAt_-1\"\n}\n{\n\t\"v\" : 2,\n\t\"key\" : {\n\t\t\"status\" : 1,\n\t\t\"createdAt\" : -1\n\t},\n\t\"name\" : \"status_1_createdAt_-1\",\n\t\"collation\" : {\n\t\t\"locale\" : \"en\",\n\t\t\"caseLevel\" : false,\n\t\t\"caseFirst\" : \"off\",\n\t\t\"strength\" : 2,\n\t\t\"numericOrdering\" : false,\n\t\t\"alternate\" : \"non-ignorable\",\n\t\t\"maxVariable\" : \"punct\",\n\t\t\"normalization\" : false,\n\t\t\"backwards\" : false,\n\t\t\"version\" : \"57.1\"\n\t}\n}", "text": "In fact if this is the complete index definition then you’re not using it at all as it should have the same collation as your query/aggregation, namely it should look like this:", "username": "Asya_Kamsky" }, { "code": "", "text": "But we can’t do the lookup after limiting ith 10 because, sometimes, the sort parameter will be alertTypeName. So we should get the result(with lookup) before setting the limit(10).", "username": "Sanjay_Kumar_N_S" }, { "code": "", "text": "I tried dropping the index and again added the index with the options suggested by you. 
Still, it is taking too much time.I have shared the explain stats of the query execution here: Dropbox - Mongo-explain-stats.docx - Simplify your life.\nThere it is already using the index: status_1_createdAt_ (Before reindexing itself it was using.).", "username": "Sanjay_Kumar_N_S" }, { "code": "\"alertTypeName\"AlertType\"alertTypeId\"ObjectId(\"598d5746d11eb54eb7da2t50\")t", "text": "But we can’t do the lookup after limiting ith 10 because, sometimes, the sort parameter will be alertTypeName. So we should get the result(with lookup) before setting the limit(10).If \"alertTypeName\" is unique per AlertType you can sort by \"alertTypeId\" and take advantage of indexing. of course you need to create the appropriate index.\nI highly recommend doing what Asya asked, and share the valuable requested feedbackIn other words, how long does this pipeline take?Since Stennie took the pain of reformatting your code, I would like to note that ObjectId(\"598d5746d11eb54eb7da2t50\") is not a valid ObjectId. Please change the t near the end of the hex numberRegards,", "username": "Imad_Bouteraa" }, { "code": "", "text": "Oh…okay. Unfortunately, the hex code was changed(one letter t instead of f) while editing. Please have a look at it.Alerts:{\"_id\" : ObjectId(“598d6746d11eb54eb7da2f50”)}But how can we sort with alertTypeId instead of alertTypeName?", "username": "Sanjay_Kumar_N_S" }, { "code": "", "text": "I just executed the normal aggregate without lookup. ie only from the collection alerts. THat also taking too much time. Can you all check this image: This itself took a few minutes to complete\n\n\n\n", "username": "Sanjay_Kumar_N_S" }, { "code": "$lookup$sort$limit$lookup", "text": "Do you have access to the logs? Slow operations are logged there with some details and I suspect that if the original match is slow, it’s because your data doesn’t fit in RAM and fetching from disk is a very expensive operation (especially when you’re doing it for 1,233,806 document.But we can’t do the lookup after limiting ith 10 because, sometimes, the sort parameter will be\nalertTypeName. So we should get the result(with lookup) before setting the limit(10).In this pipeline you are sorting by something available before the $lookup so we told you how you can make things a lot faster (split up count into a separate query and have this pipeline do the $sort and $limit first and then $lookup.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "hey Sanjay,\ntake a look at :\nhttps://www.mongodb.com/how-to/subset-pattern/\nhttps://www.mongodb.com/how-to/bucket-pattern/\nthe subset pattern maybe your saviorRegarding the $match>$count pipeline. Using proper indexing, you can get a faster result with db.collection.count(query)", "username": "Imad_Bouteraa" }, { "code": "", "text": "Yes, I can see so many queries logged as the slow in the mongo logs. I will recheck the configurations.Suppose I have to sort based on the alert type name, but I get the alert type name after the lookup rt. So how can I sort/limit before the lookup stage?", "username": "Sanjay_Kumar_N_S" }, { "code": "", "text": "Suppose I have to sort based on the alert type name, but I get the alert type name after the lookup rt. 
So how can I sort/limit before the lookup stage?May you share an actual use case scenario where you want to sort by alert type name?\nIf you are interested in just the recent alerts, wouldn’t it be better to use a capped collection for querying?", "username": "Imad_Bouteraa" }, { "code": "", "text": "Actually, this query is to serve the alert list along with the alert type name. The list columns are sortable including alert type. I hope, you got it", "username": "Sanjay_Kumar_N_S" }, { "code": "{\n \"$sort\": {\n \"......\":\"..\",\n \"......\":\"..\"\n }\n },\n { $limit: 10 }", "text": "please fill:", "username": "Imad_Bouteraa" } ]
Mongo lookup querying is failed from a collection consists of ~12 lak records
2021-03-30T09:54:47.703Z
Mongo lookup querying is failed from a collection consists of ~12 lak records
8,685
null
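A minimal mongosh sketch of the fix discussed in the thread above, using the collection name, index keys and collation values quoted there (whether this fully resolves the slowness on the poster's data set is an assumption):

    // Recreate the compound index with the same collation the query specifies,
    // so the collated find/aggregate can actually use it; an index without a
    // matching collation is ignored for collated string comparisons.
    db.Alerts.createIndex(
      { status: 1, createdAt: -1 },
      { collation: { locale: "en", strength: 2 }, background: true }
    )

    // Get the total separately with a plain count instead of pushing every
    // matching document through $group/$facet, then run the paged query with
    // $sort and $limit before the $lookup.
    db.Alerts.count({ status: { $ne: -1 }, type: 4 })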
[ "app-services-cli" ]
[ { "code": "", "text": "I am referring to https://docs.mongodb.com/realm/deploy/deploy-cli/Realm-cli help doesn’t show pull and push commands.\nIs it still available or it has replaced by other commands.realm-cli --help shows below:Available commands are:\ndiff View the changes you would make to the current app without importing the changes.\nexport Export a realm application to a local directory.\nimport Import and deploy a realm application from a local directory.\nlogin Log in using an Atlas Programmatic API Key\nlogout Deauthenticate as an administrator.\nsecrets Add or remove secrets for your Realm App.\nwhoami Display Current User Info", "username": "mo_dew" }, { "code": "realm-cli --versionnpm install -g mongodb-realm-cli@beta", "text": "Hello,Please check which realm-cli version you’re using via realm-cli --version.\nThe article you linked only applies to version 2 as mentioned therein.To install realm-cli version 2, please run npm install -g mongodb-realm-cli@betaRegards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "@Mansoor_Omar Thanks for your reply.V2 is currently in the beta phase, is there any plan to roll this out to latest?", "username": "mo_dew" } ]
Realm-cli - pull and push commands aren't available?
2021-04-27T11:56:42.685Z
Realm-cli - pull and push commands aren’t available?
2,892
null
[ "queries" ]
[ { "code": "", "text": "If for example somebody click on the page from a user then my node.js webserver will make a query to the Mongo database to fetch all user infos, the database will look for the user infos and give back the result. I am thinking now about how cache works and if here something like cache will be use automatic or if you need to code something.For example if somebody did try to open the same user page before and if anyway the databse or the webserver script does see that this database request was make before and the data from the use is still same, then it could use something like cache, that would usual make the website work faster because you dont make always a new database query, is it possible that here the database or the webserver will use something like a cache? Does somebody understand about what i am thinking and how this works with Mongo db and node.js webserver?", "username": "Florian_Silbereisen" }, { "code": "mongod", "text": "Hi @Florian_Silbereisen,It’s impossible to guarantee that the data didn’t change between your first and second query. If you cache the data, you might just show stale data to your second user.You could totally add a cache layer in your Node app if this is the expected behaviour you want. But it’s probably not what I would do.That being said, MongoDB is already using RAM to “cache” the most recently used document in RAM. So next time you ask for these documents, mongod doesn’t have to fetch them from disk, as they are already available in RAM. This concept is called the “working set”.For this to work efficiently, you have to make sure that you have enough RAM for your indexes + working set + query execution (evil sort in-memory, etc), and you have to make sure that you don’t evict these documents from RAM too quickly. Else this might be a sign that you need more RAM.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Does this RAM concept from Mongo automaticly take place from hisself or do i need to code it or do i jsut need to buy a vps with enough RAM?", "username": "Florian_Silbereisen" }, { "code": "mongodworking set", "text": "Nothing to do. MongoDB just does it. But remember that each new document you query is fetch from disk, unless it’s already in RAM. If your RAM is full (or reserved for something else like the OS or the queries), then mongod will have to evict old documents from the RAM to make some room for the new ones. If your working set is too large for your RAM, you will just continuously evict documents from RAM and fetch them again from the disk a few seconds later.You can track this in MongoDB Atlas by checking the Page Faults.Here is one of my cluster:\nimage730×262 6.13 KB\n\nimage739×260 5.05 KB\nEach time my cluster runs “out” of RAM and needs to make room, I get more page faults.", "username": "MaBeuLux88" } ]
MongoDB Cache how does it work?
2021-04-07T17:23:52.893Z
MongoDB Cache how does it work?
40,739
null
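A small mongosh sketch for watching the cache behaviour described in the thread above; the metric names are standard WiredTiger fields reported by serverStatus, and the exact output varies by server version:

    // Nothing has to be coded in the application: WiredTiger keeps the most
    // recently used documents and index pages in its cache automatically.
    const cache = db.serverStatus().wiredTiger.cache;
    print("configured cache size (bytes):", cache["maximum bytes configured"]);
    print("bytes currently in cache:", cache["bytes currently in the cache"]);
    print("pages read into cache:", cache["pages read into cache"]); // grows when the working set spills to disk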
[ "app-services-cli" ]
[ { "code": "", "text": "I am using realm-cli import command to deploy the changes. It always prompts to confirm the changes. Can this be ignored in any way?Please confirm the changes shown above: [y/n]:", "username": "mo_dew" }, { "code": "realm-cli --help\n\nFlags:\n --profile string specify the profile name to use (default \"default\")\n --telemetry string enable or disable telemetry (this setting is remembered), available options: [\"off\", \"on\"]\n -o, --output-target string write output to the specified filepath\n -f, --output-format string set the output format, available options: [json]\n --disable-colors disable output styling\n -y, --yes set to automatically proceed through command confirmations\n -h, --help help for realm-cli\n -v, --version version for realm-cli", "text": "the --yes flag should automatically proceed through prompts:", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm cli - force import
2021-04-27T10:28:05.006Z
Realm cli - force import
2,693
null
[ "connecting", "golang" ]
[ { "code": "", "text": "Hello,\nAccording to you, is it possible or desirable to have retry policy in configuration defined on request scope or on scope MongoDB connection?\nFor example, I like the request (find, insert and so on) retry if technical error occurs (network, server overload and so on) according to retry policy or deadline via context.Context.Thanks for your support", "username": "Jerome_LAFORGE" }, { "code": "", "text": "Hi @Jerome_LAFORGE,Drivers currently retry failed operations once after transient errors (e.g. network errors, failovers, etc). There is an ongoing drivers-wide project to introduce an improved timeouts API. Part of this project involves changing the retry policy to retry multiple times, which will include retrying as many times as possible before the context deadline expires or the context is cancelled. This work is not present in a released driver version, but is planned for an upcoming minor release.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Retry policy on technical error
2021-04-23T08:05:35.017Z
Retry policy on technical error
1,976
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hi @Pavel_DuchovnyI read your authentication blog for appgyver. would you mind outlining how we can implement remember me and log out for an appgyver. I would like my app to remember the user without them signing in again unless they log out.", "username": "jaseme" }, { "code": "", "text": "Perhaps you can store the access and refresh token in the database and query it for restoring operations.If its expired you have to renew or relogin", "username": "Pavel_Duchovny" }, { "code": "", "text": "I know I can store it in storage in appgyver, then retrive it when the login page mounts, I guess my question is how do i do a restoring operation it i have the access and refresh token in appgyver.", "username": "jaseme" }, { "code": "", "text": "Just provide it to your graphql query or webhook function …Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Login, sign out and remember me
2021-04-27T05:36:14.576Z
Login, sign out and remember me
5,893
null
[ "aggregation", "node-js" ]
[ { "code": "{ \"sellerId\": 1234,\n\t\"firstName\": \"John\",\n\t\"lastName\": \"doe\",\n\t\"fullName\": \"John Doe\",\n\t\"email\": \"[email protected]\",\n\t\"bagId\": 2224\n}\n { \n \"sellerId\": 1234\n\t\"bagId\": 2224,\n \"source\" : \"fedex\"\n}\n[\n{\n\t\"bagId\": 2224,\n\t\"brandName\": \"Denim\",\n\t\"size\": \"32\",\n\t\"clothId\": 1244,\n\t\"color\": \"green\",\n \"price\": 20\n},\n{\n\t\"bagId\": 2224,\n\t\"brandName\": \"Zara\",\n\t\"size\": \"31\",\n\t\"clothId\": 1243,\n\t\"color\": \"red\",\n \"price\": 90\n}\n]\n\t\"firstName\": \"John\",\n\t\"lastName\": \"doe\",\n\t\"fullName\": \"John Doe\",\n\t\"email\": \"[email protected]\",\n\t\"bagId\": 2224,\n\t\"clothId\": 1244 // sold cloth id which I got from Shopify\n\t\"sellerId\": 1234\n}\nconst sllerInfo = await client\n .db()\n .collection('clothes')\n .aggregate(\n [\n { $match: { clothId: { '$in': convertInto } } }, // convertInto is arrays of sold clothId which I got from Shopify\n {\n $lookup:\n {\n from: \"bags\",\n localField: \"bagId\",\n foreignField: \"bagId\",\n as: \"bags\"\n }\n },\n {\n $lookup:\n {\n from: \"sellers\",\n localField: \"sellerId\",\n foreignField: \"sellerId\",\n as: \"sellers\"\n },\n },\n {\n \"$project\": {\n \"bagId\": 1.0,\n \"bags.source\": 1.0,\n \"sellers.firstName\": 1.0, // dont get anything\n \"sellers.lastName\": 1.0, // dont get anything\n \"brand\": 1.0\n }\n },\n ]\n ).toArray()\n", "text": "I am really new to MongoDB query and practicing mongoDb query. I am using this official mongoDb-package. I am following this doc .I have three collections of data. One is bags, one is sellers and last one is cloths. Our data architecture is When seller sends bags of cloths with his/her informations. We created seller collections like this:this is bag collectionAfter selection the cloths from bags which is suppose to be sell, we create cloth collection.When the cloth get sold from Shopify. we get arrays of SKU-ID which is our cloth collections clothId.My goal is when the cloth get sold, we match the clothId(SKU-ID FROM SHOPIFY) the find bagId, from that bagId we will get the seller information.My expected outcome is{I successfully match the sold cloth id and shopify (SKU-ID) and get bags info but I could not able figure it out, how to get get sellers info from the bagIdThis is what my code where I get sold-cloth info and bags details but it does not give me seller’s info just got empty arrays", "username": "Alak_Dam" }, { "code": "const sllerInfo = await client\n.db()\n.collection('clothes')\n.aggregate(\n [\n { $match: { clothId: { '$in': convertInto } } }, // convertInto is arrays of sold clothId which I got from Shopify\n {\n $lookup:\n {\n from: \"bags\",\n localField: \"bagId\",\n foreignField: \"bagId\",\n as: \"bags\"\n }\n },\n {\n $lookup:\n {\n from: \"sellers\",\n localField: \"sellerId\",\n foreignField: \"sellerId\",\n as: \"sellers\"\n },\n },\n { $unwind: \"bags\" },\n { $unwind: \"sellers\" },\n {\n \"$project\": {\n \"bagId\": 1.0,\n \"bags.source\": 1.0,\n \"sellers.firstName\": 1.0, // dont get anything\n \"sellers.lastName\": 1.0, // dont get anything\n \"brand\": 1.0\n }\n },\n ]\n).toArray()\n", "text": "Hi,\n$lookup stages return arrays of collections not a single one, even if the selection fields are uniq,\nso you need to $unwind those arrays, if you are sure the bagId is uniq in the bags collection and the sellerId is also uniq in the sellers collection.\nSomething like this maybe (not tested):", "username": "Yahya_Kacem" } ]
MongoDb aggregate query three collection
2021-04-19T05:37:14.999Z
MongoDb aggregate query three collection
6,438
null
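A hedged rework of the pipeline from the thread above (collection and field names come from the sample documents; soldClothIds stands in for the array of SKU ids from Shopify). $unwind takes a field path, so its argument needs a leading "$", and since sellerId lives on the bag documents the second $lookup can join through the unwound bag:

    const sellerInfo = await client.db().collection('clothes').aggregate([
      { $match: { clothId: { $in: soldClothIds } } },
      { $lookup: { from: 'bags', localField: 'bagId', foreignField: 'bagId', as: 'bag' } },
      { $unwind: '$bag' },            // field paths passed to $unwind start with "$"
      { $lookup: { from: 'sellers', localField: 'bag.sellerId', foreignField: 'sellerId', as: 'seller' } },
      { $unwind: '$seller' },
      { $project: { bagId: 1, 'bag.source': 1, 'seller.firstName': 1, 'seller.lastName': 1, 'seller.email': 1 } }
    ]).toArray();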
[ "connecting", "php" ]
[ { "code": "#### An uncaught Exception was encountered\n\nType: MongoDB\\Driver\\Exception\\ConnectionTimeoutException\n\nMessage: No suitable servers found: `serverSelectionTimeoutMS` expired: [TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed calling ismaster on 'bohoz-shard-00-02.817p7.azure.mongodb.net:27017'] [TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed calling ismaster on 'bohoz-shard-00-01.817p7.azure.mongodb.net:27017'] [TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:\n\nFilename: /home/rumaiz/MyWork/AAYUS/php_works/mvm-admin-portal/vendor/mongodb/mongodb/src/functions.php\n\nLine Number: 431\n", "text": "Hi All,\nI am trying to connect my PHP (CodeIgniter) application to MongoDB Atlas. But connection failed and reason is TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.I am using\n-PHP 7.4.7 (Codeigniter Framework)\n-MongoDB Extention version 1.7.4\n-Ubuntu 18.4\n-XamppFull error log isPlease help to resolve this issue.", "username": "Rumaiz_Mohomed" }, { "code": "sudo apt-get update\nsudo apt-get -y install ca-certificates\n", "text": "Try updating ca-certificates first.", "username": "chris" }, { "code": "", "text": "Tried this. But not worked.rumaiz@rumaiz-HP-ProBook-450-G0:~$ sudo apt-get -y install ca-certificates\nReading package lists… Done\nBuilding dependency tree\nReading state information… Done\nca-certificates is already the newest version (20190110~18.04.1).\n0 upgraded, 0 newly installed, 0 to remove and 19 not upgraded.Still Same error", "username": "Rumaiz_Mohomed" }, { "code": "php > var_dump(openssl_get_cert_locations());\narray(8) {\n [\"default_cert_file\"]=>\n string(21) \"/usr/lib/ssl/cert.pem\"\n [\"default_cert_file_env\"]=>\n string(13) \"SSL_CERT_FILE\"\n [\"default_cert_dir\"]=>\n string(18) \"/usr/lib/ssl/certs\"\n [\"default_cert_dir_env\"]=>\n string(12) \"SSL_CERT_DIR\"\n [\"default_private_dir\"]=>\n string(20) \"/usr/lib/ssl/private\"\n [\"default_default_cert_area\"]=>\n string(12) \"/usr/lib/ssl\"\n [\"ini_cafile\"]=>\n string(0) \"\"\n [\"ini_capath\"]=>\n string(0) \"\"\n}\n", "text": "You should next make sure php is using the system certificate store. It may be that your development environment is overriding the use of the default path.This is from a vanilla php:7.4.7 container:If I remove the DST_Root_CA_X3.pem from my certificate path I get the exact same error.", "username": "chris" }, { "code": "export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt\n", "text": "This cert thing with MongoDB and PHP has driven me mad and one PITA searching for a solution. to NO AVAIL…But Finally:I’m running PHP from the command line in server mode to test my PHP code. And got all the errors that you can find on the internet, relating to MongoDB/PHP/apache/Nginx.This is my final solution to help me move forward:It may help others in what I say.It will change the default_cert_file used with PHP. I have no idea where it’s changed with PHP config files because it’s certainly not in any php.ini files.All this aggro all over the place with SSL/TLS is related to setting the correct path to the cert file.It brings back memories in 2019 when I was low-level testing SSL/TLS (MITM) Secure Appliance Technology.\nRegards,\nSteve", "username": "Steve_N_A" } ]
TLS/SSL issue to connect MongoDB Atlas from PHP
2020-10-06T16:10:17.139Z
TLS/SSL issue to connect MongoDB Atlas from PHP
8,746
null
[]
[ { "code": "", "text": "I had an event over the weekend where a developer released some code which caused a large number of errors in one of our realm functions (as fired from a trigger). It ran for almost 3 days before somebody noticed it, and even using the REST API, the number of logs I could query to understand the extent of the impact was only 100 at a time. This was not at all sufficient and it ended up wasting a lot of time. I still cannot say the full extent of the impact without writing a script to increment over different time slots and aggregate them in a file.How can we save the Realm Logs into a collection so that I can query them, or possibly even apply different analytics on our use patterns?If nothing else, please remove the 100 item limit on the API. That is ridiculously small for the service it needs to fulfill – anything that’s just a small number of records can be handled easily enough via the user interface.", "username": "Eve_Ragins" }, { "code": "", "text": "Hi @Eve_Ragins, I think that this is a good suggestion and it would be great if you could add it to Realm: Top (0 ideas) – MongoDB Feedback Engine (I see other requests there around Realm logs, but this one doesn’t seem to be covered.Cheers, Andrew.", "username": "Andrew_Morgan" } ]
How can I save realm server logs into a collection for better queryability?
2021-04-26T15:52:28.125Z
How can I save realm server logs into a collection for better queryability?
2,057
null
[ "queries", "indexes" ]
[ { "code": "\"end\" : [\n \"[1616860800.0, 1616601599.0]\"\n]\n\"end\" : [\n \"[-inf.0, 1616947199.0]\"\n]\n", "text": "hi, I want to report a strange query issue.we have mongo (3.6.19) replica set. when i query below:db.scci_detail.find({“cut_commitrepo”: “isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk”,“type”: “sysadapt”,“end”: {\"$gte\": 1616860800,\"$lte\": 1616947199}}).explain(‘allPlansExecution’)on hzcoop02(secondary node): query is very fast.\non hzcoop03(another secondary node): query is very slow!after inspecting, I find the index bounds are different!\non hzcoop02:on hzcoop03:what causes the difference?", "username": "111393" }, { "code": "", "text": "Hi @111393,Its possible that not the same index is being used in both nodes and maybe the query plan caches are different.I will need a getIndexes and full excutionStats explain frim both.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"pipeline.scci_detail\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"cut_commitrepo\" : {\n\t\t\t\t\t\t\"$eq\" : \"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"type\" : {\n\t\t\t\t\t\t\"$eq\" : \"sysadapt\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\"$lte\" : 1616947199\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\"end\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_end_1\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\"[1616860800.0, 1616947199.0]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [\n\t\t\t{\n\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\t\"$lte\" : 1616947199\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\t\"revision\" : 1,\n\t\t\t\t\t\t\"name\" : 1,\n\t\t\t\t\t\t\"patchset\" : 1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_revision_1_name_1_patchset_1\",\n\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", 
\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"revision\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"name\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"patchset\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 0,\n\t\t\"executionTimeMillis\" : 7,\n\t\t\"totalKeysExamined\" : 0,\n\t\t\"totalDocsExamined\" : 0,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"nReturned\" : 0,\n\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\"works\" : 2,\n\t\t\t\"advanced\" : 0,\n\t\t\t\"needTime\" : 0,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 0,\n\t\t\t\"restoreState\" : 0,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"invalidates\" : 0,\n\t\t\t\"docsExamined\" : 0,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\"works\" : 1,\n\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\"end\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_end_1\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\"[1616860800.0, 1616947199.0]\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 0,\n\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t}\n\t\t},\n\t\t\"allPlansExecution\" : [\n\t\t\t{\n\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\"executionTimeMillisEstimate\" : 10,\n\t\t\t\t\"totalKeysExamined\" : 1,\n\t\t\t\t\"totalDocsExamined\" : 1,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\t\t\"$lte\" : 1616947199\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 10,\n\t\t\t\t\t\"works\" : 1,\n\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\"needTime\" : 1,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\"docsExamined\" : 1,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"nReturned\" : 1,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 10,\n\t\t\t\t\t\t\"works\" : 1,\n\t\t\t\t\t\t\"advanced\" : 1,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 
0,\n\t\t\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\t\t\"revision\" : 1,\n\t\t\t\t\t\t\t\"name\" : 1,\n\t\t\t\t\t\t\t\"patchset\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_revision_1_name_1_patchset_1\",\n\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"revision\" : [\n\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"name\" : [\n\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"patchset\" : [\n\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 1,\n\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\"totalKeysExamined\" : 0,\n\t\t\t\t\"totalDocsExamined\" : 0,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\"works\" : 1,\n\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\"docsExamined\" : 0,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\"works\" : 1,\n\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\t\t\"end\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_end_1\",\n\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\t\t\"[1616860800.0, 1616947199.0]\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 0,\n\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : 
\"hzcoop02.china.nsn-net.net\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"3.6.19\",\n\t\t\"gitVersion\" : \"41b289ff734a926e784d6ab42c3129f59f40d5b4\"\n\t},\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1619509531, 16),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1619509531, 16),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"oHDIC9hSlWkXuaWwQQUWV0IdVc0=\"),\n\t\t\t\"keyId\" : NumberLong(\"6894478951775731714\")\n\t\t}\n\t}\n}\n", "text": "db.scci_detail.find({“cut_commitrepo”: “isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk”,“type”: “sysadapt”,“end”: {\"$gte\": 1616860800,\"$lte\": 1616947199}}).explain(‘allPlansExecution’)hi, sorry for the late response.hzcoop02:", "username": "111393" }, { "code": "db.scci_detail.getIndexes()\n[\n\t{\n\t\t\"v\" : 1,\n\t\t\"key\" : {\n\t\t\t\"_id\" : 1\n\t\t},\n\t\t\"name\" : \"_id_\",\n\t\t\"ns\" : \"pipeline.scci_detail\"\n\t},\n\t{\n\t\t\"v\" : 1,\n\t\t\"key\" : {\n\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\"type\" : 1,\n\t\t\t\"end\" : 1\n\t\t},\n\t\t\"name\" : \"cut_commitrepo_1_type_1_end_1\",\n\t\t\"ns\" : \"pipeline.scci_detail\",\n\t\t\"background\" : true\n\t},\n\t{\n\t\t\"v\" : 1,\n\t\t\"key\" : {\n\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\"type\" : 1,\n\t\t\t\"revision\" : 1,\n\t\t\t\"name\" : 1,\n\t\t\t\"patchset\" : 1\n\t\t},\n\t\t\"name\" : \"cut_commitrepo_1_type_1_revision_1_name_1_patchset_1\",\n\t\t\"ns\" : \"pipeline.scci_detail\",\n\t\t\"background\" : true\n\t},\n\t{\n\t\t\"v\" : 1,\n\t\t\"key\" : {\n\t\t\t\"date\" : 1,\n\t\t\t\"name\" : 1\n\t\t},\n\t\t\"name\" : \"date_1_name_1\",\n\t\t\"ns\" : \"pipeline.scci_detail\",\n\t\t\"background\" : true\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"start\" : 1,\n\t\t\t\"name\" : 1\n\t\t},\n\t\t\"name\" : \"start_1_name_1\",\n\t\t\"background\" : true,\n\t\t\"ns\" : \"pipeline.scci_detail\"\n\t}\n]\n", "text": "indexes:", "username": "111393" }, { "code": "db.scci_detail.find({\"cut_commitrepo\": \"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\",\"type\": \"sysadapt\", $and:[{\"end\":{\"$gte\": 1616860800}},{\"end\":{\"$lte\": 1616947199}}]}).explain('allPlansExecution')\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"pipeline.scci_detail\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"cut_commitrepo\" : {\n\t\t\t\t\t\t\"$eq\" : \"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"type\" : {\n\t\t\t\t\t\t\"$eq\" : \"sysadapt\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\"$lte\" : 1616947199\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"end\" : {\n\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\"end\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_end_1\",\n\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"cut_commitrepo\" : [ ],\n\t\t\t\t\t\"type\" : [ ],\n\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\"end\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"cut_commitrepo\" : 
[\n\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\"[-inf.0, 1616947199.0]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [\n\t\t\t{\n\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\t\"$lte\" : 1616947199\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\t\"revision\" : 1,\n\t\t\t\t\t\t\"name\" : 1,\n\t\t\t\t\t\t\"patchset\" : 1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_revision_1_name_1_patchset_1\",\n\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\"cut_commitrepo\" : [ ],\n\t\t\t\t\t\t\"type\" : [ ],\n\t\t\t\t\t\t\"revision\" : [ ],\n\t\t\t\t\t\t\"name\" : [ ],\n\t\t\t\t\t\t\"patchset\" : [ ]\n\t\t\t\t\t},\n\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"revision\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"name\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"patchset\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 0,\n\t\t\"executionTimeMillis\" : 177970,\n\t\t\"totalKeysExamined\" : 107760,\n\t\t\"totalDocsExamined\" : 107760,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"end\" : {\n\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : 0,\n\t\t\t\"executionTimeMillisEstimate\" : 84036,\n\t\t\t\"works\" : 107762,\n\t\t\t\"advanced\" : 0,\n\t\t\t\"needTime\" : 107760,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 8239,\n\t\t\t\"restoreState\" : 8239,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"invalidates\" : 0,\n\t\t\t\"docsExamined\" : 107760,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"nReturned\" : 107760,\n\t\t\t\t\"executionTimeMillisEstimate\" : 360,\n\t\t\t\t\"works\" : 107761,\n\t\t\t\t\"advanced\" : 107760,\n\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 8239,\n\t\t\t\t\"restoreState\" : 8239,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\"end\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_end_1\",\n\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"cut_commitrepo\" : [ ],\n\t\t\t\t\t\"type\" : [ ],\n\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\"end\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : 
false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\"[-inf.0, 1616947199.0]\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 107760,\n\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\"dupsTested\" : 107760,\n\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t}\n\t\t},\n\t\t\"allPlansExecution\" : [\n\t\t\t{\n\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\"executionTimeMillisEstimate\" : 92136,\n\t\t\t\t\"totalKeysExamined\" : 107761,\n\t\t\t\t\"totalDocsExamined\" : 107761,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\t\t\"$lte\" : 1616947199\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 92136,\n\t\t\t\t\t\"works\" : 107761,\n\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\"needTime\" : 107761,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 8238,\n\t\t\t\t\t\"restoreState\" : 8238,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\"docsExamined\" : 107761,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"nReturned\" : 107761,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 262,\n\t\t\t\t\t\t\"works\" : 107761,\n\t\t\t\t\t\t\"advanced\" : 107761,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 8238,\n\t\t\t\t\t\t\"restoreState\" : 8238,\n\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\t\t\"revision\" : 1,\n\t\t\t\t\t\t\t\"name\" : 1,\n\t\t\t\t\t\t\t\"patchset\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_revision_1_name_1_patchset_1\",\n\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : [ ],\n\t\t\t\t\t\t\t\"type\" : [ ],\n\t\t\t\t\t\t\t\"revision\" : [ ],\n\t\t\t\t\t\t\t\"name\" : [ ],\n\t\t\t\t\t\t\t\"patchset\" : [ ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"revision\" : [\n\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"name\" : [\n\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"patchset\" : [\n\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 107761,\n\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\"dupsTested\" : 
0,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\"executionTimeMillisEstimate\" : 84036,\n\t\t\t\t\"totalKeysExamined\" : 107760,\n\t\t\t\t\"totalDocsExamined\" : 107760,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"end\" : {\n\t\t\t\t\t\t\t\"$gte\" : 1616860800\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 84036,\n\t\t\t\t\t\"works\" : 107761,\n\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\"needTime\" : 107760,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 8238,\n\t\t\t\t\t\"restoreState\" : 8238,\n\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\"docsExamined\" : 107760,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"nReturned\" : 107760,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 360,\n\t\t\t\t\t\t\"works\" : 107761,\n\t\t\t\t\t\t\"advanced\" : 107760,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 8238,\n\t\t\t\t\t\t\"restoreState\" : 8238,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : 1,\n\t\t\t\t\t\t\t\"type\" : 1,\n\t\t\t\t\t\t\t\"end\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"cut_commitrepo_1_type_1_end_1\",\n\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : [ ],\n\t\t\t\t\t\t\t\"type\" : [ ],\n\t\t\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\t\t\"end\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"cut_commitrepo\" : [\n\t\t\t\t\t\t\t\t\"[\\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\", \\\"isource/svnroot/BTS_SC_SYSADAPT_LTE/trunk\\\"]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"type\" : [\n\t\t\t\t\t\t\t\t\"[\\\"sysadapt\\\", \\\"sysadapt\\\"]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"end\" : [\n\t\t\t\t\t\t\t\t\"[-inf.0, 1616947199.0]\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 107760,\n\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\"dupsTested\" : 107760,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"hzcoop03.china.nsn-net.net\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"3.6.19\",\n\t\t\"gitVersion\" : \"41b289ff734a926e784d6ab42c3129f59f40d5b4\"\n\t},\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1619509726, 11),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1619509726, 11),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"XOM8Upk05LOfCqav7IlEMcoCDPQ=\"),\n\t\t\t\"keyId\" : NumberLong(\"6894478951775731714\")\n\t\t}\n\t}\n}\n", "text": "hzcoop03:", "username": "111393" }, { "code": " “indexName” : “cut_commitrepo_1_type_1_end_1”,\n“isMultiKey” : false,\n“indexName” : “cut_commitrepo_1_type_1_end_1”,\n“isMultiKey” : true,\n", "text": "Hi @111393,I noticed that in node 2 the index choosen is considered as not multikey:While on node 3 it is:This is most probably a metadata bug which has effected this node in previous versions.I suggest to either rebuild this index on node 3 or resync the node as there might be additional indexes 
impacted.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "interesting findings.\ni will do as you advise.", "username": "111393" } ]
Query hit index is quite slow on one replica set node
2021-04-01T09:30:19.366Z
Query hit index is quite slow on one replica set node
2,348
null
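A minimal sketch of the index rebuild suggested above, with the name and keys taken from the getIndexes output in the thread. Dropping and recreating this way replays on every replica-set member; if only the one node's multikey metadata is wrong, an initial resync of that node is the alternative:

    // Rebuild the index whose multikey flag disagrees between the two secondaries.
    db.scci_detail.dropIndex("cut_commitrepo_1_type_1_end_1")
    db.scci_detail.createIndex(
      { cut_commitrepo: 1, type: 1, end: 1 },
      { background: true }
    )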
[ "dot-net" ]
[ { "code": "System.TimeoutException\nA timeout occurred after 30000 ms selecting a server using CompositeServerSelector {\n Selectors = MongoDB.Driver.MongoClient + AreSessionsSupportedServerSelector, LatencyLimitingServerSelector {\n AllowedLatencyRange = 00: 00: 00.0150000\n }\n}.\nClient view of cluster state is {\n ClusterId: \"1\",\n ConnectionMode: \"ReplicaSet\",\n Type: \"ReplicaSet\",\n State: \"Disconnected\",\n Servers: [{\n ServerId: \"{ ClusterId : 1, EndPoint : \"\n 127.0 .0 .1: 27001 \" }\",\n EndPoint: \"127.0.0.1:27001\",\n ReasonChanged: \"Heartbeat\",\n State: \"Disconnected\",\n ServerVersion: ,\n TopologyVersion: ,\n Type: \"Unknown\",\n HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.Net.Sockets.SocketException: \n No connection could be made because the target machine actively refused it 127.0.0.1:27001 at ...\n", "text": "Hi all, wondering if anybody can help diagnose an issue connecting to a replica set using the .NET driver.Environment\n1 x Ubuntu Server running MongoDB (3.6.2) replica set on ports 27001/27002/27003.\n1 x Windows Server running app using MongoDB dotnet driver (2.12.2)Both are dedicated servers running on a private networkIssue\nThe windows app can connect directly to the primary (27002) using the following connection string, this works fine and allows reads and writes without problem.mongodb://username:password@ubuntu_host:27002/db_to_connect_toI tried the following connection string combinations to connect to the replica set:mongodb://username:password@ubuntu_host:27001,ubuntu_host:27002,ubuntu_host:27003/db_to_connect_to?replicaSet=replica_set_namemongodb://username:password@ubuntu_host:27001,ubuntu_host:27002,ubuntu_host:27003/?authSource=db_to_connect_to&replicaSet=replica_set_nameAlso tried using the IP address for the Ubuntu host instead of the domain name, but get the same error.ErrorThe no connection error makes sense as it is trying to connect to 127.0.0.1 and on the Windows server this would refuse the connection, but why is the driver telling it to use the locahost instead of the host provided in the connection string.Any thoughts?Thanks,\nJames", "username": "jpd" }, { "code": "", "text": "Thank you for your question James,Could you pleaseThanks", "username": "Boris_Dogadov" }, { "code": " \"members\" : [\n \t\t{\n \t\t\t\"_id\" : 0,\n \t\t\t\"host\" : \"127.0.0.1:27001\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 1,\n \t\t\t\"tags\" : {\n \t\t\t\t\n \t\t\t},\n \t\t\t\"slaveDelay\" : 0,\n \t\t\t\"votes\" : 1\n \t\t},\n \t\t{\n \t\t\t\"_id\" : 1,\n \t\t\t\"host\" : \"127.0.0.1:27002\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 1,\n \t\t\t\"tags\" : {\n \t\t\t\t\n \t\t\t},\n \t\t\t\"slaveDelay\" : 0,\n \t\t\t\"votes\" : 1\n \t\t},\n \t\t{\n \t\t\t\"_id\" : 2,\n \t\t\t\"host\" : \"127.0.0.1:27003\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 1,\n \t\t\t\"tags\" : {\n \t\t\t\t\n \t\t\t},\n \t\t\t\"slaveDelay\" : 0,\n \t\t\t\"votes\" : 1\n \t\t}\n \t],\n", "text": "Hi Boris,Thanks so much for pointing me in the right direction. 
The replica set members have localahost entries…Can I just check my understanding, so the driver uses the connection string to establish a connection based on any available host in the connection string, then once connected the server reports back the replica configuration that is used to establish the link?I can’t change the replica set configuration at the moment as it will need to be scheduled during down time but presumably by updating the config following this tutorial https://docs.mongodb.com/manual/tutorial/change-hostnames-in-a-replica-set/\nit should then fix the issue.Replica set configuration outputThanks again,\nJames", "username": "jpd" }, { "code": "", "text": "Hi James,\nYes, driver uses the reported configuration for further connection establishments and updates. Changing the host addresses to non-local ones should resolve the issue.Thanks!", "username": "Boris_Dogadov" }, { "code": "", "text": "Hi Borris, thanks for your help and explaining how it works.", "username": "jpd" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Dotnet driver - can't connect to replica set
2021-04-26T16:37:05.722Z
Dotnet driver - can’t connect to replica set
4,486
null
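A mongosh sketch of the hostname change referenced in the thread above (member addresses taken from that thread; the linked tutorial covers the full, safe procedure):

    // Run against the primary: replace the 127.0.0.1 member addresses with
    // ones the application server can resolve and reach.
    cfg = rs.conf()
    cfg.members[0].host = "ubuntu_host:27001"
    cfg.members[1].host = "ubuntu_host:27002"
    cfg.members[2].host = "ubuntu_host:27003"
    rs.reconfig(cfg)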
[]
[ { "code": "", "text": "This question relates to my other question about saving logs: How can I save realm server logs into a collection for better queryability?Some bad code went out to production and we didn’t identify that there a problem for several days. How can I sign myself up to get notified when there are errors in realm (particularly around realm function calls)?", "username": "Eve_Ragins" }, { "code": "", "text": "Hi Eve,Currently the only alerts that you get notified about in Realm is for Sync/Trigger failures.\nTo allow for wider alert monitoring, our team are planning to have other events propagate into the Atlas activity feed as a future improvement.This feature request has also been posted in our feedback portal here, please feel free to vote on this request to get updates if anything changes.As a workaround you could add a try/catch block in your function that posts the error to a third party monitoring tool.Regards\nManny", "username": "Mansoor_Omar" } ]
How can I set up alerts for when there's an Error in Realm?
2021-04-26T19:17:30.836Z
How can I set up alerts for when there’s an Error in Realm?
1,773
null
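A rough sketch of the try/catch workaround suggested in the thread above; the webhook URL and doWork() are placeholders, and context.http is the HTTP service built into Realm functions:

    exports = async function(arg) {
      try {
        return await doWork(arg);            // placeholder for the existing function body
      } catch (err) {
        // Forward the failure to an external monitoring/alerting endpoint.
        await context.http.post({
          url: "https://example.com/alert-webhook",   // placeholder URL
          body: { source: "myRealmFunction", message: err.message },
          encodeBodyAsJSON: true
        });
        throw err;                           // rethrow so the error still appears in the Realm logs
      }
    };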
[ "swift" ]
[ { "code": "public class Parent:Object {\n let childList = List<Child>()\n}\n\npublic class Child:EmbeddedObject { //This used to be a regular object\n let parents:LinkingObjects<Parent> = LinkingObjects(fromType: Parent.self, property: \"childList\")\n}\nmigration.enumerateObjects(ofType: Child.className()) { (oldObject, newObject) in\n \n //How can I now that an old child object does not have a parent. \n // oldObject?[\"parents\"] doesn't seem to exist\n}\n", "text": "I am trying to change a regular object to embedded. The requirements for to change the embedded ness is is that each embedded object can have one and exactly one object that is linked to it.I know for sure that I have orphaned objects (zero parents) and want to make sure these gets deleted. If I fail to do this, the app will crash.The question is. How do I find these orphaned children so I can delete them?Class structure:Some children does not have parents. Migration function.Any suggestions?", "username": "Simon_Persson" }, { "code": "", "text": "After reading this https://github.com/realm/realm-cocoa/issues/7145#issuecomment-827029259, I am convinced it is better to avoid this migration altogether and moving data to a new property instead.", "username": "Simon_Persson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there a way to access linking objects during migration? (Embedded objects)
2021-04-26T17:18:55.515Z
Is there a way to access linking objects during migration? (Embedded objects)
2,315
null
[]
[ { "code": "", "text": "Hi there everyone. I am currently trying to deploy my database onto AWS elastic beanstalk for global use with my website but currently failing and was wondering has anyone ever worked with integrating their MongoDB with AWS service to use for a website. Thanks again", "username": "Jack_Haugh" }, { "code": "", "text": "Hi Jack,To confirm you mean you’re deploying an application AWS Elastic Beanstalk that connects to MongoDB Atlas on AWS right?Elastic Beanstalk is for the application tier: MongoDB Atlas is for the database.Cheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hello @Andrew_Davidson. Right now our website is deployed onto AWS Amplify by syncing it with our github repo. We were told that looking into Using AWS Elastic Beanstalk would help try to deploy our database to use globally but still finding problems with it and stuck with how and where to go now.", "username": "Jack_Haugh" }, { "code": "", "text": "Hi Jack,Your application EB: it should have a connection string to Atlas. You may have an IP Access List issue as the EB nodes may have non-deterministic public IPs. Have you tried adding all IPs (0.0.0.0/0) to the Atlas IP Access List?-Andrew", "username": "Andrew_Davidson" } ]
MongoDB database with AWS Elastic Beanstalk
2021-04-22T18:01:47.073Z
MongoDB database with AWS Elastic Beanstalk
3,491
null
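A minimal Node.js sketch of the application-tier connection discussed in the thread above. MONGODB_URI is an assumed Elastic Beanstalk environment property holding the Atlas connection string, the database and collection names are placeholders, and the EB instances' IPs (or 0.0.0.0/0) still have to be on the Atlas IP access list:

    const { MongoClient } = require('mongodb');

    // e.g. mongodb+srv://user:pass@cluster0.xxxxx.mongodb.net, supplied via EB configuration
    const client = new MongoClient(process.env.MONGODB_URI);

    async function main() {
      await client.connect();
      const docs = await client.db('mydb').collection('items').find().limit(5).toArray();
      console.log(docs);
      await client.close();
    }

    main().catch(console.error);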
[]
[ { "code": "", "text": "Hello,\nThe first issue that I encountered deploying mongod on a local machine were permissions to files (code=exited, status=100 - no permissions) As I see in lab filesystem (Vagrant?) mongod.log and config files have permissions set to 755 (rxw, rx, rx), on a local linux machine these by default are 644, which of those are advised for default paths (/var/log/ and /etc/, also for db in /var/lib/) and which for custom (e.g. fast deploying at home/user path just for practice)?The second one is deploying another service of mongod - with systemd. I could copy-paste to custom unit - named mongod1.service, mongod2.service or mongod3.service at /lib/systemd/system/ (according to discussion at github) and use separate mongod.conf files, therefore separate logs and dbs, for learning purposes. The main question is - should those mongod units (in systemd) run on a binary code of the same service (default: /usr/bin/mongod) or should they be deployed on separate located binaries? How does it look in virtual environments? Is the binary shared?As a summary:Any tips are welcome! ", "username": "Pawel_Kuklinski" }, { "code": " ExecStart=/usr/bin/mongod --config /etc/mongod.conf ExecStart=/usr/bin/mongod --config /etc/mongod1.conf ExecStart=/usr/bin/mongod --config /etc/mongod2.confmongodb chown mongodb:mongodb mongodb1chmod 700 mongodb1chmod 700 mongodb2systemctl start mongodsystemctl start mongod1systemctl start mongod2mongodmongod", "text": "seems solved:By default, MongoDB runs using the mongodb user account. If you change the user that runs the MongoDB process, you must also modify the permission to the data and log directories to give this user access to these directories (mongodb documentation)It is also possible without changing owner, e.g. by giving all of the permissions to others (which opens also permissions to all of users from /etc/group “however entities should not be multiplied without necessity”)and started all of the instances accordingly:\nsystemctl start mongod\nsystemctl start mongod1\nsystemctl start mongod2checked all with systemctl status mongod (1,2)ps -ef to see whether they’re present in system process list.as for binaries found some tips in mongodb manualFor production deployments, you should maintain as much separation between members as possible by hosting the mongod instances on separate machines. When using virtual machines for production deployments, you should place each mongod instance on a separate host server serviced by redundant power circuits and redundant network paths.Thank you for your patience,\nRegards.", "username": "Pawel_Kuklinski" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trouble Deploying Mongod on a Local Machine
2021-04-23T14:10:07.608Z
Trouble Deploying Mongod on a Local Machine
3,888
null
[ "atlas-functions", "security" ]
[ { "code": "", "text": "Hey folks,I’m exposing a Realm to the internet with the GraphQL API and was wondering how folks here rate-limit/throttle, or otherwise manage incoming requests to discourage abuse.What I’m looking for here is the ability to throttle requests like AWS’s API Gateway does: Throttle API requests for better throughput - Amazon API Gateway.Are there any easy solutions here? Do I have to roll something myself? Should I put something in front of the API, or is it more worth my time trying to protect my work-intensive custom resolvers at the function level?", "username": "randytarampi" }, { "code": "", "text": "Also interested in this and couldn’t find much about it. There is a proposal here . I hope it will be picked up.", "username": "A_B4" } ]
How do folks throttle/rate-limit requests to their Realm GraphQL APIs?
2021-03-11T10:19:02.894Z
How do folks throttle/rate-limit requests to their Realm GraphQL APIs?
3,430
null
[ "crud" ]
[ { "code": "// players\n [ \n {\"_id\": 12592,\"TotalPoints\": 52},\n {\"_id\": 12752,\"TotalPoints\": 9},\n {\"_id\": 12605,\"TotalPoints\": 2},\n {\"_id\": 12604,\"TotalPoints\": -7},\n {\"_id\": 12770,\"TotalPoints\": 8},\n {\"_id\": 12596,\"TotalPoints\": 0},\n {\"_id\": 12764,\"TotalPoints\": 2},\n {\"_id\": 12606,\"TotalPoints\": 2},\n {\"_id\": 12755,\"TotalPoints\": 2},\n {\"_id\": 12600,\"TotalPoints\": 2},\n {\"_id\": 12599,\"TotalPoints\": 42},\n {\"_id\": 12591,\"TotalPoints\": 81},\n {\"_id\": 12756,\"TotalPoints\": 60},\n {\"_id\": 12769,\"TotalPoints\": -2},\n {\"_id\": 12610,\"TotalPoints\": 2}\n ]\n// user_teams\n [\n {\"_id\": 12943,\n \"Players\": [ {\"PlayerID\": 12596,\"PlayerPosition\": \"Player\",\"Points\": 0},\n {\"PlayerID\": 12604,\"PlayerPosition\": \"Player\",\"Points\": -7},\n {\"PlayerID\": 12605,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12606,\"PlayerPosition\": \"ViceCaptain\",\"Points\": 3},\n {\"PlayerID\": 12608,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12752,\"PlayerPosition\": \"Captain\",\"Points\": 18},\n {\"PlayerID\": 12755,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12756,\"PlayerPosition\": \"Player\",\"Points\": 60},\n {\"PlayerID\": 12757,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12759,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12761,\"PlayerPosition\": \"Player\",\"Points\": 97} ]},\n\n {\"_id\": 12944,\n \"Players\": [ {\"PlayerID\": 12592,\"PlayerPosition\": \"Captain\",\"Points\": 104},\n {\"PlayerID\": 12596,\"PlayerPosition\": \"Player\",\"Points\": 0},\n {\"PlayerID\": 12600,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12606,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12608,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12752,\"PlayerPosition\": \"Player\",\"Points\": 9},\n {\"PlayerID\": 12753,\"PlayerPosition\": \"Player\",\"Points\": 14},\n {\"PlayerID\": 12755,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12757,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12759,\"PlayerPosition\": \"ViceCaptain\",\"Points\": 3},\n {\"PlayerID\": 12764,\"PlayerPosition\": \"Player\",\"Points\": 2} ]},\n\n {\"_id\": 12945,\n \"Players\": [ {\"PlayerID\": 12591,\"PlayerPosition\": \"Player\",\"Points\": 81},\n {\"PlayerID\": 12599,\"PlayerPosition\": \"Player\",\"Points\": 42},\n {\"PlayerID\": 12605,\"PlayerPosition\": \"ViceCaptain\",\"Points\": 3},\n {\"PlayerID\": 12610,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12753,\"PlayerPosition\": \"Captain\",\"Points\": 28},\n {\"PlayerID\": 12755,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12756,\"PlayerPosition\": \"Player\",\"Points\": 60},\n {\"PlayerID\": 12757,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12759,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12760,\"PlayerPosition\": \"Player\",\"Points\": 0},\n {\"PlayerID\": 12770,\"PlayerPosition\": \"Player\",\"Points\": 8} ]},\n\n {\"_id\": 12946,\n \"Players\": [ {\"PlayerID\": 12591,\"PlayerPosition\": \"Player\",\"Points\": 81},\n {\"PlayerID\": 12599,\"PlayerPosition\": \"Player\",\"Points\": 42},\n {\"PlayerID\": 12605,\"PlayerPosition\": \"ViceCaptain\",\"Points\": 3},\n {\"PlayerID\": 12610,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12753,\"PlayerPosition\": \"Captain\",\"Points\": 28},\n {\"PlayerID\": 12755,\"PlayerPosition\": 
\"Player\",\"Points\": 2},\n {\"PlayerID\": 12756,\"PlayerPosition\": \"Player\",\"Points\": 60},\n {\"PlayerID\": 12757,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12759,\"PlayerPosition\": \"Player\",\"Points\": 2},\n {\"PlayerID\": 12760,\"PlayerPosition\": \"Player\",\"Points\": 0},\n {\"PlayerID\": 12770,\"PlayerPosition\": \"Player\",\"Points\": 8} ]}\n ]\n $UserTeams = $this->db->{'user_teams'};\n $Players = $this->db->{'players'};\n $PlayersData = $Players->find();\n \n $ViceCaptainMultiplier = 1.5;\n $CaptainMultiplier = 2;\n\n foreach ($PlayersData as $Player) {\n $TotalPoints = $Player['TotalPoints'];\n $PlayerID = $Player['_id'];\n \n $UserTeams->updateMany(\n [ 'Players.PlayerID' => $PlayerID ],\n [ '$set' => [ 'Players.$[c].Points' => $TotalPoints * $CaptainMultiplier,\n 'Players.$[vc].Points' => $TotalPoints * $ViceCaptainMultiplier,\n 'Players.$[p].Points' => $TotalPoints\n ]\n ],\n [ 'arrayFilters' => [ [ \"c.PlayerPosition\" => \"Captain\", \"c.PlayerID\" => $PlayerID ],\n [ \"vc.PlayerPosition\" => \"ViceCaptain\", \"vc.PlayerID\" => $PlayerID ],\n [ \"p.PlayerPosition\" => \"Player\", \"p.PlayerID\" => $PlayerID ]\n ]\n ]\n );\n };\n", "text": "Hi,We have two collections, one is players and other is user_teams. We are updating user_teams based on points in players collection. Below is our schema structure.Right now we are updating points by fetching all players and update user_teams player’s points one by one using loop.Now I want to know that, Is this correct way to do this? Or can I do this by any other way like using aggregation. Any possibility to do this without loop? Because number of documents can be in millions.Thanks.", "username": "Dharmesh_Prajapati" }, { "code": "user_teamsplayers", "text": "Hello @Dharmesh_Prajapati, welcome to the MongoDB Community forum!The usage of for-loop to fetch each player and update corresponding user_teams document(s) is okay. Only, the update statement may not work (have you tried your code with some sample data?) as you are intending it to. The arrayFilters condition is using the implicit and (I think it needs to be an $or operator).Or can I do this by any other way like using aggregation.Yes, you can do this with a Updates with Aggregation Pipeline - but, you cannot avoid the initial fetch from the players collection and the usage of the for-loop.To update a large number of documents efficiently, use Bulk Writes.", "username": "Prasad_Saya" }, { "code": "user_teams", "text": "Hi @Prasad_SayaThank you very much for your reply.The usage of for-loop to fetch each player and update corresponding user_teams document(s) is okay.To update user_teams collection from the player, Is it possible to use $merge operator of aggregation?Only, the update statement may not work (have you tried your code with some sample data?) as you are intending it to. The arrayFilters condition is using the implicit.Yes, everything is working fine. The only concern is to avoid the use of for-loop.To update a large number of documents efficiently, use Bulk Writes.Sure, I will try bulk writer.", "username": "Dharmesh_Prajapati" }, { "code": "", "text": "Is it possible to use $merge operator of aggregation?I don’t know if it can be used in your case; see the way $merge Aggregation Pipeline Stage works.", "username": "Prasad_Saya" } ]
Want suggestion for update documents
2021-04-26T06:33:22.037Z
Want suggestion for update documents
1,759
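Prasad's suggestion above (keep the per-player loop, but send the updates as Bulk Writes) can be sketched as follows. This is a hypothetical illustration in Python/pymongo rather than the thread's PHP; the connection string, database name and batch size are assumptions, while the collection and field names follow the schema shown in the thread.

```python
# Hypothetical pymongo sketch of batching the per-player updateMany calls with bulk_write.
from pymongo import MongoClient, UpdateMany

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["mydb"]                                # assumed database name

CAPTAIN_MULTIPLIER = 2
VICE_CAPTAIN_MULTIPLIER = 1.5

ops = []
for player in db.players.find({}, {"TotalPoints": 1}):
    pid, points = player["_id"], player["TotalPoints"]
    ops.append(UpdateMany(
        {"Players.PlayerID": pid},
        {"$set": {
            "Players.$[c].Points": points * CAPTAIN_MULTIPLIER,
            "Players.$[vc].Points": points * VICE_CAPTAIN_MULTIPLIER,
            "Players.$[p].Points": points,
        }},
        array_filters=[
            {"c.PlayerPosition": "Captain", "c.PlayerID": pid},
            {"vc.PlayerPosition": "ViceCaptain", "vc.PlayerID": pid},
            {"p.PlayerPosition": "Player", "p.PlayerID": pid},
        ],
    ))
    if len(ops) == 1000:                 # flush in batches to keep memory bounded
        db.user_teams.bulk_write(ops, ordered=False)
        ops = []

if ops:
    db.user_teams.bulk_write(ops, ordered=False)
```

Because each player's update is independent, ordered=False lets the server process the batch without stopping on the first failure.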
null
[ "aggregation", "queries", "python" ]
[ { "code": "mongodbselect distinct onSqlSELECT DISTINCT ON (department) * FROM employees\nORDER BY department, salary DESC;\nquerymongodbmongodbpymongo", "text": "Hello guys.I am wondering if mongodb have an Equivalent function as select distinct on like sql.\nFor example if i have this query in Sql:How can i do the same query in mongodb?\nFor mongodb i use python with pymongo.", "username": "harris" }, { "code": "collection.distinct()$group_id$groupdistinct()db.collection.distinct('field_x')field_xdb.collection.aggregate([ {$group: {_id: '$field_x'}} ])field_x_id", "text": "Hi @harrisThere are a couple of ways:Use collection.distinct(). See the manual for examples, and also the Pymongo manual on how to call this using Pymongo.Use the aggregation $group stage. This one is slightly more complex but is more flexible. See the manual for examples, and here’s the corresponding Pymongo page for the method. For this method, put the field you want the distinct values of as the _id part of the $group stage.Some quick examples:Regarding aggregation, the MongoDB University course M121: The MongoDB Aggregation Framework might be of interest.Best regards,\nKevin", "username": "kevinadi" }, { "code": "querysqlselect distinct on (id13) id13, timestamp1\n from oneindextwocolumnsfalse3years \nwhere timestamp1>='2010-01-01 00:00:00' and timestamp1<='2015-01-01 00:55:00' \norder by id13,timestamp1 desc\nmydb1.mongodbindextimestamp1.aggregate([\n\n {\n \"$match\": {\n \"timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:00:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2015-01-01 00:55:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n\n{\n \"$group\": {\n \"_id\":{\n \"id_13\":\"$id13\"\n },\n\n }\n},\n {\n \"$project\": {\n \"_id\": 0,\n \"id13\":1,\n \"timestamp1\":1\n }\n },\n {\"$sort\": {\"id13\": 1,\"timestamp1\":-1}}\n\n])\n", "text": "Hello!Thank you for you help!This is what i have tried:\nIf the original query in sql is this:This is what i triedBut it doesn’t seems to work.Do you have any suggestion?", "username": "harris" }, { "code": "", "text": "Hi @harrisDo you have an example document? 
Also, what’s the expected output?Best regards\nKevin", "username": "kevinadi" }, { "code": "{\n\t\"_id\" : ObjectId(\"605f104bdc49e72201af5a47\"),\n\t\"id1\" : 3758,\n\t\"id6\" : 2,\n\t\"id7\" : -79.09,\n\t\"id8\" : 35.97,\n\t\"id9\" : 5.5,\n\t\"id10\" : 0,\n\t\"id11\" : -99999,\n\t\"id12\" : 0,\n\t\"id13\" : -9999,\n\t\"c14\" : \"U\",\n\t\"id15\" : 0,\n\t\"id16\" : 99,\n\t\"id17\" : 0,\n\t\"id18\" : -99,\n\t\"id19\" : -9999,\n\t\"id20\" : 1197,\n\t\"id21\" : 0,\n\t\"id22\" : -99,\n\t\"id23\" : 0,\n\t\"timestamp1\" : ISODate(\"2010-01-01T01:35:00Z\"),\n\t\"timestamp2\" : ISODate(\"2009-12-31T20:35:00Z\")\n}\n{\n\t\"_id\" : ObjectId(\"605f104bdc49e72201af5a48\"),\n\t\"id1\" : 3758,\n\t\"id6\" : 2,\n\t\"id7\" : -79.09,\n\t\"id8\" : 35.97,\n\t\"id9\" : 5.5,\n\t\"id10\" : 0,\n\t\"id11\" : -99999,\n\t\"id12\" : 0,\n\t\"id13\" : -9999,\n\t\"c14\" : \"U\",\n\t\"id15\" : 0,\n\t\"id16\" : 99,\n\t\"id17\" : 0,\n\t\"id18\" : -99,\n\t\"id19\" : -9999,\n\t\"id20\" : 1198,\n\t\"id21\" : 0,\n\t\"id22\" : -99,\n\t\"id23\" : 0,\n\t\"timestamp1\" : ISODate(\"2010-01-01T01:40:00Z\"),\n\t\"timestamp2\" : ISODate(\"2009-12-31T20:40:00Z\")\n}\n{}\n{}\n{}\n.\n.\n.\n{}\n", "text": "Yes i have.My documents looks like this:But with my code the output is this:", "username": "harris" }, { "code": "id13timestamp[\n {\n \"$match\": {\n \"timestamp1\": {\n \"$gte\": datetime.strptime(\"2010-01-01 00:00:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2015-01-01 00:55:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n {\n \"$group\": {\n \"_id\": \"$id13\"\n },\n },\n]\n[{'_id': -9999.0}]\n_id$projecttimestamp$match", "text": "Hi @harrisI can get the unique values from id13 as per your example, but I’m not sure what you want to do timestamp field there:That pipeline outputs:Since aggregation is a pipeline, stages down the pipeline can only access what was made available to them from the previous stages up the pipeline. Thus if you group on _id and don’t specify other fields, the following $project stage won’t have access to the timestamp field anymore.So from your example, I’m unclear on what the timestamp field should contain. Is it the max timestamp? min timestamp? or should it return an array of timestamps that was matched by the $match stage?Best regards,\nKevin", "username": "kevinadi" } ]
Equivalent of "select distinct on" in mongodb
2021-04-21T15:01:28.871Z
Equivalent of &ldquo;select distinct on&rdquo; in mongodb
11,304
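One way to emulate SQL's SELECT DISTINCT ON, which the replies above hint at but do not spell out, is to sort and then take $first per group. The sketch below is a hypothetical pymongo example using the database, collection and field names from the thread; it assumes the desired output is the latest timestamp1 per distinct id13, which is what the SQL's ORDER BY id13, timestamp1 DESC implies.

```python
# Hypothetical pymongo sketch: emulate DISTINCT ON (id13) ... ORDER BY id13, timestamp1 DESC.
from datetime import datetime
from pymongo import MongoClient

coll = MongoClient()["mydb1"]["mongodbindextimestamp1"]

pipeline = [
    {"$match": {"timestamp1": {
        "$gte": datetime(2010, 1, 1, 0, 0, 0),
        "$lte": datetime(2015, 1, 1, 0, 55, 0)}}},
    # Sort so the newest timestamp1 comes first within each id13 group...
    {"$sort": {"id13": 1, "timestamp1": -1}},
    # ...then $first keeps that newest row per distinct id13.
    {"$group": {"_id": "$id13", "timestamp1": {"$first": "$timestamp1"}}},
    {"$project": {"_id": 0, "id13": "$_id", "timestamp1": 1}},
    {"$sort": {"id13": 1}},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```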
null
[]
[ { "code": "", "text": "I would like to display the union of two collections on one Chart, is this possible? I have a working aggregation using $unionWith, but it seems that this isn’t allowed in Charts, is there another way?In the worst case I could create a separate collection and update it whenever the page holding the chart is opened, but I would like a more elegant solution.Thanks", "username": "David_Gregory" }, { "code": "$unionWith$unionWith$lookup$unionWith", "text": "Hi @David_Gregory -As you noticed, we don’t currently support $unionWith directly in Charts. We need to explicitly test and enable new agg operators to ensure they are compatible with the Charts permissions model - in the case of $unionWith (similar to $lookup) we don’t want this to be a “back door” to gaining access to data that is not supposed to be shared, e.g. via an embedded chart.That said, this is a valid scenario and I’ll see what we can do to unlock it. In the meantime, you can get around this by creating a view in the shell that uses the $unionWith and then adding a data source that points to that view.Tom", "username": "tomhollander" }, { "code": "", "text": "Thanks Tom,that’s neater than my fallback solution.David", "username": "David_Gregory" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Charts union of two collections on one Chart
2021-04-21T13:25:05.506Z
Charts union of two collections on one Chart
2,710
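Tom's workaround above, creating a view that performs the $unionWith and pointing a Charts data source at it, could look roughly like this. This is a hypothetical pymongo sketch; the database, collection and view names are made up, and $unionWith requires MongoDB 4.4 or later.

```python
# Hypothetical sketch: define a server-side view that unions two collections,
# then add the view as a Charts data source. All names are illustrative.
from pymongo import MongoClient

db = MongoClient()["media"]          # assumed database

db.command({
    "create": "books_and_movies",    # the view Charts will read from
    "viewOn": "books",               # first collection
    "pipeline": [
        {"$unionWith": {"coll": "movies"}}   # append documents from the second collection
    ],
})

# The view behaves like a read-only collection:
for doc in db["books_and_movies"].find().limit(5):
    print(doc)
```

Unlike copying the data into a separate collection, the view needs no refresh step: it reflects both source collections every time the chart queries it.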
null
[ "queries", "graphql", "react-js" ]
[ { "code": "Error: GraphQL error: No matching document found for id \"6068812264169f0c98cc9e9b\" version 97 modifiedPaths \"comments, comments.0, comments.0.likes\"\nthis my code\n\nresoler.js\n..........................\nasync likeReply(_, { postId, commentId, replyId }, context) {\n const { username } = check_auth(context);\n const post = await Post.findById(postId);\n if (post) {\n const comment = post.comments.find((c) => c.id === commentId);\n if (comment) {\n const reply = comment.replies.find((r) => r.id === replyId);\n if (reply) {\n if (!!reply.likes.find((like) => like.username === username)) {\n reply.likes = reply.likes.filter(\n (like) => like.username !== username\n );\n } else {\n reply.likes.unshift({\n username,\n createdAt: new Date().toISOString(),\n });\n }\n await post.save();\n return post;\n } else throw new UserInputError(\"Reply not fund\");\n } else throw new UserInputError(\"Comment not fund\");\n } else throw new UserInputError(\"Post not fund\");\n\n },\n", "text": "im working with the latest version\ni’m using apollo server #graphql , react js", "username": "Dimer_Bwimba_Mihanda" }, { "code": "", "text": "Hi Dimer!\nThanks so much for your question. I don’t think this is a GraphQL issue though. Do you use Mongoose? I believe findById function requires that postId be of type ObjectId. If this isn’t the case, if postId is a string, try using a simple find function.Hope this helps.Karen", "username": "Karen_Huaulme" }, { "code": "", "text": "Finally someone respond to me!!, thank you very very much for your response .\nYEs i use Mongoose, yes the postId is a type objectID. This happen when a user like a comment real quick , like why the await can’t wait for the first request to finish ?", "username": "Dimer_Bwimba_Mihanda" }, { "code": "", "text": "Hi Dimer, so I want to make sure I understand your issue. Can you simplify the function and use findById (posted) successfully without using GraphQL. It returns a document.Also, you mention this happens when a user likes, and your path is comments.0.replies.0.likes. It looks to be an async/await timing issue. What exactly happens in your application? Can you try some conditional rendering to make sure the “likes” exists/is defined before rendering?Karen", "username": "Karen_Huaulme" } ]
GraphQL error: No matching document found
2021-04-04T08:56:58.687Z
GraphQL error: No matching document found
5,418
null
[ "crud", "swift" ]
[ { "code": "/// Represents a screen where you can edit the item's name.\nstruct ItemDetailsView: View {\n @ObservedRealmObject var item: Item\n var body: some View {\n VStack(alignment: .leading) {\n Text(\"Enter a new name:\")\n // Accept a new name\n TextField(\"New name\", text: $item.name)\n .navigationBarTitle(item.name)\n .navigationBarItems(trailing: Toggle(isOn: $item.isFavorite) {\n Image(systemName: item.isFavorite ? \"heart.fill\" : \"heart\")\n })\n }.padding()\n }\n}\n", "text": "I am trying to perform updates to a realm object in a SwiftUI application but get an error when performing the update.What special magic is required to be able to perform and update to a realm object in response to a Gesture. In the example below from here (https://docs.mongodb.com/realm/sdk/ios/integrations/swiftui/) the Toggle() is bound directly to the items property.I need to be able to call a method to perform multiple updates in response to a gesture - the example below does not show any examples of how to perform a custom update on an observed object. Is there an example that shows how you are supposed to do that with SwiftUI ?Would it be possible to include a simple example of doing that in the example below - just for completeness.Thanks", "username": "Duncan_Groenewald" }, { "code": "", "text": "What is the error that you’re getting? Could you give us an example of what you’re trying to do?", "username": "Jason_Flax" }, { "code": "struct CheckBoxSelectionRealm: View {\n @ObservedRealmObject var item: RealmObject\n var size: CBSize\n \n var sizes: [CBSize : CGFloat] = [.small: 14, .medium: 17, .large: 21]\n \n var width: CGFloat {\n return sizes[size] ?? 17\n }\n var height: CGFloat {\n return sizes[size] ?? 17\n }\n \n var imageName: String {\n return item.isSelected ? \"checkmark.square\" : item.isChildSelected ? \"minus.square\" : \"square\"\n }\n var imageColor: Color {\n return item.isSelected ? Color.accentColor.opacity(0.6) : item.isChildSelected ? Color.accentColor.opacity(0.6) : Color.secondary\n }\n \n var body: some View {\n HStack {\n Text(item.name.count > 0 ? item.name : \"Unlabelled\")\n .foregroundColor(imageColor)\n Spacer()\n Image(systemName: imageName)\n .resizable()\n .frame(width: width, height: height)\n .foregroundColor(imageColor)\n }\n .onTapGesture {\n item.toggleSelection()\n }\n }\n}\n\nextension RealmObject {\n func toggleSelection(){\n \n /// Get object ID so we can access on another thread\n let id = self._id\n \n /// Do long running work - whatever that might be on a background thread\n DispatchQueue.global().async {\n if let realm = Realm.IAMRealm, let selfItem = realm.object(ofType: RealmObject.self, forPrimaryKey: id) {\n \n do {\n try realm.write({\n \n selfItem.isSelected.toggle()\n \n selfItem.parent?.setIsChildSelected()\n \n })\n } catch {\n os_log(.error, \"Error toggling selection for RealmObject \\(self.name)\")\n }\n }\n }\n}\n}", "text": "The error is - can’t access frozen Realm. The observable wrappers obviously take care of the updates via bindings but an example of how to do updates to the objects directly in code would be useful.Also any discussion on whether to do updates on the main thread or not and any side effects of doing so. it seems that creating a new Realm for updates like the one below may be inefficient for very small updates so should be performed on the main thread but what is the risk of blocking the main thread.Pretty cool though what you have done to make Realm integrate well with SwiftUI. 
We might have to do a rewrite sometime since it is a lot more efficient!! ouch.Something like the following. Or if there is a more efficient way that way.", "username": "Duncan_Groenewald" }, { "code": "DispatchQueue.global().async {\n if let realm = Realm.IAMRealm, let selfItem = realm.object(ofType: RealmObject.self, forPrimaryKey: id) {\n do {\n try realm.write({\n selfItem.isSelected.toggle()\n selfItem.parent?.setIsChildSelected()\n.onTapGesture {\n if let realm = Realm.IAMRealm {\n do {\n try realm.write {\n //item.isSelected.toggle() nope!\n $item.isSelected.wrappedValue = !item.isSelected\n }\n }\n}\nisSelected.toggle()item.isSelected = !item.isSelecteditem.isSelected = checkbox statetoggle()", "text": "There’s really nothing wrong with your code. A couple of thoughtsIt’s not clear if your using Sync or not but if you’re using Realm locally, non sync, then avoid opening a write on both the UI and background threads.If you are using Sync, generally speaking, doing that write on a background thread is best practice. However, you’re updating one property of one object so the chances if it tying up your UI is very very small.A second thing is really a design decision; it looks like you’ve loaded and Item and if the user toggles a checkbox, you want to update one of it’s properties. Then you get the key of that object, load it in and update the property.It doesn’t look like your using objects on different threads - maybe you are? If not, since you already loaded the object, it’s loaded again just to update that property. Why not just update the already loaded item?So instead of thisWhy notThe last thought may go back to the actual issue: your question about the errorcan’t access frozen Realm.Are you freezing objects somewhere? Are the items being frozen and passed around? A bit more info may lead to a (better) answer.Note: @ObservedRealmObject freezes an object.Oh… why isSelected.toggle()? Why not item.isSelected = !item.isSelected or item.isSelected = checkbox state? That allows you to use the code in the guide as is without the need for toggle()", "username": "Jay" }, { "code": "let realm = item.realm...\n@ObservedResults(Taxonomies.self ) var taxonomies\n...\n ScrollView {\n VStack(alignment: .leading, spacing: 0) {\n \n ForEach(tax.taxonomies.sorted(byKeyPath: \"name\")) { group in\n \n TaxonomyGroup(group: group, spacing: 0)\n \n }\n \n }.padding(.leading, 4)\n .padding(.trailing, 6)\n //.onDelete(perform: $taxonomies.items.remove)\n //.onMove(perform: $taxonomies.items.move)\n }.listStyle(SidebarListStyle())\n\n...", "text": "Thanks, yes I though the background thread was unnecessary for these small changes - and yes these items are already loaded into a list using the following - I am not doing any freeze of anything in my code. I assumed that maybe SwiftUI must be doing something. See the snippet below.EDIT: Ah - I was using let realm = item.realm rather than your example getting a new realm = not sure if that could be the reason.", "username": "Duncan_Groenewald" }, { "code": " if let realm = Realm.IAMRealm {\n do {\n try realm.write {\n let new = Taxonomy(name: \"New Group\", parent: item, isLeaf: false)\n $item.children.append(new)\n }\n } catch {\n os_log(.error, \"Error\")\n }\n }\nextension Realm {\n static var IAMRealm: Realm? 
{\n let configuration = Realm.Configuration(schemaVersion: 2)\n do {\n let realm = try Realm(configuration: configuration)\n return realm\n } catch {\n os_log(.error, \"Error opening realm: \\(error.localizedDescription)\")\n return nil\n }\n }\n}", "text": "I just checked that when I use the items realm I get the \" Can’t perform transactions on a frozen Realm\" error.If I use the followingThen I get the following error:\n\" Cannot modify managed RLMArray outside of a write transaction.\"BTW my Realm extension looks like this", "username": "Duncan_Groenewald" }, { "code": "func toggle(){\n if let realm = Realm.IAMRealm {\n do {\n try realm.write {\n item.isSelected = !item.isSelected\n }\n } catch {\n os_log(.error, \"Error\")\n }\n }\n }\n**Attempting to modify object outside of a write transaction - call beginWriteTransaction on an RLMRealm instance first.**", "text": "And if I try using this as per your suggestionI get the following error", "username": "Duncan_Groenewald" }, { "code": "func toggle2(){\n if let realm = Realm.IAMRealm {\n realm.beginWrite()\n \n item.isSelected = !item.isSelected\n \n do {\n try realm.commitWrite()\n } catch {\n os_log(.error, \"Error\")\n }\n }\n }", "text": "And similarly this fails with the same error", "username": "Duncan_Groenewald" }, { "code": "struct TaxonomyBrowserView: View {\n @ObservedResults(Taxonomies.self ) var taxonomies\n\n....\n var body: some View {\n\n if let tax = taxonomies.first {\n VStack(alignment: .leading) {\n ScrollView {\n VStack(alignment: .leading, spacing: 0) {\n \n ForEach(tax.taxonomies.sorted(byKeyPath: \"name\")) { group in\n \n TaxonomyGroup(group: group, spacing: 0)\n \n }\n \n }\n }\n }\n } else {\n AnyView(Text(\"Setting up taxonomies...\"))\n ...", "text": "Oh and this is the top level view where it is implicitly using the default realm’s objects(Taxonomies.self). Could it be this is a problem since the system is I am using Realm.IAMRealm which uses a configuration elsewhere.Can I get the default realm to use the same configuration somehow ? Overriding Realm.realm in an extension perhaps?", "username": "Duncan_Groenewald" }, { "code": "func toggle2() {\n // this is the simple way\n $item.isSelected.wrappedValue = !item.isSelected\n // OR this is the complex way\n guard let thawed = item.thaw(), let realm = thawed.realm() else {\n os_log(.error, \"Error\")\n return\n }\n try! realm.write {\n thawed.isSelected = !item.isSelected\n }\n}\n", "text": "@Duncan_Groenewald there are a couple of ways to solve your issue. The “why” of the issue is that when you are calling on the @ObservedRealmObject, we freeze it– this gives it temporary immutability so that SwiftUI can use it appropriately. This does make custom mutations slightly more complex for the time being. However, to work with your example, there are two ways to accomplish what you’re trying to do:", "username": "Jason_Flax" }, { "code": "let new = Taxonomy(name: \"New Group\", parent: item, isLeaf: false)\n$item.children.append(new)\nstruct ItemsView: View {\n /// The group is a container for a list of items. 
Using a group instead of all items\n /// directly allows us to maintain a list order that can be updated in the UI.\n @ObservedRealmObject var group: Group\n\n /// The button to be displayed on the top left.\n var leadingBarButton: AnyView?\n var body: some View {\n NavigationView {\n VStack {\n // The list shows the items in the realm.\n List {\n ForEach(group.items) { item in\n ItemRow(item: item)\n }.onDelete(perform: $group.items.remove)\n .onMove(perform: $group.items.move)\n }.listStyle(GroupedListStyle())\n .navigationBarTitle(\"Items\", displayMode: .large)\n .navigationBarBackButtonHidden(true)\n .navigationBarItems(\n leading: self.leadingBarButton,\n // Edit button on the right to enable rearranging items\n trailing: EditButton())\n // Action bar at bottom contains Add button.\n HStack {\n Spacer()\n Button(action: {\n **// The bound collection automatically**\n** // handles write transactions, so we can**\n** // append directly to it.**\n** $group.items.append(Item())**\n }) { Image(systemName: \"plus\") }\n }.padding()\n }\n }\n }\n }", "text": "So if I need to do a few things, like creating a new item and adding it then I need to use the complex way.What I don’t understand is why your example shows adding children to a list without going through any hoops. What am I doing differently that this does not work. Is it because my items are in a list already and if I want to add an item to their children I can’t. It seems counterintuitive that at the same time you can modify the item directly when bound to a detail view (as per example). Is the RealmSwift taking care of thawing the object under the covers ?I never did watch Ice Age 2 ! ", "username": "Duncan_Groenewald" }, { "code": "guard let thawed = item.thaw(), let realm = thawed.realm() else {\n os_log(.error, \"Error\")\n return\n }\n try! realm.write {\n thawed.isSelected = !item.isSelected\n }\n", "text": "Great these work just fine - a lot less complex than unnecessary background tasks ! But using Sync mainly I am used to doing that for most things anyway.", "username": "Duncan_Groenewald" }, { "code": "func toggle2() {\n // this is the simple way\n $item.isSelected.wrappedValue = !item.isSelected\nwrappedValue$item.isSelected.wrappedValue = !item.isSelected", "text": "Awesome! It would be great is wrappedValue was used somewhere in the docs in this context.Does $item.isSelected.wrappedValue = !item.isSelected need to be within a write?Also, I think that goes hand-in-hand with@ObservedRealmObject, we freeze itWhich would be great to have in the docs as well as it’s something that would probably come up frequently when attempting to update an observed object.", "username": "Jay" } ]
Realm object update with SwiftUI
2021-04-24T07:16:54.722Z
Realm object update with SwiftUI
8,034
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello Everyone!\nI’m new to MongoDB, and I’m still trying to understand what is the best way to do things.In particular, I have the following problem:Let’s assume that I have a list of books and a list of movies organized like this:A. Books\n– i. Fantasy\n---- 1. Lord of the Rings\n---- 2. Shannara\n– ii. Crime\n---- 1. Sherlock Holmes\n---- 2. Agatha ChristieB. Movies\n– i. Sci-Fi\n---- 1. Dune\n---- 2. I, Robot\n– ii. Historical\n---- 1. Ben Hur\n---- 2. Lawrence of ArabiaAnd then I have a series of Users, and Users can select one category of books or movies, and they can rate all of them. So for isntance:Mark:LucyWhat is the best practice to store such data in a MongoDB database?Should I create a collection with the basic lists, and a second collection with the users, and then i will just copy and embed the collection each used select into their specific document, and attach the score to such copy?Or is there a way to keep the actual lists of items in the lists collection and keep the rankings in the Users documents and somehow link the two, but without coping and embedding the lists in the documents of each users selecting them?I sorry if this sounds like a silly questions, but as said, i’m quite new…", "username": "PaniniK" }, { "code": "//Collection users\n\n{\nUserId : .... ,\nUsername : ... ,\n...\n}\n\n// Collection userPreferences\n\n{\nUser : {\nUserId, \nUsername }\nCategory : \"books\",\nFantasy : [ {name : \"...\" , Rating : 1}, {name : \"...\" , Rating : 1}]\n...\n}\n", "text": "Hi @PaniniK,Welcome to MongoDB community.The question of schema design in MongoDB is a common topic when you start. One of the main questions you need to answer before deciding on schema is:Once we answer this we can get a better idea.Just based on what you provided it sounds like the following schema might work:You can update the arrays to be sorted based on calculate rating with $each.Now what I am not sure is if the rating is avg across users or each rating is for one user only?Thanks\nPavel", "username": "Pavel_Duchovny" } ]
How build a database that records user preferences over lists of items
2021-04-24T14:24:09.292Z
How build a database that records user preferences over lists of items
2,561
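Pavel's note about keeping the rating arrays sorted with $each can be illustrated with $push and its $each/$sort modifiers. The sketch below is a hypothetical pymongo example; the database and collection names, the helper function and the sample values are assumptions layered on the schema Pavel outlined.

```python
# Hypothetical pymongo sketch of the userPreferences schema suggested above:
# one document per (user, category), with each genre holding a rating-sorted array.
from pymongo import MongoClient

prefs = MongoClient()["library"]["userPreferences"]   # assumed names

def rate_item(user_id, username, media_type, genre, name, rating):
    prefs.update_one(
        {"User.UserId": user_id, "Category": media_type},
        {
            "$setOnInsert": {"User.Username": username},
            "$push": {
                genre: {                                # e.g. "Fantasy" or "Crime"
                    "$each": [{"name": name, "Rating": rating}],
                    "$sort": {"Rating": -1},            # keep highest-rated first
                }
            },
        },
        upsert=True,
    )

rate_item(1, "Mark", "books", "Fantasy", "Lord of the Rings", 8)
rate_item(1, "Mark", "books", "Fantasy", "Shannara", 6)
```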
null
[ "aggregation", "queries", "python" ]
[ { "code": "{'_id': ObjectId('6068da8878fa2e568c42c7f1'),\n 'first': datetime.datetime(2018, 1, 24, 14, 5),\n 'last': datetime.datetime(2018, 1, 24, 15, 5),\n 'maxid13': 12.5,\n 'minid13': 7.5,\n 'nsamples': 13,\n 'samples': [{'c14': 'C',\n 'id1': 3758.0,\n 'id10': 0.0,\n 'id11': 274.0,\n 'id12': 0.0,\n 'id13': 7.5,\n 'id15': 0.0,\n 'id16': 73.0,\n 'id17': 0.0,\n 'id18': 0.342,\n 'id19': 6.3,\n 'id20': 1206.0,\n 'id21': 0.0,\n 'id22': 0.87,\n 'id23': 0.0,\n 'id6': 2.0,\n 'id7': -79.09,\n 'id8': 35.97,\n 'id9': 5.8,\n 'timestamp1': datetime.datetime(2018, 1, 24, 14, 5),\n 'timestamp2': datetime.datetime(2018, 1, 24, 9, 5)},\n {'c14': 'C',\n 'id1': 3758.0,\n 'id10': 0.0,\n 'id11': 288.0,\n 'id12': 0.0,\n 'id13': 8.4,\n 'id15': 0.0,\n 'id16': 71.0,\n 'id17': 0.0,\n 'id18': 0.342,\n 'id19': 6.3,\n 'id20': 1207.0,\n 'id21': 0.0,\n 'id22': 0.69,\n 'id23': 0.0,\n 'id6': 2.0,\n 'id7': -79.09,\n 'id8': 35.97,\n 'id9': 6.2,\n 'timestamp1': datetime.datetime(2018, 1, 24, 14, 10),\n 'timestamp2': datetime.datetime(2018, 1, 24, 9, 10)},\n .\n .\n .\n .\nid13timestamp1 datetime.datetime(2018, 1, 24, 14, 5)Samplesarraycursor = mydb1.mongodbbucket.aggregate(\n [\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n\n {\n \"$project\": {\n\n \"samples.id13\": 1\n }\n },\n ]\n )\nid13", "text": "My data look like this:Can someone help on how to find for example the id13 when timestamp1 is equals to\n datetime.datetime(2018, 1, 24, 14, 5)\nSamples is an array.\nThis is what i have wrote.The ideal output would be id13:7.5", "username": "harris" }, { "code": "{\"samples.$.id13\": 1 }\n\n", "text": "Hi @harris,You need a postional projection using a find operation:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "cursor = mydb1.mongodbbucket.aggregate([ { \"$unwind\": \"$samples\" }, { \"$match\": { \"samples.id13\": { \"$exists\": true }, \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") } } }, { \"$project\": { \"samples.id13\": 1 } } ])", "text": "Thank you @Pavel_Duchovny.This is what i did to fix the problem cursor = mydb1.mongodbbucket.aggregate([ { \"$unwind\": \"$samples\" }, { \"$match\": { \"samples.id13\": { \"$exists\": true }, \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") } } }, { \"$project\": { \"samples.id13\": 1 } } ])", "username": "harris" }, { "code": "", "text": "Hi @harris,I recommend moving match stages to first place to utilize filtering and indexes otherwise each document will need to be unwinded which is not necessary…I recommend testing a positional projection for an optimal solution.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "id13cursor = mydb1.mongodbbucket.aggregate([\n\n {\n \"$match\": {\n\n \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") }\n }\n },\n { \"$unwind\": \"$samples\" },\n {\n \"$project\": {\n \"samples.id13\": 1\n }\n }\n])\n", "text": "@Pavel_Duchovny If i do something like this it prints all the id13 from the samples which is not the ideal…do you mean something else?", "username": "harris" }, { "code": "mydb1.mongodbbucket.find({ \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") }}, {\"samples.$.id13\": 1 });\n", "text": "Hi @harris,You can do match => unwind => match.But best is:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": 
"mydb1.mongodbbucket.find({ \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") }}, {\"samples.$.id13\": 1 });\npymongo.errors.OperationFailure: As of 4.4, it's illegal to specify positional operator in the middle of a path.Positional projection may only be used at the end, for example: a.b.$. If the query previously used a form like a.b.$.d, remove the parts following the '$' and the results will be equivalent., full error: {'ok': 0.0, 'errmsg': \"As of 4.4, it's illegal to specify positional operator in the middle of a path.Positional projection may only be used at the end, for example: a.b.$. If the query previously used a form like a.b.$.d, remove the parts following the '$' and the results will be equivalent.\", 'code': 31394, 'codeName': 'Location31394'}\n", "text": "Hello @Pavel_Duchovny .This is what i get when i do", "username": "harris" }, { "code": "mydb1.mongodbbucket.find({ \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") }}, {\"samples.$.id13\": 1 });{\"samples.id13.$\": 1 }", "text": "mydb1.mongodbbucket.find({ \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") }}, {\"samples.$.id13\": 1 });I did\n{\"samples.id13.$\": 1 } and now prints the right result!Thank you!", "username": "harris" }, { "code": "cursor = mydb1.mongodbbucket.aggregate([\n\n {\n \"$match\": {\n\n \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") }\n }\n },\n { \"$unwind\": \"$samples\" },\n\n \"$match\": {\n\n \"samples.timestamp1\": { \"$eq\": datetime.strptime(\"2018-01-24 14:10:00\", \"%Y-%m-%d %H:%M:%S\") }\n }\n }\n {\n \"$project\": {\n \"samples.id13\": 1\n }\n }\n])\n", "text": "@Pavel_Duchovny Hello.I have one question more.when you say do match->unwind->match do you mean the exact same match?..something like this :Right?", "username": "harris" }, { "code": "", "text": "Yep.But it will unwind only filtered docs. 
Unwind is an expensive operations to run on all docs.", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you!I appreciate your help a lot!", "username": "harris" }, { "code": "", "text": "Hello @Pavel_Duchovny .Sorry for interapting you.I have one last question.Is it possible to use position projection on aggregate?cause i tried and did not have any result.Thanks in advance!", "username": "harris" }, { "code": "", "text": "I don’t think so, i think you need to do a $filter operator during projection fetching one element.In that case the unwind might be a cleaner solution…", "username": "Pavel_Duchovny" }, { "code": "mydb1.mongodbbucketright.find(\n {\"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\": datetime.strptime(\"2015-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\":{\"$gt\":5}},\n\n {\"samples.$\": 1 })\nmydb1.mongodbbucketright.aggregate([\n\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2015-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\": {\"$gt\": 5}\n }\n },\n { \"$unwind\": \"$samples\" },\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\": datetime.strptime(\"2015-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\": {\"$gt\": 5}\n }\n },\n\n\n])\n", "text": "@Pavel_Duchovny Hello again.I have one question more if i may.\nIs it possible to use position projection in a query like this:It seems that i get less results than expected…Should i do it with an aggregate like this? :", "username": "harris" }, { "code": "", "text": "Hi @harris,Not sure maybe there is a cursor you need to iterate until exhausted…Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Query nested documents mongodb with python pymongo
2021-04-04T13:29:05.052Z
Query nested documents mongodb with python pymongo
6,911
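Pavel's last suggestion in the thread above, using a $filter during projection instead of $unwind or the positional samples.$ projection, is not shown there; a hypothetical pymongo sketch of it, reusing the thread's database, collection and field names, could look like this.

```python
# Hypothetical sketch: keep only the matching array elements inside each document
# with $filter, rather than unwinding or using positional projection.
from datetime import datetime
from pymongo import MongoClient

coll = MongoClient()["mydb1"]["mongodbbucket"]

target = datetime(2018, 1, 24, 14, 10)

pipeline = [
    {"$match": {"samples.timestamp1": target}},
    {"$project": {
        "samples": {
            "$filter": {
                "input": "$samples",
                "as": "s",
                "cond": {"$eq": ["$$s.timestamp1", target]},
            }
        }
    }},
    # Optionally trim each kept element down to the field of interest:
    {"$project": {"samples.id13": 1}},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```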
null
[]
[ { "code": "", "text": "after enabling keyfiles in my mongodb cluster, balancer cannot run and the last report error is: Authentication Failed, everything else is working fine !!!\nwhere should i describe balancer authentication information?", "username": "Danial_Akbari" }, { "code": "", "text": "Welcome to the communityWhich lab is this?\nAre you using the same keyfile for all the nodes\nHave you created users on replicaset and config db?\nCan you login with that user\nPlease show output of rs.status(),sh.status() and last few lines from mongos.log", "username": "Ramachandra_Tummala" } ]
Balancer cannot run after using keyfiles
2021-04-24T13:34:18.020Z
Balancer cannot run after using keyfiles
1,592
null
[ "node-js", "crud" ]
[ { "code": " await client.db.userdata.updateOne({\n id: message.author.id,\n }, {\n $inc: {\n inventory: {\n common: {\n quantity: amountAdded\n }\n }\n },\n });\n client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $inc: {\n balance: payAmount,\n },\n });", "text": "Greetings. I’m using MongoDB to store information regarding a project I am doing in javascript. There are multiple values I am trying to increment or decrease periodically and it has been working so far, but for this one I always get a “Cannot increment with non-numeric argument” error, which doesn’t make sense to me because my argument is clearly a number, if I check it with javascript’s “typeof” function, it tells me it’s a number. I convert it from string to number via parseInt() and also it works in other scenarios, just not here for some reason.Can someone help me find out why am I getting the error? My code:const amountAdded = parseInt(args[0], 10)Error:“MongoError: Cannot increment with non-numeric argument: {inventory: { common: { quantity: 1 } }}”Here is an instance where it does work:", "username": "Lord_Wasabi" }, { "code": "\"$inc\" : { \"inventory.common.quantity\" : amountAdded }\n", "text": "The issue, I think, (I am unable to confirm by testing at this time), is that the argument is an object. I would try the dot-notation as follow:", "username": "steevej" }, { "code": "", "text": "That solved it, thank you for your help!", "username": "Lord_Wasabi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
"Cannot increment with non-numeric argument"
2021-04-24T09:14:39.581Z
&ldquo;Cannot increment with non-numeric argument&rdquo;
8,801
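The dot-notation fix Steeve proposed is language-agnostic; a small hypothetical illustration follows in Python/pymongo (the thread itself uses the Node.js driver). The database name, user id value and amount are placeholders.

```python
# Hypothetical pymongo illustration of the dot-notation fix: $inc needs a numeric
# value at a dotted path, not a nested document.
from pymongo import MongoClient

userdata = MongoClient()["game"]["userdata"]   # assumed database name

amount_added = 1

# Wrong: the value under $inc is an object, so the server rejects it
# with "Cannot increment with non-numeric argument".
# userdata.update_one({"id": "some-user-id"},
#     {"$inc": {"inventory": {"common": {"quantity": amount_added}}}})

# Right: address the nested counter with a dotted path.
userdata.update_one(
    {"id": "some-user-id"},
    {"$inc": {"inventory.common.quantity": amount_added}},
)
```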
null
[]
[ { "code": "", "text": "Is the “MongoDB Cloud Manager” product still supported? When I connect to the product page to start a trial, I am launched on the Atlas page. I would like to use “MongoDB Cloud Manager” to monitor and perform on-premise MongoDB backups. I also did not find any link on the MongoDB main page for the product “MongoDB Cloud Manager”. Has it been discontinued?", "username": "Constantino_Jacob" }, { "code": "", "text": "You have to setup Cloud manager project after login to your account\nPlease go through this link", "username": "Ramachandra_Tummala" }, { "code": "", "text": "That’s right: after you register you can create a new Organization and select “Cloud Manager” for the type.To answer your question: Cloud Manager is still supported, it’s just that MongoDB Atlas offers a transformationally higher value proposition for users so that’s where the growth is centered and where we’re investing in more and more rich scalability capabilities as well as Atlas Search. Triggers, Realm Mobile Sync, Atlas Data Lake, Online Archive, Multi-Cloud, Multi-Region Charts, etc.Cheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
About MongoDB Cloud Manager
2021-04-20T23:41:49.473Z
About MongoDB Cloud Manager
2,110
null
[]
[ { "code": "\"Build a Cluster\" -> Create Shared Cluster -> Loading screen -> Empty list of clusters\n\"Build a Cluster\" -> Advanced Creation Options -> Under \"Cluster Tier\", select \"M0 sandbox\" -> immediately redirected to empty list of clusters.\n", "text": "I’m trying to create a shared cluster (M0 Sandbox), but when I click through the “Build a Cluster” menu, I am taken to my list of clusters, which is empty. Viewing the project activity feed, no cluster creation event is listed.I have tried two different methods:Edit: method 2 also redirects to empty cluster list when selecting any other shared cluster tier, but not when selecting a dedicated cluster tier.Edit 2: Issue seems to have resolved itself after waiting 1 hour and refreshing webpage.", "username": "Delta_Kapp" }, { "code": "", "text": "Hi Delta, I’m very sorry to hear this happened to you: we’ve never seen an issue like this before. Out of curiosity, what browser are you using?", "username": "Andrew_Davidson" } ]
Atlas failing to create shared cluster
2021-04-21T18:03:58.613Z
Atlas failing to create shared cluster
1,599
null
[ "atlas", "kubernetes-operator" ]
[ { "code": "", "text": "We are excited to announce the release of the MongoDB Atlas Kubernetes Operator (trial version).With MongoDB Atlas Operator you can seamlessly integrate MongoDB Atlas into your current Kubernetes deployment pipeline for a consistent experience across different deployment environments. Leave your workflow uninterrupted using the Atlas Operator to simplify deployment, management, and scaling of your Atlas clusters in Kubernetes.With the Atlas Operator, you can manage Atlas directly with the Kubernetes API to allow for simple and quick cluster and database user configuration so they can easily deploy and manage standardized clusters in any type of environment. The Atlas Operator supports all the standard resources in the MongoDB Atlas API, including projects, clusters, database users, IP access lists, network peering, and more. For a complete list, see the Atlas Operator documentation.Try it out today!For more information, please visit:Our project page on GithubOperator HubOur Product PageFor questions, contact @Andrey_Belik", "username": "Marissa_Jasso" }, { "code": "", "text": "", "username": "system" } ]
Introducing: MongoDB Atlas Operator for Kubernetes (trial version)
2021-04-23T17:10:27.219Z
Introducing: MongoDB Atlas Operator for Kubernetes (trial version)
3,244
null
[ "connecting", "security", "c-driver" ]
[ { "code": " Cannot find certificate in 'file_where_cert_is_stored.pem'", "text": "I’m working on a project where I should connect my client-app to mongodb with cert.My client-app uses the mongo-c-driver, after looking in the doc I found this API mongoc_client_pool_set_ssl_opts which seems to be easy to use.The problem is whenever I pass the path to the .pem (server certificate and CA) (generated with this tutorial) the API return this error mongoc: Cannot find certificate in 'file_where_cert_is_stored.pem'.What am I doing wrong so the API can’t find the certificate ?", "username": "KamelA" }, { "code": "", "text": "The problem was that my file didn’t respect the .pem file format, since I had to “fprintf” my certificat to pass it to this API, it would be great if we can pass directly a buffer containing the certificat to this API instead of a file path.", "username": "KamelA" }, { "code": "frprintf", "text": "@KamelA what frprintf invocation did you use to get this work? I would like to understand how the data differed between the instance where the failure occurred and where it succeeded.", "username": "Roberto_Sanchez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo C Driver can't find certificate file
2021-04-22T17:36:40.563Z
Mongo C Driver can&rsquo;t find certificate file
3,215
null
[]
[ { "code": "", "text": "Hi there. I am currently using MongoDB to store my information that is typed into a page from my website https://www.student-mania.com/query. From here the user inputs info and when they click the button it should send to the https://www.student-mania.com/forumPage to show it outputted to the screen but my team has only got this working on local hosting and our aim is to have it working for use 24/7 globally without running on just localhost. We are hosting our site using AWS amplify and we have tried to use AWS elastic beanstalk and other methods to run globally for anyone to use but keep hitting a brick wall. We are unsure but have done some research on maybe changing the cluster from a free shared tier to a dedicated cluster but would like feedback to see if this will for sure work before committing. Any help or advice would be very much appreciated at this time. Thanks", "username": "Jack_Haugh" }, { "code": "", "text": "Hi Jack, what type of error are you seeing?You should be able to deploy this application through any number of application tier management paradigms. You should absolutely be able to use the MongoDB Atlas free tier as well.If I had to guess, you are likely not opening up the Atlas IP access list and/or are potentially not using database authentication to connect to your Atlas cluster. Can you double check?Cheers\nAndrew", "username": "Andrew_Davidson" } ]
Output Database contents from MongoDB atlas to website for global use
2021-04-17T16:09:24.704Z
Output Database contents from MongoDB atlas to website for global use
1,601